XL: An Extensible Programming Language (xlr.sourceforge.io)
179 points by PaulHoule on Feb 21, 2024 | 86 comments


Author of the project here... Weird to see this generating such a discussion when the project is something like 20 years old, and has been quasi-dead for a while now ;-)

An interesting derivative of XL is Tao3D, which shows what you can do with it. https://tao3d.sourceforge.net

A FOSDEM workshop about Tao3D (which includes an introduction to XL): https://www.youtube.com/watch?v=uE9LwSuZD64

The design philosophy, and why it's not "just another Lisp" is here: https://xlr.sourceforge.io/Concept%20Programming%20Presentat...


I'd never heard of XL before, and it looks very interesting. The DSL capabilities remind me of Rebol/Red, but it looks like XL has a proper type system and compile-time optimisations.

What is the current status of the project? Is there any ongoing work, and are there plans?


The current state is "on backburner by lack of time". Projects like https://grenouillebouillie.wordpress.com/2022/03/07/a-theory... and https://github.com/c3d/DB48X-on-DM42/tree/stable have been consuming most of my spare time cycles.

Here are factors playing a role in my current thinking:

1/ the LLVM debacle. I just can't follow them changing the APIs all the time. I gave up on LLVM for now.

2/ Rust was the first language to introduce a concept, lifetimes, that was not trivial to add via an XL library. I think that I have nailed a design now, but it annoyed me for a while, and the design is not implemented.

3/ I spent some time documenting where I wanted to go, notably the type system. As a result, I found that the language was becoming complicated, which annoys me. I'm trying to get back to super-simple roots, but I have no clear path towards this goal yet.

4/ I want to unify the self-compiling compiler and the dynamic one. The self-compiler only compiles an older dialect of the language.


Sounds great, I hope you find a way to resolve those! I will play around with the current version for now, it's been a while since I encountered a new interesting language.


20y old, but it feels fresh and super powerful to me; maybe I just haven't seen anything like it before. It would be great to have this stuff in other high-level languages.


Well, quite an ambitious and impressive piece of work. And as I mentioned elsewhere here, reading your history made me feel a lot better about not having a ton to show after 3-4 years, haha.


Makes me think of this paper "Open, extensible object models" by Ian Piumarta and Alessandro Warth:

https://www.piumarta.com/software/id-objmodel/objmodel2.pdf

The focus is on the object runtime, where almost any operation, including method lookup, can be changed inside the end-user environment.


I just posted a bit of a love letter to Inversion of Control in the related submission: coding felt like I was building each brick from the bottom up, but IoC let me feel like there was a competent, responsible place where all the objects & providers were, where they could be flexibly built out & instrumented, with incredible APIs for dynamically modifying or introspecting this world of entities & the runtime machinery/factories/&c. https://news.ycombinator.com/item?id=39460840

Spring Framework & other Java IoC/DI stuff was pretty amazing. I kept finding new cool layers of depth as I dove into the model.

I love the idea of programming languages that might better integrate some of these ideas, of instrumentability. I'm not sure if it's actually necessary though; maybe having all these code assemblers living in userland libraries is fine.

Thanks for the link! Not sure if I've run into this VPRI / Warth combo paper, & worth revisiting either way.


How does this compare to http://flexipl.info/ ?

IIRC the backend and goals seem quite similar.


I would really like to see more discussion of the difficulties LLVM can create for smaller language projects. The entire PL ecosystem is hyped to revolve around the project, but you rarely see language designers and solo/small-group devs talk specifically about the problems of relying on LLVM. The history section of XL's page cites LLVM's lack of commitment to API compatibility between releases as the big struggle, which is a change from the more common complaints about inherent complexity and compilation time. I'd love to see more discussion of this, and maybe a focus on smaller backends and code generation tools, like QBE.

Anyone with recommendations for reading would be welcome to dump those here.


I specifically addressed some of these topics in this talk: https://www.youtube.com/watch?v=Xzo7GIoDgAo


I realized I forgot to share my most memorable piece of code ever: llvm-crap (Compatibility Restoration Adaptive Protocol)

https://github.com/c3d/xl/blob/master/src/llvm-crap.h



For sure this.

I have a little compiler from 10+ years ago that I was working on. I recall that after I had put it down and tried picking it back up maybe a year later(?), the interfaces I was using in LLVM had all changed. It really took the wind out of my sails for jumping back in.


From what I can read, the author got really unlucky with some kind of radical API changes. Maybe at that time the LLVM team was a bit less serious about deprecations?

I've been using LLVM since v9; nowadays I'm stuck on v15 (not because of LLVM, btw).

Between the two versions there's been a radical change too, i.e. "opaque pointers", but the transition was rather smooth because for a long time we were provided with both versions of the functions affected by the change. Maybe the LLVM team got more serious after the author experienced those difficulties?

Another thing I note is that the author uses the C++ API. I use the C one, which exposes only a high-level subset of the C++ one. This encourages a saner use of LLVM and a more concrete separation between the front-end and the mid-end, although sometimes there are limitations.

A simple example of what the C API encourages, especially since opaque pointers were added, is not relying on LLVM to retrieve the IR type of an IR value. That should always be done using the AST, e.g. with an `.ir` field in your nodes.
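
For instance, a minimal sketch of that idea (the `Node` struct here is hypothetical; `LLVMBuildLoad2` is the actual C API call that takes the loaded type explicitly):

  #include <llvm-c/Core.h>

  /* Hypothetical AST node: the front-end records the IR type next to
     the IR value instead of asking LLVM for a pointee type later. */
  typedef struct Node {
      LLVMValueRef ir;       /* IR value produced for this node */
      LLVMTypeRef  ir_type;  /* its IR type, tracked by the front-end */
  } Node;

  LLVMValueRef emit_load(LLVMBuilderRef b, Node *var) {
      /* With opaque pointers the loaded type must be passed explicitly;
         here it comes from the AST, not from the pointer value. */
      return LLVMBuildLoad2(b, var->ir_type, var->ir, "");
  }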

Another thing I notice, after a brief overview of LLVM-CRAP, is that the author had to change the internal data structure used depending on the LLVM version [0]. Using the C API, that would never have happened. The C API essentially lets you create block refs, instruction refs, value refs, type refs, and contexts; then you choose the containers you want to hold them. No need to switch to another stdcpp one, even if internally LLVM does so (see the sketch after the footnote).

[0]: https://github.com/c3d/xl/blob/master/src/llvm-crap.cpp#L265
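
To illustrate that container point, a sketch (`BlockList` and its fields are invented for illustration): C API refs are plain opaque pointers, so the front-end can keep them in containers it owns, immune to changes in LLVM's internals.

  #include <llvm-c/Core.h>
  #include <stdlib.h>

  /* Front-end-owned container of basic-block refs: LLVM never sees it,
     so LLVM's internal data-structure changes cannot break it. */
  typedef struct {
      LLVMBasicBlockRef *items;
      size_t count, capacity;
  } BlockList;

  void block_list_push(BlockList *l, LLVMBasicBlockRef b) {
      if (l->count == l->capacity) {
          l->capacity = l->capacity ? 2 * l->capacity : 8;
          l->items = realloc(l->items, l->capacity * sizeof *l->items);
      }
      l->items[l->count++] = b;
  }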


Seems cool. But I'm not convinced extensible languages benefit from adding a variety of syntactic forms. As your ability to extend the feature set increases, the need for a single regular syntactic form increases. Lisp may not be appealing to beginners because of its parentheses, but they are absolutely what makes it work so well for its intended use cases.

Really, all use cases. I'm actually a fan of the parentheses. Lisp feels like aesthetic perfection to me; languages like Haskell and XL here just look like line noise to my brain.


IMO, there’s a wide unexplored design space between the minimalism of Lisp and the richness of other languages. A programming language inspired by something like KDL (https://github.com/kdl-org/kdl) has the potential to be in a very sweet spot between the two. "Everything is a node" instead of "everything is a list" is only slightly more complicated, but also vastly more readable than a soup of parentheses.


Where would you put Lua on this spectrum?

Also, I do not find Lisp unreadable - you train your brain to ignore the parens and look at the indentation instead pretty quickly. And you can always tell Emacs to gray out the parens if you really want. ;-)


> you train your brain to ignore the parens and look at the indentation instead pretty quickly

But I don’t want to train my brain to ignore useless syntactic noise, I want my brain to be guided by helpful syntactic guides.


Good point, but if you start treating indentation as part of the syntax, we all know where it would lead...



Agree about Haskell... as far as I'm aware, there is actually no declarative/easily readable definition of the Haskell syntax that is also complete, especially when it comes to the indentation rules; the syntax is basically defined by the very (ironically) imperatively defined GHC parser [0].

I prefer a syntax like the one in Pure [1], where the ambiguous, hard-to-parse indentation-based syntax is replaced by explicit semicolons. (Yeah, you can use braces/semicolons in Haskell as well, but most code doesn't.)

[0] https://github.com/ghc/ghc/blob/master/compiler/GHC/Parser/L...

[1] https://agraef.github.io/pure-lang/


Just wanted to say thanks for the link to Pure... I've been thinking a lot recently about languages based on term rewriting and somehow hadn't seen it.


> Lisp may not be appealing to beginners because of its parentheses

You may have been referring to Lisp beginners, as opposed to overall programming beginners, but I’ve heard anecdotally that overall programming beginners take to Lisp better than those with experience in Algol-style languages.

I could see that. And will prob be testing that hypothesis out soon.

And I too have become a fan of the parentheses.


This is a bit anecdotal, but my freshman high school programming course, in 1999, was taught in DrScheme. I had already been programming in QBasic and C++ at that point, and found it _maddening_. Not because of the parens really (the editor was fine), but because of the lack of the structured constructs for control flow that I was used to. Tail recursion, while now quite natural to me, just did not make sense at the time. My classmates had less trouble getting that bit, I think, because they had no preconceived notion of how control flow worked, and also didn’t expect parts of the language to exist that either didn’t, or were hidden in the teaching mode we had to use. For most of a decade I listed that as my least favorite language of all time, until going back to find out that many of the things that were missing were actually there; they had just been hidden from us, pretty sure by imposing the “beginning student” mode: https://docs.racket-lang.org/drracket/htdp-langs.html

Simplifying for teaching is good. Making useful things completely unavailable… not sure I agree.


I've always been curious about teaching total beginners Lisp. If the adage is true that learning a variety of languages will round you out in the long run, then experimenting with the S-expression, functional-lite model of Lisps as a first exposure should be low or zero-cost.

And then we get to observe: If instead of modeling instructions and memory semantics, we first teach someone to model abstractions and computations, how do they think about programming differently even when eventually exposed to C and its lineage?


Though the pool of students was biased towards those who excelled in high school, which post-1980..1990 probably means some prior exposure (i.e., not your "total beginners"), this was a big part of the original idea behind SICP-6.001 at MIT. AFAIK, they gave up on that experiment 15 years ago and moved to Python: http://lambda-the-ultimate.org/node/3312

Faculty being wordy, there is probably much written about it attempting to draw conclusions. There were definitely some "smart, but total beginner programmers" in the mix, although I don't know how scientifically the performance of those two groups might have been separated.


Relatively recently I found the SICP lecture series on YouTube from the early ’80s. I’d read most of the book already, but it was so much better with Sussman and Abelson teaching it.

The audience for the series I’m thinking of is adult professionals, programmers I believe; I’d heard they were from DEC or HP or some such. So in addition to the material itself, you get the blessed quirkiness of its two authors, alongside a whole lot of late-’70s/early-’80s attire and rudimentary graphics. Quite a feast… lol.


I'm starting to be increasingly certain that making a new (sub)language to express and solve your problems is the highest form of programming.

You'd think the ability to define custom syntax would be the most important, but paradoxically Lisp makes it a lot easier simply because there is no syntax.


Sure, Structure and Interpretation of Computer Programs (SICP) calls that "meta-linguistic abstraction", and there are many complex systems that contain a Domain Specific Language (DSL) or a complete programming language (e.g. E(macs)LISP).

That is not much different from a mathematician who creates a new notation like "∞" or "∫ … dx" because existing notation is capable but not convenient (too cluttered; shorthand is needed to focus on domain concepts) for what needs to be done.


So, “there is no syntax” is one of those things that isn’t really true of most Lisps: it’s more accurate to say they have programmable syntax, and the default syntax is minimal.


Exactly this. In addition, I've found that in almost all Lisps, macros are a niche feature that is not needed to solve most problems; however, because macros and normal functions are invoked with the same S-expression form, almost all Lisp programs end up as some sort of DSL, especially when you have arbitrary heterogeneous data structures as in Clojure.

Instead of dealing with chains of single assignments as in older C-like languages, you've got a bunch of compact calls fully in-context.


There are Lisps without parens too; they're not strictly required for parsing and evaluation. I think the main purpose of the parens is to highlight the syntax tree for developer manipulation with macros.


> There are Lisps without parens too

Are there any that come close to the usage of Lisps with proper s-expressions? I've seen them come and go, but no "Lisp without parens" seems to stick around for a longer period. I guess that should say something? Maybe we just haven't found the right way of exposing it, though.

Personally, when I've given it a try, the indentation-based syntax always makes it hard to write without faults. Maybe it's just a "getting used to" thing, but it seems like a mistake to depend on invisible characters for syntax.

FWIW, I hardly ever write macros (I mostly use Clojure/Script), so I'm not sure the main purpose of the parens is macro writing for me. S-expressions with something like parinfer just make programming a lot more enjoyable and easy, especially compared to all the C-syntax-like languages.


> Are there any that come close to the usage of Lisps with proper s-expressions?

If you consider Julia a Lisp, which I would argue you should, it may be the most widely used Lisp currently. The only way you'll see an s-expression is to call Meta.show_sexpr.

If you consider Dylan a Lisp, you should consider Julia a Lisp as well. If you don't consider Dylan a Lisp, I have to conclude that you consider the s-expressions to be part of the definition of a Lisp. Which isn't an unreasonable taxonomy, but isn't the one I use.


> If you consider Julia a Lisp

Maybe, but would you consider Julia a Lisp without parentheses, which was the context here? Because after looking at their documentation (https://docs.julialang.org/en/v1/manual/methods/ for example), it seems to have as many parens as any other Lisp, just with the function name outside the parens rather than inside: `f(Float32(2.0), 3.0)` vs `(f (Float32 2.0) 3.0)`.

Arguing about what makes or doesn't make a Lisp is a conversation I don't think will lead to any useful results; I feel like that's been ongoing since the '60s or something. Out of my league :)


Given you said

> Lisps with proper s-expressions

It's off-putting for you to fly in and pretend to be surprised that Julia uses parentheses in a way consistent with other languages which are syntactically Algolic. Apologies for expecting a serious conversation, didn't know who I was dealing with.


The root context of the conversation you injected yourself into was about Lisps without parens. It's OK to misunderstand, but please don't blame me for your own misunderstanding.

Have a nice day!


I am pretty sure Logo is mostly a Lisp without parentheses. I find it a lot more readable than most Lisp dialects. I am not sure why it never caught on.


I was about to post exactly the same… haha.

I’ve been doing a recent deep dive into languages as part of my own overly ambitious language attempt and only recently found out that Logo was in fact Lisp-based.

I’m guessing that the reason it might not have taken off is that, unfortunately, it was looked upon as a ‘kid language’.

The goals of Papert and others seem a far cry from where tech has landed today.

edit: for an egregious typo


It is. A piece of trivia: AFAIR, you say

  SUM 2 3
in Logo to add two numbers, but

  [SUM 2 3 4]
to add more. (Though I learned Logo more than three decades ago, so I might be mistaken.)


I've been working on something centered around extensibility, or metaprogramming, coming from a strictly imperative angle, with the belief that anything else (functional, relational/logic based, whatever) can be built on top of that.

A few guiding principles are:

- simplicity above all, with as few fundamental elements as possible

- the parser is a separate issue: just write your own syntax to avoid the most divisive bikeshed element of PL design, or pick the C-like or ALGOL-like one out of the box. You very likely want your own syntax anyway as you write extensions.

- every language element, from modules down to function calls, is first class, i.e. has an (implementing) type, can be stored in variables and used in expressions, and can be introspected and evaluated/deployed.

- runs at compile time, compiles at run time (code generation/partial evaluation/dynamic code)

- generates C, Java, Python and various bytecodes to maximise interoperability, code availability and deployability

- has no standard runtime or standard library of its own, is entirely parasitic on other environments

Even if it ends up being completely useless, it's a really interesting exercise in design.


That's interesting, because I've been 'working on' something with all those same bullet-points. I'd be interested to hear more about your take on this!

Some notes on my lack of progress: http://www.nuke24.net/plog/43.html


Anyone know of other languages using pattern matching as a foundation?

I’ve come across Refal[1], whose creator Valentin Turchin seems like someone who hasn’t gotten the recognition he maybe deserves. As an aside, his son Peter Turchin does some interesting work in a different direction that’s def topical these days.

I know SNOBOL gets labeled as one.

Would Prolog count? I haven’t actually written any myself. Patterns def make a strong appearance in Erlang, though they aren’t the first thing that comes to mind when describing it. There’s XSLT of course.

Any others?

If one were only able to pick one metaphor for a language, I think a strong case could be made for:

pattern => behavior

[1] https://en.wikipedia.org/wiki/Refal


Check out Pure. https://agraef.github.io/pure-lang/

This was what convinced me to switch to LLVM. I now regret that decision, but at the time, it sounded like a lot of fun.


Thx, I’ll check it out.


Prolog is based on "unification" in large part, which can be considered advanced pattern matching.


Are you using pattern matching in the sense of Standard ML and the ML family of languages?


Probably but I don’t know any ML… lol. I’ll def check it out though so thx!


I'd like C, but where I could use some syntax like:

   typedef _Load("libvector.so")(int) vector_int;
to add arbitrary "builtin" types to the language. The idea being that "libvector.so" would use some spec-defined API to extend the core C language with the new features. This would replace a slew of features in the core C++ language (templates, classes, any sort of reflection, etc.), with some sort of 3rd party dylib you load. The idea is to keep the core language simple in the sense that it doesn't contain its own extension mechanism as syntax.

I think my only worries would be how to handle lifetime analysis, i.e., the moral equivalent of destructors, copy constructors, move semantics, etc.; maybe the dylib ('libvector.so', in this case) has to register stack-clean-up, copying, and move-semantics code generation with the compiler?
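
For what it's worth, a purely hypothetical sketch of what that registration might look like; every name below is invented for illustration, none of this exists anywhere:

  #include <stddef.h>

  typedef struct CodeGen CodeGen;  /* opaque codegen context (hypothetical) */
  typedef struct Value   Value;    /* opaque SSA value handle (hypothetical) */

  /* Code generators the dylib hands to the compiler, which inserts them
     where C++ would insert destructor/copy/move special members. */
  typedef struct TypeOps {
      void (*emit_destroy)(CodeGen *cg, Value *v);            /* scope exit */
      void (*emit_copy)(CodeGen *cg, Value *dst, Value *src);
      void (*emit_move)(CodeGen *cg, Value *dst, Value *src);
  } TypeOps;

  /* What "libvector.so" might call when the compiler _Load()s it. */
  void register_type(const char *name, size_t size, size_t align,
                     const TypeOps *ops);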



Hey, neat! However, this line of research (which I'm passingly familiar with) is about extending the grammar of the host language (for Xoc: C), along with the semantics (for Xoc: the Zeta interpreter). I think I'm interested in a slightly rotated design space: the C grammar is left alone[1]; instead, an extension of the C spec defines a specific API for (effectively) code injection; and, finally, we limit ourselves to new types and functions, possibly with non-standard sizing (`sizeof`/`alignof`/...) information.

[1] Modulo the introduction of `_Load("...")(__args__)`, and a few other function-like invocations.


Those interested in XL may also find the much older GalaxC by John Beetem interesting: https://community.element14.com/technologies/fpga-group/b/bl...


Why do you think it's older? XL started as LX around 1993, and IIRC self-compiled in 2004 or so.


I am looking at a 1989 paper by Beetem & Beetem [1], but then it was called Galaxy... I agree "much older" might be a bit... much. Haha.

EDIT: Joking aside, I would love to hear a compare & contrast between Galaxy/GalaxC & LX/XL, though!!

EDIT2: and via that first link, I think you can get to a full source code distribution compile-able with C on 32-bit Linux. Might have its own woes like LLVM hosting, but those might also be more solvable as part of a 64-bit port.

[1] https://ieeexplore.ieee.org/abstract/document/28124/


Quite an ambitious offering by the author. Kudos to him/her.

I have my own little (overly) ambitious language effort in a different direction and tend to beat myself up a bit since I feel like I should have more to show after almost 4 years.

Then I noticed in the author’s history section, towards the end, that they’d been working on this since the early ’90s, and felt better about that.


Other than here, are there any other places online where folks writing their own languages tend to congregate?

Having recently joined that tribe, I think it does take a particular worldview and type to be willing to undertake that. I suppose folks writing an OS perhaps even more so.


http://lambda-the-ultimate.org/ is one of the only PL-centric websites with discussion I can think of.


Ah yes. I know it but haven’t been there in a long, long while. Thx.



Thx. I’ll check them out.


Following the "is" operator sections, I half expected it to build up to Prolog facts, and then solve the zebra puzzle.


Maybe a good DSL language. But I will avoid adding such complexity to my daily programming language at all costs.


What does "is is is" mean in this language?

Maybe it's a quine?

Can you define an "isn't" operator?


Let's ask the interpreter / compiler:

  ./xl -nobuiltins -parse /tmp/glop.xl -style debug -show
  (infix is
   is
   is
  )
So what it sees is a definition of the name 'is' as itself.

Now, that name is very unlikely to be usable because `is` as an infix is so central to everything. As a matter of fact, it looks like even "is is 2" actually crashes the current implementation.

Oh well. I wonder what a sensible error message on this would be. Probably: "Bill Clinton declined to comment on the meaning of that statement".


> "Bill Clinton denied to comment on the meaning of that statement".

Oh man, that’d be hilarious.

I’ve been hacking around with my own sorta Forth/Postscript like attempt at a language that runs on WebAssembly and does graphics in the browser.

Odds are it’ll never see the light of day. But if it does, you’ve now triggered all kinds of ridiculous ideas for error messages, like starting them off with ‘hey bruh…’ or trotting out good ol’ Mr Clippy from Microsoft.


“is is is” would be called BillClintonLang, no?


Or TrumpLang.

"It is what it is".

https://www.google.com/search?q=Trump+it+is+what+it+is

2nd hit onwards.


Oh man. I’d missed that Trumpism.


This seems to be Prolog with a large dose of syntactic sugar.


There is no backtracking, but being able to implement Prolog on an XL basis was an important part of the original design of the "runtime" version of XL.


More Lisp than Prolog… I don’t see any constraint-solving here, just macros.


I don't see them as macros either; it's more like pattern matching and replacing patterns with expressions. Maybe there is a term for that?

And it's not compile-time only; I'm pretty sure you can't do this as a macro:

> loop Body is { Body; loop Body }


> it's more like pattern matching and replacing patterns with expressions

Scheme macros use pattern-matching.

> loop Body is { Body; loop Body }

This, though, is definitely strange. I’m not quite sure how it works.


It's just a case of a pattern definition including references to itself, which are then linked to its definition before use, in the same way function definitions often contain references to themselves, which are linked to complete the definition.

   uint64_t alignBitsLeft(uint64_t x) {
      return ((x >> 63) == 1) ? x : alignBitsLeft(x << 1);
   }
The pattern "alignBitsLeft(uint64_t)" definition starts on line 1, but is not completed until line 3. Yet line 2 references that pattern before the definition is complete.

This works because of a linking step at definition completion, where the "alignBitsLeft" reference gets linked to the now-complete "alignBitsLeft" definition.

Similarly for:

> loop Body is { Body; loop Body }

The "loop Body" definition beings, including a reference to itself. Once the whole definition has been built up, the "loop Body" reference gets linked back to the "loop Body" definition. Ready for use.

--

Another way to look at this is memoization (at compile or run time). When you evaluate the second line of this:

  n is 1;
  loop { print n; n is n + 1; }
You get this:

  { print n; n is n + 1; loop { print n; n is n + 1; }}
If we cache that definition before evaluating it, when we get to the embedded loop clause, we don't need to compute it again. We have already computed that exact loop pattern before and can reuse the result (over and over).


You can do exactly that, and this is how it is defined in the built-in library:

https://github.com/c3d/xl/blob/fast/src/builtins.xl#L222

How it works is what I tried to document in the page linked below. The trick is to have some fixed semantics for how rewrites are done, but a lot of freedom in how this is implemented. And this is far from perfect ATM.

For example, if you write "X is 2", this can be implemented as a constant, as a function returning a constant, or as a macro. However, if you write "X is seconds" in Tao3D, where "seconds" returns the current number of seconds on the clock, then it can no longer be a constant value. You can still use macro replacement (or inlining) or turn it into a function.

More details here: https://xlr.sourceforge.io/#compiling-xl


Obligatory "lisp has had this for ages".

Herein, macros seem to be invented from first principles, as if the author had only been exposed to imperative languages like C++. It is interesting to see macros invented here in the context of a language that is not homoiconic. I wonder how it stacks up against Rust macros.



"What is homoiconicity then? Typical definitions state that it is simply “code as data”, will point to a relationship between a program’s structure and syntax or note that the program source is expressed in a primitive data-type of the language. In the below, we will show that none of these definitions make much sense."

https://www.expressionsofchange.org/dont-say-homoiconic/


See the code around here to answer that question:

Definition of 'if' statement: https://github.com/c3d/xl/blob/fast/src/builtins.xl#L155

  // If-then-else statement
  if [[true]]  then True else False   is True
  if [[false]] then True else False   is False

  if [[true]]  then True              is True
  if [[false]] then True              is false
Definition of loops (https://github.com/c3d/xl/blob/fast/src/builtins.xl#L222)

  // Loops
  while Condition loop Body is
      if Condition then
          Body
          while Condition loop Body
  until Condition loop Body               is { while not   Condition loop Body }
  loop Body                               is { Body; loop Body }
  for Var in Low..High loop Body is
      Var := Low
      while Var < High loop
          Body
          Var := Var + 1


Obligatory "FORTH has had this for ages". :-)


Is Forth really homoiconic? I’m asking as a recent fan of Forth. I would’ve thought to describe it as having strong, interactive metaprogramming but not necessarily homoiconic. Unless ‘code = data, data = code’ because they’re all bytes… haha.

PostScript would count as homoiconic, I’d think. Though I haven’t actually seen programs that do this.


I think Forth counts, in a "Turing-complete" sense of homoiconicity. A Forth program in a classic indirect-threaded system can self-modify, reaching in and changing core interpreter words during execution. But it isn't obeying any formal soundness principle, since it's just a thin layer over the machine.

Compiled Forths might opt to wall off some of those options behind the compiler interface, in effect making the language slightly less far-reaching.
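
(For the curious, a minimal sketch of what "indirect threaded" means here, in C rather than Forth, with all names invented: each definition is a list of pointers to other words, and the inner interpreter dispatches through a writable code pointer, which is why run-time patching of core words is possible.)

  #include <stddef.h>

  typedef struct Word Word;
  typedef void (*Code)(Word *self);

  struct Word {
      Code   code;   /* machine-level behavior; writable at run time */
      Word **body;   /* for colon definitions: NULL-terminated thread */
  };

  /* Inner interpreter: walk the thread, dispatching through each cell. */
  void docol(Word *self) {
      for (Word **ip = self->body; *ip; ip++)
          (*ip)->code(*ip);
  }

  /* Nothing stops user code from patching a core word: every definition
     that threads it picks up the new behavior immediately. */
  void patch(Word *w, Code new_behavior) { w->code = new_behavior; }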


I could see that. Which I guess would put assembly into the homoiconic bucket. I think it fits the concept but sure isn’t the same as what people think of with Lisp, etc… lol.


Had this discussion on Discord as well. Like most things Forth, it depends how you use it. If we consider the classic indirect-threaded systems, then every field in a definition is an address called an execution token (XT). In most systems you can convert an XT back into a text label, which allows pretty simple de-compiling, for example.

It is trivial to collect those XTs as data and execute them later:

  CREATE XT-LIST ] THESE WORDS COULD BE FORTH CODE [

XT-LIST now contains six XTs. They can now be read one XT at a time and executed.

So if one accepts that those XTs are data, BUT they are also code since they can be "executed" by the Forth VM, then I think we can say Forth is homoiconic.

However, if that doesn't suffice, it is perfectly legal to pass text strings to EVALUATE:

  : TEST S" THESE WORDS COULD BE FORTH CODE" EVALUATE ;

That's all I got.


Aha, that’s pretty awesome. I’m familiar with XTs but hadn’t thought of using them that way. And I didn’t know that you can convert back to text.

I’m going to have to do more Forth. And I have a sudden urge to go write some silly self-modifying assembly, if current processors even allow that anymore.



