I’m hoping by the end of the year. All of the “difficult” things are finished (control flow, syntax transformers, call/cc, dynamic-wind, exceptions, libraries, etc.) and it’s just a matter of filling in missing base-library functions. If there’s something in particular that you need, you’re welcome to file an issue or post a message on the Discord and I’ll prioritize it.
That being said, Steel is excellent and I highly recommend it if you just need R5RS with syntax transformers.
Tangential, but I've been wanting to dive back into FP for quite some time. For context: I used Haskell at a payments corp ~10 years back, and I've been working mostly with TypeScript, Zig, and Nim for the past couple of years, realizing I am basically trying to do FP in most of these languages.
Is Racket a good language to pick up to re-learn my concepts and implement some tools? Or are there other languages that would be better for both brushing up and learning the syntax? I do not want to fight the syntax, but rather express functions as seamlessly as I can.
Racket is a rich and powerful language, but it is also designed with certain specific ideas in mind. You can learn more about the "zen" of Racket here:
Thank you for the response, professor; I really appreciate it, coming from one of the creators of the language itself.
I did give your document a read, and my (naive) understanding is that you basically create DSLs for each sub-part of the problem you're trying to solve?
> A LOP-based software system consists of multiple, cooperating components, each written in domain-specific languages.
and
> cooperating multi-lingual components must respect the invariants that each participating language establishes.
So basically you're enforcing rules/checks at the language level rather than at compile time?
How would you recommend a complete novice attain this sort of state of mind/thought process while working in this language? Because my thoughts go straight to creating types and enforcing type-checking, coupled with pure functions, to avoid programs that compile successfully but fail at runtime.
Also how would one navigate the complexity of multiple abstractions while debugging?
The paper also mentions a web-server language (footnote 27). If I use Racket, will I be productive "out of the box", or is the recommended path to write a web-server language first?
Thank you again for taking the time to respond, and please do forgive me for these naive questions.
They will give you a sense of how one uses LOP productively.
You do not need to write a "web server language"! To the contrary, the Web server provides several languages to give you a trade-off between ease and power in writing server-side Web applications. So you can just write regular Racket code and serve it through the server. The server also comes with some really neat, powerful primitives (orthogonal to LOP) — like `send/suspend` — that make it much easier to write server-based code.
Thank you for the recommendation, I actually had some experience with Purescript, but I have been reading this book for the past two days, and it has been invaluable. Seems perfect for what I was trying to accomplish.
Without the explicit recur it's far too easy to misidentify a tail call and use recursion where it's not safe.
Recur has zero inconvenience. It's five letters, it verifies that you are in a tail position, and it's portable if you take code to a new function or rename a function. What's not to love?
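For instance (a minimal sketch of mine, not from the thread; function names are illustrative), Clojure's compiler rejects a recur that isn't actually in tail position:

(defn countdown [n]
  (if (zero? n)
    :done
    (recur (dec n))))       ;; tail position: compiles, runs in constant stack

(defn bad-sum [n]
  (if (zero? n)
    0
    (+ n (recur (dec n))))) ;; NOT tail position: compile-time error,
                            ;; "Can only recur from tail position"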
Tail calls are especially useful in languages with macros. You don't know what context you are in; you just generate the call that makes sense. If the call happens to be in tail position, you get the benefit of it.
Moreover, you can design cooperating macros that induce and take advantage of tail-position calls.
Here's a simple example that motivates tail-calls that are not tail-recursive:
> Evaluates the exprs in order, then, in parallel, rebinds the bindings of the recursion point to the values of the exprs.
(def factorial
  (fn [n]
    (loop [cnt n
           acc 1]
      (if (zero? cnt)
        acc
        ;; in loop, cnt will take the value (dec cnt)
        ;; and acc will take the value (* acc cnt)
        (recur (dec cnt) (* acc cnt))))))
Thanks for the pointers. Trampolining is an old idea for obtaining tail-calls. It's a kind of folk-wisdom that has been rediscovered many times, as the related work here shows:
Usually the trampoline is implemented automatically by the language rather than forcing the author to confront it, though I can see why Clojure might have chosen to put the burden on the user.
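For reference, the manual version in Clojure looks roughly like this (a minimal sketch; the function names are my illustration, though clojure.core/trampoline itself is a real core function). Instead of calling each other directly, the mutually recursive functions return a zero-argument closure, and trampoline keeps invoking the result until it is no longer a function, so the stack never grows:

(declare my-odd?)

(defn my-even? [n]
  (if (zero? n) true #(my-odd? (dec n))))

(defn my-odd? [n]
  (if (zero? n) false #(my-even? (dec n))))

(trampoline my-even? 1000000)
;; => true, with constant stack depth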
> Clojure is not the product of traditional research
> and (as may be evident) writing a paper for this setting
> was a different and challenging exercise.
> I hope the paper provides some insight into why
> Clojure is the way it is and the process and people
> behind its creation and development.
Ah, I didn't know there was a HOPL paper! Some day I will have time to run a course reading HOPL papers. Some day I will have the time to read HOPL papers myself (-:. Thanks for the pointer.
In addition to the general sibling comments, I can personally attest that Shriram knows what the Y combinator is and has been teaching students about it for at least 25 years. My own lecture notes from one of his classes about the lambda calculus and the Y combinator were for a long time on the front page of Google results for info about either!
In such ecosystems, for long-term, evolving production work (when you don't know all your eventual needs upfront), you need to have the institutional capability to build from scratch whatever components you might need. Just in case whatever you need later doesn't yet exist in the ecosystem.
Then you need to retain the personnel who give you that capability. Because they are rare, in a field in which 99%+ of developers only glue together NPM or PyPI packages. (And many use Web search or, now, Claude Code to do the glue part.)
If I founded a startup doing mostly Web-like server backend work, I'd consider doing it in Racket or another Scheme, and then using that as a carrot to hire some of the most capable programmers. (And not having to bother with resume-spam noise, since hardly any of the 99%+ of developers will apply; they'll be pounding the most popular tech-stack keywords instead, because their primary/sole goal is employability.)
The correlation is likely causal in both directions.
They're niche because they're doing weird, interesting things. Like creating their own VMs to support funky features. So nobody wants to depend on them: low bus-factor.
They can do weird, interesting things because they don't have a large user-base that will yell at them about how they're breaking prod.
This isn't meant to be a good programming mechanism, it's meant to be an illustration of how to use the macro system.
But also, if you're processing non-linear data, you're going to want to do it with a recursive function anyway. E.g., when dealing with a tree. Code below:
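A minimal sketch of that recursive tree walk, assuming nodes shaped like {:val n :left l :right r} (the node shape and names are illustrative, not the commenter's original code):

(defn tree-sum [node]
  ;; Plain recursion: the call stack tracks where we are in the tree.
  (if (nil? node)
    0
    (+ (:val node)
       (tree-sum (:left node))
       (tree-sum (:right node)))))

(tree-sum {:val 1
           :left  {:val 2 :left nil :right nil}
           :right {:val 3 :left nil :right nil}})
;; => 6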
Recursion just ends up using the call stack as a stack data structure. I would much rather use an actual stack data structure, which will be easier to debug and have better locality, since there isn't an entire call frame of overhead to put one value onto the stack.
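For contrast, a minimal sketch of that explicit-stack style, continuing the tree example above (same assumed node shape; this is my illustration, not the commenter's code):

(defn tree-sum-iter [root]
  ;; A vector doubles as the stack via conj/peek/pop; each iteration
  ;; pops one node and pushes its children, so the per-step state is
  ;; just a node reference instead of a whole call frame.
  (loop [stack [root]
         acc   0]
    (if (empty? stack)
      acc
      (let [node (peek stack)
            more (pop stack)]
        (if (nil? node)
          (recur more acc)
          (recur (conj more (:left node) (:right node))
                 (+ acc (:val node))))))))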
You’d be right if this were 1950. Since then, hardware and compilers have optimized this specific use case so heavily that you’ll likely see the opposite if you benchmark it.
Prove it. You can put your stack data structure on the stack anyway. A balanced tree isn't going to have more depth than your memory address bit length. Why would copying a single value be slower than pushing an entire call frame to the stack? Locality is what matters and there is no truth to what you're saying.
More important is the debuggability. If you have a normal data structure, you can see the full stack of values. If you use recursion, you have to unwind through multiple call frames and look at each one individually.
Recursion is for people who want to show a neat, clever trick; it isn't the best way to program.
I'm sure you could optimize the explicit-stack-based one a bit more to reach parity, at the cost of a significantly more complex program.
But might as well let ~75 years of hardware, OS, and compiler advancements do that for you when possible.
> Why would copying a single value be slower than pushing an entire call frame to the stack
Because that's not what happens. The stack arithmetic is handled in hardware, increasing IPC significantly, and the 'frame' you are talking about is almost the same size as a single value in the happy path, when all the relevant optimizations work out.
> More important is the debuggability
Debugging recursive programs is pretty neat with most debuggers. No, you don't unwind through anything manually, just generate a backtrace.
One reason it's slower is that your stack doesn't reserve any memory and grows to 40441 at its max size, then shrinks back down again. Stack uses a deque by default, which stores elements in chunks; that likely causes lots of memory allocations (and deallocations) that don't happen in the recursive version. Also, at n=80,000 your recursive version blows the stack.
> The stack arithmetic is handled in hardware, increasing IPC significantly, and the 'frame' you are talking about is almost the same size as a single value in the happy path, when all the relevant optimizations work out.
The program stack isn't magically special; it isn't going to beat writing a single value to memory, especially if that memory is some constant-sized array already on the stack.
> Debugging recursive programs is pretty neat with most debuggers. No, you don't unwind through anything manually, just generate a backtrace.
No matter what kind of debugger it is you're still going to be looking at a lot of information that contains the values you're looking for instead of just looking at the values directly in an array.
Recursion gets used because it's quick, dirty and clever, not because it's the best way to do it.
You don't seem to understand yet how complex it will be. My guess is ~10x the number of lines of code. It'll be significantly less readable, let alone debuggable.
(btw changing from stack to vector and reserving memory outright for the allocations has virtually no change in performance.)
> The program stack isn't magically special
This is what you're missing. Yes, it is magical because the hardware optimizes for that path. That's why it's faster than what you'd think from first principles.
> it isn't going to beat writing a single value to memory
If you examine the kernel trace from this, you'll find that it has the exact same memory usage and bandwidth (and about twice the IPC). Magical, yes.
You're trying to say that pushing a function call frame is faster than writing a single value to memory and incrementing an index, and when asked to prove it you made something that allocates and deallocates chunks of memory to a linked list where every access takes multiple pointer dereferences.
(Just me suggesting other alternatives right now)