I really like Julia as a language but I have struggled to adopt it and be productive in it. Part of it is because of the JIT runtime and a sub-par LSP (at least when I last tried).
To those who regularly write Julia code, what is your workflow? The whole thing with Revise.jl did not suit me, honestly. I have enjoyed programming in Rust orders of magnitude more because there's no runtime and you get AOT compilation. My intention is not to write scripts, but high-performance numerical/scientific code, and with Julia's JIT-based design, rapid iteration (to me at least) feels slower than in Rust (!).
The boring answer is that I don't use huge dependencies that take minutes to compile, and I don't lean on the LSP; I tend to put more effort into reading the code.
In my experience you really gotta work with the tools the language gives you. Julia gives you Revise, so it’s a bit of a handicap not using it. Maybe analogous to writing Rust without an LSP.
I get that leaning on the LSP can become a habit, and also that the Julia LSP is quite poor, but I find it wild that rapid iteration for you is faster in Rust. I write Rust as well and can’t imagine how that would be the case.
A lot of people have focussed on the LSP in their replies, when it was only one of the problems I mentioned.
rust-analyzer is a great LSP, and paired with clippy it can teach you the language itself. Also, writing numerical code is extremely easy in Rust: I can write code and just `cargo run` to see the output. Julia, on the other hand, pushes a REPL-based workflow, which has never made sense to me. A REPL-based workflow makes sense when you just want to do some script stuff. But when writing code that will run for a long duration on an HPC cluster? I don't get it. Part of the problem is that I'm not "holding it correctly", but again, the out-of-the-box experience isn't good. You define a struct and later add or remove a field from it, and often you'll get an error because Revise.jl didn't recompile things. It was a sub-par experience, and I was hoping people would share their dev workflows in more detail.
And yet Julia is used for large-scale simulations on giant HPC machines and Rust is not.
Recent versions of Revise let you redefine structs in the REPL.
You are not forced to use the REPL, ever. It’s a fantastic convenience, however.
My dev workflow is to write my code in Neovim, sometimes with a REPL attached to the editor to try out code snippets. I don’t need or use LSPs. I do enjoy the Aerial plugin, which pops up an outline of my code for easy navigation.
Well, my workflow uses Revise.jl. I develop either in Jupyter notebooks or in the REPL, prototyping code there and then moving functions to files when they're ready. In that context, rapid iteration is fairly fast.
Nowadays I often use Claude Code, working with a Julia REPL in a tmux or zellij session via send-keys. I'll have it prototype and try to optimize an algorithm there, then create a notebook to "present its results", then I'll take the bits I like and add them to the production codebase.
How do you develop a program that will run for a long duration on HPCs? How do you quickly modify struct definitions? How do you define imports (the `using` vs `include` syntax is so confusing!)?
A REPL-based workflow doesn't make sense to me for anything other than scripting work.
Re: REPL use, you just use it to run code and look at results. E.g. for TDD: you modify your code files normally in the IDE, the changes get picked up by Revise, and then you re-run the tests in the REPL.
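Concretely, that loop looks something like the session below (the package and file names are just placeholders):

```julia
# Start one long-lived session:   $ julia --project
# Then, in the REPL (load Revise before your own code):
using Revise
using Test
using MyPackage            # hypothetical package under development

# Edit src/*.jl in your editor as usual; Revise picks the changes up.
# Re-run the tests in the same session, no restart needed:
include("test/runtests.jl")
```

The point is that the session stays warm: compilation cost is paid once, and each edit-test cycle after that is fast.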
For long-running jobs, I basically follow the same process as in any other language: make the functions I want to run, test them locally on a small dataset that runs relatively quickly, then launch them on the remote machines with the full data.
Revise.jl has struct redefinition now, but before that I would just use NamedTuples while iterating, then make a struct when I was ready to move something to production.
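For what it's worth, a minimal sketch of that NamedTuple-first pattern (toy names, not from the thread):

```julia
# While iterating: a NamedTuple needs no type definition, so "changing
# the struct" is just building a tuple with a different shape.
p = (mass = 1.0, pos = [0.0, 0.0], vel = [1.0, 0.5])

# Functions written against field names work on the tuple as-is;
# merge builds a new tuple with one field replaced.
step(p, dt) = merge(p, (; pos = p.pos .+ dt .* p.vel))

p2 = step(p, 0.1)
@assert p2.pos ≈ [0.1, 0.05]

# Once the shape settles, freeze it into a struct for production:
struct Particle
    mass::Float64
    pos::Vector{Float64}
    vel::Vector{Float64}
end
```

Since `step` only accesses fields by name, it keeps working unchanged once you swap the tuple for the struct.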
`using` is for importing modules, `include` is for specific files. At work, we currently have a monorepo, with one top-level OurProject.jl file that uses `using` to import external packages, and `include` for all the internal files.
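A sketch of that kind of layout (file and package names hypothetical):

```julia
# src/OurProject.jl -- the one place that wires everything together
module OurProject

using LinearAlgebra     # external package: brings its exported names into scope
using Statistics

include("meshes.jl")    # internal file: textually pastes its code into this module
include("solvers.jl")   # order matters: solvers.jl can use names from meshes.jl

end # module
```

Each internal file is then just plain code (no `module` wrapper of its own), so everything lives in one namespace.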
> How do you develop a program which will run for longer duration on HPCs.
The main strategy is to have a way of parameterizing the program to bring the runtime down to seconds or minutes on a laptop. E.g. for PDEs, you may be running the HPC version on a giant mesh, but you can run the same algorithm on your local computer on a much coarser mesh.
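As a toy illustration (a 1-D explicit heat solver standing in for the real PDE code), the only thing that changes between laptop and cluster is the resolution parameter:

```julia
# Toy 1-D heat equation solver; `n` is the mesh resolution.
function solve_heat(n::Int; steps::Int = 100)
    u = zeros(n)
    u[n ÷ 2] = 1.0                 # point source in the middle
    unew = copy(u)
    for _ in 1:steps
        for i in 2:n-1             # explicit diffusion step, fixed boundaries
            unew[i] = u[i] + 0.25 * (u[i-1] - 2u[i] + u[i+1])
        end
        u, unew = unew, u          # swap buffers
    end
    return u
end

u = solve_heat(64)                 # coarse mesh: finishes instantly on a laptop
# solve_heat(10_000_000)           # the HPC-sized run takes the same code path
@assert maximum(u) < 1.0           # the spike has diffused outward
@assert sum(u) > 0.99              # total "heat" is (nearly) conserved
```

Because both runs go through the identical code path, anything you debug or profile at `n = 64` carries over to the big run.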
> How do you quickly modify struct definitions
Thankfully, as of 1.12 this has been solved: you can redefine structs while keeping the REPL up.
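For example, on 1.12 a session like this (hypothetical names) now just works, where older versions would error on the second definition:

```julia
julia> struct Params
           dt::Float64
       end

julia> struct Params        # adding a field: on 1.12 this replaces the old type
           dt::Float64
           tol::Float64
       end

julia> Params(0.01, 1e-8)
Params(0.01, 1e-8)
```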
> how do you define imports (using vs include syntax is so confusing!)
Yeah, Julia messed this up. The rough rule: `include("file.jl")` textually pastes a file's code into the current module (like C's `#include`), while `using SomePackage` loads a package/module and brings its exported names into scope.
yup the LSP is bad. there is a new LSP being written based on JET.jl, a static code analyzer; it should be faster than the old LSP, which kind of works by loading all the modules into a julia instance and querying it for symbols and docs (im not 100% sure but i think that's how it works)
Exactly! The new LSP is getting ready (https://github.com/aviatesk/JETLS.jl/), with one of the compiler devs working hard on it. I tried it with VSCode, Zed and Helix, and it's more than fine already.
I hope Julia developer tools will one day match the best of what other programming languages have to offer.
Just an FYI...Claude is actually really good at building LSP servers [1].
If you want a better Julia LSP, you might just be able to get Claude or Codex to build one for you. I've been impressed with the TLA+ bindings it generated.
What's the problem with the JIT runtime? Why is rapid iteration slower with JIT? Just-in-time compilation isn't inherently slower, and it's normally faster than AOT for dynamic languages, and even for static languages with some dynamic features like dynamic dispatch.
Maybe not what they meant, but Rust sometimes makes it tempting to just copy things rather than fighting the borrow checker. Whereas in C++ you're free to just pass pointers around and not worry about it until / unless your code crashes or gets exploited.
Speaking authoritatively from my position as an incompetent C++ / Rust dev.
I see. Fortunately, I'm aware of that, and I don't use clone much (unless I intend to). The borrow checker is usually not a problem when writing scientific/HPC code.
Because passing pointers isn't as ergonomic in Rust, I do things in an arena-based way (for example when setting up quadtrees or octrees). Is that part of the issue when it comes to memory bandwidth?
Stable Rust doesn't have a local allocator construct yet: you can only change the global allocator, or use a separate crate to provide a local equivalent.
Right. I have seen Zig, where one needs to specify allocators as well. I'm sorry, I'm not well-versed enough to know how that makes things better for HPC, though.
For now my plan is to write code in a fairly similar style to what one would write in C++/Fortran, through MPI bindings in Rust.
if you're using thread-level parallelism, there is always a benefit to having a per-thread allocator so that you don't have to take global locks to get memory; those locks become highly contended.
if you take that one step further and only use those objects on a single core, your default model becomes lock-free, non-shared objects. at large scale that becomes kind of mandatory; some large shared-memory machines even forgo cache coherency because you really can't do it effectively at that scale anyway.
but all of this is highly platform dependent, and I wouldn't get too wrapped up in it to begin with. I would encourage you to worry first about expressing your domain semantics, with the understanding that some refactoring for performance will likely be necessary.
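The comments above are about Rust/C++ allocators, but the same per-thread principle shows up in Julia too: preallocate one scratch buffer per thread so the hot loop never touches a shared allocator. A simplistic sketch (using the `:static` scheduler so `threadid()` is stable for each iteration):

```julia
using Base.Threads

# One private scratch buffer per thread: the hot loop allocates nothing shared.
function rowsums_threaded(A::Matrix{Float64})
    out = zeros(size(A, 1))
    scratch = [zeros(size(A, 2)) for _ in 1:nthreads()]  # per-thread buffers
    @threads :static for i in 1:size(A, 1)   # :static pins iterations to threads
        buf = scratch[threadid()]            # this thread's private buffer
        buf .= @view A[i, :]                 # in-place copy, no new allocation
        out[i] = sum(buf)
    end
    return out
end

A = rand(100, 8)
@assert rowsums_threaded(A) ≈ vec(sum(A; dims = 2))
```

Each thread only ever touches its own buffer, so there is no locking and no false sharing of freshly allocated memory in the loop body.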
if you have the patience, personally and within the project, it can be a lot of fun to really get in there and think about the necessary dependencies and how they can be expressed on the hardware. there are a lot of cool tricks, for example trading off redundant computation to reduce the frequency of communication.
There's a lot of useful advice here that'll surely come in handy later. For now, yeah, I'm just going to try to make things work. So far I have mostly written intra-node code, for which rayon has been adequate. I haven't gotten around to testing the ergonomics of rsmpi, but it feels like quite an exciting prospect for sure.
Why and how do you think it applies to broader domains?
Children learning in schools should not become product managers. If they are, what exactly is the "product" that they are "managing"? Reducing everything to, and looking at everything from, a corporate viewpoint is bizarre.
I'm not saying this should be every single domain. This isn't about products or management; instead I would frame it like this: I notice that multiple cases where we are worried about the impact of AI are basically just about the replacement of certain activities that some humans already aren't doing in today's society. If we are worried we will be less good at doing job X once we don't do job X anymore, why are we not worried about people who never did job X in the first place? If we are worried about people not doing jobs anymore, why are we not worried for the human development of people wealthy enough not to work anymore for the rest of their days?

I would not assume someone who won the lottery is going to have their life become uninteresting or see some cognitive decline. It could probably happen, but you can also see a path where the person just chooses to do the activities they always wanted to do, where they keep learning and exploring without the burden of usual life constraints. People still play chess, even though machines have beaten us for decades, just because they enjoy it.
Regarding education, I think AI is a huge revolution waiting to happen. Usual courses have become boring? Have future super-powerful AI generate highly personalized per-student programs, create bespoke video games where succeeding is only possible once the student has validated all the notions you wanted them to validate, etc.
>If we are worried we will be less good at doing job X once we don't do job X anymore, why are we not worried about people who never did job X in the first place? If we are worried about people not doing jobs anymore, why are we not worried for the human development of people wealthy enough not to work anymore for the rest of their days?
None of this is equivalent to the topic of discussion. The point is that even in a world of division of labour and shared expertise, there is no atrophy in the general populace, because everyone is trying to become an expert in something. The whole point is that the brain is being put to use on something: if not on X, then on Y. If none of the letters of the alphabet are available, what do you put your brain to use on?
>I would not assume someone who won the lottery is going to have their life become uninteresting or see some cognitive decline. It could probably happen, but you can also see a path where the person just chooses to do the activities they always wanted to do, where they keep learning and exploring without the burden of usual life constraints. People still play chess, even though machines have beaten us for decades, just because they enjoy it.
Again, please pay attention to the main idea of the linked article. Most cognitive development happens in the early formative years. Yes, learning itself never stops, but the primary period for it is during perhaps the first 25 years of someone's life. You NEED to make mistakes and learn from them during this period. If you are offloading the work your brain was supposed to do here, it's extremely worrying.
>Regarding education, I think AI is a huge revolution waiting to happen. Usual courses have become boring? Have future super-powerful AI generate highly personalized per-student programs, create bespoke video games where succeeding is only possible once the student has validated all the notions you wanted them to validate, etc.
I think there is some truth to that, but you need to regulate how much AI can assist a student. It can be a patient teacher, but it shouldn't replace their cognitive abilities. That is the whole point.
And they're as deterministic as the underlying thing they're abstracting... which is kinda what makes an abstraction an abstraction.
I get that people love saying LLMs are just compilers from human language to $OUTPUT_FORMAT but... they simply are not except in a stretchy metaphorical sense.
That's only true if you reduce the definition of "compiler" to a narrow `f = In -> Out`. But that is _not_ a compiler; we have a word for that: a function. And in an LLM's case, an impure one.
Reminded me of the anecdote mentioned in the classic "Real Programmers Don't Use Pascal":
> Some of the most awesome Real Programmers of all work at the Jet Propulsion Laboratory in California. Many of them know the entire operating system of the Pioneer and Voyager spacecraft by heart. With a combination of large ground-based FORTRAN programs and small spacecraft-based assembly language programs, they are able to do incredible feats of navigation and improvisation -- hitting ten-kilometer wide windows at Saturn after six years in space, repairing or bypassing damaged sensor platforms, radios, and batteries. Allegedly, one Real Programmer managed to tuck a pattern-matching program into a few hundred bytes of unused memory in a Voyager spacecraft that searched for, located, and photographed a new moon of Jupiter.
> The current plan for the Galileo spacecraft is to use a gravity assist trajectory past Mars on the way to Jupiter. This trajectory passes within 80 +/-3 kilometers of the surface of Mars. Nobody is going to trust a PASCAL program (or a PASCAL programmer) for navigation to these tolerances.
The article is satirical, so I am not sure how true this is, but over their history the maintainers of these probes have done truly remarkable things like this.
> "Many of them know the entire operating system of the Pioneer and Voyager spacecraft by heart"
Is that actually true? During the Voyager memory problems of 2023, I seem to recall there were significant issues uploading entirely new programs to it because there was so little documentation of the internal workings of the hardware and software, and creating a virtual machine to actually test on was a significant achievement.