
OMG, YES! The JIT is here... and holy crap, if you haven't tried it yet... it's awesome (everything feels snappier). I'm particularly stoked for the receive optimizations and process aliases.

BEAM just gets better and better. It's a good time to be an Erlanger/Elixirist...
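For anyone who hasn't played with process aliases yet (new in OTP 24): an alias is a reference that doubles as a send target you can deactivate, so a reply that arrives after you've given up waiting gets dropped instead of rotting in your mailbox. A minimal sketch in Elixir against the raw :erlang API (the toy server below is my own illustration, not anything from OTP):

```elixir
# Create an alias: a reference that routes messages to this process.
alias_ref = :erlang.alias()

server =
  spawn(fn ->
    receive do
      {:request, reply_to} -> :erlang.send(reply_to, {:reply, :pong})
    end
  end)

send(server, {:request, alias_ref})

receive do
  {:reply, msg} -> msg
after
  5_000 ->
    # Once unaliased, a late reply is silently dropped rather than
    # leaking into this process's mailbox.
    :erlang.unalias(alias_ref)
    :timeout
end
```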



I just ran programs from the Computer Language Benchmarks Game with OTP 24. Every program ran considerably faster, with almost a 30% speedup.


I was hoping our `mix test` would be faster, but it doesn't appear to be. It would be nice if this got some attention from the Elixir team.


`mix test` _is_ fast for me... I've only seen slow tests when folks are misusing timeouts (generally speaking); what problem are you seeing?
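The timeout misuse I mean usually looks like this (a generic illustration, not anyone's actual suite): a fixed `Process.sleep` that always pays its full cost, versus `assert_receive`, whose timeout is only an upper bound:

```elixir
defmodule TimeoutStyleTest do
  use ExUnit.Case, async: true

  test "slow: always waits the full second" do
    send(self(), :done)
    Process.sleep(1_000)
    assert_received :done
  end

  test "fast: returns as soon as the message arrives" do
    send(self(), :done)
    # 1_000ms is a ceiling, not a wait; this passes almost instantly.
    assert_receive :done, 1_000
  end
end
```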


The delay is all in compiling .exs files. It uses Kernel.ParallelCompiler to compile every .exs file, so it's very CPU/core dependent. On my weaker laptop, `mix test` takes nearly 10 seconds just to start.

I've looked into this in more detail in the past. We've had success writing our own test runner and avoiding .exs files, but re-implementing things like running tests by line number, or integrating with external tools (like excoveralls), has been a dealbreaker.
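For the curious, the startup cost is roughly equivalent to this (a hedged sketch of my own; the glob pattern and timing wrapper are mine, not mix's actual internals):

```elixir
# Every test file is an .exs script, compiled in memory before any test runs.
test_files = Path.wildcard("test/**/*_test.exs")

{time_us, result} =
  :timer.tc(fn -> Kernel.ParallelCompiler.compile(test_files) end)

case result do
  {:ok, modules, _warnings} ->
    IO.puts("compiled #{length(modules)} modules in #{div(time_us, 1000)}ms")

  {:error, errors, _warnings} ->
    IO.inspect(errors, label: "compile errors")
end
```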


FWIW, I recently pushed a commit to master that made loading of Elixir's test suite 33% faster (from 15s to 10s): https://github.com/elixir-lang/elixir/commit/2eb03e4a314c0e6...

Unfortunately, it is a bit too large (and too late) for v1.12, but if loading times have been problematic for you, it would be awesome if you could try master out and let us know in the issue tracker (or on the commit) if you see any improvements.


Can we expect it for 1.13?


Yes.


What made your test runner faster? We would be very interested in porting those optimizations to ex_unit.


Are you sure that you're not including application startup in that 10 second measurement? I've seen code bases where Horde or some other clustering was enabled in the test env, causing a ~5 second delay on application startup.
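One quick way to separate app boot from compile time (a sketch; :my_app is a placeholder for your OTP application name):

```elixir
# e.g. inside `MIX_ENV=test iex -S mix run --no-start`
{time_us, _} = :timer.tc(fn -> Application.ensure_all_started(:my_app) end)
IO.puts("application startup took #{div(time_us, 1000)}ms")
```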


Really? I'm surprised that is your test bottleneck. I've mostly seen it be actually slow tests and things that can't be asynced.


It's noteworthy that the JIT doesn't (yet?) do runtime optimization or specialization, so gains should be moderate: the very low end of double-digit percentages.

Not comparable to going from a JavaScript interpreter to V8.

But it's a great starting point.
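(If you want to check whether your build is actually running the JIT, OTP 24 added an emulator-flavor query; it returns :jit for BeamAsm and :emu for the interpreter:)

```elixir
# From Elixir; in Erlang it's erlang:system_info(emu_flavor).
:erlang.system_info(:emu_flavor)
#=> :jit
```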


From my recollection (I interviewed the OTP team on this stuff once), they don't really intend to go there either. They are a comparatively small team, and maintaining that kind of runtime optimization would likely be unwieldy. (https://devchat.tv/elixir-mix/emx-114-just-in-time-for-otp-2...)

You're entirely correct that it won't be anything like V8, but it brings some VM code to native performance, beating NIFs in some cases.

I think some RabbitMQ tests reported a 30% increase in throughput, which is pretty wild.


Seems like WhatsApp / Facebook would benefit by contributing some resources towards this...

https://twitter.com/garazdawi/status/1385263924803735556


Arguably, yes :)



