Hacker News

Author here; I have a lot of other posts on my personal blog about this, but: the current trends in VC-backed tech companies are about minimizing risk and following fashion, rather than technical merit or experimentation. Said another way: if an Elixir company dies, it's "damn, shouldn't have picked Elixir!" If a Python company dies, it's "startups are hard," with no investigation into what Python cost you.

I go into it a bit here https://morepablo.com/2023/05/where-have-all-the-hackers-gon... and here https://morepablo.com/2023/06/creatives-industries.html

Elixir has real technical downsides too, but honestly they never come up when someone advocates against it. And this is fine, building companies and engineering culture is a social game at the end of the day.




Could you maybe share your perception of the technical downsides of Elixir?


The libraries out there lack the breadth and maturity of some of the other ecosystems (as a simple example).

Or at least they did for some of my corners of the world.


Sure! I find most responses (like the other one on this comment) talk about the social as it relates to the technical (what I call "atmosphere" in [this blog post][1]). I'll avoid that, since a) I think it's kind of obvious, and b) it's somewhat overblown. If Python has 20 CSV libraries and Elixir has 2, but they work, are you really worse off? I'll instead try to talk about "soil" and "surface" issues: the runtime, and what the language lets you express, assuming engineers already know it. Here we go!

--- Most of the BEAM isn't well-suited for trends in today's immutable architecture world (Docker deploys on something like Kubernetes or ECS). Bootup time on the VM can be long compared to running a Go or OCaml binary, or some Python applications (I find larger Python apps tend to spend a ton of time loading modules). Compile times aren't as fast as Go, so if a fresh deploy requires downloading modules and compile-from-scratch, that'll be longer than other stacks. Now, if you use stateful deploys and hot-code reloading, it's not so bad, but incorporating that involves a bit more risk and specific expertise that most companies don't want to roll into. Basically, the opposite of this article https://ferd.ca/a-pipeline-made-of-airbags.html

Macros are neat, but they can really hurt your compile times, and they don't compose well (e.g., ExConstructor, typed_struct, and Ecto schemas all operate on Elixir structs, but you can't use all three together).
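A hedged sketch of the composition problem (module and fields invented; assumes the typed_struct and ecto packages): both macros want to define the module's struct, so stacking them in one module won't compile.

```elixir
defmodule User do
  use TypedStruct
  use Ecto.Schema

  # typed_struct defines this module's struct...
  typedstruct do
    field :name, String.t()
  end

  # ...but Ecto's schema/2 also defines the struct (and both libraries
  # export a `field` macro), so this module fails to compile with a
  # "defstruct has already been defined"-style error.
  schema "users" do
    field :name, :string
  end
end
```

You end up picking one macro as the owner of the struct and wiring the others up by hand, if the library allows that at all.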

If your problem is CPU-bound, there are much better choices: C++, Rust, C. Python has a million libraries with great FFI, so you'll be fine there too. Ditto if you're memory-bound: there are better languages for that.

This is also not borne from direct experience, but: my understanding is the JVM has a lot more knobs to tune GC. The BEAM GC is IMO amazing, and did the right thing from the beginning to prevent stop-the-world pauses, but if you care about other metrics (good list in this article https://blog.plan99.net/modern-garbage-collection-911ef4f8bd...) you're probably better off with a JVM language.

While the BEAM is great at distribution, "distributed Erlang" (using the VM's built-in distribution rather than the ad-hoc container-and-infra approach most companies take) makes assumptions that you can't break, like the default full-mesh clustering (every node must be connected to every other node). This means you can distribute to some number of nodes, but it's hard to use Distributed Erlang for hundreds or thousands of nodes.
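A sketch of the default behavior, assuming three plain nodes started with matching cookies (node names invented):

```elixir
# Start three nodes in separate shells:
#   iex --name a@127.0.0.1 --cookie demo
#   iex --name b@127.0.0.1 --cookie demo
#   iex --name c@127.0.0.1 --cookie demo
#
# After b has connected to c, run this on a, connecting only to b:
Node.connect(:"b@127.0.0.1")

# Transitive connections kick in: a is now meshed with c as well,
# so Node.list() shows both b and c.
Node.list()

# Nodes started with `--hidden` (or with connect_all disabled) opt out
# of the mesh, but then :global registration and friends may misbehave.
```

That transitive full mesh is why naive Distributed Erlang clusters scale to tens of nodes more comfortably than thousands: the connection count grows quadratically with cluster size.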

Deployment can be mixed, depending on what you want. BEAM releases are nice, but they lack some of the niceness of a single static binary. Libraries can work around this (like Burrito https://github.com/burrito-elixir/burrito).

If you like static types, Dialyzer is the worst of the "bolted-on" type checkers: mypy/pyright/pyre, Sorbet, and TypeScript are all way better, since Dialyzer only does "success typing" and gives far worse error messages.
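To make "success typing" concrete, a small invented example: Dialyzer warns only when it can prove a call *must* fail, not when it merely might.

```elixir
defmodule Demo do
  # Returns an atom or a binary depending on input.
  def label(true), do: :yes
  def label(false), do: "no"

  def shout(flag) do
    # String.upcase/1 takes a binary. Since label/1 *can* return a
    # binary, success typing is satisfied and Dialyzer typically stays
    # silent -- yet shout(true) crashes at runtime, because
    # String.upcase(:yes) matches no clause.
    String.upcase(label(flag))
  end
end
```

A checker like TypeScript or mypy would flag the atom half of the union at the call site instead of waiting for the crash.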

   [1]: https://morepablo.com/2023/05/where-have-all-the-hackers-gone.html


> --- Most of the BEAM isn't well-suited for trends in today's immutable architecture world (Docker deploys on something like Kubernetes or ECS). Bootup time on the VM can be long compared to running a Go or OCaml binary, or some Python applications (I find larger Python apps tend to spend a ton of time loading modules). Compile times aren't as fast as Go, so if a fresh deploy requires downloading modules and compile-from-scratch, that'll be longer than other stacks. Now, if you use stateful deploys and hot-code reloading, it's not so bad, but incorporating that involves a bit more risk and specific expertise that most companies don't want to roll into. Basically, the opposite of this article

I don't necessarily disagree with your reading of the trends, but if following the trends means losing the best tool for the job, maybe it's not the tool that's in the wrong. Re: deploy time, I don't think there's a need for deployed servers to fetch and compile modules --- you wouldn't do that for a Java or C server, you'd build an archive once and deploy the archive. I guess if you're talking about the speed of the build pipeline, I'd think the pieces in the build could be split into meaningful chunks so builds could be in parallel and/or only on the parts where the dependencies changed. I imagine BEAM startup itself isn't particularly fast at the moment, because I haven't seen a lot of changelogs about speeding it up, but I'm not sure there's a huge amount of perceived need there? If you're really going to abandon all hotloading, you could also abandon all dynamic loading and preload your whole app (but that might complicate your build pipeline).

> This is also not borne from direct experience, but: my understanding is the JVM has a lot more knobs to tune GC. The BEAM GC is IMO amazing, and did the right thing from the beginning to prevent stop-the-world pauses, but if you care about other metrics (good list in this article https://blog.plan99.net/modern-garbage-collection-911ef4f8bd...) you're probably better off with a JVM language.

The BEAM GC is really so different from a JVM GC that it's hard to compare them. You can still measure and compare throughput, I suppose, but the JVM has to deal with all sorts of data-structure complexity that arises when you don't have the restrictions of a language with immutable data. You can't make a reference loop in a process heap, so the BEAM GC doesn't have to find them; only a process can access its own heap, and the BEAM GC runs in the process context, so there's no concurrent access to guard against (on the other hand, a process stops to GC, so it's locally 100% stop-the-world; it's just that the world is very small). The specifics are so different that it's hard to say which is better; the worldviews just diverge. But yes, there are significantly fewer knobs, because there are far fewer things to change.
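The "very small world" point can be seen directly: GC can be forced on one process while everything else keeps running (a REPL-level sketch).

```elixir
# Each process has a private heap, and GC runs in that process's
# context, so only that one process pauses.
pid =
  spawn(fn ->
    _garbage = Enum.map(1..10_000, &Integer.to_string/1)

    receive do
      :stop -> :ok
    end
  end)

{:heap_size, before} = Process.info(pid, :heap_size)
:erlang.garbage_collect(pid)            # collect just this process
{:heap_size, after_gc} = Process.info(pid, :heap_size)
# after_gc is usually much smaller than before; no other process paused.
send(pid, :stop)
```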

> assumptions that you can't break, like the default full-mesh clustering (every node must be connected to every other node). This means you can distribute to some number of nodes, but it's hard to use Distributed Erlang for hundreds or thousands of nodes.

That's certainly the default, and some things might assume it, but you absolutely can change the behavior, especially with the pluggable epmd and dist protocol support. You will certainly break assumptions, though, and some things won't work: I'd expect the global module's locks not to function properly, and if you forward messages between nodes, things can fail in new ways.

In a fully meshed cluster, if a process on node A sends a gen_server:call to a process on node B, that process can forward the message to a gen_server on node C, and when the process on node C replies, it sends directly to node A. If your cluster is set up so that nodes A and C can't communicate directly, the reply is lost unless the process on node B that forwarded the request also arranges to forward the reply back to node A.

If you do that sort of forwarding, you also lose the process monitoring that gen_server:call typically does: normally, if a gen_server crashes, or the connection to its node is ended, calls fail 'immediately', at the cost of extra traffic to set up the monitors. (At WhatsApp we didn't use that; the cost of setting up and tearing down monitors was too great.)
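A sketch of the manual relay described above, in Elixir terms (module names and message shapes are invented): node B's server holds onto the original `from` so C's reply can be routed back through B instead of going C-to-A directly.

```elixir
defmodule Relay do
  use GenServer

  # Runs on node B. A GenServer.call arrives from node A, but A and C
  # aren't connected, so we can't let C reply directly.
  def handle_call(request, from, %{node_c: node_c} = state) do
    GenServer.cast({Worker, node_c}, {:forwarded, request, from})
    {:noreply, state}   # no reply yet; the caller on A keeps waiting
  end

  # C casts its answer back to B, and B completes the original call.
  def handle_cast({:reply_back, from, reply}, state) do
    GenServer.reply(from, reply)
    {:noreply, state}
  end
end
```

Note what's lost: the caller's monitor points at the Relay process on B, so if the real worker on C crashes, the caller doesn't find out until the call times out.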



