It took me a long time to figure out what gets lost with Erlang:
a) Ubiquity — everything understands HTTP. Erlang nodes only talk to Erlang (or a compatible runtime)
b) Because there are no middleware standards (REST, GraphQL, OAuth, etc.), you must build or integrate your own abstractions
c) Giving up infrastructure (reverse proxies, load balancers, CDNs): you handle distribution yourself or through OTP design
d) Interoperability with browsers and APIs, requiring bridging via something like Cowboy or gRPC gateway
The -setcookie secret flag in Erlang does not create or use an HTTP cookie, SSL certificate, or HTTPS connection. It sets the Erlang distribution cookie, a shared secret string used for node authentication within the Erlang runtime system (the BEAM VM).
Erlang’s built-in distributed networking subsystem allows nodes to connect to each other if (a minimal session is sketched after the list):
1) They can reach each other over TCP (default port 4369 for the EPMD — Erlang Port Mapper Daemon — plus a dynamically assigned port for the node-to-node connection).
2) They share the same cookie value (here "secret").
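For concreteness, a minimal two-node session might look like this (node and host names here are made up; both nodes are assumed to be able to resolve each other):

    $ erl -sname a -setcookie secret    # terminal 1
    $ erl -sname b -setcookie secret    # terminal 2

    %% from node a's shell: connect and verify the shared cookie
    (a@myhost)1> net_adm:ping(b@myhost).
    pong
    (a@myhost)2> nodes().
    [b@myhost]

If the cookies differ, the ping returns pang and the connection is refused.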
The author's insight, "No HTTP. No REST API", reframes the reality that Erlang moves things down the ISO OSI model: HTTP sits at layer 7, TCP at layer 4. Erlang therefore misses the "benefits" of operating on a higher ISO layer, but gains the powerful advantages of operating at layer 4:
i) True concurrency
ii) Transparent message passing
iii) Fault tolerance
iv) Soft real-time guarantees
v) Persistent cluster connections
Erlang’s design assumes a trusted, closed system of cooperating nodes, not a public Internet of clients. In other words, Erlang doesn’t live below Layer 7 — it IS its own Layer 7.
> a) Ubiquity — everything understands HTTP. Erlang nodes only talk to Erlang (or a compatible runtime)
OTP includes an HTTP client and server, and ERTS includes an HTTP packet mode for sockets. You may prefer third-party HTTP servers (Yaws and Cowboy are popular) or clients, but you have options that ship with Erlang.
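For reference, a minimal sketch of both bundled pieces (the port and paths are arbitrary; error handling omitted):

    %% the bundled server: start inets, then a bare-bones httpd instance
    ok = inets:start(),
    {ok, _Httpd} = inets:start(httpd, [{port, 8080},
                                       {server_name, "demo"},
                                       {server_root, "/tmp"},
                                       {document_root, "/tmp/www"}]),

    %% the bundled client, fetching from it
    {ok, {{_Vsn, 200, _Reason}, _Headers, Body}} =
        httpc:request("http://localhost:8080/index.html").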
[No comment on b; I'm not sure I appreciate the concept of standardized middleware]
> c) Giving up infrastructure (reverse proxies, load balancers, CDNs): you handle distribution yourself or through OTP design
You can put all this stuff between the users and your Erlang cluster. Within your Erlang cluster, I don't think it makes sense to leave the cluster and go back in... If you had a large server process in [language of choice], you probably wouldn't send requests out to a load balancer just to come back into the same process. If you have an n-tier system, you may use a load balancer for requests to the other tier... In Erlang, the simplest analog is that processes serving the same request queue join a pg group, and processes that want to send a request send it to one of the members of the group, as sketched below.
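Something like this sketch (the group name and message shape are invented; pg ships with OTP 23+, and the default scope is assumed to be running, e.g. via pg:start_link()):

    %% each worker joins a named process group
    ok = pg:join(my_workers, self()),

    %% a caller picks any live member and sends its request there
    Members = pg:get_members(my_workers),
    Worker = lists:nth(rand:uniform(length(Members)), Members),
    Worker ! {request, self(), Payload}.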
> d) Interoperability with browsers and APIs, requiring bridging via something like Cowboy or gRPC gateway
If you want to talk HTTP, you need something that talks HTTP; there are at least three reasonable options, plus an HTTP parser mode for sockets so you can do it yourself without as many fiddly bits, as sketched below. I guess I don't understand what you're looking for here.
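To illustrate the parser bit, a rough single-request sketch using the socket packet mode (port arbitrary; header draining and error handling omitted):

    %% let the VM parse the request line for us
    {ok, L} = gen_tcp:listen(8080, [binary, {packet, http_bin}, {active, false}]),
    {ok, S} = gen_tcp:accept(L),
    {ok, {http_request, _Method, _Uri, _Vsn}} = gen_tcp:recv(S, 0),
    %% further recvs would yield {http_header, ...} terms until http_eoh
    ok = inet:setopts(S, [{packet, raw}]),
    ok = gen_tcp:send(S, <<"HTTP/1.1 200 OK\r\ncontent-length: 2\r\n\r\nok">>),
    ok = gen_tcp:close(S).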
This reads as if it isn't trivial to have an HTTP API for your public API in Erlang/Elixir, which is weird. Sure, there isn't an included HTTP API for Erlang processes, but why exactly would you want one? They're not for the public internet, as they're an implementation detail of your system. The majority of what they're capable of just isn't relevant to the public internet.
Unfortunately, very little is trivial for me. Personally I have found the real value of Erlang to be internally between trusted nodes of my own physical infrastructure as a high-level distributed "brain" or control plane for health monitoring, config distribution (env vars, static config files, etc.), smart failover decisions, and so on. Keep the “outside view” (HTTP, SMTP, DNS) all standards-based OSI, internally mapped to daemons, each of which is individually robust (HAProxy, MySQL Cluster, Apache/Node.js, Postfix, PowerDNS, etc.). Then use an Erlang/Elixir service as a live config and state authority, replicating state across the infrastructure, pushing updates in real time, and having my legacy PHP/Python/JavaScript/etc. code query this config via a simple HTTP/JSON API into the Erlang service. I'm not all the way there yet, but what works is most encouraging.
This stands to reason. If you need to bridge different languages together like in your case, they need to speak a common tongue. REST/GraphQL/gRPC solve this problem in different ways. There is no technical limitation keeping you from serving HTTP traffic from Erlang/Elixir, but from my own experience it isn't a pleasant experience. JavaScript or Python are dead simple, until you realise that 64-bit integers are not a thing in JS and need to be handled as strings. Similarly, tuples will give you hell in Python.
On the other hand, if you don't need to cross that boundary, the BEAM will very happily talk to itself and let you send messages between processes without having to even think about serialisation or whether you're even on the same machine. After all, everything is just data with no pointers or cyclic references. That's more than can be said for most other languages, and while Python's pickle is pretty close, you can probably even share Erlang's equivalent of file descriptors across servers (haven't tried, correct me if I'm wrong), which is pretty insane when you think about it.
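For example, assuming a registered process named logger on another node (names invented), sending an arbitrary nested term is just:

    %% any term goes: maps, tuples, pids; no serialisation code in sight
    Report = #{status => degraded, checks => [{disk, ok}, {db, timeout}]},
    {logger, other@host} ! {report, node(), Report}.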
> I have found the real value of Erlang to be internally between trusted nodes of my own physical infrastructure as a high-level distributed "brain" or control plane
I think this is pretty high praise, considering it's about as old as C and was originally designed for real-time telephone switches.
> There is no technical limitation keeping you from serving HTTP traffic from Erlang/Elixir, but from my own experience it isn't a pleasant experience.
I would be interested in what was unpleasant? I've run inets httpd servers (which I did feel maybe exposed too much functionality) and yaws servers, and yaws seems just fine. Maybe yaws_api is a bit funky, too. I don't know the status of ACME integration, which I guess could make things unpleasant; when I was using it for work, we used a commercial CA, and my current personal work with it doesn't involve TLS, so I don't need a cert.
> you can probably even share Erlang's equivalent of file descriptors across servers (haven't tried, correct me if I'm wrong)
Ports are not network transparent. You can't directly send to a port from a different node. You could probably work with a remote Port with the rpc server, or some other service you write to proxy ports. You can pass ports over dist, and you can call erlang:node(Port) to find the origin node if you don't know it already, but you'd definitely need to write some sort of proxy if you want to receive from the port.
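A minimal version of such a proxy might look like this sketch (spawned on erlang:node(Port); the message shapes are invented, and note that port_connect/2 leaves the old owner linked to the port):

    %% become the port's connected process, then forward both directions
    port_proxy(Port, Owner) ->
        true = erlang:port_connect(Port, self()),
        proxy_loop(Port, Owner).

    proxy_loop(Port, Owner) ->
        receive
            {send, Data} ->                      %% remote writer -> port
                true = port_command(Port, Data),
                proxy_loop(Port, Owner);
            {Port, {data, Data}} ->              %% port output -> remote reader
                Owner ! {port_data, Data},
                proxy_loop(Port, Owner)
        end.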
Perhaps I was a little harsh; this was a few years back when I was evaluating Elixir for a client but ended up going back to a TS/Node.js stack instead. While the Phoenix documentation is stellar, I found it difficult to find good resources on best practices. I was probably doing something stupid and ran into internal, difficult-to-understand exceptions being raised on the Erlang side, from Cowboy if I recall. In another case, I was trying to validate API JSON input, and the advice I got was to use Ecto (which I never really grokked) or pattern match and fail. In JS, libraries like Zod and Valibot are a dream to work with.
The result was a lot of frustration: I had been thoroughly impressed by Elixir and Phoenix in the past, but I knew I could achieve the same goal with Node.js in less code and would be able to justify that choice to a client. It didn't quite feel "there" to pick up and deploy, whereas SvelteKit with tRPC felt very enabling at the time and was easily picked up by others. Perhaps I need another project to try it out again and convince me otherwise. Funnily enough, a year later I replaced a problematic Node.js server with Phoenix + Nerves running on a RPi Zero (ARM), with flawless cross-compilation and deployment.
No, they aren't. You have to use BigInt, which will throw an error if you try to serialise it to JSON or combine it with ordinary numbers. If you happen to need to deserialise a 64-bit integer from JSON, which I sadly had to do, you need a custom parser to construct the BigInt from a raw string directly.
In case you didn't already know of it: CloudI is a cloud framework built with Erlang providing many of the features that you mention - https://cloudi.org/ See the FAQ for an overview.
Incidentally, it makes Erlang-built systems robust. I used to run yaws-based web servers (and still do). One laughs at the logs, the feeble adversarial attempts to run this exploit or that. Nothing fits, nothing penetrates, nothing is even remotely relatable.
This is such a conflicting comment for me because I agree with so much but also have so many quibbles. That said, I think the other comments cover most things, so I'll just comment on b: I don't think this is a problem that a language should solve or needs to solve, since there is a new flavor-of-the-week network protocol every few years. Off the top of my head:
- REST (mentioned, but what kind of REST? Rails-style REST? Just plain HTTP resource endpoints?)
- GraphQL (mentioned)
- gRPC
- SOAP
- JSON-RPC
- Thrift
- CGI (ok not really in the same category as the above)
- Some weird adhoc thing someone created at 3am for "efficiency"
There's probably not as much advantage to HTTP as you think.
The simplest RPC protocol is one where you connect to a TCP socket, send a newline-terminated string per request, and get a similar response back. You don't need HTTP for that - you might still want to use JSON. What does HTTP give you in addition?
It's presumably still not something Erlang directly supports.
HTTP/2 offers lots of nice features like stream multiplexing and socket re-use. I guess also encoding standards? Less of an issue in this day and age where everything can work with UTF-8.
Presumably the fact that you can interoperate with other systems not part of BEAM is desirable too.
You also get this from newline-delimited request-response. Multiplexing: send an ID with each request, and return the same ID in the response. Socket re-use: just keep reading until the client closes the socket. Encoding standards: you're the one designing it, so just say it's always UTF-8.
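In Erlang, for what it's worth, the client side of that protocol is only a few lines, since the line packet mode does the framing (port, id, and payload here are invented):

    %% frames on "\n" for us; passive mode for a simple call/response flow
    {ok, Sock} = gen_tcp:connect("localhost", 5555,
                                 [binary, {packet, line}, {active, false}]),
    %% multiplexing: tag the request with an id the server echoes back
    ok = gen_tcp:send(Sock, [integer_to_binary(42), $\s, <<"ping">>, $\n]),
    {ok, Line} = gen_tcp:recv(Sock, 0).   %% e.g. <<"42 pong\n">>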
Lower layers of the protocol stack ossify faster in our minds than in reality.
> a) Ubiquity — everything understands HTTP. Erlang nodes only talk to Erlang (or a compatible runtime)
You're sort of confusing the purpose of Erlang distribution, so I would turn this on its head with the following questions:
Do the Python, Ruby, FORTRAN, Lua, etc, etc, runtimes provide built-in, standardized, network-transparent RPC, [0] or do you have to roll your own using some serialization library and network transport library that might not be The Thing that everyone else who decided to solve the problem uses? Do note that rolling your own thing that is the "obvious" thing to do is still rolling your own thing! Someone else might make a totally reasonable choice that makes their hand-rolled thing incompatible with yours!
I think this confusion influences the rest of your list. Erlang has several HTTP servers written for it [1], but it is not -itself- an HTTP server, nor does it use HTTP for its network-transparent RPC, service discovery, and monitoring protocol.
> The author's insight, "No HTTP. No REST API", reframes the reality...
With respect, you're confused. The author's insight can be reworded as
> I can do RPC with no hassle. With just three symbols, I say 'Send this data over there.' and the runtime handles it entirely automatically, regardless of whether 'over there' is running in the same Erlang VM as the calling code or on a computer on the other side of the globe. With equivalent ease, I can say 'When someone sends me data, call this function with the transmitted data.' and -again- the runtime handles the messy details.
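Concretely, a sketch (pid, names, and message shape invented; the same line works whether RemotePid lives in this VM or on another node):

    %% 'Send this data over there.'
    RemotePid ! {sensor_reading, 21.5},

    %% 'When someone sends me data, call this function with it.'
    receive
        {sensor_reading, Temp} -> handle_temp(Temp)
    end.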
When I was first introduced to Erlang's distribution system, it also took me quite a while to detangle it from what I knew about the Web World. Reading through the 'Distribunomicon' chapter of Learn You Some Erlang [2] helped to knock those erroneous associations out of my head.
[0] ...let's not even talk about the service discovery and process/node monitoring features...
[1] Some of which totally do support middleware gunk.
[2] If you've not read LYSE, do note that it was first published in 2010 (and got an update in 2014 to cover the then-very-new Map datatype). Because the folks who work on Erlang tend to think that keeping old code working is a very virtuous thing, the information in it is still very useful and relevant. However, it's possible that improvements to Erlang have made some of the warnings contained within irrelevant in the intervening years.
It's a Galapagos island full of stuff that evolved independently.
Concurrency is important, but if you are not familiar with concurrency primitives like mutexes, condition variables, barriers, semaphores, etc. and skipped directly to the actor model, that's a bit like caring a lot about concurrency while not caring at the same time.
Functional programming is great, but your CPU has registers and the way it works is closer to the imperative paradigm.