This reads as if it isn't trivial to have an HTTP API for your public API in Erlang/Elixir, which is weird. Sure, there isn't a built-in HTTP API for Erlang processes, but why exactly would you want one? They're not for the public internet; they're an implementation detail of your system. The majority of what they're capable of just isn't relevant to the public internet.
Unfortunately very little is trivial for me. Personally I have found the real value of Erlang to be internal, between trusted nodes of my own physical infrastructure, as a high-level distributed "brain" or control plane for health monitoring, config distribution (env vars, static config files, etc.), smart failover decisions, and so on. Keep the "outside view" (HTTP, SMTP, DNS) all standards-based OSI protocols, internally mapped to daemons that are each individually robust (HAProxy, MySQL Cluster, Apache/Node.js, Postfix, PowerDNS, etc.). Then use an Erlang/Elixir service as a live config and state authority, replicating state across the infrastructure, pushing updates in real time, and having my legacy PHP/Python/JavaScript/etc. code query this config via a simple HTTP/JSON API into the Erlang service. I'm not all the way there yet, but what works is most encouraging.
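Concretely, the "simple HTTP/JSON API into the Erlang service" can be little more than a thin JSON endpoint in front of whatever holds the state. A rough sketch, assuming the Plug and Jason packages plus a Cowboy/Bandit adapter; ConfigStore is a made-up name for the state authority:

    defmodule ConfigAPI do
      use Plug.Router

      plug :match
      plug :dispatch

      # Answer a single key lookup with JSON. ConfigStore is a stand-in for
      # whatever process/ETS table holds the replicated state.
      get "/config/:key" do
        case ConfigStore.fetch(key) do
          {:ok, value} ->
            conn
            |> put_resp_content_type("application/json")
            |> send_resp(200, Jason.encode!(%{key: key, value: value}))

          :error ->
            send_resp(conn, 404, Jason.encode!(%{error: "unknown key"}))
        end
      end

      match _ do
        send_resp(conn, 404, "not found")
      end
    end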
This stands to reason. If you need to bridge different languages together, as in your case, they need to speak a common tongue; REST/GraphQL/gRPC solve this problem in different ways. There is no technical limitation keeping you from serving HTTP traffic from Erlang/Elixir, but in my experience it isn't pleasant. JavaScript or Python are dead simple, until you realise that 64-bit integers are not a thing in JS and need to be handled as strings. Similarly, tuples will give you hell in Python.
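For what it's worth, the usual workaround for the 64-bit issue can live on the BEAM side: serialise the value as a string before it ever reaches JS. A tiny sketch, assuming Jason as the encoder:

    # Encode 64-bit IDs as strings so a JS client never handles them as lossy Numbers.
    id = 9_007_199_254_740_993            # larger than JS Number.MAX_SAFE_INTEGER
    Jason.encode!(%{id: Integer.to_string(id)})
    #=> "{\"id\":\"9007199254740993\"}"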
On the other hand, if you don't need to cross that boundary, the BEAM will very happily talk to itself and let you send messages between processes without having to think about serialisation, or even about whether you're on the same machine. After all, everything is just data, with no pointers or cyclic references. That's more than can be said for most other languages; Python's pickle is pretty close, but you can probably even share Erlang's equivalent of file descriptors across servers (haven't tried, correct me if I'm wrong), which is pretty insane when you think about it.
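For example (a rough sketch; the node name is made up, and both nodes need to share a cookie):

    Node.connect(:"brain@10.0.0.2")

    # Spawn a process on the remote node that waits for a plain Erlang term.
    pid =
      Node.spawn(:"brain@10.0.0.2", fn ->
        receive do
          {:config_update, key, value} -> IO.inspect({key, value}, label: "got update")
        end
      end)

    # No explicit serialisation step: atoms, tuples, maps and pids cross the wire as-is.
    send(pid, {:config_update, :max_conns, 512})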
> I have found the real value of Erlang to be internally between trusted nodes of my own physical infrastructure as a high-level distributed "brain" or control plane
I think this is pretty high praise, considering it's about as old as C and was originally designed for real-time telephone switches.
> There is no technical limitation keeping you from serving HTTP traffic from Erlang/Elixir, but from my own experience it isn't a pleasant experience.
I would be interested to hear what was unpleasant. I've run inets httpd servers (which I did feel maybe exposed too much functionality) and yaws servers, and yaws seems just fine; maybe yaws_api is a bit funky, too. I don't know the status of ACME integration, which I guess could make things unpleasant; when I was using it for work, we used a commercial CA, and my current personal work with it doesn't involve TLS, so I don't need a cert.
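For reference, bringing up the stock inets httpd is only a few lines (shown from Elixir here; port and paths are placeholders):

    # Minimal static-file server from the bundled inets application.
    :inets.start()

    {:ok, _pid} =
      :inets.start(:httpd,
        port: 8080,
        server_name: ~c"demo",
        server_root: ~c"/tmp",
        document_root: ~c"/tmp/www"
      )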
> you can probably even share Erlang's equivalent of file descriptors across servers (haven't tried, correct me if I'm wrong)
Ports are not network transparent: you can't send to a port directly from a different node. You could probably work with a remote port via the rpc server, or some other service you write to proxy ports. You can pass ports over dist, and you can call erlang:node(Port) to find the origin node if you don't know it already, but you'd definitely need to write some sort of proxy if you want to receive from the port.
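To illustrate the proxy idea (an Elixir sketch; assumes the port arrived over dist, e.g. in a message bound to `port`):

    # Commands have to run on the node that owns the port, so hop there.
    origin = node(port)

    # Executes erlang:port_command/2 on the origin node, where the port is valid.
    :erpc.call(origin, :erlang, :port_command, [port, "some bytes"])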
Perhaps I was a little harsh; this was a few years back when I was evaluating Elixir for a client, but I ended up going back to a TS/Node.js stack instead. While the Phoenix documentation is stellar, I found it difficult to find good resources on best practices. I was probably doing something stupid, and ran into internal, difficult-to-understand exceptions being raised on the Erlang side, from Cowboy if I recall. In another case, I was trying to validate API JSON input, and the advice I got was to use Ecto (which I never really grokked) or to pattern match and fail. In JS, libraries like Zod and Valibot are a dream to work with.
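For context, the "pattern match and fail" approach looks roughly like this (a minimal sketch; field names are invented):

    # Validate decoded JSON by matching on the expected shape and failing otherwise,
    # as an alternative to Ecto schemaless changesets.
    defmodule SignupParams do
      def validate(%{"email" => email, "age" => age})
          when is_binary(email) and is_integer(age) and age >= 0 do
        {:ok, %{email: email, age: age}}
      end

      def validate(_params), do: {:error, :invalid_params}
    end

    # SignupParams.validate(%{"email" => "a@b.c", "age" => 30})
    #=> {:ok, %{email: "a@b.c", age: 30}}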
The result was a lot of frustration: I had been thoroughly impressed by Elixir and Phoenix in the past, but I knew I could already achieve the same goal with Node.js in less code, and could justify that choice to a client. It didn't quite feel "there" to pick up and deploy, whereas SvelteKit with tRPC felt very enabling at the time and was easily picked up by others. Perhaps I need another project to try it out again and convince me otherwise. Funnily enough, a year later I replaced a problematic Node.js server with Phoenix + Nerves running on a RPi Zero (ARM), with flawless cross-compilation and deployment.
No, they aren't. You have to use BigInt, which will throw an error if you try to serialise it to JSON or combine it with ordinary numbers. If you happen to need to deserialise a 64-bit integer from JSON, which I sadly had to do, you need a custom parser to construct the BigInt from a raw string directly.
In case you didn't already know of it: CloudI is a cloud framework built with Erlang that provides many of the features you mention - https://cloudi.org/ See the FAQ for an overview.