Caddy uses more RAM and has no inherent benefit when running a project in production professionally. Caddy is easier for self-hosting because it automatically handles HTTPS, where Nginx does not.
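For what it's worth, the difference shows in config size. A complete Caddyfile that serves a site over HTTPS, with the certificate obtained and renewed automatically (a minimal sketch; the domain and upstream port are placeholders):

    example.com {
        reverse_proxy localhost:8080
    }

The equivalent nginx setup needs explicit ssl_certificate directives plus a separate certbot (or similar) integration to get and renew the cert.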
I'm a pretty hardcore nginx fan with a decent chunk of rather low-level professional experience with it in complex setups. The reason I'd still pick nginx over Caddy for prod is that I know really well how the broader tool set works together, e.g. keepalived and whatnot.
Caddy kind of feels like a cute toy in comparison with nginx, but it's not: by now it's been seriously battle-tested and has proven itself quite resource-efficient and resilient.
You might use it to implement redundancy in the load-balancer layer of your system. Perhaps your firewall round-robins incoming connections between two IPs where nginx proxies to share load between two mirrored clusters; those IPs are virtual and handled by keepalived, which will shuffle in a backup virtual server if the one currently serving becomes unhealthy or needs to be switched out due to a config rollout or something.
It's a really neat way to be able to just throw more virtual servers at availability, redundancy and load-balancing problems. Under the hood it speaks VRRP, and the node taking over sends gratuitous ARP so the network learns where the virtual IP now lives.
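A minimal keepalived sketch of that setup (the interface name, VIP and priority here are assumptions; the backup box runs the same config with state BACKUP and a lower priority):

    # /etc/keepalived/keepalived.conf on the primary nginx box
    vrrp_script chk_nginx {
        script "/usr/bin/pgrep nginx"   # healthy while an nginx process runs
        interval 2
    }

    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 150
        virtual_ipaddress {
            203.0.113.10/24             # the virtual IP the firewall targets
        }
        track_script {
            chk_nginx
        }
    }

If chk_nginx fails or the box dies, the backup claims the VIP within a couple of seconds.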
Edit: I've applied it with ProxySQL in one case. It was an application on a trajectory from a simple rig with one virtual web server and one virtual database server to a highly available and resilient system. When I left we had a master-master cluster with ProxySQL in front, with three ProxySQL machines under keepalived, so if one went out for some reason there were two more in the stack to fill in. When you aren't sure what kind of peak load you're going to handle, it's nice to know that when the alarm comes you have one fresh machine buying you time while you figure out what the third needs before it's shuffled into service.
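In that kind of stack the keepalived health check just targets ProxySQL instead of nginx, e.g. a TCP check against its client-facing port (a sketch; 6033 is ProxySQL's default MySQL-facing port, the rest of the config is as above):

    vrrp_script chk_proxysql {
        script "/usr/bin/nc -z 127.0.0.1 6033"   # is ProxySQL accepting connections?
        interval 2
        fall 3                                   # mark down after 3 failed checks
    }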
I think what OP really meant is that in his/her mental model, the answers are already known and battle-tested for numerous cases [by using Nginx]: making HA with keepalived? Check. Buffering logging to save IOPS? Check. Implementing rate limits and custom logic? Check.
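Those last two really are one-liners in nginx, which is exactly the point (the zone size, rate and paths below are illustrative assumptions):

    http {
        # write log entries in 64k batches (or every 5s), not one write per request
        access_log /var/log/nginx/access.log combined buffer=64k flush=5s;

        # cap each client IP at ~10 req/s with a burst allowance of 20
        limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

        server {
            listen 80;
            location / {
                limit_req zone=perip burst=20 nodelay;
                proxy_pass http://127.0.0.1:8080;
            }
        }
    }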
Keepalived is not really bound to Nginx by any means and should work perfectly fine with Caddy too.
Caddy “uses” more RAM because Go is a garbage-collected language. It can free tons of it, but that costs CPU cycles. It generally won’t spend too much time freeing memory until the system is under pressure.
Because it’s Go, you can tune garbage collection to be super aggressive, at the expense of speed.
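Concretely, that tuning happens through the Go runtime's environment variables, no rebuild needed (a sketch; the values and config path are arbitrary):

    # GOGC=20 triggers a collection when the heap grows 20% past the live set
    # (the default is 100); GOMEMLIMIT (Go 1.19+) sets a soft memory cap that
    # makes the collector run harder as usage approaches it.
    GOGC=20 GOMEMLIMIT=512MiB caddy run --config /etc/caddy/Caddyfile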
Citing that particular blog post isn't making the point you think it makes. To quote:
> The most striking piece of new knowledge for me was learning about failure modes. Nginx will fail by refusing or dropping connections, Caddy will fail by slowing everything down.
Do you want your clients failing to load your website at all? Is this the best approach to serving users?
> Do you want your clients failing to load your website at all? Is this the best approach to serving users?
There are good reasons for picking either. Large services under sudden load sometimes implement queueing, which is just failing but stylish.
For my blog posts, I'd rather throw an error than have people wait for thirty seconds. The contents aren't that important and the end result will probably look bad because of missing CSS anyway.
For API services, I'd want things to slow down rather than fail, unless failure is explicitly documented in the API and can be handled somewhat gracefully.
Related reading:
https://blog.tjll.net/reverse-proxy-hot-dog-eating-contest-c...