I don't understand why all these free open source projects are using nginx when it hides so many great features behind the premium option. HAProxy also has a Lua module, and you even get monitoring abilities for free, unlike nginx.
edit: a project like Kong could easily be done with HAProxy as far as I know, and you can actually monitor your server. https://getkong.org
I have been using nginx since forever for all my personal projects. I can't say I have missed any feature that's not in the open source edition. In fact, I didn't even know they had a premium edition for a long time.
What killer feature do normal projects require that is not in the open source edition?
I wrote something out of our need for better monitoring/metrics for Nginx here: https://luameter.com/
And quite a few people are using it for per-vhost metrics use case just like you said. Also if you prefer something simpler and open source I also wrote this https://github.com/lebinh/ngxtop
Because Nginx "just works" and I don't think it's a bad thing that they are trying to make a living from all their hard work. Especially when the premium features are mostly stuff that businesses need.
I am more than willing to pay for NGINX Plus. It supports the authors and company that provides this amazing software. Expecting everything for free and open source is what I despise about the software industry. This anti-capitalist mentality causes great companies with amazing products to go out of business (see RethinkDB) and then everybody both entrepreneurs and developers lose.
Slava > And our users clearly thought of us as an open-source developer tools company, because that’s what we really were. Which turned out to be very unfortunate, because the open-source developer tools market is one of the worst markets one could possibly end up in. Thousands of people used RethinkDB, often in business contexts, but most were willing to pay less for the lifetime of usage than the price of a single Starbucks coffee (which is to say, they weren’t willing to pay anything at all).
I wish there was a paid version without the heavy-duty support cost built into it. The pricing scale-up is massive: a single license in the lowest paid tier costs $2,500.
I understand it's because of the support cost, but it makes adoption very hard. You cannot go from $0 to $10,000 overnight. And that's the problem.
If your business cannot afford (let's say) $10,000/year for Nginx Plus then your business is probably still in the phase where you replace that financial cost with labour and you roll your own similar setup using other tools, and you debug and fix the problems yourselves.
And if you're operating at the scale where this doesn't make sense, and you still can't afford $10k/year, then maybe you need VC money? ;-)
> If your business cannot afford (let's say) $10,000/year for Nginx Plus
At my previous business we wanted to use nginx plus, so we asked them for a quote, saying we had between 10 and 50 servers running nginx per day (autoscaled depending on load).
We got back a short email saying it's $2,500 per server, so $125,000/year.
Considering our total hosting costs at this point were around $30-40k, that is absolutely absurd.
>your business is probably still in the phase where you replace that financial cost with labour and you roll your own similar setup using other tools, and you debug and fix the problems yourselves
That is a stretch of a statement, though maybe you are not entirely incorrect. But the onus is on both sides. I'm also at the stage where I can migrate to Traefik or Caddy relatively easily... but I don't mind paying $100 to nginx for something familiar, at an almost negligible incremental cost to them.
I am not claiming a magic bullet, but there's a reason why SaaS companies with easy switchability have gradually increasing payment plans, instead of a heartburn-inducing jump from $0 to $10K.
Are they going to compete with Nginx premium features? From http://openresty.org:
> OpenResty® is not an Nginx fork. It is just a software bundle. Most of the patches applied to the Nginx core in OpenResty® have already been submitted to the official Nginx team and most of the patches submitted have also been accepted. We are trying hard not to fork Nginx and always to use the latest best Nginx core from the official Nginx team.
If it wasn't for the "open source mentality", the internet would have ossified in the 90s. People wouldn't be able to put up all the interesting individual sites that they do, and there'd be only a few major companies that could provide hosting. So many of the things we take for granted have grown out of side projects that would never have taken off with the high barrier to entry of commercial-grade software. Certainly the web wouldn't be as advanced as it is now, as 'vested interests' would have won far more frequently than 'pragmatism'. "Which DB do you use? MSSQL or Oracle?" - RethinkDB would never have gotten off the ground.
> because the open-source developer tools market is one of the worst markets one could possibly end up in
Well, a production-quality database isn't really a 'developer tool', but even then, there are dev tools that have carved out a comfortable niche for themselves, such as SublimeText.
The problem isn't the open source mentality, the problem for RethinkDB was competing in a saturated market with a major competitor who had already won a lot of mindshare and gained a lot of traction.
For us at $day_job, consistency was also a nice bonus (we recently moved our web load balancers from Varnish to nginx plus, because now we have a single software package which does our SSL termination with HTTP/2, HTTP / TCP / UDP load balancing, serves static assets, caches, and is also the host for our application server itself, Passenger). Of course all of these functions happen across dozens of servers, but we get to use a single config syntax to make all of that happen and only need to track security issues, etc, for a single package.
Sure, we could accomplish all of that with some combination of haproxy and/or varnish and/or apache and/or nginx and/or various Ruby/Rack app servers... but we don't have to. Nginx does an acceptably good job of all of those things for our needs right now, and the support they provide for the license cost is pretty reasonable.
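For a rough idea of what such a consolidated setup looks like, here is a minimal sketch of an nginx config covering those roles. All names, paths, and ports are illustrative, not the poster's actual config, and it assumes nginx was built with the stream and Passenger modules:

```nginx
# Illustrative sketch only: hostnames, paths, and ports are made up.
http {
    proxy_cache_path /var/cache/nginx keys_zone=assets:10m;

    upstream app { server 127.0.0.1:8080; }

    server {
        listen 443 ssl http2;               # TLS termination + HTTP/2
        ssl_certificate     /etc/ssl/site.crt;
        ssl_certificate_key /etc/ssl/site.key;

        root /var/www/app/public;           # static assets served directly

        location /api/ {
            proxy_pass http://app;          # HTTP load balancing
            proxy_cache assets;             # response caching
        }

        # Passenger (if compiled in) hosts the app in the same server block
        passenger_enabled on;
    }
}

# TCP/UDP load balancing lives in the stream module
stream {
    upstream syslog_backends {
        server 10.0.0.10:514;
        server 10.0.0.11:514;
    }
    server {
        listen 514 udp;
        proxy_pass syslog_backends;
    }
}
```

The point is less any single directive and more that every layer shares one config syntax and one package to patch.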
I think it's because nginx was (and for most, still is) generational compared to Apache in terms of core features and ease of configuration. I can see how Caddy could be the next alternative, though: more "modern" features, if you like (money quote: built-in Let's Encrypt), but, more importantly, even simpler configuration.
edit: This is also where your use case makes it easy to see why nginx (or caddy) won't cut it.
Neat, thanks for mentioning Caddy [0]. I hadn't heard about it before now, and just from looking over their docs it looks very promising.
Their website is superb, with simple explanations of what the tool does and transparent pricing for their paid options.
The only thing I'm initially a tiny bit wary about is their claim of it being production-ready. Being used by thousands of sites is an amazing feat, but how big are the loads on these sites? Production-ready can mean completely different things depending on what you're building. I think it'd be a nice improvement to see a bit more data backing up that claim. Just from a quick Google search it looks like there's a decent number of articles comparing it with nginx, so I guess I'll start digging in a bit deeper.
Yeah, "production-ready" is definitely true about Caddy, but I know saying a program is "production-ready" is like saying a program is "secure" -- what does that even mean??
I like to tell people it's production-ready if it's a good fit for your website and workflow. Many people are using Caddy in production. There's so much variance that it's hard to pin down in a simple phrase, so we have to settle for a term that's "good enough", and I think the most fitting one is "production-ready."
And upfront, I'll say: Careful with chasing benchmarks. They don't transfer well (or at all), and most people (including me) don't know how to run them properly. Just test with your own staging setups and see if it works well enough for what you need.
Anyway, thanks for your feedback about the site -- definitely taking it into account.
I think Caddy is doing a lot of things pretty great (including having a very large amount of cool features and a very simple configuration syntax).
However, honestly, I don't dare use it on "big" production sites yet (besides maybe my blog), because, for example, packaging [0] still isn't tackled or clear at this point in time. If Caddy wants to gain traction on "big" production websites, this matters, because those sites are often not run just by devs but by sysops/devops people who will, very likely, never wget a binary because [insert one or more valid reasons]. I think Matt (mholt) said that they will work this out when Caddy hits 1.x, but for me it is a showstopper atm. Also, if you read the topic from top to bottom, you will notice that there seems to be no clear vision, not even for a base-caddy OS package or PPA.
Before this comment is taken as negative, please know that I am actually a contributor to the Caddy project (though I have only contributed the init/service scripts), and a passive reader of the packaging discussion mentioned earlier, trying to come up with a good idea to tackle that one point, which I think will make Caddy's growth accelerate very quickly on "big" production websites.
I agree with you 100%, this is the primary reason why we haven't deployed caddy for at least internal services where I work. I don't want to have to upgrade "everything and caddy" every time the update window rolls up.
>The only thing I'm initially a tiny bit wary about is their claim of it being production-ready. Being used by thousands of sites is an amazing feat, but how big are the loads on these sites?
Well, you can always add more boxes.
People have been putting horrendously performing Ruby and other small-time servers in production for very mainstream sites. Compared to those, Caddy will soar...
Caddy doesn't scale or handle high traffic as well as nginx does, at least in my benchmarks, and understandably so, since Caddy's author has stated he's focusing on features first, then performance. In my Caddy vs. nginx benchmarks, Caddy used up to 3x more CPU and memory and delivered about a third of nginx's performance: https://community.centminmod.com/threads/caddy-http-2-server...
Information about HAProxy on multi-core hardware (TLS encryption and decryption are CPU-bound) is a bit discouraging. You can assign workers to cores, but there's no shared memory, so the official docs warn about some inconsistencies that might occur. There doesn't seem to be a consensus on whether to use the multi-core feature or not. Also, nginx supports HTTP/2.
Regarding the status of the backend servers: I use monitoring with health checks for every backend server anyway.
Those are the reasons I chose nginx as a reverse proxy recently.
We've had great success with HAProxy on multi-core without hitting those warnings. The Stack Exchange operations team did a blog post about an easy way to do it. Here's a sample config that does multi-core processing really well: https://gist.github.com/joliver/40291313605971545a0c1f6a3040...
There is definitely a single right consensus on how to do high-performance TLS load balancing (regardless of the software):
- Create 1 process per core and pin to a core.
- In extreme cases, pin the NIC interrupt to a dedicated core. (If you have multiple NICs, one dedicated core per NIC.)
HAProxy lets you do that very easily. That's why it's configured the way it is: those are the options you need to build a high-performance load balancer.
Shared memory is not necessary and would kill performance dramatically. The "inconsistencies" you might have heard about just mean that the numbers on the stats page are approximate, nothing important.
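The process-per-core recipe above looks roughly like this in HAProxy 1.x config syntax (core counts, paths, and addresses are illustrative; this is a sketch of the pattern, not the Stack Exchange config):

```haproxy
global
    nbproc 4          # one process per core
    cpu-map 1 0       # pin process 1 to core 0, and so on
    cpu-map 2 1
    cpu-map 3 2
    cpu-map 4 3

frontend https-in
    # one listening socket per process, so each process handles
    # its own share of TLS handshakes on its own core
    bind :443 ssl crt /etc/haproxy/site.pem process 1
    bind :443 ssl crt /etc/haproxy/site.pem process 2
    bind :443 ssl crt /etc/haproxy/site.pem process 3
    bind :443 ssl crt /etc/haproxy/site.pem process 4
    default_backend app

backend app
    server web1 10.0.0.21:8080 check
```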
I wonder if that's because it may have to decode all the data going over the pipe to figure out where the headers are? As it stands, HAProxy only looks at and modifies HTTP headers and passes the body straight through without touching it.
I agree with you in principle, ideally there should be no feature separation but supported versions.
But given the current culture of even large, profitable companies just using open source projects without giving back (see the HN story about GitHub and Redis), never mind contributing or even giving credit, some open source companies have no way to earn any revenue. They may then have to resort to this to support their development, which ultimately benefits all users.
Nginx is a highly regarded, widely used, high-performance web server. As a long-time user I would not begrudge them trying to make some money to support and improve the project. The same goes for HAProxy. I think there is room to accommodate both models, and we are just incredibly lucky to have two such high-performance projects available as open source.
I wanted to like HAProxy, but the lack of support for in-place zero-downtime config reloads without resorting to witchcraft was a showstopper. Our use case was a service-discovery load balancer for Mesos/Marathon, where frontend/backend config can change as often as every few seconds. Never had any issues with nginx.
- reads service information and current backend instances out of service catalog (either Marathon or Consul)
- writes nginx configuration
- runs nginx configtest
- tells nginx to reload its configuration
Nixy, referenced above, takes service information directly from the Marathon event stream, but I usually have Consul around, so I use a home-baked consul-template + nginx solution.
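A sketch of what that consul-template + nginx glue can look like (the service name "app" and all paths are made up for illustration):

```sh
# upstreams.ctmpl -- rendered by consul-template from the Consul catalog:
#
#   upstream app {
#   {{ range service "app" }}  server {{ .Address }}:{{ .Port }};
#   {{ end }}}
#
# consul-template re-renders the output file whenever the catalog changes,
# and the command after the second colon runs only on a successful render,
# so nginx is reloaded only if the new config passes the configtest:
consul-template \
  -template "/etc/nginx/upstreams.ctmpl:/etc/nginx/conf.d/upstreams.conf:nginx -t && nginx -s reload"
```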
Ok, I will go tell the 200 or so developers that they can only deploy to test/staging once an hour or so instead of, like .. whenever they need.
This is a PaaS supporting many microservices. There is a hold-off so we don't reload more than once a second, just as insurance against runaway processes, but beyond that nginx handles this without breaking a sweat.
Frequent reloads are currently in the non-prod cluster only but I don't see any reason why it would be an issue in prod as well.
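That hold-off is just a rate gate in front of the reload call. A minimal sketch in Python (the class name and interface are made up for illustration, not taken from any actual deployment):

```python
import time


class ReloadGate:
    """Allow an action at most once per `min_interval` seconds,
    as insurance against runaway reload loops."""

    def __init__(self, min_interval=1.0, clock=time.monotonic):
        self.min_interval = min_interval
        self.clock = clock       # injectable for testing
        self._last = None        # time of the last allowed action

    def allow(self):
        """Return True (and arm the hold-off) if enough time has passed."""
        now = self.clock()
        if self._last is None or now - self._last >= self.min_interval:
            self._last = now
            return True
        return False
```

A deployment loop would call `gate.allow()` before each `nginx -s reload` and simply skip (or defer) the reload when it returns False.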
That's why we switched to nginx: it is perfectly happy reloading configuration without dropping existing connections, which HAProxy cannot do. There is a black-magic approach with two HAProxies listening on the same port, so there is always something listening and connections can be drained, but it's dicey.
There exists Tengine, a fork of nginx, which IIRC offers some of nginx's paid features for free. I wonder how closely it tracks nginx and what the differences are.
I'm not entirely sure what this is about. Are you saying the list of backends is dynamic and retrieved from DNS? Why would you do this? Why are the backends changing so frequently? This seems like a major design issue. Why not have one backend that's anycasted with equal priority and you can pop backends in and out as you please?
Also, why wouldn't you just solve this with SRV records? Just patch Varnish to support SRV properly and, magic, you get everything you ever wanted, including failover and priorities. That's probably a saner request.
nginx is nearly optimal for content serving (see Netflix presentations) and TLS origination or termination. It's actually somewhat weak in the features they charge for.
That's the biggest weakness of nginx: the whole business of compiling things in the right way to actually get the features you need. And don't tell me it's easy to recompile nginx; you have to know a lot about nginx and its configure switches to get the build set up properly.
Then don't write a compile recipe from scratch; take, e.g., your distro's packaging script and just add the switch you need. This is easy and relatively fool-proof.
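On a Debian/Ubuntu-style distro, that flow is roughly the following (package names and the extra switch are illustrative):

```sh
apt-get source nginx              # fetch the distro's source package + rules
sudo apt-get build-dep nginx      # install its build dependencies
cd nginx-*/
# append the flag you need (e.g. --with-http_v2_module) to the
# existing ./configure arguments in debian/rules, then rebuild:
dpkg-buildpackage -b -uc
sudo dpkg -i ../nginx*.deb
```

You inherit the distro's tested configure flags and patches, so you only have to reason about the one switch you added.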
Apache is also a giant pain in the ass for HTTP/2 support; you basically have to chuck out the distro packages and do everything from scratch (including OpenSSL) unless you're on a bleeding-edge distro.
Not to mention that http2 is buggy in 2.4.25 (fixes in 2.4.26 will be out in May). And yes, without a bleeding edge distro good luck.
Building Apache from source is not only very straightforward but a great way to maximize httpd performance (all modules static, only the modules you need included). OpenSSL, on the other hand, is distro-specific and definitely more involved.
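As a sketch of that source build (prefixes and versions are illustrative; mod_http2 needs nghttp2, and ALPN needs OpenSSL 1.0.2+):

```sh
./configure --prefix=/opt/httpd \
            --enable-ssl   --with-ssl=/opt/openssl \
            --enable-http2 --with-nghttp2=/opt/nghttp2 \
            --enable-mods-static=few   # static, minimal module set
make -j"$(nproc)" && sudo make install
```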
Or just script it to do all the legwork, which is what I do with my LEMP installer for TLS v1.3 support and optional nginx compilation against LibreSSL, OpenSSL, or later BoringSSL: https://community.centminmod.com/posts/48818/
You may be interested in NixOS' way. This is all you need in your configuration.nix to get a web server running with an SSL certificate from Let's Encrypt, including automatic renewal before expiry and everything:
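Something along these lines ("example.com" and the webroot are illustrative, and exact option names may differ between NixOS releases):

```nix
services.nginx = {
  enable = true;
  virtualHosts."example.com" = {
    enableACME = true;      # obtain and auto-renew a Let's Encrypt cert
    forceSSL = true;        # redirect plain HTTP to HTTPS
    root = "/var/www/example.com";
  };
};
```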
That PPA looks pretty outdated; I recommend using the one provided by nginx itself. Note, though, that none of these builds (even the official ones) will have TLSv1.3 support, as that needs the yet-to-be-released OpenSSL 1.1.1, which no current Linux distribution ships.
Recently released Ubuntu Zesty (17.04) uses OpenSSL 1.0.2, and upcoming Debian Stretch (9) release uses 1.1.0.
Yes, with the caveat that TLS 1.3 is not finished. So nobody (including nginx) has a complete TLS 1.3 implementation, but they all have drafts that are presumed to be close to the final thing.
Builds of at least Firefox and Chrome / Chromium with TLS 1.3 exist in the wild if you're interested in testing this out.
In particular you should look at this if you have middle boxes such as a "transparent" TLS proxy or "traffic inspection" feature in a corporate or educational network. It is very likely these boxes have defects regarding TLS 1.3 because all of them are garbage built by idiots. If you find out early you might be able to plan for what to do about that before the proverbial impact with a fan.