
Why are reverse proxies (in memory-unsafe languages, no less) so popular? They are unnecessary in most cases, and they add more complexity and hinder transparency more than the alternatives.


Quite a few reasons:

- SSL: You can put a bunch of things behind a reverse proxy and keep all your SSL handling in one place, which makes it a lot easier to deal with, secure, and manage.

- Load balancing: If your application needs more than one host, you need some way of distributing requests across multiple hosts. A reverse proxy is one of the easiest ways of achieving this (and comes with the above SSL handling as well).

- Caching: They can be very good for caching dynamic but rarely changing resources like news articles. They can also take care of requests for static assets so that your application servers don't have to.

- Multiple apps on a single IP: At the other end of the spectrum, if you have, for example, a home server, only one application can listen on a given port, and you might want to run multiple applications responding to different hostnames. A reverse proxy lets you do this (see the sketch below).
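
To make the last point concrete, here is a minimal sketch of host-based routing using Go's standard library (the hostnames and backend ports are made up for illustration):

    package main

    // Route requests to one of two local backends based on the Host header,
    // so both apps share a single IP address and port.

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func proxyTo(rawURL string) *httputil.ReverseProxy {
        target, err := url.Parse(rawURL)
        if err != nil {
            log.Fatal(err)
        }
        return httputil.NewSingleHostReverseProxy(target)
    }

    func main() {
        blog := proxyTo("http://127.0.0.1:3000")
        wiki := proxyTo("http://127.0.0.1:4000")

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            switch r.Host {
            case "blog.example.com":
                blog.ServeHTTP(w, r)
            case "wiki.example.com":
                wiki.ServeHTTP(w, r)
            default:
                http.NotFound(w, r)
            }
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

The same switch is also where load balancing would plug in: pick one of several backends per hostname instead of exactly one.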


> Why are reverse proxies so popular?

From the point of view of SSL, they make it easy to bolt SSL support onto existing infrastructure with minimal changes to everything else.

In some cases the separation of duties is useful too: if I change the back-end of one of my applications from Apache+PHP on one server to a new implementation in node.js elsewhere, I don't have to worry about implementing SSL (and caching, if the proxy is also used for that purpose) in the new implementation, or even about changing DNS and other service pointers; I can just direct the proxy to the new end-point(s).

For larger organisations (or individual projects) this separation of responsibility might also be beneficial for human resource deployment and access control: keeping the proxy with the security/infrastructure team for the most part, and the app deployment/development with specific teams.

> They are unnecessary in most cases

I agree. But that doesn't necessarily mean they are not beneficial in a number of cases.

> and bring more complexity

Though they also spread out that complexity, which can help in managing the scale-up and in maintaining the larger scale once you're there.

Obviously the utility of this depends heavily on the project/team/organisation - it is very much a "your mileage will vary" situation.


I've often pondered whether it is useful to use one or not. I came to the conclusion that it's always a nice add-on, and eventually it always helps. You get highly efficient static file serving, adding HTTP headers is a no-brainer, and the same goes for typical stuff like HTTP->HTTPS redirects, basic logging, and error pages independent of your app... Also don't forget that it's an additional layer in the setup that adds extra security because it's an extra layer. ;-)

It would be nice to have one in Go or Rust if it had all of nginx's features, performance, and documentation/community support.
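
For what it's worth, the basic cases above (HTTPS termination, redirect, extra headers) are only a few lines with Go's standard library. A sketch, assuming a single backend on port 3000 and placeholder certificate files - nowhere near nginx's feature set, of course:

    package main

    // Tiny fronting proxy: redirect plain HTTP to HTTPS, add a header, and
    // forward everything else to one backend. Ports, paths, and header values
    // are placeholders, not a real deployment.

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        backend, err := url.Parse("http://127.0.0.1:3000")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        app := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Strict-Transport-Security", "max-age=63072000")
            proxy.ServeHTTP(w, r)
        })

        // Plain-HTTP listener that does nothing but redirect to HTTPS.
        go http.ListenAndServe(":80", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            http.Redirect(w, r, "https://"+r.Host+r.RequestURI, http.StatusMovedPermanently)
        }))

        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", app))
    }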


Another reason: binding to a privileged port (< 1024).

I trust nginx to do the right thing, more than some other application.


No, you don't need any special privileges to bind to a low port; just set the right capability:

# setcap cap_net_bind_service=+ep /usr/sbin/httpd

You should never run servers with privileges, period.
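
For context, this is the kernel check the capability satisfies; a trivial probe in Go (any language hits the same restriction on Linux):

    package main

    // Binding a port below 1024 fails with "permission denied" unless the
    // process runs as root or the binary carries cap_net_bind_service.

    import (
        "fmt"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", ":80")
        if err != nil {
            fmt.Println("bind failed:", err)
            return
        }
        defer ln.Close()
        fmt.Println("listening on", ln.Addr())
    }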


Worth noting that this allows the httpd executable to bind to a low port, meaning that any invocation of httpd now has this privilege. A reverse proxy is much more targeted.

For httpd this may not matter as much because it's only used as a server. But if you use node, giving all node scripts the ability to bind to a low port is uncomfortable.


I agree with you, but it’s worth acknowledging that setting capabilities and making sure they persist across updates isn’t always trivial, especially in bureaucratic enterprise IT environments (capabilities are file attributes, so your example breaks the first time a package update replaces the binary). And although the risk is lower, an attacker could still potentially find interesting things to do with other low ports unless you’ve also set up something like authbind to limit it to just ports 80/443.


On top of the other comments, I'll add that you can build them more securely. High-assurance security work often used proxies to support legacy apps, since the proxies could be clean-slated using rigorous techniques while the legacy systems were often closed-source, way too complicated, or (e.g. Microsoft) deliberately obfuscated. SSL/TLS was already mentioned. Another example is putting a crypto proxy in front of Microsoft Outlook that communicates with a mail guard: it can scan, encrypt, etc. email with little or no work on the client.

"Can do (improvement here) with little to no change to (existing app)" is the recurring pattern.


They tend to be more performant in handling connections / request queueing, HTTPS termination, and serving static files among other things.


They're popular because, for better or worse, microservices are popular.

If you want to serve microservices from a common hostname as if they were a single application, e.g. for public users of an API, you need some sort of mapping between internal and external URLs (a sketch follows).
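
A minimal sketch of that mapping with Go's standard library; the /users and /orders prefixes and the backend addresses are invented for illustration:

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    // mount exposes an internal service under a public path prefix,
    // stripping the prefix so the service sees clean internal paths.
    func mount(mux *http.ServeMux, prefix, backend string) {
        target, err := url.Parse(backend)
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(target)
        mux.Handle(prefix+"/", http.StripPrefix(prefix, proxy))
    }

    func main() {
        mux := http.NewServeMux()
        mount(mux, "/users", "http://10.0.0.11:8000")
        mount(mux, "/orders", "http://10.0.0.12:8000")
        log.Fatal(http.ListenAndServe(":8080", mux))
    }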

Service discovery is another approach but that's probably only applicable to internal service usage.


They were popular way before microservices, only less dynamic. From an operational perspective, they mostly give a single entry point which can be better secured and monitored. It also lets you easily decouple unstable backends from your users: even when functionality is broken (404), the user experience doesn't have to suffer (serve a stale cache, or respond with a properly branded error page that helps the user move forward, instead of a cryptic app-specific error page).


How about just returning the service URL along with the API auth token? This would enable load balancing and failover too.


Yes, but that'd require a more complex process on the client.

At the very least you'd have to send an additional request: instead of just calling

https://someserviceprovider.com/serviceA

and getting the service's response directly, that first call would now only return something like

{ "url": "https://someserviceprovider.com:8081", "auth_token": "..." }

and only then could you call the actual service under https://someserviceprovider.com:8081
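
A hypothetical client for that two-step flow in Go - the field names follow the JSON example above, everything else is illustrative:

    package main

    // Fetch the service descriptor first, then call the returned URL with
    // the token. Note the extra round trip compared to calling the service
    // directly through a reverse proxy.

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    type descriptor struct {
        URL       string `json:"url"`
        AuthToken string `json:"auth_token"`
    }

    func main() {
        resp, err := http.Get("https://someserviceprovider.com/serviceA")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var d descriptor
        if err := json.NewDecoder(resp.Body).Decode(&d); err != nil {
            panic(err)
        }

        req, _ := http.NewRequest("GET", d.URL, nil)
        req.Header.Set("Authorization", "Bearer "+d.AuthToken)
        res, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer res.Body.Close()
        fmt.Println("status:", res.Status)
    }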


From the readme:

> By using it, you can run any existing web application over HTTPS, with only one extra line of configuration.

You can do authentication, SSL, authorization, etc. all in one place.

Downsides:

- difficult to scale

- no defense in depth

- CSRF exposure because applications are not separated by domain

I’m having good experiences with this approach


Because they're used to terminate SSL and do load balancing. More recently they also handle HTTP/2, Brotli, and other newer tech that non-specialist HTTP servers don't yet support.


Can you name a reverse proxy that is written in a safe language?



Can you name a safe language?


Virtually anything other than C or C++?



