New to the list is Server-Side Request Forgery (SSRF), where you trick the remote server into fetching a sensitive URL on the attacker's behalf (e.g. an internal service, or the cloud metadata URL from the context of an internal server). A language-agnostic defense is something like Stripe's Smokescreen [1], which acts as an egress CONNECT proxy your app connects to when requesting URLs that should be quarantined; the proxy then enforces whether access to internal or external IPs is allowed.
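For anyone who hasn't wired one of these up before, here's a minimal sketch (in Go) of what the application side looks like, assuming a local Smokescreen-style egress proxy; the listen address and the fetched URL are made up, you'd point it at wherever your proxy actually runs:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

func main() {
	// Hypothetical address of the egress (CONNECT) proxy, e.g. a local
	// Smokescreen instance. All "untrusted" outbound fetches go through it;
	// the proxy decides whether the resolved destination IP is allowed.
	proxyURL, err := url.Parse("http://127.0.0.1:4750")
	if err != nil {
		panic(err)
	}

	quarantined := &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
		Timeout:   10 * time.Second,
	}

	// A user-supplied URL: fetched only via the egress proxy, which can
	// refuse link-local / RFC1918 destinations such as 169.254.169.254.
	resp, err := quarantined.Get("https://example.com/avatar.png")
	if err != nil {
		fmt.Println("blocked or failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}
```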
This hit home for me. On a recent penetration test (via an external auditor), an app I'm responsible for was found to have a pretty bad SSRF vulnerability via a server-side PDF rendering component.
Luckily it was a bit obscure to find, had never been exploited, and we patched it within a few hours, but it was the most significant vulnerability found in anything I've been involved in.
I hadn't come across Smokescreen before (very cool), but this would have been one of a number of additional measures we could have put in place to avoid our vulnerability. I'm going to seriously consider using something like it going forward for all outbound server-initiated requests.
SSRFs are great fun and used on pentests a lot. One of my favourites was where you could hit the cloud metadata service from an application and potentially get credentials back.
We put together an HTTPS MITM proxy so we can also log and filter HTTP methods and URLs (or even content) for egress traffic from our infrastructure. An HTTP CONNECT proxy only sees host names and the IPs they resolve to.
It's not easy to prevent data exfiltration if you allow connections to, say, S3 and the attacker can just send arbitrary data to their personal bucket.
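Not our proxy, obviously, but here's a rough sketch of the filtering idea for the plain-HTTP case (TLS interception is omitted, port handling is skipped, and the hosts/methods are purely illustrative):

```go
package main

import (
	"io"
	"log"
	"net/http"
)

// Illustrative egress policy: which methods and hosts we let out. Everything
// else is denied, which is how you stop "just POST the data to my personal
// S3 bucket" style exfiltration. A real deployment would load this from
// config and terminate/re-originate TLS to see HTTPS URLs as well.
var (
	allowedHosts   = map[string]bool{"api.github.com": true}
	allowedMethods = map[string]bool{"GET": true, "HEAD": true}
)

type filteringProxy struct{}

func (p filteringProxy) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if !allowedMethods[r.Method] || !allowedHosts[r.Host] {
		log.Printf("deny %s %s%s", r.Method, r.Host, r.URL.Path)
		http.Error(w, "egress denied by policy", http.StatusForbidden)
		return
	}
	log.Printf("allow %s %s%s", r.Method, r.Host, r.URL.Path)

	// Forward the request upstream and copy the response back.
	// (Hop-by-hop header handling is omitted to keep the sketch short.)
	r.RequestURI = "" // must be cleared before re-sending a server request
	resp, err := http.DefaultTransport.RoundTrip(r)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()
	for k, vv := range resp.Header {
		for _, v := range vv {
			w.Header().Add(k, v)
		}
	}
	w.WriteHeader(resp.StatusCode)
	io.Copy(w, resp.Body)
}

func main() {
	// Clients point HTTP_PROXY at this listener.
	log.Fatal(http.ListenAndServe(":3128", filteringProxy{}))
}
```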
We built something similar at both Uber and Snap. Thanks for sharing this link to an open source equivalent! I wish it had existed a few years ago when I had looked. Oh well!
> "We built something similar at both Uber and Snap. Thanks for sharing this link to an open source equivalent! I wish it had existed a few years ago when I had looked. Oh well!"
Why not just use a firewall? The technology has been around since the 80s?
If you're running on AWS (EC2, Lambda, ECS, EKS, etc.), for example, you can query `http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>` and it'll return temporary AWS credentials for the instance role. (That's how attaching IAM permissions to an EC2 box works.)
That's being replaced with IMDSv2 [0], but at the time I was building these SSRF proxies it didn't exist.
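For anyone who hasn't seen it, a quick sketch of the v2 flow: a session token has to be fetched with a PUT plus a custom header before any metadata can be read, which a typical SSRF that only lets the attacker control a GET URL can't pull off.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}

	// Step 1: PUT to the token endpoint with a TTL header to get a session token.
	req, _ := http.NewRequest(http.MethodPut,
		"http://169.254.169.254/latest/api/token", nil)
	req.Header.Set("X-aws-ec2-metadata-token-ttl-seconds", "21600")
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("no metadata service reachable:", err)
		return
	}
	token, _ := io.ReadAll(resp.Body)
	resp.Body.Close()

	// Step 2: GET metadata, presenting the session token as a header.
	req, _ = http.NewRequest(http.MethodGet,
		"http://169.254.169.254/latest/meta-data/iam/security-credentials/", nil)
	req.Header.Set("X-aws-ec2-metadata-token", string(token))
	resp, err = client.Do(req)
	if err != nil {
		fmt.Println("metadata request failed:", err)
		return
	}
	defer resp.Body.Close()
	roles, _ := io.ReadAll(resp.Body)
	fmt.Println("instance roles:", string(roles))
}
```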
Beyond that case, it's also pretty common to have sidecar processes running on the same machine in modern Kubernetes deployments. Having an additional firewall proxy is too expensive for certain high performance environments, so it's commonly assumed that traffic to sidecars is trusted. (Mutual TLS is being used more frequently now, but that's non-trivial to deploy because key management is a PITA)
Interesting. It's also worth noting that the URL scheme can sometimes be used to turn an SSRF into a request over a different protocol that isn't HTTP, like ftp, gopher, s3, ...
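A cheap application-side mitigation for that class is to allowlist the scheme before the URL ever reaches an HTTP library or renderer; a minimal sketch:

```go
package main

import (
	"errors"
	"fmt"
	"net/url"
)

// Reject anything that isn't plain http/https. Some clients and renderers
// will happily follow ftp://, gopher://, file:// and friends, which turns
// an SSRF into cross-protocol mischief.
func checkScheme(raw string) error {
	u, err := url.Parse(raw)
	if err != nil {
		return err
	}
	switch u.Scheme {
	case "http", "https":
		return nil
	default:
		return errors.New("disallowed URL scheme: " + u.Scheme)
	}
}

func main() {
	for _, raw := range []string{
		"https://example.com/x",
		"gopher://127.0.0.1:6379/_SET%20x%20y",
	} {
		fmt.Println(raw, "->", checkScheme(raw))
	}
}
```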
SSRFs are fun; sometimes they leak credentials directly too: when the server is built on a trusted-subsystem model, the auth headers it attaches might leak outside.
The problem described here is solved by using a firewall, where certain machines/processes are either allowed or disallowed to communicate with other machines/processes based on a set of rules. What else is there to it?
As a practical example, your service may receive a URL from the user to load as input, and you want it not to load the local cloud metadata endpoint (which holds the EC2 instance-profile credentials, for example), but at the same time other parts of your code still need to access that endpoint to get the latest credentials.
The point is being able to place particular (but not all) HTTP(S) requests in a sandbox when you don't want to allow them "privileged" access to endpoints.
If you simply firewall the metadata endpoint (or another microservice your app needs), then none of your app code that needs it will work either.
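Something like the following (a rough sketch, which deliberately ignores DNS rebinding and redirects) gets applied only to the user-supplied-URL path, while the credential-refresh code talks to the metadata endpoint directly:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"net/url"
)

// Guard for user-supplied URLs only: resolve the host and refuse loopback,
// link-local (incl. 169.254.169.254) and private ranges. Internal code that
// legitimately needs the metadata endpoint simply never calls this. A real
// implementation would also pin the resolved IP for the actual connection.
func checkUserURL(raw string) error {
	u, err := url.Parse(raw)
	if err != nil {
		return err
	}
	ips, err := net.LookupIP(u.Hostname())
	if err != nil {
		return err
	}
	for _, ip := range ips {
		if ip.IsLoopback() || ip.IsLinkLocalUnicast() || ip.IsPrivate() {
			return errors.New("destination resolves to an internal address: " + ip.String())
		}
	}
	return nil
}

func main() {
	fmt.Println(checkUserURL("http://169.254.169.254/latest/meta-data/")) // rejected
	fmt.Println(checkUserURL("https://example.com/report.pdf"))           // allowed
}
```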
> "If you simply firewall the metadata end point (or other microservice your app needs) then none of your app code that needs it will work either."
Just use a local on-box proxy with a firewall (or a dedicated virtual NIC with a firewall; it doesn't matter, it's practically the same thing). Have the specific part of your code issue calls that pass through that specific proxy (or the virtual NIC). Apply whatever firewall rules you need.
This solution involves literally zero lines of in-house code to keep and maintain. It builds on the same industry-standard tools we've developed over the last 40 years, provides all the flexibility and visibility you'll ever need, and it's modular and can be extended to accommodate new requirements as they come.
But I guess it just doesn't look as fancy on your CV.
Network firewalls don't usually work well as a strong control in this scenario. If the application is hosted in AWS (or GCP, Azure, etc.), the IP addresses of the systems the app is connecting to are constantly changing, can number in the hundreds or thousands, and can often be anywhere in the address space (whether that's private ranges or the public blocks allocated to the provider). So you pretty much need an allow-all rule to all of the subnets an attacker would care about anyway, because trying to maintain a list of specific IPs is impractical.
There are use cases for network firewalls in cloud environments, but this isn't one of them.
[1] https://github.com/stripe/smokescreen