People will stop deploying WAFs when the compliance standards are rewritten to not require them. They are prominent in lots of installations because the box-ticking exercise of compliance frameworks, namely PCI or HIPAA, requires a WAF-like component to reach compliance. It took long enough for them to be written in that now everyone knows they need one. It will take even longer for them to be phased out, and no one with risk assessors wants to be the first to remove them. Too much risk, they say, regardless of how strenuously the technical side says they're unneeded.
We are still waiting on compliance standards to update their password change policies to reflect what most people have been saying for over a decade: that frequent forced password changes are a security risk.
NIST discourages it, but most of the US government still requires it. In fact, I used to have a 60-day password rotation policy with such brilliant requirements as "no more than three characters of one class sequentially."
I soon realised that "ShitFuck!", thus, was a valid password -- and off I went. Meanwhile, a teammate who had previously been a technical writer contracting with the NSA told of the "waterfall method" -- not of software development, but of passwords. It was common in his world, apparently, and you could always tell he was typing his password by the staccato of QazWsxEdc123, etc.
The PCI-DSS website itself requires that you change your password every 12 months. At the same time, the period for recertifying PCI-DSS is ... every 12 months. I have a systematic way to create a new password each time, which probably isn't secure.
Got into a quarrel about this with our IT director at a previous company. I wish there was a security best practices FAQ I could link to for this sort of stuff.
1) WAFs do far more than just prevent SQL injections.
2) Many companies don't own the software they run, so they can't guarantee that it is free of SQL injections or that the versions of their ORM libraries are secure. WAFs protect against this.
3) Auto-scaling is just as much about high availability as performance. Database indexes do not help with the former.
Look, if you want a "real" WAF capability you buy something like Imperva and manage the care and feeding of a team of, say, 2-4 people who understand web app vulnerabilities in depth AND know the tool. Your SOC/NOC will need training and procedures too. Fully loaded, an average enterprise will pay > $1M a year to maintain the capability if you look at the TCO carefully.
There are environments where this makes sense. Banks are the classic example - they have the money, they care about the risk, and lots of them still have systems on Java 8 that take > 1 month to deploy code fixes.
The biggest practical problem in my experience is that it's almost impossible to keep good people maintaining an enterprise WAF. Anyone with a deep enough understanding of web app security and network infra to do the job properly will get bored in the role and leave. WAFs are usually barely maintained for this reason.
Another major issue is alert handling - there's nobody with enough context. The SOC staff usually understand nothing (not the vuln, not web, not the app). The devs don't understand the vuln and the security team don't understand the app so tickets go round in circles with nobody able to understand if an alert is an FP or an actual issue. Eventually alerts start getting quietly dropped on the floor.
Can confirm. I work in financial services, we employ Imperva for WAF + DDoS.
Initial onboard was 2 FTE for ~6 months.
We are now at probably .5-.75 FTE: onboarding new sites, responding to “waf broke the site” type things, and doing exceptions (for pen tests and whatnot).
Not sure of the TCO, but I shudder to think what would happen if we didn't have it.
Most enterprises have moved public-facing sites to the cloud, and so are making use of the cloud providers' WAF solutions, all of which are trivial enough for someone to manage part-time.
Also, in 20+ years in enterprises I have never heard of firewalls being left unmaintained. I don't know how that would pass security audits, why such a critical piece of security architecture would end up in this state, or how any in-house development would work given the constant need to make changes to it.
I do agree lightweight cloud WAFs can have positive ROI. You have to be realistic about their capabilities however and what you can expect from a part-timer managing one.
You end up choosing between a heavyweight "advanced" WAF (either cloud or on-prem) that requires lots of tuning and response, which most orgs can't do properly, or a lightweight cloud WAF with vendor rules that doesn't do much but hey at least it's easy to look after.
Re: enterprise WAF maintenance, I know this is anecdotal but I was a pentester for 10 years and I could count the "non-dysfunctional enterprise WAF setups" I saw on one hand. Were you in a role where you talked to the staff who looked after them or dealt with the alerts? Often the org lacked the insight to understand there was a problem. "We bought the magic security box, we applied the patches, what's the issue?"
I did not downvote you, but as the article explains, WAFs don't protect against anything assuming the attacker bothers to spend five minutes bypassing them with one of a thousand well-known tricks.
It's not about competency but the fundamental nature of WAFs. It's like using a regular expression to sanitize input parameters you concatenate into a SQL query, except you also can't make it specific to the SQL dialect used. It doesn't matter how competent you are or how much money you spend on engineering the regular expression.
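To make the analogy concrete, here is a minimal TypeScript sketch (node-postgres assumed as the driver): the regex filter has to anticipate every trick in every dialect and will inevitably miss some, while parameterization removes the problem by construction.

    import { Client } from "pg"; // assumed dependency: node-postgres

    // The WAF approach in miniature: a regex "sanitizer" in front of string concatenation.
    // It has to guess every dialect-specific trick an attacker might try, and it will guess wrong.
    function naiveSanitize(input: string): string {
      return input.replace(/('|--|;|\bunion\b|\bselect\b)/gi, "");
    }

    function findUserUnsafely(client: Client, email: string) {
      return client.query(`SELECT * FROM users WHERE email = '${naiveSanitize(email)}'`);
    }

    // Parameterization sidesteps the guessing game: the driver sends the query text and the
    // value separately, so the value can never change the structure of the query.
    function findUser(client: Client, email: string) {
      return client.query("SELECT * FROM users WHERE email = $1", [email]);
    }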
8KB is a small amount of data. Many apps will need more, so they will not be able to blanket-block these. Even 64KB, which appears to be the absolute limit, might not be enough.
Continuing, not blocking, is the default when not using the console, making this insecure by default.
> 1) WAFs do far more than just prevent SQL injections.
They largely don't prevent SQL injections.
> 2) Many companies don't own the software they run, so they can't guarantee that it is free of SQL injections or that the versions of their ORM libraries are secure. WAFs protect against this.
Only if the WAF somehow understands the internals of that software better than that software itself. Which, sure, sometimes happens, but there's no systematic reason to believe it. Why should the WAF have a better hit rate than the makers of the actual application? Does the WAF vendor offer a guarantee that systems behind it will never be hacked?
The code hasn't, but it would be a lie to say that the application hasn't. As a leader of ops teams, while I can't directly influence code security & quality, I can damn sure influence overall security by demanding a WAF.
Defense in depth is an important concept in security for a very good reason.
It's basically fizzbuzz. If you have a WAF, it proves you know how to add a WAF. Presumably even one you know how to configure when security needs change.
Compliance auditors are mostly there to underwrite posture, not actual risk.
Auditors pretty much only certify that you have told them you are doing what you are supposed to be doing. They are not logging in to your servers and verifying that a WAF is running in front of your web server. They are not probing your network from the outside to see if a WAF is blocking their activity.
Just as financial auditors are only confirming that your financial statements match what your accounting department tells them.
If you lie to your auditors, there's a good chance they won't catch it because that's not what they are looking for.
I've audited a lot of networks for compliance, and we always actually check that the protections that are meant to be in place are in place. I don't think I've done an audit where I wasn't using nmap to some degree.
You'd think that, but I had an auditor whose guidance requested confirmation of security cameras on the servers. They wanted to drive to the AWS data center to see the cameras. If the drives got stolen, you'd better make sure AWS shares that video with you.
Ok, but what if someone bypasses the cameras on your servers with looped footage? Did the auditor think about having security cameras on the security cameras?
The article notes the problem of the attack surface of WAF products themselves, and raises questions about the security posture of the products and the vendors. Compliance standards need to address the risks and weigh them against the benefits of WAFs. If, as the author argues, they have reached a point where they do more harm than good, the compliance bodies need to be honest about that and reevaluate recommendations.
Here is a big lesson that I "learned": actual security does not matter for the purpose of avoiding risks and fines. What matters is the ability to say "we got hacked despite following the accepted best practice" or "we can shift the blame HERE". Having a paid-for WAF ticks both boxes.
Only when the WAFs themselves become a significant attack vector, to the extent that blame for breaches affects the bottom lines of the WAF vendors, will the situation start to change.
First off, most large applications need something on the edge to help deal with volumetric attacks. If you've already got something there, adding a light WAF engine isn't exactly a huge ask. It's (almost) free.
Also, WAFs let you "fast patch" against vulnerabilities. The Log4j example the author gives as a negative is actually a positive. Your vendor can help prevent you from being attacked while you have time to respond. Those rules given in the example are bad, for sure, and probably have false positives - but, a few days of a slightly higher rate of false positives while you patch is probably worth it to most organizations.
Lastly, WAFs let you increase "risk scores" of requests and IPs, which lets you turn up captchas and other roadblocks against malicious IPs. This raises the bar from the floor to somewhere about knee height. Not a lot, but one more thing for an attacker to have to step over.
I do agree, though, that people treat WAFs as a magic solution or make them really heavy. Against application attacks, I view them as a tool of medium importance. There are also better and worse vendors out there. Personally, in a WAF, I view less as more.
The more rules and complexity, the more problems you're going to have. Adding rules should be temporary, and there's very few reasons to have blocking rulesets for long running issues.
I think ultimately acknowledging that there are no magic solutions, only a variety of options that can each contribute to reducing the probability of something bad happening, is critical to approaching security issues effectively.
My first experience with WAFs was part of a check-the-box security/compliance process, and I thought they were dumb. Easy to work around! Just regexes! With time I've come to appreciate that they're pretty low effort to operate, can be fairly lightweight, and wind up being one more thing for an attacker to deal with, in a world where each thing they have to deal with decreases their chances of success and increases the chance of someone noticing.
If we assume that basically everything can be bypassed by a sufficiently motivated attacker, the best approach is defense-in-depth where there are multiple barriers they need to traverse, and little opportunity to do so "quietly". WAFs can be evaded with clever approaches, sure, but getting to that point means they initially triggered a block, and have to make additional requests to test their evasion payload, each of which increases the signal we have to block more aggressively, trigger an alarm, and get a human involved.
We use AWS WAF for a bunch of this which is necessary because the other AWS components are so dumb. We definitely don't want to overcomplicate our WAF setup. Good enough is good enough.
Most large companies have too many developers and too many teams to expect/assume that each team will do the right thing for security when putting something in production on the public Internet.
Why? Because most software developers are bad at security (I said most not all).
So yes do all the things at the bottom of this article! Teach security-by-design to all your teams. Make sure they know what OWASP is at least. Make sure you test all the things. Either own or rent red teams.
But if you're a big enough company, you probably also need something centralized like a WAF, because you want defense in depth.
WAFs are far from perfect, but in my experience they are better than not having one in 2023.
This is my personal opinion but any developer building software that runs on the public Internet should not need to be incentivized to care about security.
Let me rephrase: software development (the actions of engineers and the whole process in general) is actively incentivised to not care about security.
The consequences of poor security are often way, way lower than the costs of doing it properly. Add to that that security problems are contingent risks that only "pay out" in a small number of cases, and you have a recipe for low expected value for investment in security.
Software engineers often want to develop a secure product, but they don't know what they don't know, and their employer is not interested in investing in security capabilities, either the company's or its employees'.
So, I guess I hear you on this, although I think it's a generalization.
There's certainly a trope among developers that "the business" doesn't give us time to do work properly, including security, and that it's just "ship features fast".
My take on this is that it's on us, as professional developers and particularly technical leaders, to force that issue and advocate for better practices.
I've advocated for better technical practices at several jobs I've been at, and in many cases there was no intentional desire to incentivize bad practices, it was simply that the leaders weren't aware of all the requirements, and of the consequences of cutting too many corners.
When framed in the context of business value and risk, it's not as hard as you think to introduce better standards to most software development teams. Smart leaders are open to listening and changing if it means better outcomes.
That said, if your technical leadership isn't interested in supporting this kind of improvement or helping you advocate, then maybe it's time to dust off the CV if you personally care about it.
I think a WAF is really a bigger set of tools now (bot protection, IP reputation, L7 DDoS/rate limiting, API restrictions) than just signatures. Virtual patching is also incredibly important, and there's really no other security tool that gives you the granularity to restrict something like the values of some param on a specific path of your app, but only when some cookie exists.
I don't think the performance concerns here are accurate. These days most people are using the vendors' own cloud infra (Akamai, Cloudflare, F5, Imperva, etc.), but even if you are running a WAF on-prem, F5 and Imperva sell purpose-built hardware that has no problem handling tons of requests. Most WAFs also have weighted signatures these days and won't just fire on ${jndi. "${jndi" might give 5 pts, while "org.apache.*" gives another 5, and maybe their threshold is set to 10 for blocking.
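To illustrate the weighted-signature idea, here is a sketch using the numbers from the comment above; this is not any vendor's actual rule format, just the scoring concept.

    // Each signature contributes a score; the request is only blocked when the total
    // crosses a threshold, rather than firing on any single "${jndi" match.
    interface Signature {
      pattern: RegExp;
      score: number;
    }

    const signatures: Signature[] = [
      { pattern: /\$\{jndi/i, score: 5 },
      { pattern: /org\.apache\./i, score: 5 },
    ];

    const BLOCK_THRESHOLD = 10;

    function riskScore(requestText: string): number {
      return signatures.reduce(
        (total, sig) => (sig.pattern.test(requestText) ? total + sig.score : total),
        0,
      );
    }

    function shouldBlock(requestText: string): boolean {
      return riskScore(requestText) >= BLOCK_THRESHOLD;
    }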
I have plenty of issues with WAFs and I would invest a lot more in developer training, but I think they still have their place.
Hallelujah. Also, with many single-page apps, WAFs don't make any sense - the HTML/CSS content is just served statically, so the potential vulnerabilities are in the API, which IMO is much easier to harden. Without going into too much of a tangent, this is one reason I'm a big fan of GraphQL. Its strong typing and support for custom scalar types mean malformed content gets rejected before it even gets to your code. For example, most injection attacks require the use of some "special" characters like < or ;, but many field types have no need to support those characters, so instead of just typing "strings" everywhere, you can have things like Email or Date or SSN or Name scalar types that are more restrictive in the characters they allow.
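For anyone who hasn't seen custom scalars in practice, a minimal sketch with graphql-js (an assumed dependency), using a Date scalar as the example; Email, SSN, and the rest follow the same pattern:

    import { GraphQLScalarType, GraphQLError, Kind } from "graphql"; // assumed dependency

    // Only YYYY-MM-DD strings are accepted, so malformed or character-stuffed input is
    // rejected by the GraphQL layer before any resolver (or SQL) ever sees it.
    const DATE_RE = /^\d{4}-\d{2}-\d{2}$/;

    export const DateScalar = new GraphQLScalarType({
      name: "Date",
      description: "A calendar date in YYYY-MM-DD form",
      serialize: (value) => value,
      parseValue(value) {
        if (typeof value !== "string" || !DATE_RE.test(value)) {
          throw new GraphQLError("Date must be a YYYY-MM-DD string");
        }
        return value;
      },
      parseLiteral(ast) {
        if (ast.kind !== Kind.STRING || !DATE_RE.test(ast.value)) {
          throw new GraphQLError("Date must be a YYYY-MM-DD string");
        }
        return ast.value;
      },
    });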
Pretty much every SQL injection attack is going to need to inject single quotes through some noncompliant lack of parameterization someone put together.
Simply using parameterized queries solves this problem, no amount of semicolons can escape it.
Yes, totally agree. There are many good SQL libraries now that use things like tagged templates (e.g. sql`SELECT * FROM foo WHERE bar = ${zed}`) that make it virtually impossible to not use parameterized queries.
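For readers who haven't seen the trick, a minimal sketch of how such a tagged-template helper works; real libraries add identifier escaping, query nesting, and so on, but the core idea is that interpolated values never touch the query text:

    // The tag collects interpolated values into a parameter array and replaces each one
    // with a $n placeholder, so the result can be handed to the driver as a parameterized query.
    function sql(strings: TemplateStringsArray, ...values: unknown[]) {
      const text = strings.reduce(
        (query, part, i) => query + part + (i < values.length ? `$${i + 1}` : ""),
        "",
      );
      return { text, values };
    }

    const bar = "anything'; DROP TABLE foo; --";
    const query = sql`SELECT * FROM foo WHERE bar = ${bar}`;
    // query.text   === "SELECT * FROM foo WHERE bar = $1"
    // query.values === ["anything'; DROP TABLE foo; --"]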
But my primary point is that I still believe it makes sense to type string inputs as restrictively as possible, not just against SQL injection but also against other types of potential vulnerabilities. E.g. if you're taking a date string that you expect to be in YYYY-MM-DD format, it's best to type that string as such as far out toward the edge as possible.
All of that is possible with plain old REST APIs as well. For example, FastAPI and pydantic will do this sort of parsing and validating at the application edge.
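The same edge-validation idea in a TypeScript sketch, with zod standing in for pydantic-style schemas (an assumed dependency; the schema itself is purely illustrative):

    import { z } from "zod"; // assumed dependency

    // Malformed input is rejected at the edge, before it reaches any business logic.
    const PaymentRequest = z.object({
      amountCents: z.number().int().positive(),
      dueDate: z.string().regex(/^\d{4}-\d{2}-\d{2}$/, "expected YYYY-MM-DD"),
    });

    type PaymentRequest = z.infer<typeof PaymentRequest>;

    function parsePayment(rawBody: string): PaymentRequest {
      // .parse() throws a descriptive error if the body doesn't match the schema.
      return PaymentRequest.parse(JSON.parse(rawBody));
    }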
I think of WAFs as an extra safety net. Defense in depth.
The author complained about the performance cost of WAFs in general, but not all WAFs have to be structured like ModSecurity. They could, for example, be based on something like https://github.com/intel/hyperscan and perf would be at a very different level.
Or even do what Cloudflare did [1] and transpile all the slow ModSecurity rules to Lua, deploying OpenResty at the edge and running them in nginx+luajit.
> I think of WAFs as an extra safety net. Defense in depth.
The WAF itself is a complex codebase written in a performance-critical domain, so they're generally implemented in memory-unsafe languages. If the services behind the WAF are implemented at all competently, you're probably increasing the attack surface by more by adding the WAF than you're saving.
I cannot argue with this too much. A WAF will protect you from unsophisticated attackers - at great cost.
In my group, I am considered to be the WAF SME. Enough that I wrote a training course to get into ruleset tuning.
What I see a lot is customers who are security-focused demanding "OWASP Top 10" protection and then, somehow, not understanding that it is not 10 rules you enable on the WAF. These are people often with application security and other credentials.
Most people I have seen running WAFs are in "set it and forget it" mode: tune the rules until it no longer blocks legitimate traffic and call it a day. I think few really understand what it is, and the why of using them.
Another funny anecdote: I had one of these customers talk about how amazing Akamai WAF was, because it never had false positives. Never? Really? That looks like a red flag to me, but they were not concerned.
Similarly, I've met a worrying number of people, often with titles like "Senior Architect", thinking that adding SonarQube to your pipeline will magically eliminate all security bugs and you'll no longer have to think about security again.
Show me an alternative that I can deploy despite developers having both the lack of knowledge and complete indifference of security. They don't care and they aren't forced to conform to any security standards, so a WAF is literally the only thing I can do to try to improve things. I can't rewrite all their code for them. Management hears about a WAF and makes that a requirement and moves on.
If software development was a professional trade group, we could make membership require security training, and industry standards for ensuring security. But that'll never happen. It would mean them giving themselves more work to do, and we all know how lazy devs are.
PCI-DSS does not mandate the use of a WAF. It is one of two ways you can fulfill requirement 6.5 or 6.6. A WAF + OWASP Top Ten ruleset is typically easier to gather evidence for with your auditor, but you can instead show continuous scanning with a DAST engine to meet the requirement.
I would have a WAF installed with very few, highly tuned rules, mostly against SQLi. Why? Because the damage of letting that through and praying that the developer or web-app framework got it right is significant. The rules for SQLi are pretty easy to get right, and dropping that traffic before it gets to your web server is a reasonable thing.
I would have a WAF installed with no rules too. It is nice to have something there where you can drop in a Log4J rule and get protection relatively quickly for attacks of that nature. There have been a number of these over the years and a small performance penalty seems worth the big picture safety net.
I am against the pricey models that the cloud vendors push. WAFs can get expensive. They typically are bundled with other cloud services, but hey, if you've gotten that far, you are probably outsourcing most things to the cloud provider anyway.
I do not like WAFs, pragmatically, because they let the developer off the hook in many ways. There is something there doing their work for them, and it's another reason for some developers to not understand or care about the security of their applications: something else will do it for me, whether I know this or not.
Seems like Coraza also has some more recent benchmarks, since from what I can tell they more or less aim to replace ModSecurity in some regards: https://coraza.io/docs/reference/benchmarks/
I wonder whether anyone has undertaken the effort to compare the performance of the self-hosted WAF options in 2023, or at least in the past few years.
Personally, I think the performance tradeoff might sometimes be worth it if security does indeed improve, the Swiss cheese model (defense in depth) and all that: https://en.wikipedia.org/wiki/Swiss_cheese_model
Bit by a WAF this week. A client needed to enable web payments and selected a lesser-known payment processor with slightly lower fees. I built an integration test to submit a test payment card along with a fake name and a real address. The request failed with no diagnostic information. My request looked well-formed according to the docs, so I raised an issue with support.
Weeks later, they got back to me. It turned out the payment processor's WAF was rejecting my submission because the street name contained the word "Union": the WAF was "sanitizing" input fields by rejecting anything resembling SQL syntax, despite the lack of control sequences. Napkin math suggests their WAF would reject payments from 1% of the US population on the basis of their street or city name. This is a hidden tax on top of their nominal processing fees!
Best practice also means the WAF is probably configured to accept vendor security updates, silently introducing even more rejection criteria.
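A guess at the kind of rule behind this, sketched out; the essential flaw is that a keyword filter cannot distinguish SQL syntax from ordinary vocabulary:

    // Blanket "reject any SQL keyword" check applied to free-text fields.
    const SQL_KEYWORDS = /\b(select|union|insert|update|drop)\b/i;

    const looksMalicious = (field: string) => SQL_KEYWORDS.test(field);

    looksMalicious("123 Union Street"); // true -- a legitimate address is rejected
    looksMalicious("Union City, NJ");   // true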
The love of compliance is the root of many types of evil.
These are pretty weak arguments. Making nice graphics in games also increases frame times; therefore we shouldn't make nice graphics? Yeah, WAFs will slow down network requests; the question is whether that matters.
The answer is no. The argument about Capital One is also extremely weak. It was a bad incident, but it's a single example of a WAF being a vector, and most of the damage was caused by IAM misconfiguration.
The WAF increasing the hypothetical attack surface was the closest thing to a good argument in there, and since their "alternatives" amounted to "don't misconfigure anything or deploy a vulnerability", a solution that would also have prevented their single example of WAF-as-an-attack-vector actually happening... yeah, that still made the piece less convincing overall.
Nicely written and the author clearly has more experience than myself. I did, however, get hit with a data breach via SQL injection, and everyone I spoke to (not vendors or sales folks) seemed to agree that a WAF would have blocked the attack outright.
Sometimes one has to host an application and has no control over the details of how that application is developed or configured related to parameterized SQL queries.
Yes, those are needed too. And static analysis and dynamic analysis, etc.
Despite all of that we just found a SQL injection that existed for years somehow. Luckily the WAF blocked attempts to exploit it until we could issue a fix.
Ayup. Especially for small teams working with large piles of software (lookin' at you, WordPress) that are insecure out of the box. The article's ideas are also constrained mainly to fixing SQL injection, which is only one aspect of security.
* Isolate components in case of a breach
That's great but it doesn't fix a breach, it just limits the scope. Better than nothing but if a WAF stops the breach from happening ...?
* Immutability
Cool for those teams that have control of their entire infrastructure. However this also only solves those cases that are caused by mutability.
* Static analysis to look for stuff like devs forgetting to use prepared statements.
Definitely! Sure! But again, if you're using a piece of software off the shelf a lot of that is out of your hands.
* Restricting API endpoints to limit access to necessary tables
Another great idea if you are in control of your software. Hacking into a fairly large project like WordPress to effect these changes (if they aren't already) would require a large team and a ton of maintenance. Basically it'd be a fork.
Do any of these help against a DDoS, or even an accidental DoS caused by search spam? Nah, but a WAF at the edge stops the latter in its tracks. I'm not saying a WAF is a panacea for all ills, and yeah, I bet a rethink would mean better-built web app firewalls, but discouraging their use would cripple most of the long tail of the web, and honestly that's where all the cool stuff is anyway.
I agree with you to a point. In my small realm WordPress is the thing behind it, and simple search spam can just bring the app to its knees because of the horrible indexing WP has, even with plugins like Elastic. If you have control of the app it's a whole new ball game and then like others have said, the WAF is there to replace your own poor practices.
There's so much excusing away of why things will be bad, for a long time, but that doesn't seem particularly hacker-ful. I'm glad those views are represented, but as in many HN threads, the status quo is again decidedly business-as-usual, in-defense-of-meh.
That said, I do think front-end routers are just taking over. The Kubernetes Gateway API is the front-end API, hard fought for and iterated on again and again, that has become a baseline set of expectations that's basically great. Having a front end that can compose routes is so, so good and powerful, it needed to be standardized, and this seems far and away like what is happening.
I tend to agree that we have too often intermediated our services in the past. It wasn't even a firewall, but my org used to, basically out of habit, put an nginx in each container in front of Node. No one knew why, no one could conjecture what for, but we just let it roll. WAF is that kind of thoughtless, mindless zombie nonsense.
However, for 3rd party code we run/host but don't really own I see value to a WAF. For example, we unfortunately run WordPress, and I don't have time to manually audit all of the stupid plugins people want to be installed beyond a checker for known vulnerabilities, so a WAF is some comfort/protection.
This is like saying "Don't have a network-based firewall, because Google figured out zero-trust".
There are absolutely ways to do things that we should look to for inspiration, but the harsh reality is that legacy software, legacy teams and regulations mean we must (and often should) continue using security-team-maintained chokepoints for Internet-exposed services.
If there were some standard API gateway tool that is clearly auditable, with obvious IAM/permissions to specific database sections for each API call, then maybe that could be used in lieu of a WAF. The point is, regulators and Infosec need a tool or process, either in-line (physically, like a WAF or firewall) or procedural (such as analysis and checkpoints in a CI deployment), to ensure a business application is secure.
Basically every service of any size is terminating SSL before the request gets to the “real” recipient, anyway. They’d do it with or without a WAF. CDN, load-balancing, centralized routing, you’re doing it somewhere whether or not a WAF is involved.
I mean, you know TLS will have to be terminated at some point anyways. Might as well rage against NGINX terminating the TLS instead of whatever back end you're using (Node, Django, whatever).
I don't see issues with WAFs. Sure, it takes some time to set up so that you have <0.01% false positives, but it filters out tons of robot attacks and other garbage. As the backend no longer has to deal with it, things like server errors have almost vanished, and I think it's a good tool - just like having a firewall. Of course you should not overdo it with the rules, and you should check the logs once in a while (just like with a SIEM).
I do, however, have issues with stupid CAPTCHAs like Cloudflare's that even humans can't pass when using a privacy-oriented browser. Sites should serve visitors and not the other way around.
Writing all of this and concluding with a recommendation to use static analyzers feels like a joke. So we shouldn't use a tool that scans for known bad vectors but use a tool that...scans for known bad vectors instead?
Yeah, sure. The bad guys will attempt to circumvent the WAF, and, if it is just regexes, will do it after the Nth attempt. However, bad developers will not normally obfuscate their code multiple times to the degree required to evade the static analyzer.
I've fought against WAFs in the past, but in govt departments, they are often a check-box in your security checklist that must be checked. I've even had to fight a security guy telling me I needed two WAFs because he wasn't convinced the first WAF was good enough (because it was a managed service and the vendor didn't want to share the specific configuration.)
In saying that, there are definitely use-cases where a WAF is required - especially if you have a legacy app on an older code-base that needs to be exposed to the internet.
This sounds like music to my ears. Where I work a WAF is mandatory, Azure Application Gateway in our case, and they enable ALL of the rules because the outsourced colleagues follow the rules set by infosec like sheep. The consequence is that many requests are blocked as false positives.
Completely legitimate requests, like an OpenID redirect from an Azure AD (Microsoft Online) login back to the application, get blocked by the SQL injection rules in AAG. So user friendly.
At one point we were seriously considering base64 encoding all request bodies from javascript before sending them to the server and decoding them there - just to get around this bullshit.
Such a shame that you're forced to use this. We run Azure Function apps directly to public traffic and the experience is really nice & simple. Not once have I had a "I wonder if a middleman ate my request" experience. OIDC works flawlessly for us.
I would quickly grow to hate my tech stack if I had to cram stuff like AAG into it. Right now, we can spin up a very robust stack with ~3 products. The moment I have to start playing with network policies, I feel like the security level of my solution goes down not up. Manual routing, firewall or certificate management is a canary to me in 2023. I don't want to touch any of that stuff anymore - I'll probably screw it up at some point.
Perhaps you could reframe the AAG as creating an emergent situation that is less secure than the alternative without it. It certainly sounds like an honest possibility based upon the workarounds you seem to be entertaining.
So there are a few issues with this. WAFs do have their uses; generally speaking, yes, rules based on regexes looking for SQL injection are silly.
But they do have their uses. For example, targeted blocking: https://confluence.atlassian.com/security/cve-2023-22515-pri... . While waiting for the patch, a WAF can quickly block all requests to the /setup endpoint.
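As a sketch of what that stopgap looks like expressed as code (Express used here purely as an assumed stand-in for whatever sits at your edge); the appeal of a WAF rule is that it expresses the same condition without a code deploy:

    import express from "express"; // assumed stand-in for whatever terminates requests at your edge

    const app = express();

    // Virtual patch: deny every request to the vulnerable /setup endpoints until the
    // real fix is rolled out.
    app.use((req, res, next) => {
      if (req.path.startsWith("/setup")) {
        res.status(403).send("Forbidden");
        return;
      }
      next();
    });

    app.listen(8080);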
I would also say that static analysis as a panacea for SQL injection is laughable. SAST tools have a hard time finding SQL injection in code, as they quickly lose track of user-controlled data. They almost always produce false positives / false negatives when parameterised queries are used incorrectly, for example when user-controlled data gets into the SQL query text rather than the parameters of a parameterised query.
And that completely ignores SQL injection attacks that do not occur within your code directly, but in libraries you are using.
Depends on the org. The appsec team may not have access to the web server in production, at least not quickly, but will have access to modify a WAF they own.
To me this blog post doesn't fully make its case, though has many good points and is a good read.
I think my main logical objection is that the alternative best practices at the end of the article were all security best practices before WAFs existed. Which makes me ask: why did WAFs come into existence in the first place? Did the founders of those companies convince customers they needed them without those customers actually needing them?
I think not. In the years before WAFs existed, I was in the position more than once of being in an organization whose web application security footprint had grown to the point where we ended up writing a home-grown version of a WAF. E.g. adding an interception layer that would analyze inputs and outputs for typical security violations.
Why? Well, first because it started to give us a sense of the types of attacks that people were trying to use. Second, because the types of mitigations mentioned by the author of this blog post aren't the whole story. You can audit that your entire system avoids SQL injection attacks via stored procedures, then your company buys another company with a code base that fails such audits. Or someone attacks by leveraging your caching layer which stores and sends back unaudited key-value pairs. Perhaps (this has happened to me) a bug gets introduced into the deployment system, and the code that forces authentication is not shipped, and the calling code doesn't properly fail when the auth checking code isn't in there. A real head slapper in hindsight.
I do like the best practice of process isolation around APIs, and only allowing APIs to have the privileges they require, but in practice, if there are hundreds of APIs undergoing frequent changes, the complexity of managing that becomes a security risk in and of itself, because the ACL rules are deeply complicated.
Relying solely on a WAF seems like a bad practice. But also relying only on secure design philosophy is a practice with plenty of historical failures.
So if the point of the article is that WAFs breed complacency, I agree with that! But if a WAF is used as an analysis, auditing, and fast-response layer, alongside following secure design principles, then I'd say that based on personal experience, if WAFs didn't exist, people would write home grown ones with their own sets of flaws.
I get it and the points made here are valid, but the reality is that the team deploying the WAF + infra and the team writing the insecure/secure code are different teams with different roadmaps. We have to deploy a WAF because the developers are not writing this magical unicorn code that follows all security best practices and gets refactored once a week. There are vulnerabilities, issues, etc. that need to be addressed just like in any app. So yes, a WAF is necessary.
Large companies have WAF-plus-as-a-service (load balancing + WAF + SSO): any team can provision one and put their app behind a WAF. Is there any alternative to replace that?
I don't see the world as black-and-white as the author does.
The thing is, as soon as you're reachable from the Internet, you will get bombarded by crap. Skiddies just blasting every IP they can with Wordpress exploits, log4j exploits, whatever. People DDoSing you for the lulz or for ransom. A WAF and CDN - personally I like what AWS has to offer - is basically a tool that's (unfortunately) required to be on the Internet these days.
I never saw much discussion as to why Capital One’s IAM role for their WAF gave broad access to a number of S3 buckets. That SSRF attack should have resulted in the exposure of the WAF logs, at worst.
Who wrote the policy? Was it a centralized IAM team? Does a centralized IAM team make for more granular or more one size fits all IAM policies?
How do teams that deploy and maintain WAFs separate the signal from the noise? The constant door-knocking and buzz of credential-stuffing attacks, probes, and so forth that any API or web application gets generates a flood of data, most of it worthless. How do opsec people detect and address real threats meaningfully?
The usual answer is your WAF blocks them and then you write a report counting it in the cyber attacks blocked by the WAF, proving it saved the company.
That sounds like hell. Not sure if I should count this as "blocked" or more as "I ignore them, but I need to follow enterprise BS policy, so they end up blocked".
WAFs are a good way of masking some issues but they tend to not help against more sophisticated attackers, as in a bit more sophisticated than just using metasploit.
Especially when you only have the product in your pipeline in the first place because of some security compliance checkbox that needs to be checked ...
Qualys WAS - don't get me started. Generic accounts, short passwords you can't change, incredibly slow scans that you can't debug. Numerous false positives. And with a crawler you never truly know if you've covered everything.
WAFs also bring risks that are rarely considered, at least in scenarios where things get logged and no one thinks about how they are sanitized. And then there is GDPR in Europe.
A lot of these issues could be avoided by good API design. But if you need a WAF you might not be doing that... So you end up passing tokens as parameters and they end up in logs. Then again, if someone has access to the logs they probably also have other access. But it is still an often-forgotten aspect.
Feels like you could replace WAF with DLP, EDR or most any other 3 letter abbreviation... Or indeed bike helmets. Vendor push + industry ‘best practice’ tribal knowledge is tough to resist.
Here is my opinion: WAF is a tool, use it where it makes sense. Don’t use it where it doesn’t make sense.
I’ve seen many use cases where deploying a WAF solution was completely reasonable and the problem couldn’t have been solved in the application code. And let’s not forget often you have to run applications that were written by other companies, and you still need an additional layer of security.
—-
These “stop using X” articles are really pointless (and boring). They also misguide the people who have less experience in a particular field.
A simple "WAF" can be implemented without hardware or anything overly complex by doing just a three things that will eliminate most malicious traffic.
• Block all traffic from AWS, Azure, etc. Yeah, you'll lose some traffic from some VPNs; maybe you care, and if so, this suggestion isn't for you.
• Verify that traffic claiming to be GoogleBot, BingBot or DuckBot really is from those sources (a sketch of this check follows after the list). All three provide a list of valid IPs accessible via a REST endpoint to match incoming IPs against. Block Yandex etc.; there is likely no good coming to you from a Russian or Chinese search engine indexing you.
• Make sure the browser versions in the User-Agent aren't, like, 5 years old. That's a great indicator of a bot.
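A sketch of the crawler-verification check from the second bullet (IPv4 only; the source and shape of the published IP range lists are assumed, e.g. Google and Bing both document theirs):

    // Verify that a request claiming to be a known crawler really comes from that crawler's
    // published IP ranges; if not, treat it as a bot lying about its User-Agent.
    function ipv4ToInt(ip: string): number {
      return ip.split(".").reduce((n, octet) => (n << 8) + Number(octet), 0) >>> 0;
    }

    function inCidr(ip: string, cidr: string): boolean {
      const [base, bits] = cidr.split("/");
      const mask = bits === "0" ? 0 : (~0 << (32 - Number(bits))) >>> 0;
      return (ipv4ToInt(ip) & mask) === (ipv4ToInt(base) & mask);
    }

    // `publishedRanges` would be loaded and cached from the crawler operator's published list.
    function isVerifiedCrawler(sourceIp: string, publishedRanges: string[]): boolean {
      return publishedRanges.some((cidr) => inCidr(sourceIp, cidr));
    }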