Hacker News
Deep Inside a DNS Amplification DDoS Attack (cloudflare.com)
90 points by jasoncartwright on Oct 30, 2012 | hide | past | favorite | 49 comments



While it's not the most compelling of the arguments against DNSSEC (which is an ineffective and unnecessary complication of the Internet core protocols), amplification is the case Daniel J. Bernstein has been making against it for years. Here's a good intro:

http://dnscurve.org/amplification.html


It's not a very compelling argument against DNSSEC, no. The solution is for the world to close down open DNS servers in the same way it did with open SMTP relays. Granted, it was a lot easier to do with SMTP.


Or, we could simply not do DNSSEC. DNSSEC + many tens of millions of dollars of open redirector cleanup work is not an inherently better solution than simply not doing DNSSEC.


This seems like a stupid solution. Did they fix the FTP bounce attack by disabling all PORT commands forever? No. They disabled PORT being used for any host except the originator. Similarly, they could compromise here by only enabling DNSSEC for authorized clients, and keep the rest of the DNS records open for the world the way they work now.

This of course does nothing to fix the elephant in the room: protocols that rely on a source address in a UDP packet to shovel off data without any limits. DNSSEC is just one feature that can be abused; I'm sure there are many more available in other protocols, and more yet to be invented.


It would be a stupid solution if DNSSEC had some vital role to play in Internet security. But it clearly doesn't, as evidenced by the fact that virtually all high-volume commercial transactions conducted in the industrialized world in 2012 hit the Internet, and none of it is protected by DNSSEC.

If I had to draw a pie chart explaining the rationale behind deploying DNSSEC, a 1/3 slice of that pie would be labeled "IETF's misguided effort to replace the broken TLS CA PKI with yet another PKI controlled largely by the same giant businesses", and the remaining 2/3 slice would be labeled "Self-perpetuating fallacy that DNS must be SEC'd the way IP was (unsuccessfully) SEC'd", or, less charitably, "Windmill tilting exercise on the part of standards bodies".


I don't care about DNSSEC. I just think it's ridiculous to ignore the real flaw, which is that you can bounce DNS packets at anyone you want.

If you want to strip away part of the protocol to fix an attack vector, why not just redact EDNS0? Or do what networks around the world already do and block all port 53 udp packets bigger than 512 bytes. We'll continue to have a size-limited and somewhat inflexible protocol, but at least DNS amplification will have an upper bound.
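For concreteness, here's a minimal sketch of that size-cap rule as a pure Python predicate (the real thing would be a firewall rule); 512 bytes is the pre-EDNS0 maximum UDP DNS payload, and treating it as the cutoff is exactly the compromise described above:

```python
DNS_PORT = 53
MAX_UDP_DNS = 512  # pre-EDNS0 maximum DNS payload size in bytes

def permits(udp_src_port: int, udp_dst_port: int, payload_len: int) -> bool:
    """Return True if a UDP datagram passes the size filter.

    Only datagrams to or from port 53 are subject to the 512-byte cap;
    all other traffic passes unchanged. Anything larger must use TCP.
    """
    if DNS_PORT in (udp_src_port, udp_dst_port):
        return payload_len <= MAX_UDP_DNS
    return True
```

This caps the amplification factor at whatever a 512-byte response yields, regardless of which DNS features (DNSSEC included) are deployed behind it.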


You can't just wave a magic wand and make it so attackers can't bounce DNS packets, so it actually does matter that one DNS feature makes those bounced packets much, much larger than any other DNS feature.


You can keep DNSSEC and prevent amplification attacks by either limiting the requests to authorized clients or requiring DNSSEC records use only TCP connections. But you don't accept this solution because you hate DNSSEC and you prefer to kill the feature and save the headache.

I get that. Hell, you're probably right that the cost of supporting DNSSEC isn't worth its benefits in the long run. It's still a crappy argument and a crappy way to deal with a long-standing security problem.

If you want to prevent all future UDP DNS amplification attacks you must require all UDP DNS packets be no more than a specific number of bytes (for example, 512, the pre-EDNS0 size). This would fix the root problem forever. All feature extensions can simply require the use of TCP.

I get that there's a large cost involved with every solution except for forcing everyone to abandon DNSSEC. I don't think forcing everyone to abandon DNSSEC is a realistic goal at this point. Instead, I recommend fixing the root problem for all future cases. Everyone can continue to not use DNSSEC, and DNS will never be able to be used in an amplification attack past what was already possible before DNSSEC.


So I mostly agree with this comment (note the thing I said at the top of this thread: not the best argument against DNSSEC). The only thing I disagree with is that "forcing everyone to abandon DNSSEC" is unrealistic. Actually, DNSSEC hasn't been adopted yet. It has seen virtually no uptake in the ~decade since its current incarnation was put forward, and it has not seen a sharp uptake in interest after the "sign the TLDs" hurdle was crossed either. The reality is that lots of security standards put forth by the IETF don't go on to take over the Internet; for instance, your Google Mail connection is protected by SSL/TLS, not IPSEC.


DNSSEC does not cause DNS amplification attacks. It just makes them worse. If we want to stop DNS amplification attacks, "not doing DNSSEC" isn't the fix. Closing down open redirectors is. If we turned off DNSSEC tomorrow, we'd still see these attacks.


DNSSEC doesn't cause DNS amplification attacks. It just makes them much, much worse.

Meanwhile, DNSSEC itself provides minimal value (all online commerce on the Internet happens without DNSSEC today, and to a useful first approximation, none of today's online fraud depends on spoofing DNS).


Except that DNS needs to be secure before systems that rely on a secure DNS can be written.


Three issues with that, from least to most important --- that is, (c) is the most important issue:

(a) DNSSEC doesn't actually do a good job of making the DNS secure (for instance, it doesn't secure the "last mile" between desktops and recursive resolvers).

(b) Securing the DNS doesn't automatically secure the other core protocols that also need to be secure to rely on DNS promises; an IP address is still just an insecure IP address even if you learn about it from an RSA-signed message.

(c) There's no compelling case that the DNS needs to be secure, or that we need systems that rely on a secure DNS. A secure DNS doesn't make it any easier to clear credit card transactions, to transfer funds, or to create secure & anonymous messaging systems.

It seems to me that if DNS security was such a serious problem, it would be easier to come up with a scenario that benefited from it; that is, you wouldn't need to handwave with a comment like "the DNS needs to be secure before systems that rely on DNS being secure are written". Well, sure, but that shouldn't make it hard to merely imagine such a system. What is it?


You've convinced me. There is no conceivable benefit from a secure DNS, nor any possible application that could make use of such a thing.


I wish I felt like you were being sincere, since that actually is one of my arguments!


I was being sarcastic because I (and everybody reading this comment, other than you, apparently) could probably reel off at least fifty use cases given an hour, so I found it hilarious that you were claiming that I couldn't even name one.

I am not going to name one. You may now repeat your claim that the reason is that I can't think of one, for continued comedy effect.


I'm not looking for fifty use cases. One would suffice to move the conversation forward. Maybe somebody besides you --- "anybody reading this comment" being able to generate "at least" fifty of them --- will provide one.

It is not a subjective statement that Internet commerce doesn't rely at all on DNSSEC today. It's a fact.


In theory? The use case is simple: we already trust the DNS roots a very great deal. Aside from the fact that most of the Internet doesn't use HTTPS, getting an HTTPS certificate for a domain essentially comes down to proving to a neutral server that you control it. So if DNS responses could be verified and could include SSL certificates, which DNSSEC allows, we could obviate the need to trust the CA system, which has lately been proven vulnerable, without having to trust any new entities in exchange.

I don't know much about the issues surrounding DNSSEC, so I wouldn't be surprised if they make this not worth it in practice.


DNSSEC replaces one hierarchical PKI (the CA system) with another hierarchical PKI (the DNSSEC tree).

Ironically, the dominant provider in both PKIs happens to be Verisign.

One difference between the two PKIs is that the CA system admits many different CAs, and allows browser and even end-user control over which CAs are trustworthy. DNSSEC, on the other hand, bakes its authorities into the core of the Internet. Don't like Verisign? Tough shit. A pithier way to say the same thing: Gaddafi's Libya would at one point have been BIT.LY's "CA" in a DNSSEC world.

The role of any PKI authority in the HTTPS/TLS ecology will diminish soon with something like TACK, which integrates key continuity with the TLS PKI. TACK doesn't require DNSSEC, but allows websites to "pin" their certificates into browsers so that even if Iran or China hacks your trust anchor, your site can overrule them. This is exactly the scheme Google uses today to protect its properties: no DNS record or CA signature will convince Chrome that you are the real GMail.

The short answer to "replacing the CAs" as a use case for DNSSEC is that DNSSEC doesn't change the model or the security characteristics of HTTPS/TLS; it's at best a lateral move. But there are specific ways in which it makes HTTPS/TLS trust problems worse, and even more ways that it introduces reliability problems.


Yes, but that other hierarchical PKI is one that we already trust almost everywhere: although extended validation SSL certificates give you an idea of the legal entity in control of the site, in general the only thing it means to be the legitimate owner of a domain is for the DNS roots to recognize you as such. In particular, I don't know of any system other than certificate pinning in which a rogue operator of .ly wouldn't be able to get a certificate for bit.ly, or even whether that would be considered inherently illegitimate. I like the idea of alternative, decentralized DNS roots like .bit, but fixed authorities are what we're stuck with today. Making the entity that you already trust the only entity that you trust is a strict improvement.

And yes, certificate pinning is an exception, making the browser vendor the CA for a small number of domains, but Convergence and other systems depend on the DNS, and TACK only helps on subsequent visits to an already visited site.


I dispute that it's strictly better to have one trust anchor in the TLS PKI, because it matters very much who that trust anchor is and what their incentives are. Verisign won't be ousted from the top of the DNSSEC PKI, no matter what. CAs can fold.

Why are we required to assume that anyone who answers a plaintext email from a domain must be issued the certificate corresponding to that domain? Why can't we authenticate the issuance of new certificates out-of-band? We could address that problem without even requiring browser modifications.

Also, in a TACK+TLS/CA world, Libya could not have surreptitiously swapped out BIT.LY's certificate. In a TACK+TLS+DNSSEC world, they can; stub resolvers don't verify signatures in DNSSEC.

Ultimately, what we need over the long term is a system like Convergence to allow trusted third parties to vouch for CAs, and for users to choose from among trusted third parties who they want to vouch for CAs. This is largely a UI/UX problem, and it's orthogonal to whether HTTPS trust anchors come from X.509 or DNS.

It feels like the very best argument that can be made for DNSSEC as a CA alternative is that it doesn't make things worse. (I personally think that it makes things much worse, but stipulate otherwise here). But it's tremendously expensive and brittle, addresses mostly the solved part of the problem, and does nothing to help the major unsolved part of HTTPS trust.


> Why can't we authenticate the issuance of new certificates out-of-band?

Authenticate with who, Verisign (the ones that have the authority to determine who owns the domain)? I guess making it out of band alleviates some situations where Verisign gets hacked (but not where Verisign is untrustworthy, which you could argue is the case for recent US domain seizures), but the number of domains requiring certificates is high enough that they're just going to end up checking the same database.

> Also, in a TACK+TLS/CA world, Libya could not have surreptitiously swapped out BIT.LY's certificate. In a TACK+TLS+DNSSEC world, they can; stub resolvers don't verify signatures in DNSSEC.

Huh? In a TLS/CA world, Libya could get a legitimate new certificate (although I guess this could be noisier); in a TLS+DNSSEC world, Libya could produce a valid signature. TACK has the same effect on both, protecting some but not all users; invalid DNSSEC signatures are irrelevant. (But in general, the stub resolver issue is one such practical problem that I'm here not worrying about.)

As I said, I believe that Convergence doesn't really help as long as the trusted third parties are validating using DNS, and the only way around that is a new DNS root designed from the start to be decentralized and cryptographically secure.


(We're getting far out to the margin of the thread, and as we do so, I get more and more terse, if not in language then in thinking).

Regarding your first question:

To a first approximation, everyone that needs a TLS certificate already has one.

The concern about authenticating requests for certificates is about attempts to get certificates issued against entities that already attest to having them.

An entity that already has a cert can authenticate new requests for different certs; for instance, they can put a PGP or S/MIME key on file with the CA, and that key can be used to authenticate new requests.

Actually, every tool in the web authentication toolbox, from S/MIME through 2-factor auth keys, is fully available to CA authentication, which is a good thing.

So the question then is, why should ability to answer an email sent to a domain trump every other authentication mechanism we could use instead?

To your second question: my point is just that Libya can more quietly hijack BIT.LY under DNSSEC than they can under an HTTPS CA model.

I agree that we need better trust anchors for peer-to-peer verification than DNS.


Kaminsky on DNSSEC amplification: http://dankaminsky.com/2011/01/05/djb-ccc/#dnsamp

He doesn't think it's a very strong argument against DNSSEC.


This post doesn't actually say anything. It says that DNSSEC amplification is a known attack --- nobody is suggesting Bernstein "invented it" --- and that there are other amplification vectors besides DNSSEC. The latter point would be compelling if those vectors were as powerful as DNSSEC, but they aren't.

The notion that the problem is "actually open redirectors" (nameservers configured to answer queries from arbitrary points on the Internet) is indicative of the weird reasoning that the IETF DNS people have used all throughout the DNSSEC process. Open redirectors mean DNSSEC is a viable mechanism to get ISPs to flood random sites off the Internet? Just mandate that ISPs not run DNSSEC that way! Secret DNS names mean that verified negative answers in the DNSSEC protocol will breach confidentiality? Just mandate that nobody have secret DNS names!

Daniel J. Bernstein did a much more convincing takedown of dakami's reasoning in a talk at 27C3; I'm not going to recap it. I'd just say Daniel J. Bernstein has earned the authority he speaks with regarding DNS security; the vulnerability dakami is famous for discovering is one that djbdns --- released many years before that vulnerability was disclosed --- was designed in part to address. Obviously (if you've ever installed djbdns), Bernstein did a good job of handling the open redirector problem as well.

Kaminsky got on the wrong side of this issue, which is ironic, because he's put a lot more time into practical DNS security than the people he's arguing on behalf of.


Hmm. What I got was that, disregarding the open redirector thing, DNSSEC is typically "only" about two times worse than regular DNS amplification. DJB's number is a lot higher.


First, Bernstein is talking about servers, and Kaminsky is shifting the goalposts to caches.

Second, you can compare their numbers directly (get Bernstein's from any of his DNS talks at cr.yp.to).

Third, read closely and you run into things in Kaminsky's post like this:

That’s a 3.6KB response to a 64 byte request, no DNSSEC required. I’ve been saying this for a while: DNSSEC is just DNS with signatures. Whose bug is it anyway? Well, at least some of those servers are running DJB’s dnscache…

Well, probably not, because dnscache was, from the time of its release, the first cache server to ship default-deny for remote queries; if you want dnscache to serve as an open cache, you have to jump through hoops to configure it that way. Most open cache servers are BIND.


I advise people to read the blog post linked to by morsch, as it says a lot more than tptacek claims.


I am exclusively responding to the part of Kaminsky's post that discusses amplification. Please don't make a straw man argument that I've dismissed the whole thing based on his handling of amplification; I can do a better job of being dismissive of it than that.

I agree with very little of what Kaminsky has to say about DNSSEC, but my arguments are reasoned well enough that I'm not afraid to actually make them.


Then you shouldn't have a problem with people reading it, as I advised.


The hell? Who said I have a problem with people reading anything? How does that even make sense?


Nobody. Nobody said that.


Video of djb's talk which mentions DNSSEC amplification: http://vimeo.com/18417770

Slides for this talk: http://cr.yp.to/talks/2010.12.28/slides.pdf

(Also http://cr.yp.to/talks.html Ctrl+F "DNSSEC")


It's a good talk! He's a good speaker.


A bit disappointed that the truncate (TC) bit wasn't mentioned. So, for the open resolver owners playing along at home: instead of dropping suspect requests, send a response with the truncate bit set. There's no byte amplification in the response, which defeats the attack's raison d'être.

By dropping the incoming packets you're punishing well-behaved resolvers: they'll take a 2-500ms latency hit before retrying another resolver or falling back to TCP. A well-behaved resolver will respond to a truncated reply by sending the same query over TCP. That's only a latency hit of one RTT.

The TCP connection also effectively authenticates control of that source IP address. Now add that source IP to your whitelist of known good resolvers.

Attacker mitigated, other customers not impacted.
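A minimal sketch of such a TC-bit responder, assuming you have the raw query bytes in hand (stdlib-only Python; a real resolver would of course do this internally):

```python
import struct

def truncated_reply(query: bytes) -> bytes:
    """Build a minimal DNS reply with QR=1 and TC=1 for the given query.

    Echoes the query ID and question section, so the reply is never
    larger than the query: zero amplification. A well-behaved resolver
    sees TC=1 and retries the same question over TCP.
    """
    qid, flags, qdcount = struct.unpack("!HHH", query[:6])
    rd = flags & 0x0100                      # preserve the RD bit
    new_flags = 0x8000 | 0x0200 | rd         # QR=1, TC=1
    header = struct.pack("!HHHHHH", qid, new_flags, qdcount, 0, 0, 0)
    return header + query[12:]               # header + original question
```

Since the reply carries no answer records, the response is byte-for-byte the same size as the query, which is the whole point.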


Bear in mind that none of these DNS queries are ever sent out of CloudFlare's network - as mentioned in the previous blog entry, none of these incoming responses are valid.

See http://blog.cloudflare.com/65gbps-ddos-no-problem

"What's great is that we can safely respond and ask them to block all DNS requests originating from our network since our IPs should never originate a DNS request to a resolver."


Please to be rereading my comment. It's addressed to anyone running a resolver/authoritative name server, not the attack target. I promise I'm well acquainted with spoofed DNS attacks.


I was curious how operators of public DNS resolvers, such as Google, prevent themselves from becoming unwitting aids to DNS Amplification attacks. I found this info about Google's resolver:

https://developers.google.com/speed/public-dns/docs/security

The tl;dr is rate limiting, plus some other techniques like adaptively restricting the ratio of request size to response size.
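A toy illustration of those two techniques combined: a per-source query-rate cap plus a running response-to-request byte-ratio check. The class name, thresholds, and structure here are assumptions for illustration, not Google's actual implementation:

```python
import time
from collections import defaultdict

class AmplificationGuard:
    """Per-client guard: a simple queries-per-second cap plus a
    cumulative response/request byte-ratio limit. An attacker using
    a spoofed victim address to fetch large answers quickly trips
    the ratio check even while staying under the rate cap.
    """
    def __init__(self, max_qps=100, max_ratio=10.0):
        self.max_qps = max_qps
        self.max_ratio = max_ratio
        self.counts = defaultdict(int)       # queries in current window
        self.bytes_in = defaultdict(int)     # cumulative request bytes
        self.bytes_out = defaultdict(int)    # cumulative response bytes
        self.window = int(time.time())

    def allow(self, src_ip, request_len, response_len):
        now = int(time.time())
        if now != self.window:               # reset rate window each second;
            self.window = now                # byte ratios stay cumulative
            self.counts.clear()
        self.counts[src_ip] += 1
        if self.counts[src_ip] > self.max_qps:
            return False
        self.bytes_in[src_ip] += request_len
        self.bytes_out[src_ip] += response_len
        ratio = self.bytes_out[src_ip] / max(1, self.bytes_in[src_ip])
        return ratio <= self.max_ratio
```

The interesting design point is the ratio check: rate limiting alone doesn't distinguish a busy legitimate resolver from a slow-and-steady amplification source, but a sustained 60x byte ratio does.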


In addition, it's network engineers' responsibility to prevent source address spoofing by dropping all outgoing packets having a source address that's not on a connected subnet.
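That check (egress filtering, per BCP 38) is easy to state precisely; here's a toy Python version of the decision, with the "assigned" prefixes being hypothetical documentation ranges rather than any real network's:

```python
import ipaddress

# Hypothetical prefixes this network actually originates (placeholders).
# An egress filter drops any outbound packet whose source address is
# not inside one of them, which makes source spoofing impossible.
ASSIGNED = [ipaddress.ip_network(p)
            for p in ("203.0.113.0/24", "198.51.100.0/24")]

def egress_permits(src_ip: str) -> bool:
    """Return True if a packet with this source address may leave."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ASSIGNED)
```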


I have wondered about this; does anyone who knows more about networks than I do know why source-address spoofing is still alive and well when it was a known issue at least as far back as the late '90s (when I first learned about smurf attacks)?

In particular, since most DDoS attacks originate from botnets, simple egress filtering at the ISP level should be sufficient.


Laziness on the part of the network operators.

Seriously, they're just too lazy to auto-generate firewall rules from their list of assigned addresses.


I would argue Hanlon's razor applies here.

I think vendors also have some responsibility. The defaults are bad and the vendors make their devices hard to manage on purpose (for lock-in reasons). I'm looking at Cisco in particular.


Nope: someone will come up and say "network neutrality" and believe he would be right in this case.


Strongly disagree. Strict uRPF is pretty much impossible to implement. The only plausible place to implement it is directly at the edge, where a single AS owns both sides of the link; think DSLAM or other consumer aggregation device. Any connection to a multihomed peer, or a peer with a different AS, can't have strict uRPF enabled.

Once you get away from the network edge, the only possible uRPF is loose mode, but that's a restricted implementation, as plenty of stubs out there use default routes. And asymmetric routes are so common as to rule out the use of feasible uRPF completely.

So, in summary, it has to be the edge networks that enforce this. And the prime intermediate offenders actually have a monetary incentive not to prevent this traffic.


What kind of connection do you need to the Internet to spoof the source address these days? I.e., who isn't egress filtering?


I was wondering the same thing. I live in a third world country and I've never seen a connection with no egress filtering.


Why doesn't BIND disable/block open recursion by default? It should be a configuration option that you have to turn on and should spit out warning messages in the logs.
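For what it's worth, restricting recursion in BIND 9 takes only a few lines of named.conf (the networks listed here are placeholders; adjust to your own):

```
options {
    // Answer recursive queries only for our own networks; everyone
    // else is refused instead of being handed an amplified answer.
    recursion yes;
    allow-recursion { 127.0.0.1; 192.168.0.0/24; };
};
```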


I encountered a DNS Amplification attack recently and was confused how the VM I was using could be involved given it wasn't running BIND.

It turned out that a VPN was implemented using `dnsmasq` which was responding to the DNS queries.

I ended up using firewall rules to drop the queries because it seems like in certain situations it's not enough to just configure `dnsmasq` to not respond to the requests: http://people.canonical.com/~ubuntu-security/cve/2012/CVE-20...

Just in case the information is useful to anyone else. :)

(I Am Not A Network Engineer.)


My little personal VPS (non-recursive!) DNS server has been asked to participate in one of these DDoS attacks before; the incoming requests for isc.org tipped me off. I guess they didn't notice that my server wasn't sending replies.




