
I appreciate the update, but your service has been unavailable for hours already. This is unacceptable for a service whose core value is to ensure that we know about any incidents.


Given that a large swath of SaaS services, infrastructure providers, and major sites across the internet are impacted, this seems harsh. Are you unhappy with PagerDuty's choice of DNS provider, or something else they have control over? I don't think anyone saw this particular problem coming.


A company that bills itself as a reliable, highly available disaster-handling tool ought to know better than to have a single point of failure anywhere in its infrastructure.

Specifically, they shouldn't have all of their DNS hosted with one company. That is a major design flaw for a disaster-handling tool.


I'm not using the service, but I'm curious what an acceptable threshold for this company is. Like, if half the DNS servers are attacked? If hostile actors sever fiber optic lines in the Pacific?

I ask because my secondary question, as a network noob, is: was anybody prepared / preparing for a DDoS on a DNS provider like this? Were people talking about this before? I live in Mountain View, so I've been thinking today about the steps I and my company could take in case something horrifying happens - I remember reading on reddit years ago about local internets, wifi nets, etc., and would love to start building some fail-safes with this in mind.

Two-pronged comment, sorry.


I'm not using the service either, but I noticed this comment [1]. It's not the first time that a DNS server has been DDoS-ed, so it has been discussed before (e.g. [2]). At minimum, I would expect a company that exists for scenarios like this to have more than one DNS server. Staying up when half of existing DNS servers are down is a new problem that no one has faced yet, but this is an old, solved one.

[1] https://news.ycombinator.com/item?id=12759653

[2] https://www.tune.com/blog/importance-dns-redundancy/


Re question #2, Amazon uses UltraDNS as a backup and seemed to be relatively unaffected by today's attack.

Re question #1, check out PagerDuty's reliability page here: https://www.pagerduty.com/features/always-on-reliability/

Namely "Uninterrupted Service at Scale - Our service is distributed across multiple data centers and hosting providers, so that if one goes down, we stay available."

It seems fair to expect them to have a backup DNS provider too, but I am not an expert.


> is: was anybody prepared / preparing for a DDoS on a DNS provider like this

Yes.

I have personally had my DNS infrastructure under attacks as large as or larger than today's, and survived.


This is exactly my point.


From the perspective of my service being down, my customers being pissed, and me not being notified... yes, maybe PD should be held to a higher standard of uptime. Seems core to their value prop.


> I don't think anyone saw this particular problem coming.

Knocking half of the web off the grid because their DNS provider is under attack? It happened recently to DNSimple.

https://blog.dnsimple.com/2014/12/incident-report-ddos/

The irony is that I noticed it when dotnetrocks.com went offline; at the time, dotnetrocks was sponsored by dnsimple...


Flush your DNS like the parent said.


Flushing DNS won't do shit


pagerduty.com moved to Route53, but the TTL on NS records can be very long. Flushing (or restarting) whatever caches DNS records in your infra will help it pick up the new nameservers quickly.


Not on your laptop. On your local DNS resolver.
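
(For reference, the exact flush command depends on what's actually doing the caching. A few common cases, assuming stock BIND, Unbound, or a macOS laptop; none of this is specific to PagerDuty's setup:)

        # check the remaining TTL on the cached NS records first (second column)
        $ dig +noall +answer NS pagerduty.com

        # BIND resolver
        $ rndc flushname pagerduty.com     # drop cached records for one name
        $ rndc flush                       # or drop the whole cache

        # Unbound resolver
        $ unbound-control flush_zone pagerduty.com

        # macOS laptop (stub cache only, which is why the resolver matters more)
        $ sudo dscacheutil -flushcache && sudo killall -HUP mDNSResponder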


> Running a redundant DNS provider is expensive as all hell.

While 'expensive' is a relative term, I disagree that it's cost-prohibitive for most firms, as I looked into this specifically (ironically, we considered using Dyn as our secondary). The challenge isn't coming up with the funds; it's if you happen to use 'intelligent DNS' features, which are proprietary (by nature) and thus don't translate 1:1 between providers.

In addition to bridging that divide yourself (analyzing the intelligent DNS features and using each provider's API to push changes to both simultaneously), you have to write and maintain automation/tooling that ensures your records are the same (or as close as possible) between the providers. If you don't do this right, you'll get different / less predictable results between the providers, making troubleshooting something of a headache.

Thus in that case the 'cost' is man-effort (and risk, given that APIs change and tooling can go wrong) in addition to the monthly fee.

If all you're doing is simple, standard DNS (no intelligent DNS features), it's not as hard, and it's just another monthly cost. Since you typically get charged by queries/month, if you run a popular service you're probably well able to afford the redundancy of a second provider.
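
To make the "simple, standard DNS" case concrete, here is a rough sketch of what the tooling boils down to: push the same record to both providers from one place. The Route53 leg uses the stock AWS CLI; the zone ID, hostname, and the second provider are made-up placeholders, not anything PagerDuty or Dyn actually use.

        $ cat change.json
        {
          "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
              "Name": "www.example.com",
              "Type": "A",
              "TTL": 300,
              "ResourceRecords": [{ "Value": "203.0.113.10" }]
            }
          }]
        }

        # leg 1: Route53 (zone ID is fake)
        $ aws route53 change-resource-record-sets \
              --hosted-zone-id Z0000000EXAMPLE \
              --change-batch file://change.json

        # leg 2: the equivalent call against provider #2's API, whatever that is,
        # plus a periodic job that diffs the two zones so they never drift apart.

The hard part described above is exactly what doesn't fit in this sketch: the proprietary geo/latency/failover features that have no 1:1 translation between providers.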


Ah so make everything redundant. Double my costs in man hours and in monetary cost. Brilliant!


> Ah so make everything redundant. Double my costs in man hours and in monetary cost. Brilliant!

The sarcasm is curious. It's a business decision. Either your revenue is high enough that the monetary loss from a several-hour intra-day outage is potentially worse than the cost of said redundancy, or you don't care enough to invest in that direction (it's expensive).

Making things redundant is a core piece of what infrastructure engineering is. I guess with the world of VPSes and cloud services, that aspect is being forgotten? And yes, redundancy / uptime costs money!


When your service literally says it exists to help provide uptime, redundancy makes sense.


Your automation should handle creating/modifying records in both providers. Also, if you're utilizing multiple providers you don't need to pay for 100% of your QPS (or whatever metric is used for billing) on every provider, only 50% for two or 33% for three. You can just pay for overages when you need to send a higher percentage of your traffic to a single provider.


A lot of providers do have 'fixed' portions of costs, so, it won't be quite 1/2 or 1/3rd.

It may, at scale, be like 100% (one provider), 55%+55% (two) and 40%+40%+40% (three). Still eminently affordable.


Really?

Route53 on AWS is $0.50/zone and $0.40/million queries. API integration is also very easy.

Using something like Route53 as a backup is significantly cheaper than suffering from the current Dyn outage.
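
At those list prices, a back-of-the-envelope figure (the query volume is made up, purely to show the scale):

        # 1 hosted zone serving ~100M queries/month from Route53 as the secondary:
        #   $0.50 + (100 x $0.40) = $40.50/month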


That is not helpful if you want vanity nameservers.



I assume your clients would prefer working nameservers over vanity ones. Especially if you are in a critical business like PagerDuty.


The latest github.com NS records have moved to awsdns:

        $ dig -tNS github.com @8.8.8.8

        ; <<>> DiG 9.8.3-P1 <<>> -tNS github.com @8.8.8.8
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15616
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;github.com.                    IN      NS

        ;; ANSWER SECTION:
        github.com.             899     IN      NS      ns-1283.awsdns-32.org.
        github.com.             899     IN      NS      ns-1707.awsdns-21.co.uk.
        github.com.             899     IN      NS      ns-421.awsdns-52.com.
        github.com.             899     IN      NS      ns-520.awsdns-01.net.
        github.com.             899     IN      NS      ns1.p16.dynect.net.
        github.com.             899     IN      NS      ns2.p16.dynect.net.
        github.com.             899     IN      NS      ns3.p16.dynect.net.
        github.com.             899     IN      NS      ns4.p16.dynect.net.

        ;; Query time: 32 msec
        ;; SERVER: 8.8.8.8#53(8.8.8.8)
        ;; WHEN: Fri Oct 21 13:01:48 2016
        ;; MSG SIZE  rcvd: 248
But my local copy is still on dynect

        $ dig -tNS twitter.com

        ; <<>> DiG 9.8.3-P1 <<>> -tNS twitter.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62729
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 4

        ;; QUESTION SECTION:
        ;twitter.com.                   IN      NS

        ;; ANSWER SECTION:
        twitter.com.            75575   IN      NS      ns3.p34.dynect.net.
        twitter.com.            75575   IN      NS      ns4.p34.dynect.net.
        twitter.com.            75575   IN      NS      ns1.p34.dynect.net.
        twitter.com.            75575   IN      NS      ns2.p34.dynect.net.

        ;; ADDITIONAL SECTION:
        ns3.p34.dynect.net.     54698   IN      A       208.78.71.34
        ns4.p34.dynect.net.     81779   IN      A       204.13.251.34
        ns1.p34.dynect.net.     8544    IN      A       208.78.70.34
        ns2.p34.dynect.net.     54775   IN      A       204.13.250.34

        ;; Query time: 0 msec
        ;; SERVER: <local>
        ;; WHEN: Fri Oct 21 13:02:14 2016
        ;; MSG SIZE  rcvd: 179


Your local copy is also for twitter, instead of github :)


I believe you don't understand DNS. It's probably the most resilient service out there (provided it's used correctly). There's nothing inherent in the protocol that would prevent them from using multiple DNS providers.

> Running a redundant DNS provider is expensive as all hell.

What makes you think that?
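
For what it's worth, a two-provider setup is nothing exotic at the protocol level; it's just a parent NS set that names servers run by both companies, like the github.com answer quoted further up. With purely illustrative names, the delegation looks like this:

        example.com.   86400  IN  NS  ns1.provider-a.net.
        example.com.   86400  IN  NS  ns2.provider-a.net.
        example.com.   86400  IN  NS  ns1.provider-b.com.
        example.com.   86400  IN  NS  ns2.provider-b.com.

Resolvers spread queries across the whole set and retry the others when some stop answering.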


<3 Remzi. Best Professor ever.

