Hacker News
iTunes Store Slowdowns With Google DNS (daringfireball.net)
35 points by shawndumas on Dec 30, 2010 | hide | past | favorite | 42 comments



I don't understand this at all.

Is there any reasonable theory for how the choice of DNS changes download speed this dramatically?

I can see it taking longer to resolve the ip address of the download server, but that should only be a one-time thing - and the total impact should be a couple seconds at most. Unless the download is constantly flipping between servers, I don't see how DNS latency is going to make a noticeable difference in the time it takes to download / stream a movie.


I think the deal is that the lookup returns an IP based on the IP of the DNS server that's making the request, so that it returns a server that's closer to your network. If your DNS request is going through Google, it's going to return a server that's close to Google's DNS server and not to your own ISP's DNS server (which would of course be much closer to you, network-wise).


The problem is that Apple's CDN is being extremely naive about DNS — 8.8.8.8 is not "Google's DNS server" at a particular geolocation, but many servers distributed around the world, with different routes advertised to different destinations depending on the client network (anycast).

Even without anycast the DNS server's geolocation is going to be way off for a lot of ISP defaults. Why look at a cached second-party side channel instead of the real routed packets from the client anyway? Anycast is the right mechanism for distributing this stuff — Apple's keying off of DNS is a bad idea, implemented poorly.
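A toy sketch of why that keying fails (all addresses and locations here are made up, not real geolocation data): a GeoDNS authority only sees the resolver's source address, so its answer is only as good as the assumption that resolver and client sit near each other. With an anycast resolver like 8.8.8.8, whatever single location the CDN's geo database records for that address will be wrong for most of its users:

```python
# Hypothetical illustration: a GeoDNS server only sees the resolver's
# address, not the client's, so its location guess hinges on the
# assumption that resolver and client are near each other.

RESOLVER_LOCATIONS = {          # made-up geolocation database
    "68.87.64.146": "us-east",  # a typical ISP resolver, near its users
    "8.8.8.8": "us-west",       # anycast: one address, many sites --
                                # the geo database records just one
}

POPS = {"us-east": "23.0.0.10", "us-west": "23.0.0.20"}

def geodns_answer(resolver_ip):
    """Pick a CDN POP based on where the *resolver* appears to be."""
    region = RESOLVER_LOCATIONS.get(resolver_ip, "us-east")
    return POPS[region]

# A client in New York using their ISP's resolver gets the nearby POP:
assert geodns_answer("68.87.64.146") == "23.0.0.10"
# The same client behind 8.8.8.8 may be steered to the wrong coast:
assert geodns_answer("8.8.8.8") == "23.0.0.20"
```
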


Apple's CDN is Akamai, which is definitely not "naive".

Google DNS is broken as a concept, and shouldn't have been released until they were either anycasted in all Google CDN POPs (wherever www.google.com is proxied from) or had talked the major CDNs (AKAM, LLNW, L3, etc.) into supporting their proposed DNS extension (http://tools.ietf.org/html/draft-vandergaast-edns-client-ip-...).

As it stands, using Google DNS is optimizing exactly the wrong thing -- it may be "faster" than your local ISP (which I've actually never seen), but you're trading a couple of ms on an easily scaled distributed system (which every single ISP in the world provides) for a huge hit on network performance because you get a non-optimal CDN pop.

I'd even go as far as saying that Google DNS is ruining the network experience for anyone outside the US (and from Gruber's article, apparently in the US too) -- the very problem people pay CDNs to solve in the first place.


Couldn't the CDN redirect to a closer node if DNS doesn't bring the user to an optimal pop?

I'm pretty sure we're talking about HTTP here, so a 302 redirect ought to work, and it seems like that would give the CDN far more control than trying to distribute traffic using a cached and not reliably localized DNS mechanism.

Edit: foobarbazetc has a good point, but it still feels like the CDN has reasonable ways to work around this and do a better job of selecting the correct POP than DNS does. Adding a layer of subdomains which force locality (us-ny.host.com) would keep URLs readable & virtual hosts intact.
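A minimal sketch of that subdomain idea (the hostnames and regions are hypothetical): the POP answers with a 302 to a region-pinned hostname rather than a bare IP, so the browser's Host header stays intact:

```python
# Hypothetical region-pinned hostnames; each would resolve directly
# to its POP, so the browser keeps a real Host header after the 302.
REGIONAL_HOSTS = {"us-ny": "us-ny.host.com", "us-ca": "us-ca.host.com"}

def redirect_if_suboptimal(client_region, serving_region, path):
    """Return a 302 Location value if the client landed on the wrong
    POP, or None if it is already being served locally."""
    if client_region == serving_region:
        return None
    return "http://%s%s" % (REGIONAL_HOSTS[client_region], path)

# Client already at the right POP: no redirect needed.
assert redirect_if_suboptimal("us-ny", "us-ny", "/movie.m4v") is None
# Client landed on the west-coast POP: bounce it to us-ny.host.com.
assert (redirect_if_suboptimal("us-ny", "us-ca", "/movie.m4v")
        == "http://us-ny.host.com/movie.m4v")
```
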


Unfortunately not, because your redirect would have to use an IP address instead of the hostname, so your browser address bar would look like:

http://1.2.3.4/

Which would send the wrong 'Host' header to the server, so it won't serve the right site and/or content through the CDN. :)


apple tv


Are you sure Apple is using Akamai for video? They have a substantial contract with Limelight and have long been rumored to be bringing traffic in-house that had previously been served by a CDN.

With a long term connection that can withstand a little extra in setup time and a custom built client (both things fit apple tv) it's unnecessary to rely on geo dns anyway.


With my midwestern U-verse vendor-supplied DNS servers, Google DNS trades annoying timeouts from unanswered queries for reliable DNS responses.

But the article makes a good point, and since I have BIND running on my firewall anyway, I'll just take the forwarders out.


This is how most global load balancing works, and even Google uses systems like this to decide the closest data center to serve content from (which is why they have submitted an internet draft that would add a few octets of the source IP to forwarded DNS requests as a solution to this problem). If you are using a chatty protocol, you can take an early guess at the closest server based on the DNS lookup and then redirect/refine the path once you are actually talking to the endpoint. But in some cases you would spend more time re-establishing the connection than you would save over servicing it from the "wrong" data center, so you just grit your teeth and try to either be smarter next time or add the DNS server to the "we have no idea where the endpoint is" list.
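The redirect-or-not decision described above boils down to simple arithmetic. Here is a rough model with invented, illustrative numbers (the 300 ms setup cost is an assumption, not a measurement):

```python
def should_redirect(transfer_ms_here, transfer_ms_closer, setup_ms=300):
    """Redirect to the closer POP only if it still wins after paying
    for a fresh connection setup (DNS, TCP handshake, slow start)."""
    return transfer_ms_closer + setup_ms < transfer_ms_here

# A two-hour movie from a slow, distant POP: worth the reconnect.
assert should_redirect(10_800_000, 7_200_000) is True
# A single small request: serving it from the "wrong" POP is faster.
assert should_redirect(80, 40) is False
```
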

In theory anycast delivery is the right solution, but there is a wide gulf between theory and practicality here. There is a reason that no one uses anycast for anything other than dns at the moment...


Yes. The best thing you can do is check what your DNS servers are, and where they are.

If your ISP is giving you, for example, one local server and one non-local server, do some sleuthing, figure out where their second local DNS server is, and hard-code your DNS to those servers (or remove the server that isn't local).

In the earlier days of the Internet, you were encouraged to provide geographically diverse DNS servers. But what many operators did not understand is that this rule of thumb applies to authoritative DNS hosting.

For DNS resolvers, you want those to be in the metro area where your internet connection terminates, and in the same AS. Another ISP even in the same metro area is not good enough; that other ISP will not have the same peering/transit arrangements as yours.

So frustrating.


Correct. Using a DNS service like Google's can reduce throughput from location-aware CDNs, since you end up reading from a CDN edge very far away from you.


Incorrect: the CDN is not location-aware if it's using DNS. It's making a bunch of unnecessary assumptions about a DNS server's network proximity to the clients it serves. The root servers themselves run on anycast, so that you reach the 'root server' closest to your network location.

A much better way of locating a resource on the network is by its IP address, since that is what IP was designed to do; it is not what DNS was designed to do (DNS resolves names to IP addresses).

DNS is designed to be forwarded and cached. A much better way of optimizing network routes is to advertise a better route, since this is what routing was designed to do. (To pick the fastest way to get from your IP to a destination block. (aka. anycast.))

It's simply amazing how well things work when you use things for what they were intended for.

Why reinvent the wheel at such a high level?


Yes, it is location-aware even if it is using DNS. The DNS lookup is the first chance you have to direct a user to the right data center. You examine the source of the DNS lookup, figure out the best BGP path from each data center back to that source, and respond with an answer that directs the user to what you think is the closest data center. You may be wrong if the user is using a useless DNS cache like Google DNS, or if they are using a big ISP that keeps its DNS in one location (10 or so years ago Comcast did all DNS out of its east-coast data center; people learned real fast not to try to do GeoDNS on queries coming from those servers...). OTOH, if the user is using their ISP's "local" DNS server, then you are going to get a result that is very, very close to optimal.

If you guessed incorrectly then the tcp handshake will start from the user to your server and you will know that they are not at the right server. If the nature of the eventual exchange allows it (e.g. it will be repeated queries or a long data exchange) then you just hit them with a redirect and let the user re-open the connection to a server that is closer to them, or you seed the response data so that follow-up queries are all exchanged with the closest data center. If the exchange is quick and not going to be repeated a lot then you will end up adding more perceived latency by trying to re-direct once the connection is established.

Putting this sort of smarts down at the application server is expensive. It is much easier to do this job at the dns lookup phase and you will get a good return on the amount of investment that this approach requires. You do need to be smart though, and make sure that you do not let geodns end up directing large batches of users that are getting service from google dns or opendns to overload a particular location.

Anycast is not the solution, it is just a different approach that brings its own set of headaches into the equation (e.g. pop switches during extended exchanges)


GeoDNS allows you to return different answers depending on the location of the source query. If you are in New York, query for foo.cdnbar.com, and the nameserver serving that CDN is geo-aware, you might get a northeast-based IP. Google then caches that. Five minutes later, I, in Arizona, come along and ask for foo.cdnbar.com, and it just gives me your northeast-based IP, meaning my packets travel an extra four hops and 3,000 miles compared to the IP I'd have pulled off my ISP's DNS.
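An illustrative model of that scenario (the hostnames, regions, and TTL are all made up): a shared resolver caches the GeoDNS answer it got on behalf of one user, then hands the same, possibly wrong-coast, answer to everyone else until the TTL expires:

```python
# Toy shared-resolver cache: once one user's GeoDNS answer is cached,
# every other user behind the same resolver gets it until it expires.

class SharedResolverCache:
    def __init__(self):
        self.cache = {}          # name -> (answer, expires_at)

    def resolve(self, name, now, authoritative):
        """Return a cached answer if still fresh, else ask upstream."""
        if name in self.cache:
            answer, expires = self.cache[name]
            if now < expires:
                return answer
        answer, ttl = authoritative(name)
        self.cache[name] = (answer, now + ttl)
        return answer

def geo_authoritative_for(region):
    # Hypothetical GeoDNS authority: answers based on the *resolver's*
    # apparent region, with a 300-second TTL.
    return lambda name: ("%s.pop.cdnbar.com" % region, 300)

shared = SharedResolverCache()
# The New York user's query populates the cache:
first = shared.resolve("foo.cdnbar.com", now=0,
                       authoritative=geo_authoritative_for("northeast"))
# Just under five minutes later, the Arizona user gets the same IP;
# the authoritative server is never consulted again:
second = shared.resolve("foo.cdnbar.com", now=299,
                        authoritative=geo_authoritative_for("southwest"))
assert first == second == "northeast.pop.cdnbar.com"
```
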


The point is that you don't know the IP address of the client, so you can't locate it.

Most CDNs couple DNS (for a large region) and Anycast (within the region) -- they're not idiots.


How does the CDN send data back to the client without knowing its IP address? I'd like to know.


You don't know the IP address of the real client when they are doing the DNS lookup, so you start by assuming that they are close (in network terms) to the server that is performing the lookup. When you are not dealing with a borked DNS service, this assumption will get you into the right general region of the world. Once you are in the right region you can use anycast, because within a relatively small region you are less likely to see big route shifts that will pop an anycast connection and hose your content delivery.


The point is that by the time you've heard from the client on the CDN (after resolving host to IP), they've already contacted a non-optimal CDN POP.

Providing IP addresses (locations) to host queries is exactly what DNS is built to do. :)


Two hours vs. instant streaming isn't a localization issue; you can easily stream 1-2 Mbps (or much more) from halfway around the world. ~100ms of latency is nothing with a fat, non-time-sensitive stream like recorded video.

It sounds like the specific POP the Google DNS server is being fed is overloaded with traffic. It should be fairly easy for Apple to resolve the problem on their end by simply not resolving to overloaded POPs (they shouldn't ever anyway).

Other video-CDN-backed services (like Netflix) don't suffer POP overloading on public DNS servers like GTE's or OpenDNS.
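A back-of-envelope check on the first claim (the sizes and rates are illustrative, and this crude model assumes TCP windows large enough that the link, not latency, is the bottleneck):

```python
def transfer_time_s(size_bytes, bandwidth_bps, rtt_s, round_trips=10):
    """Crude model: bandwidth-limited transfer time plus a handful of
    round trips for handshake and slow start."""
    return size_bytes * 8 / bandwidth_bps + round_trips * rtt_s

movie = 1.5e9                                   # ~1.5 GB HD rental
local = transfer_time_s(movie, 20e6, 0.010)     # 20 Mbps, 10 ms RTT
remote = transfer_time_s(movie, 20e6, 0.100)    # same pipe, 100 ms RTT
# The extra 90 ms per round trip adds under a second to a ~10-minute
# bulk pull, so distance alone can't explain a two-hour wait:
assert remote - local < 1.0
```
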


The overload may be due to the caching behavior of Google DNS; if it's not submitting new queries to Apple's DNS then the latter would have no opportunity to balance the traffic.

Conceivably caching behavior could also throw off geolocation, if Google's cache domains (i.e., the geographic areas that get placed into the same cache bucket) don't match the upstream CDN.


As far as I've seen, Google DNS respects TTLs completely. Even if they didn't, it would be very uncommon for any DNS provider to force a TTL over 60 seconds (one minute).

Even if you found a large resolver that was really screwing you by doing something like setting the TTL to 86400, you could just custom-serve it a large EDNS response including all of your POPs in a round-robin list.
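One way that mitigation could look (the resolver and POP addresses are made up): detect queries from known large shared resolvers and answer with the full POP list in rotating order, so even a long-lived cached answer spreads clients across locations:

```python
from collections import deque

ALL_POPS = ["23.0.0.10", "23.0.0.20", "23.0.0.30"]   # hypothetical POPs
LARGE_RESOLVERS = {"8.8.8.8", "208.67.222.222"}      # Google DNS, OpenDNS

_rotation = deque(ALL_POPS)

def answer_for(resolver_ip, nearest_pop):
    """Return the record list for a query: the nearest POP normally,
    or a rotated full list when the resolver's location tells us
    nothing about where its clients are."""
    if resolver_ip not in LARGE_RESOLVERS:
        return [nearest_pop]
    _rotation.rotate(-1)
    return list(_rotation)

# A normal ISP resolver gets the single geo-targeted answer:
assert answer_for("68.87.64.146", "23.0.0.10") == ["23.0.0.10"]
# Successive queries from a big shared resolver start at different POPs:
a = answer_for("8.8.8.8", "23.0.0.10")
b = answer_for("8.8.8.8", "23.0.0.10")
assert set(a) == set(ALL_POPS) and a[0] != b[0]
```
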


Really?

From a number of Australian providers, on links up to 30Mb/s, I have found this to be impossible.

I can easily get 2MB/s from an Australian site, but about 200KB/s is the best I can do once I go international. Multiple connections do get around this issue, though.


So basically the problem is using a public DNS server rather than your local ISP's, right? Gruber blaming this on "Google DNS" then isn't really a full explanation.


He didn't say that he was providing a full explanation, and he didn't blame it on Google DNS. He just shared what he did to fix the issue on his network.


True, but I think it's easy to start speculating about malicious causes for the problem in an article with "iTunes" and "Google" in it. It would be nice to tell the readers the why as well, and not just the what.


Ugly. Maybe that's the incentive to set up split-horizon DNS on a SheevaPlug.


http://code.google.com/speed/public-dns/faq.html#locations

"Here are the subnets from which Google Public DNS sends requests to authoritative nameservers, and their associated IATA airport codes"

So in theory, if you live in one of those cities, you shouldn't have this problem. Right?

Some of the reported numbers are too astounding to be covered by this explanation, however. I wonder if something else is at work.


I can definitely attest to having this type of experience. My first time trying to rent a movie was quite miserable. I first tried renting it from iTunes like I do TV shows which apparently doesn't work. I then started streaming it from the Apple TV and was greeted with these wait times a couple times during the movie. I can't wait to try experimenting with this to see if that was indeed the cause.


Anyone seen any problems with OpenDNS? iTunes Store has been running extremely slow for me over the last several days for no apparent reason...


OpenDNS suffers from the exact same issue.


Google's DNS is a pain in the ass. They either go OTT with the caching or they're really slow to pick up nameserver changes. I've encountered this myself (when all the other servers I tested were OK) and have advised people on IRC with similar issues.


I've always been a fan of:

  4.2.2.1
and I believe it has a family of DNS servers in a similar range. Has never let me down (whereas my ISP, Rogers, has done so countless times).


I use Level3's DNS servers also, but they're also distributed with anycast routing and would likely suffer from the same interaction.


It is not the fact that the DNS server is using anycast routing that is the problem; it is the fact that the DNS provider does not have a widely distributed set of POPs. When you send a DNS query to Google DNS or OpenDNS, you are going to a small set of boxes; the anycast address will route you to the closest one in that set, but it is unlikely that any of the POPs are actually close to you. Level3 has a widely distributed set of POPs, and if you are going to use a DNS service that is not provided by your ISP, this is probably the one to use.


Mine says this with the default DNS settings too. It starts a few minutes later anyway.


So how do you turn off google DNS? (If it is even on)


To use Google's DNS servers you have to put their IP addresses manually into your network settings (http://code.google.com/speed/public-dns/docs/using.html). If you don't set these, or set them back to the default, you'll use your ISP's local DNS servers instead.


It won't be on by default; you have to enable it (http://code.google.com/speed/public-dns/docs/using.html)

You can check by opening a command line (Run -> cmd) and running "tracert" against a random web site (or Terminal, then traceroute in OS X / mtr in Linux). If it doesn't come back with 8.8.8.8 or 8.8.4.4 as any of the servers (should be in the first few), you're not using Google's DNS servers.

Otherwise, remove the servers and revert to automatic or ISP settings (instructions are at the bottom of Google's doc).

EDIT: Sorry, confused DNS server with my ISP connection node. Check out the explanation by jonburs below.


The traceroute family doesn't show DNS info; those tools show the intermediate routers on the path to the resolved destination address. The early hops are the local routers in your ISP's topology, not the DNS server used to resolve the destination address.

A tool such as dig, on the other hand, will show you which server you're using, and lets you easily compare results from an alternate. For example, compare the output of 'dig www.apple.com' (configured DNS) with 'dig www.apple.com @8.8.8.8' (Google DNS).


Ah, whoops - thanks for the correction.

Quick question: dig (with no @server argument) turns up my router's IP. Is there a way to get around that?


That would happen if your router is providing DHCP services (common) and returning itself as the DNS server. A good reason for that configuration is so that you can resolve the names of other computers on your local network.

If that's the case there's a good chance your router's admin interface will let you view and configure the DNS server its resolver is using.



