The way AlphaGo plays is more general-purpose than other game-playing systems. It doesn't have any heuristics, rules, or rule books programmed into it; it learned to play Go the way you would. The same technique could be applied to other areas, the same way DeepMind originally got super-human Atari 2600 game performance purely by having the system watch the pixels.
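For what it's worth, that pixels-in, actions-out setup is just a convolutional network trained with Q-learning. A rough sketch in PyTorch (the layer sizes follow the published DQN architecture from the Nature paper; everything else here is illustrative):

    import torch
    import torch.nn as nn

    class DQN(nn.Module):
        """Maps a stack of 4 grayscale 84x84 frames to one Q-value per action."""
        def __init__(self, n_actions: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
                nn.Linear(512, n_actions),  # one Q-value per joystick action
            )

        def forward(self, frames: torch.Tensor) -> torch.Tensor:
            return self.net(frames / 255.0)  # scale pixel intensities to [0, 1]

    # Greedy play: the agent "watches pixels" and takes the
    # action with the highest predicted value.
    q = DQN(n_actions=18)
    action = q(torch.zeros(1, 4, 84, 84)).argmax(dim=1)

Nothing in there encodes the rules of any particular game; the game only enters through the reward signal during training.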
What I mean is that it doesn't have hand-crafted heuristics programmed into it the way Deep Blue's were distilled from chess experts. It seems to me that the general technique of policy and value networks combined with Monte Carlo tree search scales to other domains, and doesn't require the amount of hand-engineering it took to get Deep Blue to a winning level.
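To make that concrete, here's my own simplified sketch of the selection rule those two networks plug into (a PUCT-style formula like the one described in the paper; the data structures are illustrative, not AlphaGo's):

    import math
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        prior: float                    # P(s, a) from the policy network
        visits: int = 0                 # N(s, a)
        value_sum: float = 0.0          # accumulated value-network evaluations
        children: dict = field(default_factory=dict)  # move -> Node

        @property
        def q(self) -> float:           # Q(s, a): mean evaluated value
            return self.value_sum / self.visits if self.visits else 0.0

    def select_child(node: Node, c_puct: float = 1.0):
        """PUCT: exploit high Q, but explore moves the policy net likes."""
        total = sum(child.visits for child in node.children.values())
        def score(child: Node) -> float:
            u = c_puct * child.prior * math.sqrt(total) / (1 + child.visits)
            return child.q + u
        return max(node.children.items(), key=lambda kv: score(kv[1]))

Nothing in that rule knows anything about Go; the game only enters through the networks' training data and the move generator.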
Another way to look at it is how quickly they were able to make it much stronger in just a few months.
Unless I'm wrong, the AlphaGo paper doesn't mention transfer learning at all. Yes, the methods are new and exciting, but there are still hand-crafted features in the MCTS.
Unlike with the Atari games, AlphaGo bootstrapped from a large database of human games. It'll be interesting to see how much that matters when they train it again without that data.
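That bootstrap step is ordinary supervised learning: train the policy network with cross-entropy to predict the expert's move in each position, before any self-play. A hedged sketch (the tiny network and dummy data are stand-ins, not AlphaGo's):

    import torch
    import torch.nn as nn

    # Stand-ins: a toy policy net mapping a 19x19 board to 361 move logits,
    # and a fake "database" of (board, expert_move_index) pairs.
    policy_net = nn.Sequential(nn.Flatten(), nn.Linear(19 * 19, 361))
    expert_positions = [(torch.randn(1, 19, 19), torch.tensor([42]))]

    optimizer = torch.optim.SGD(policy_net.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for board, expert_move in expert_positions:
        logits = policy_net(board)           # predicted move distribution
        loss = loss_fn(logits, expert_move)  # imitate the human expert's move
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

Drop that step and the network has to discover plausible moves from self-play alone, which is exactly the experiment I'd like to see.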
I went looking for that xkcd to post an "obligatory xkcd needs updating" comment, but you beat me to it!
What surprises me about it is that Connect Four was only weakly solved in 1988, and not strongly solved until 1995; that seems relatively late for a 6x7 grid with just seven possible moves per turn.
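The brute-force approach is conceptually trivial; what made it slow in practice is the sheer number of reachable positions (on the order of 10^12). A naive solver is shaped like this toy negamax (board encoding and helpers are my own; it won't finish on the full 6x7 board without far stronger pruning and symmetry tricks, which is the point):

    from functools import lru_cache

    ROWS, COLS = 6, 7
    EMPTY = ((),) * COLS  # one tuple of discs (1 or -1) per column, bottom-up

    def legal_moves(board):
        return [c for c in range(COLS) if len(board[c]) < ROWS]

    def drop(board, col, player):
        cols = list(board)
        cols[col] = cols[col] + (player,)
        return tuple(cols)

    def cell(board, c, r):
        return board[c][r] if r < len(board[c]) else 0

    def winning(board, player):
        # Scan every cell for a line of four in each of the 4 directions.
        for c in range(COLS):
            for r in range(ROWS):
                for dc, dr in ((1, 0), (0, 1), (1, 1), (1, -1)):
                    if all(0 <= c + i * dc < COLS and 0 <= r + i * dr < ROWS
                           and cell(board, c + i * dc, r + i * dr) == player
                           for i in range(4)):
                        return True
        return False

    @lru_cache(maxsize=None)
    def negamax(board, player):
        """Game value (+1 win, 0 draw, -1 loss) for `player`, perfect play."""
        if winning(board, -player):      # the previous move just won
            return -1
        moves = legal_moves(board)
        if not moves:                    # full board, no winner: draw
            return 0
        return max(-negamax(drop(board, c, player), -player) for c in moves)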
We'll still always have Calvinball! https://xkcd.com/1002/