So the idea is pretty simple. Basically, they are asking: what is the mutual information between two variables? For those without a background in information theory: mutual information measures the difference between the amount of detail needed to describe the two variables separately vs. describing them jointly; formally, I(X;Y) = H(X) + H(Y) − H(X,Y). So if one variable can be predicted perfectly from the other, the mutual information is high: described separately, each variable carries its full set of details, while described together, you only need to keep one. For continuous data, mutual information has to be estimated through some discretization of the variables, and the estimate depends on the discretization you pick. Their point is basically that if you search over all discretizations, you can detect whether the two variables are related in any way at all. Of course, they have to resort to an approximate algorithm. The idea is simple, and really, it is not new; I think a lot of non-parametric techniques try to do the same. Their software can be downloaded, and I'd like to see its complexity and performance; I didn't find much information on big-O complexity or run times.
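To make the discretization idea concrete, here's a minimal sketch (my own illustration, not their algorithm): estimate I(X;Y) from a single fixed 2-D histogram. The bin count of 10 is an arbitrary assumption; as I understand it, their method searches over many grid resolutions and takes the best normalized score rather than committing to one binning.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Estimate I(X;Y) in bits from samples, via a fixed 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()            # joint distribution P(X, Y)
    px = pxy.sum(axis=1, keepdims=True)  # marginal P(X), shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)  # marginal P(Y), shape (1, bins)
    nonzero = pxy > 0                    # skip empty cells to avoid log(0)
    # I(X;Y) = sum over cells of P(x,y) * log2( P(x,y) / (P(x) * P(y)) )
    return float(np.sum(pxy[nonzero] * np.log2(pxy[nonzero] / (px @ py)[nonzero])))
```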
They claim a non-parametric dependence metric that can capture all sorts of relationships current metrics miss. This is kind of cool and may eventually prove to be a big deal. If you're trying to find out what your new, unstudied gene of interest does, it may help you find genes it is co-expressed with and give a hint at its function.
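As a toy illustration of the kind of dependence ordinary correlation misses (synthetic data, reusing the mutual_information sketch above): on a symmetric nonlinear relationship, Pearson correlation is near zero while a mutual-information-style score is clearly positive.

```python
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 10_000)
y = x**2 + rng.normal(0, 0.05, 10_000)   # parabola plus a little noise
print(np.corrcoef(x, y)[0, 1])           # ~0: linear correlation sees nothing
print(mutual_information(x, y))          # clearly > 0: the dependence is there
```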
In reality it's a big deal right now because they're marketing their paper as if it were a movie, with press releases, a professionally produced trailer, and well-designed websites. To me it comes across as trying way too hard, but whenever Broad rediscovers some well-trodden field, they like to play it up as if they were Newton releasing calculus upon the world.