Yes, there is one: KangarooTwelve, by the Keccak authors.
The problem is, there's just not a lot of reasons to use it. SHA-2 isn't broken; in fact, there are hash cryptographers who think SHA-2 may never be broken.
> there are hash cryptographers who think SHA-2 may never be broken.
It is a dangerous assertion, especially if it makes people build things that aren't future-proof enough to support more than one hash type.
Otherwise, KangarooTwelve is faster than SHA-2[0], which may be an incentive. That said, it arrived much later than BLAKE2, and as a result it is present in far fewer libraries.
That's not necessarily a dangerous assertion. SHA-2 is theoretically immune to foreseeable improvements in computational power. If you use something like SHA-512, a generic birthday attack requires on the order of 2^256 candidate hashes, a number within a few orders of magnitude of the estimated count of atoms in the observable universe (~10^80). You can't meaningfully improve on this by scaling up inordinate amounts of computing power.
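The back-of-the-envelope arithmetic behind that claim can be sketched out directly (the 10^80 atom count is the commonly cited rough estimate, not an exact figure):

```python
# Generic attack costs against an n-bit hash, textbook query counts only.
n = 512                                  # output size of SHA-512 in bits
birthday_trials = 2 ** (n // 2)          # collision search: ~2^(n/2) hashes
preimage_trials = 2 ** n                 # preimage search: ~2^n hashes

atoms_in_universe = 10 ** 80             # rough, commonly cited estimate

# The birthday bound alone (~1.16e77) is within a few orders of magnitude
# of the atom count; the preimage bound dwarfs it entirely.
print(birthday_trials > atoms_in_universe / 1000)   # True
print(preimage_trials > atoms_in_universe)          # True
```

No physically realizable amount of classical brute force closes a gap of that size, which is the point being made above.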
If you can't rely on raw computational power to force obsolescence, you need either a major paradigm shift (a la quantum computing) or a clever cryptanalytic attack that bypasses the actual difficulty.
Quantum computers do not currently pose a serious threat to SHA-2. Grover's algorithm offers a quadratic speedup for generic preimage search (and the known collision-finding variants do only somewhat better), but not an exponential one. That's impressive, but not enough on its own.
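Putting numbers on that quadratic speedup makes the point concrete. This is a sketch under the usual textbook query-count model, ignoring the enormous overheads of real quantum hardware:

```python
# Grover reduces a generic n-bit preimage search from ~2^n to ~2^(n/2) queries;
# the Brassard-Hoyer-Tapp collision variant reaches ~2^(n/3).
n = 256                                 # e.g. SHA-256
classical_preimage = 2 ** n             # ~2^256 classical queries
grover_preimage = 2 ** (n // 2)         # ~2^128 quantum queries
bht_collision = 2 ** (n // 3)           # ~2^85 quantum queries (rounded)

# Even the best of these is still astronomically far out of reach:
print(grover_preimage > 10 ** 38)       # True: ~3.4e38 queries
```

A 2^128 or even 2^85 workload remains far beyond anything feasible, which is why the quadratic speedup alone doesn't break SHA-2.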
That leaves the last category, which is a clever cryptanalytic attack. This is possible, especially with novel mathematics. But it's not reliable for predicting risk.
So really, the only "future-proofing" we can do for SHA-2 is against vague improvements in the underlying math. In any practical sense of the term, it's not a dangerous assertion to claim SHA-2 may never be broken. As Thomas said, research will certainly continue on cryptographic hash functions regardless; moreover, "improvements in math might happen" is not reliable enough to calibrate forced obsolescence or regular updates to hash standards in the future.
Given the foregoing, some researchers choose to act as though SHA-2 will never be broken, because there's no meaningful way to assert that it probably will or to coordinate when or how it will, and because there are so many more productive areas of research to focus on where the current algorithms absolutely will be broken in the future. You can't productively work on future-proofing something any further once the space of coherent threats has been reduced to, "this might happen in the future somehow" without real specifics.
> "improvements in math might happen" is not reliable enough to calibrate forced obsolescence or regular updates
Definitely: it doesn't make much sense to move away from a hash that doesn't seem in danger of being broken, just because it is old — a clever new attack can be found for a newer hash, too (or for both).
Besides, usually there's a lapse of time between finding a major weakness and a practical collision.
On the other hand, when picking a hash… well, having a faster and less error-prone hash is nice.
Additionally, the primitives used in the SHA-2 core are well enough understood and accepted that if SHA-2 were ever broken, pretty much all hashes would end up broken too. That's how the rest of the logic would go.
Yeah, that's Aumasson. He compared it to finding that P=NP; sure it could happen, but it probably won't, and by the time we get that far "future-proofing" suddenly ceases to be a coherent concept for a lot more than just SHA-2.
Would it also be fair to say that novel math could just as easily be discovered that would break SHA-3, and so, switching to SHA-3 would not reduce risk? (Or, worse: might it increase it, since the SHA-3 constructions have been subject to less cryptographic research and so might be more likely to fall to some kind of new math?)
It's arguable that SHA-3 is better understood than SHA-2, at least in the open research community. The rationales behind every component of SHA-3 are well documented, and the design is very conservative, both in structure and number of rounds. More so than SHA-2.
It is in theory possible that a new technique could break SHA-3 and not SHA-2, but the opposite seems far more plausible.
I'm not so sure about that. In the history of cryptography, almost everything was broken in the end (though some of the more peculiar stuff took a couple hundred years). But modern cryptography is very different from classical cryptography: breaking it is far more difficult, and the intervals at which algorithms fall apart seem to be increasing.
For example, DES held out for about 20-25 years, with some cracks showing earlier than that. AES is now ~20 years old, and not a single practical crack has been discovered in it. SHA-2 is of a similar vintage, and no significant issues have been found. The previous generation of hashes resisted only ten years or so before the first real issues showed up.
What differences does K12 have compared to SHA-3? Just fewer rounds, or are there more changes? Something similar to the differences between BLAKE and BLAKE2, maybe?