The point of NIST standardizing on SHA-3 is to gradually replace SHA-2 due to the rise of computing power and the likelihood it will become as weak as SHA-1 is now in the near future. Unfortunately, like American credit cards vs. European chip & pin, it's going to take forever to adopt.
No. The "rise in computing power" doesn't jeopardize SHA2. There are important design differences between SHA1 and SHA2 (here's where in my younger days I'd pretend that I could rattle off the implications of nonlinear message expansion off the top of my head). SHA2 is secure; don't take my word for it through, you can find one of the Blake2 designers saying SHA2 is unlikely ever to be broken, or Marc Stevens on a Twitter thread talking briefly about why his attacks on SHA1 don't apply at all to SHA2.
I agree, SHA-2 is secure as far as we know. But since it's based on the Merkle-Damgård construction, it permits length-extension attacks - i.e. given H(x) and the length of x, one can derive H(x || pad || y) without knowing x.
So we need to be careful not to use it in settings where that would be problematic. Or we can use it with workarounds, like double hashing (SHA-256d) or truncating its output.
SHA-3 is sponge-based, so part of its internal state (the capacity) is never output, which prevents length-extension attacks. So I think SHA-3 is a better default, though it's fine to use SHA-2 if you know what you're doing.
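To make that concrete, here's a minimal sketch of those workarounds using Python's hashlib (the helper names are mine, purely illustrative):

    import hashlib

    def sha256d(data: bytes) -> bytes:
        # SHA-256d: hash twice, so the published digest is not the raw
        # Merkle-Damgard chaining state of the original message.
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def truncated_sha256(data: bytes, out_len: int = 16) -> bytes:
        # Truncation: publish only part of the digest, so an attacker never
        # sees the full internal state needed to continue hashing from it.
        return hashlib.sha256(data).digest()[:out_len]

    # SHA-3 needs no workaround: the sponge's capacity is never output,
    # so the digest doesn't reveal the full internal state.
    print(hashlib.sha3_256(b"message").hexdigest())
    print(sha256d(b"message").hex())
    print(truncated_sha256(b"message").hex())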
Truncated SHA-512 hashes, such as SHA-512/256, defeat length-extension attacks by omitting part of the hash state from the output. They're also significantly faster than classic SHA-256 for large inputs on 64-bit hardware.
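For example, Python exposes it through OpenSSL when the underlying build supports it (a hedged sketch; availability depends on your OpenSSL):

    import hashlib

    # SHA-512/256: SHA-512 with a distinct IV, truncated to 256 bits of output.
    # Dropping half of the final state is what defeats length extension.
    h = hashlib.new("sha512_256", b"message")
    print(h.hexdigest())  # 64 hex chars = 32 bytes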
I'm always leery when a primitive's speed is touted as a selling point, because I'm thinking about the memory and CPU/GPU/ASIC costs available to an adversary X years from now. Sure, one can hash or encrypt using N repeated rounds to up the cost, but still: speed isn't everything.
Outside of password hashing applications, which message digests like the SHA and Blake families aren't ideal for anyway, hash functions don't really derive their strength from being slow. If a hash is seriously broken -- in the kind of way that MD4 is, for example -- attacks against it may require so few operations that the speed of the hash doesn't matter at all. But, so long as the hash function remains unbroken, any attack against it requires so many operations as to make it completely infeasible, regardless of how fast it is.
Like the sibling says, speed is not really related to hash security. Either the hash is good at any speed or it's broken. Speed is important if you're hashing often. Systems that hash often, in my experience, tend to have a stronger security posture than systems that use bearer tokens. So… fast strong hashing is good for applications because it enables good crypto to be applied more thoroughly. And that's why we use Blake3: a fast, strong hash/KDF/XOF that we can use anywhere (from powerful servers to WASM) without much of a second thought.
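For anyone curious, here's roughly what that looks like with the blake3 Python bindings (a sketch; the context string and inputs are made up):

    from blake3 import blake3  # pip install blake3
    import secrets

    # Plain hashing.
    digest = blake3(b"some message").hexdigest()

    # Keyed hashing (MAC-like); the key must be exactly 32 bytes.
    key = secrets.token_bytes(32)
    mac = blake3(b"some message", key=key).digest()

    # Key derivation mode; the context string should be hardcoded and unique per use.
    derived = blake3(b"input key material",
                     derive_key_context="myapp 2024-01-01 session keys").digest()

    # XOF: request as many output bytes as you need.
    stream = blake3(b"seed").digest(length=64)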
SMH. You're conflating being "broken" by a mathematical attack with having enough computing power to brute-force it (GPUs or quantum). The rise in computing power always erodes the baseline brute-force cost of every algorithm, which is why standards shift over time; otherwise 3DES would still be recommended for new applications instead of AES.