>If you didn’t, here’s how the exploit works. The comparison operator, “!==”, is vulnerable to a timing attack. The string comparison compares the two keys one byte at a time, stopping as soon as it finds an offset where the two strings do not match. As a result, checkApiKey will take longer if the two keys start with the same bytes. It’s sort of like if the error message itself said “wrong key, but you got the first 2 bytes right!”.
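For illustration, here's a minimal sketch of the vulnerable pattern and the usual fix in Node.js (the function names and the key value are stand-ins, not code from the original post):

```ts
import { timingSafeEqual } from "node:crypto";

// Placeholder secret, for illustration only.
const SECRET_KEY = "0123456789abcdef0123456789abcdef";

// Vulnerable: !== short-circuits, bailing out at the first mismatching
// character, so the runtime leaks how long the matching prefix is.
function checkApiKeyVulnerable(key: string): boolean {
  if (key !== SECRET_KEY) return false;
  return true;
}

// Fix: compare buffers in constant time, so the runtime does not depend on
// where (or whether) the inputs differ. timingSafeEqual requires equal
// lengths; for fixed-length API keys the length itself isn't secret.
function checkApiKeySafe(key: string): boolean {
  const a = Buffer.from(key);
  const b = Buffer.from(SECRET_KEY);
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}
```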
I understand that it is technically a vulnerability, but you will not be able to measure these nanosecond fluctuations over the network. Hell, you wouldn't even be able to measure them with direct physical access to the machine, even if you sampled every attempt a trillion times.
Many a system has been hacked thanks to thinking like that. "I can't think of any statistical method that would recover this information, therefore none exists."
The fun thing about physical systems is that, in spite of all the complexity in the stack, a lot of the noise sources in play are close to Gaussian, and Gaussian noise is very easy to clean up with some processing.
All I know is that comparing 32 bytes doesn't take even a microsecond; it takes nanoseconds. And yes, I can't think of a way to meaningfully measure that delta over a real network, where every packet passes through dozens of switches and routers, each of which introduces enough entropy to completely drown out any attempt to measure with that kind of precision.
But more importantly, what kind of system allows itself to be pounded with thousands or millions of requests to its authentication endpoint?
- a repeated event, when aggregated (summed or averaged), looks more and more like a normal distribution the more samples you get (that's the central limit theorem)
- the standard deviation of that average shrinks with the square root of the number of samples
- going from microsecond to nanosecond resolution means 1000x more precision, and 1000² = 1,000,000
So for an event with a noise amplitude in the microseconds, I think you may be able to measure a difference of nanoseconds with ~1 million samples.
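A quick simulation of that back-of-the-envelope math (a sketch with made-up numbers: ~1 µs of Gaussian jitter hiding a 5 ns signal):

```ts
// Minimal sketch: recovering a nanosecond-scale timing difference from
// microsecond-scale Gaussian noise by averaging. All figures are illustrative.

function gaussian(mean: number, std: number): number {
  // Box-Muller transform
  const u = 1 - Math.random();
  const v = Math.random();
  return mean + std * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

const NOISE_NS = 1_000;  // ~1 µs of jitter (standard deviation)
const SIGNAL_NS = 5;     // the timing leak we want to detect
const SAMPLES = 1_000_000;

let wrongPrefix = 0;
let rightPrefix = 0;
for (let i = 0; i < SAMPLES; i++) {
  wrongPrefix += gaussian(100_000, NOISE_NS);             // baseline round trip
  rightPrefix += gaussian(100_000 + SIGNAL_NS, NOISE_NS); // one extra byte compared
}

// The std dev of each mean is NOISE_NS / √SAMPLES = 1 ns (≈1.4 ns for the
// difference of the two means), so a 5 ns gap stands out after a million samples.
console.log((rightPrefix - wrongPrefix) / SAMPLES); // ≈ 5
```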
---
So yeah, unlikely in normal scenarios. But if you could, for example, rent a server 10 km from your target, you could cut the latency (and its jitter) by 10-100x, which cuts the number of samples you need by its square: a 100x-10,000x reduction.
And what if it's possible to rent a server in the same datacenter as your target? ... well, K.O.
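To put rough numbers on that square law (the jitter figures below are illustrative guesses, not measurements):

```ts
// Samples needed to resolve a timing signal scales with (noise / signal)².
function samplesNeeded(noiseStdNs: number, signalNs: number): number {
  return (noiseStdNs / signalNs) ** 2;
}

console.log(samplesNeeded(1_000_000, 100)); // WAN, ~1 ms jitter:      1e8 samples
console.log(samplesNeeded(10_000, 100));    // same DC, ~10 µs jitter: 1e4 samples
```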