If we assume cryptographically relevant quantum computers will one day exist, you don't just need to worry about certs being cracked before they expire, but also about the ECDH-established session keys being cracked. Those keys are ephemeral, but an attacker who stores the ciphertexts long-term can crack them at any point in the future (https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later).
Perfect forward secrecy means that harvest now, decrypt later does not apply to signature algorithms when ephemeral keys are used, and TLSv1.3 mandates ephemeral key exchange. If the ephemeral keys are cracked, that is the fault of the key agreement algorithm, not the signature algorithm.
> If we assume cryptographically-relevant quantum computers will one day exist
"One day" could be 10,000 years in the future, so what meaning is there to such an assumption? To argue there is a need for action, you have to assume much more than that such machines will be constructed eventually. The industry is switching to hybrid key agreement algorithms out of an abundance of caution, on the chance that such a machine will be built not just someday but within our lifetimes. It is not certain that will actually happen, but if it does, having adopted hybrid key exchange algorithms years in advance is enough. There is no need to switch signature algorithms away from ECC until the creation of such a machine is imminent, so it is fine to proceed with EdDSA adoption in PKI.
The pivot is occurring on both key agreement and signatures. Hybrid schemes currently only exist for key agreement. Perfect forward secrecy means that as long as the key agreement schemes are secure against Shor's algorithm, we can afford a much more leisurely rollout of PQ signing algorithms in PKI. Whether people will opt for "hybrid" signatures remains to be seen.
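To make the hybrid idea concrete, here is a minimal sketch (not any particular TLS implementation) of combining a classical ECDH secret with a post-quantum KEM secret through a single KDF, so an attacker has to break both to recover the session key. The `pq_kem_encapsulate` helper is a hypothetical placeholder, not a real library API:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def pq_kem_encapsulate(peer_pq_public: bytes) -> tuple[bytes, bytes]:
    """Hypothetical ML-KEM encapsulation: returns (ciphertext, shared_secret)."""
    raise NotImplementedError("plug in a real ML-KEM implementation here")

def hybrid_shared_secret(peer_x25519_public: X25519PublicKey, peer_pq_public: bytes) -> bytes:
    eph = X25519PrivateKey.generate()                 # ephemeral classical key
    ecdh_ss = eph.exchange(peer_x25519_public)        # classical shared secret
    _ct, kem_ss = pq_kem_encapsulate(peer_pq_public)  # post-quantum shared secret
    # Both secrets feed one KDF: breaking the session key requires breaking ECDH *and* the KEM.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"hybrid key agreement demo").derive(ecdh_ss + kem_ss)
```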
They say coefficient b is determined via BLAKE3, but unless I'm missing it, they don't actually say how?
They also claim that the prime modulus was chosen "carefully", and enumerate its favourable properties, but do not elaborate on how it was chosen. Presumably they had some code that looped until they found a prime that gave them all the right properties, but it would be good if they shared that process.
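For what it's worth, that search is only a few lines of code; something like the sketch below, where the specific predicates (512 bits, p ≡ 3 mod 4, (p-1)/2 also prime) are my guesses at the "favourable properties", not the paper's actual criteria:

```python
from sympy import isprime

def find_prime(bits: int = 512) -> int:
    # Walk down from the top of the range over odd candidates until one
    # satisfies every desired property.
    p = (1 << bits) - 1
    while p > (1 << (bits - 1)):
        if p % 4 == 3 and isprime(p) and isprime((p - 1) // 2):
            return p
        p -= 2
    raise ValueError("no candidate found in range")
```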
And even with the constant `b=BLAKE("ECCFrog512CK2 forever")` there is an open question. It is not as problematic as with the NIST & SEC curves, but it is the concern covered in "How to manipulate curve standards: a white paper for the black hat" [1].
I'm surprised they didn't include the constant in the paper, along with at least a short justification for this approach, despite stating "This ensures reproducibility and verifiable integrity" in section 3.2. Several other curves instead take the approach of 'smallest valid value that meets all constraints'.
Really they should answer the question of "Why can't `b` be zero... or 1" if they're going for efficiency, given they're already using GLV endomorphisms.
Likewise with the generator, I see no code or mention in the paper about how they selected it.
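For reference, pinning `b` down from a seed string is straightforward; a sketch of what a reproducible derivation could look like, assuming the `blake3` PyPI package (whose `digest()` accepts a `length` for extended output) and a short Weierstrass curve y² = x³ + ax + b over F_p. Here `a` and `p` are stand-ins, and this says nothing about how the generator was selected:

```python
from blake3 import blake3

def derive_b(seed: str, p: int, a: int) -> int:
    # Hash the seed, reduce into F_p, and reject singular curves.
    b = int.from_bytes(blake3(seed.encode()).digest(length=64), "big") % p
    if (4 * a**3 + 27 * b**2) % p == 0:
        raise ValueError("singular curve; adjust the seed")
    return b

# e.g. b = derive_b("ECCFrog512CK2 forever", p, a)
```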
Agreed. I have a draft article (far from finished) with my own attempt to explain ECC, and the opening diagram is the classic "pretty pictures" with a big red cross through them. They have surprisingly little relevance in the overall picture of ECC.
The "enter text to sign" demo is pure nonsense. If I enter "AAAAA", it's "encrypted" to "4242424242". What's that supposed to mean? Was this vibe-coded?
Edit: The article has since been edited to disclaim "values are not encrypted realistically" - sure, use small numbers for demonstration purposes etc., but what is being demonstrated here? You've added scare-quotes to "encrypted" but what is the actual intended meaning?
I guess you already know what a “Functional API” is and feel patronized. Also possibly you dislike the “cute analogy” factor.
I think this could be solved with an “assume the reader knows …” part of the prompt.
Definitely looks like ELI5 writing there, but many technical documents assume too much knowledge (especially implicit knowledge of the context), so even though I'm not a fan of this section either, I'm not so quick to dismiss it as having no value.
Which is ironic, considering that I strongly disagree with one of the primary walled-garden justifications, used particularly in the case of Apple, which amounts to "the end user is too stupid to decide on his own". Unfortunately, even though I disagree with it as a guiding principle, sometimes that statement proves true.
It's not about stupidity, but practicality. People can't give informed consent to 100 ToS documents from different companies and keep up with changes to them. That's why there are laws.
No doubt in a dense wall of text that the user must accept to use the application, or worse, is deemed to have accepted by using the application at all.