
Peer review and "informal standards". Good examples of things that were, until long after their widespread adoption, informal standards include Curve25519, Salsa20 and ChaCha20, and Poly1305. A great example of an informal standard that remains an informal standard despite near-universal adoption is WireGuard. More things like WireGuard. Fewer things like X.509.
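For concreteness, here is a minimal sketch, assuming the pyca/cryptography package (which ships these informally-standardized primitives), of an X25519 / Curve25519 key agreement feeding a ChaCha20-Poly1305 AEAD. Purely illustrative; it is not WireGuard's actual handshake or key schedule.

    # Illustrative sketch only; not WireGuard's Noise handshake or key schedule.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes
    import os

    # Each side generates a Curve25519 key pair and exchanges public keys.
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()
    shared = alice.exchange(bob.public_key())  # bob.exchange(alice.public_key()) gives the same bytes

    # Derive a 256-bit AEAD key from the raw shared secret.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"demo").derive(shared)

    # Authenticated encryption with ChaCha20-Poly1305.
    aead = ChaCha20Poly1305(key)
    nonce = os.urandom(12)  # 96-bit nonce; must never repeat under the same key
    ct = aead.encrypt(nonce, b"hello", b"associated data")
    assert aead.decrypt(nonce, ct, b"associated data") == b"hello"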



Both formal and informal peer review are why I like FOIA, and why I want standards / competition discussions to be open in general. I actually dislike closed peer review, or at least closed review without some sort of time-gated release.

Likely scenarios that closed review hides:

- Peer review happened... but it was lame. Surprisingly common, and often the typical case.

- If some discussion of a likely attack did come up... what was it? Were the rebuttal and final discussion satisfactory?

It's interesting if some government team found additional things... but I'm less worried about that; they're effectively just an 'extra' review committee. Though, as djb fears, it's a no-no if they ask to weaken something... hence another reason it's good for the history of the algorithm to be public.

Edit: Now that storage and video are cheap, I can easily imagine a shift to requiring all emails + meetings to be fully published.

Edit: I can't reply for some reason, but having been an academic reviewer, including for security, and having won awards for best paper of the year/decade, I can say academic peer review may not be doing what most people think. E.g., it is often more about novelty, trends, and increments, judged from a one-hour skim, or about catching only the super obvious things that outsiders and fresh researchers mess up on. Very different from, say, a yearlong $1M dedicated pentest, which I doubt happened. It's easy to tell which kind of review happened when reading a report... hence me liking a call for openness here.


You get that the most important "peer review" in the PQC contest took the form of published academic research, right? NIST doesn't even have the technical capability to do the work we're talking about. My understanding is that they refereed; they weren't the peer reviewers.

Replying to your edit: I've been an academic peer reviewer too. For all of its weaknesses, that kind of peer review is the premise of the PQC contest --- indeed, it's the premise of pretty much all of modern cryptography.


As much as I like the design of WireGuard, the original paper made stronger claims of security than were achieved with respect to key exchange models. Peer review and informal standards failed to catch this. From my perspective, the true benefit of a formal standardisation process such as this one is that it dangles a publishable target in front of researchers, so that these claims get formally verified or disproven out in the open.


WireGuard's design is superior to that of its competitors, and one of its distinctive features is that it lacks formal standardization. It's not as if we don't have decades of experience with attempts to standardize our way into strong cryptography; see IPSEC for a particularly notorious example of how badly standards processes handle this stuff.


For sure, if a standardization process had been called to design a VPN protocol, I'd agree that the resulting design would almost certainly be worse than WireGuard. I think the competitive nature of the PQC process, and the fact that it solicited completed submissions rather than building a design from the ground up, helps in this regard. I don't think that engages with the point I was making, however: the original submission of WireGuard made claims that were incorrect, and these would arguably have been caught sooner if it had been part of a formal standardization process, since researchers would have been incentivized to analyse it sooner.


Having come from a community (PL) that often ends up doing cleanup duty for unfounded claims, and having seen it take decade-plus, $100M+ efforts to do so... I didn't realize that about WireGuard. That's pretty strange to read in 2022.


To be clear, WireGuard is a good VPN protocol, and definitely a secure design. I wouldn't recommend another over it. It's just that the initial security claims in the NDSS paper were incompatible with its design.


I'm sure it's a pretty good one, but it's quite hard to trust more than that, on both the design and implementation side, if you have ever tried to verify (vs. just test) such a system. Think of the years of pain for something much more trivial, like Paxos plus an implementation of it.

In this case, it looks like the community does value backing up its claims, and the protocol is verified: https://www.wireguard.com/formal-verification/ . Pretty awesome! The implementation itself seems to be written unsafely, so TBD there.





