This is a fun phrase that, as a non-crypto person, I find reasonable, but I always wonder if there's something of a confirmation bias at work.
> The reason why that statement exists is because there are _countless_ examples of teams coming up with their own, new cryptographic mechanisms that either break...
But aren't there _countless_ examples of this in crypto made by cryptographers?
I'm not playing devil's advocate, I don't really have a stake here. :)
Not a crypto expert either, but from what I've gleaned listening to, e.g., Peter Gutmann describe evaluating new crypto mechanisms, you'll see that:
1. Actual cryptographers usually design against an explicit set of constraints that make their crypto work: those might be about compute power, memory bandwidth, or what have you, and they're what make the algorithm difficult to brute force.
2. The algorithm will typically be peer-reviewed to try to weed out mistakes, whether fundamental mathematical ones or mistakes in the assumptions.
3. The implementation then needs to be high quality.
There is certainly no shortage of examples where systems that pass 1 & 2 are undermined by failures in 3. And all algorithms are susceptible to the context around 1 changing (advances in compute power or whatever).
When you go it alone, you're assuming that you won't make any mistakes in any of these. That seems a pretty tall order.
What really sets cryptography apart is that for a non-expert, there is no way to tell whether it's correct or not. Most bad software has bugs that can be found by users. A bad ML model will do poorly in validation.
But a bad crypto implementation will work. For all intents and purposes, it will appear completely fine. Users will get their messages. The bitstream will appear completely random. At least, until somebody with expertise in breaking crypto systems digs into it.
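To make that concrete, here's a minimal sketch (Python, using the `cryptography` package; the key, the fixed nonce, and the messages are made up purely for illustration) of a broken scheme that "works": every message round-trips correctly and each ciphertext looks like random bytes, yet reusing the CTR nonce means any two ciphertexts leak the XOR of their plaintexts.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(32)
FIXED_NONCE = b"\x00" * 16          # the bug: the same nonce is reused for every message

def encrypt(plaintext: bytes) -> bytes:
    enc = Cipher(algorithms.AES(KEY), modes.CTR(FIXED_NONCE)).encryptor()
    return enc.update(plaintext) + enc.finalize()

def decrypt(ciphertext: bytes) -> bytes:
    dec = Cipher(algorithms.AES(KEY), modes.CTR(FIXED_NONCE)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

m1, m2 = b"attack at dawn!!", b"retreat at dusk!"
c1, c2 = encrypt(m1), encrypt(m2)

assert decrypt(c1) == m1 and decrypt(c2) == m2   # "it works": messages round-trip fine
# ...and each ciphertext on its own looks random. But with a reused CTR
# keystream, c1 XOR c2 == m1 XOR m2, so knowing one message reveals the other.
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
assert xor(c1, c2) == xor(m1, m2)
```

Nothing a user ever sees would flag this; it only falls over once someone who knows what to look for XORs two captured ciphertexts together.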
>But a bad crypto implementation will work. For all intents and purposes, it will appear completely fine. Users will get their messages. The bitstream will appear completely random. At least, until somebody with expertise in breaking crypto systems digs into it.
And that applies even if they're using AES but with the wrong mode of operation (see the ECB sketch below). It applies even if they're using best practices like AES-GCM but the CPU doesn't support AES-NI, so a cache-timing attack allows key exfiltration.
As SwiftOnSecurity wrote:
"Cryptography is nightmare magic math that cares what kind of pen you use. Should math care what kind of pen you use to implement it? No, but Fuck You, this is Cryptography."
The attacks are incredibly subtle even for the best systems, and Telegram is so far from even adequate that it's difficult to overstate, so I'll try with my best restraint:
TELEGRAM FUCKING LEAKS EVERY GROUP MESSAGE TO THE SERVER WHICH IS THE EXACT EQUIVALENT OF A FUCKING BACK DOOR.
Group messages and normal messages on Telegram are explicitly not end-to-end encrypted, in order to allow multi-device operation. That is explicit. I don't see how that has anything to do with the security of MTProto.
Also notable is that it can't be fixed or patched in the way you'd expect for any other software -- once a scheme is found broken, everything that was ever encrypted with it is broken unless it's re-encrypted. There's no migration path to the fixed version.
Assuming that money is what they’re after. Are you reading Durov’s channel on Telegram? Also, having invented the Russian Facebook and having been forced to sell it to the Kremlin, I don’t think he needs any more money. He’s playing a totally different game. I don’t know which one, though.