IMO the difference is mostly theoretical anyway. Despite the fancy HSMs and end-to-end encryption, if Signal or WhatsApp wanted to read your messages they trivially could - just push an app update to you that sends all of your messages to them.
It's more risky in terms of getting caught, but probably not hugely so if you do it in a way that has plausible deniability.
I think you pretty much have to trust the app supplier. Which in this case, I do not.
Pushing a malicious app update creates an inspectable artifact. Researchers can discover exactly when it happened, and therefore who was vulnerable and which messages were exposed.
This is a much much much better situation than handing someone your keys and letting them MITM you at any time with no hope of knowing.
You're not thinking sneakily enough. Obviously you wouldn't push a visibly malicious app update to everyone. Instead you make it so that you can push malicious code to specific users. Then all researchers can say is "it's a bit suspicious that this webview has access to Java APIs" and you just need to say "it's needed so we can get your app version" or whatever. They'll never be able to actually see you reading the messages, unless you happen to be very stupid and target a security researcher.
I agree it's definitely better to do proper e2e encryption like WhatsApp / Signal do, but I don't think we should pretend they are magically fully secure against this attack.
IANAL but the difference seems to be gigantic. If the secret key is stored on a company server, then that company can be subpoenaed for it. If it's not on the company's server, the client's endpoint has to be compromised (e.g. by a police raid or by electronic surveillance). The former is much easier than the latter. I don't think government authorities can force companies to actively eavesdrop on their clients by pushing malware through their update mechanism to a client's device, at least not officially in non-authoritarian countries with due process.
If. Just recently there was news about Meta brute-forcing localhost on all of its users to hack their devices (https://localmess.github.io/). And now someone seriously suggests we should believe that Meta does not collect private keys on its servers?
Moreover, I think that now corporations don't even have an option not to steal keys from users: you either have their keys or you go to jail. And if you have the keys, but users think you don't and trust you with all their secrets - that's even better.
And if you think government authorities can't do something, look at what happened to Alexey Pertsev. A criminal uses your tool to ensure their privacy? You're going to jail. So in today's world, it's better to have keys on your server, even if you're not going to use them. Because at some point you might be asked for them, and refusing (or not having them) will mean jail time.
> I don't think government authorities can force companies to actively eavesdrop on their clients by pushing malware through their update mechanism to a client's device, at least not officially in non-authoritarian countries with due process.
Yeah unfortunately that list no longer includes countries like the UK and Australia.
With public/private key pairs, encrypting anything with the private key means that you use the public key to decrypt that same thing. This means anyone (as the key is public!) can decrypt the thing. So if you get the public key, and if the thing decrypts successfully, then you know that the corresponding private key was used to encrypt the thing. This is considered proof that the private key holder encrypted the thing / sent the message, and that's why everyone calls it "signing" instead of "encryption" - you send the cleartext thing along with the encrypted thing.
For private messages, you encrypt with someone's public key and have them decrypt with their private key. You'd sign it with your key, and that person would verify the signature with your key. That's 4 keys you need to worry about.
This doesn't even begin to consider key rotation, perfect forward secrecy, multiple recipients, etc.
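To make the four keys concrete, here's a minimal sketch using the Python "cryptography" package. The names and message are made up for illustration, and real messengers layer session keys and ratchets on top rather than using raw RSA like this:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Sender's key pair (sign/verify) and recipient's key pair (encrypt/decrypt).
    sender_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    sender_pub = sender_priv.public_key()
    recipient_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    recipient_pub = recipient_priv.public_key()

    message = b"meet at noon"
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Key 1: sign with the sender's private key.
    signature = sender_priv.sign(message, pss, hashes.SHA256())
    # Key 2: encrypt with the recipient's public key.
    ciphertext = recipient_pub.encrypt(message, oaep)
    # Key 3: the recipient decrypts with their private key.
    plaintext = recipient_priv.decrypt(ciphertext, oaep)
    # Key 4: the recipient verifies with the sender's public key
    # (raises InvalidSignature if the message was tampered with).
    sender_pub.verify(signature, plaintext, pss, hashes.SHA256())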
That's signing, it's not encryption. It's a private-key operation, so at best you could consider it decryption. In asymmetric cryptography, the private keys are used for decryption & signing, and the public keys are used for encryption & verification.
Usually you want separate key pairs for signing/verification vs encryption/decryption, but some systems can safely share a key pair for these two sorts of operation.
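For instance, a rough sketch of that separate-key-pair pattern (illustrative only - Ed25519 for signing, X25519 for key agreement, then a symmetric cipher for the actual message; real protocols like Signal's add ratcheting on top):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Each party holds TWO key pairs: one for signing, one for key agreement.
    alice_sign = Ed25519PrivateKey.generate()
    alice_dh = X25519PrivateKey.generate()
    bob_dh = X25519PrivateKey.generate()

    message = b"hello bob"
    signature = alice_sign.sign(message)  # signing key pair

    # Agreement key pair: Diffie-Hellman shared secret, stretched into an AES key.
    shared = alice_dh.exchange(bob_dh.public_key())
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"demo message key").derive(shared)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, message, None)

    # Bob derives the same key from his side, decrypts, and verifies.
    bob_shared = bob_dh.exchange(alice_dh.public_key())
    bob_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"demo message key").derive(bob_shared)
    plaintext = AESGCM(bob_key).decrypt(nonce, ciphertext, None)
    alice_sign.public_key().verify(signature, plaintext)  # raises if forged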
And the top 1000 replies for any stupid thing he says are nothing but positive reinforcement from boosted blue check bot accounts that bury any actual criticism.
If you hide the boosted replies, you'll find Twitter's thread loading stops after about 200 replies while it tries to fill the space you made, so you're lucky to get more than a few non-boosted replies under one of his "popular" Tweets - they're bidding for a very limited space under there.
Gell-Mann amnesia effect applies - if you speak with false confidence about a topic where I know better, I should apply the same judgement to your confidence on topics where I know no more than you, because you're probably bullshitting just as much.
Crichton claimed this effect was unique to popular newspapers - but I think today we can say it applies elsewhere: people will see that Musk hasn't the faintest idea how software engineering works and then go straight back to believing him on aerospace.
I felt it was pretty probable that in his head, he was thinking "bitcoin style cryptography", which makes his claims much more technically accurate (even if the implementation ultimately ends up effectively handing over control of private keys to the service provider), and that he likely just had a brain fart in translating the idea from his head into written language, a common phenomenon that affects pretty much everyone at some point in time.
Maybe I'm just being too generous to people suffering from the human condition. We should probably start holding everyone to the standard of absolute perfection all the time - never misspeaking, never making any typos - and start reflexively discarding any and all ideas that have any kind of minor mistake in them; that sounds like a much more rational and reasonable approach.
It is possible that he had a brain fart. But Musk makes a lot of statements, and he's repeatedly demonstrated that he is the kind of person who wants to project an image of himself as smarter than he is. There's a trend of people discovering, when he starts speaking in areas they're knowledgeable about, that he's just a complete idiot making n00b mistakes in their field - and if he's that bad in their area, why not others? Recent entries in this category include programming and playing Diablo, but the converting moment for me was hearing him talk about tunneling and transit technology. And once you hear him as a fast-talking technobullshitter, it's hard to treat any future misstatement of his as anything but fast-talking technobullshittery.
But for this claim in particular, there's another element that makes me think the claim was intended to be truthy instead of true. "Bitcoin style encryption" feels like it's meant to be a riff on "military-grade encryption"--a signifier that it's "really good" encryption while being extremely vague about what it is, but using "Bitcoin" instead of "military" to make him seem cooler to the people for whom cryptocurrency references give extra credibility.
Even if we assume it's a brain fart for "cryptosystem" or something similar... people with a basic understanding of cryptography recognize that Bitcoin isn't using encryption, so a reference to Bitcoin's cryptosystem isn't directly relevant in the first place. To the extent Bitcoin itself uses a cryptosystem, it's the same cryptosystem everybody else is using, so the reference degenerates into "hey, we're using the same algorithms everybody does," which isn't something to tout.
So, no, I don't think it's a brainfart. I think it's a smarter-than-thou bullshitter trying to bullshit his audience, although I'm willing to accept that he may "just" be an idiot repeating what somebody told him without properly understanding what was told to him, leading him to give a confused response.
Except Musk and the chadsphere he surrounds himself with spends an inordinate amount of time promoting him as some kind of techno-genius. A first year CS student couldn't confuse those terms, and he makes embarrassing gaffes like that quite often. They go ignored because people make unlimited amounts of excuses for him for some reason, other than the very obvious conclusion - that he doesn't know wtf he is talking about.
Even your corrected, generous version is wildly inaccurate.
There hasn't ever been a single time in your entire life where you were thinking of one thing, but the words coming out of your mouth communicated something different, by mistake, even though you genuinely did understand the difference?
How many times do you make that excuse of "he just flubbed a word" before thinking maybe he doesn't really know what he's talking about? Once? Twice? A dozen times?
> where you were thinking of one thing, but the words coming out of your mouth communicated something different
He was promoting a new feature with fanfare, in writing; it wasn't just a casual utterance. Besides, his words sound wrong even if the intended message was "Bitcoin style cryptography" - it's still a preposterous non-description, because Bitcoin isn't, and has never been, a measure of cryptographic strength. The formal validity of that statement doesn't make it any less uninformed.
If you don't want extra skepticism, don't be the richest person on earth, don't insert yourself into government, don't insist you are uber-intelligent, don't be a notable person, don't be an asshole in public, etc.
It literally doesn't matter whether it's a mistake, he does this too often to give him the benefit of the doubt anymore. Elon Musk reliably claims to be an expert in everything ever, despite all available evidence to the contrary. Elon has never demonstrated technical competence in anything.
I do, but that post is arguing a point (Elon Musk doesn't know the difference between encryption and cryptography) that's unsubstantiated, while a plausible alternative explanation (he does know the difference, and mis-spoke, because he, like all other human beings, sometimes makes errors in translating thoughts into words) was proposed in my parent post.
Your post completely sailed right past that alternative plausible explanation, and immediately went back to asserting the unsubstantiated claim without addressing the alternative hypothesis, in what appears to be a bout of motivated reasoning against a figure that is politically disliked.
You don't get to completely ignore the point I'm raising, assert your own, and then play the "why aren't you staying on topic" card when your post was the one that brought up an unsubstantiated and unrelated response to the initial claim - that's hypocritical at best, if not outright trolling.
the point is less about the infallibility of human cognition and more about Spider-Man's Law (with great power comes great responsibility).
if you're one of the most powerful people on the planet and you make public statements and decisions that will impact many people, you should be held to a higher standard for what you put out.
Other than the fact that I wasn't making this claim at all: of course it's substantiated, given he literally mixed up terms that people purporting to have that kind of expertise typically don't. That's literal substantiation, but whatever - I wasn't even making that claim.
If it seems like I’m skipping past your point it’s because you’re not really making one, or at least not the clever one you seem to think you are.
To answer your q in good faith - yes, I have mixed up words, even in professional settings. I will then typically issue a correction, because the degree of such a mixup can cast a shadow on my credibility and can damage my career and thus earning potential. You seem to be taking the position that elon’s credibility cannot be questioned, at least on the topic of technical expertise. I find that a little bit (actually a lot) silly and an infantile way of looking at this.
Likewise, if I was routinely claiming to be this like, super technical genius founder engineer elite space dude that could never admit fault and was an expert at basically all topics, I would expect to be placed under the same skeptical lens (if not much more, given I’m just a low level grunt) I would face in a scenario like this in my day to day work.
> The obvious remedy for this problem is just to store secret keys with the service provider itself. This is convenient, but completely misses the whole point of end-to-end encryption, which is that service providers should not have access to your secrets! Storing decryption keys — in an accessible form — on the provider’s servers is absolutely a no-go.
OK, so Twitter themselves are our adversary.
> One way out of this conundrum is for the user to encrypt their secret key, then upload the encrypted value to the service provider. [...] Most human-selected passwords and PINs make for terrible cryptographic keys. [...] you need some mechanism to limit the number of guessing attempts that the user makes, so an attacker can’t simply run an online attack to work through the PIN-space.
As I understand it, this stuff is all implemented in-browser, using JavaScript that's 100% under Twitter's control.
Wouldn't it be a simple matter for them to save your message's plaintext (or indeed your password) by just saving a copy while it's in plaintext form?
I think the relevant scenario here isn't one where Twitter itself is malicious, but one where Twitter gets a law enforcement order requiring it to hand over decryption keys. If you don't have decryption keys, you can't hand them over.
The worry people generally have about these sorts of systems isn't that they distrust the substrate NOW (as you say, all bets are off at that point unless you're a cryptography expert and programmer yourself), but rather that they want the data they produce now to be protected from being read in the future.
Basically, if Twitter wanted to read my data today, they could do so. But if they decided in 2 years that they want to read today's data, it would be too late, because it was encrypted. If they held the encryption key, that would be trivial; if they'd have to have saved the plaintext, well, it's too late now.
I tried X's encrypted chat last week just to see what it was like. The interface is clean and it works smoothly, but once I understood how it actually handles encryption, I stopped using it. If they hold the private keys and there’s no forward secrecy, it makes me cautious. It feels more like something that looks secure rather than true end-to-end encryption.
Yeah, it’s live on the mobile app. You have to be verified and both users need to follow each other. Once I enabled it, there was a little lock icon on the chat, but the UI didn’t really change much beyond that.
So is this protocol (Juicebox) at least safe when used with a high-entropy PIN/passphrase then?
What's nice about Meta's similar implementation for chat backup using OPAQUE is that, given a high-entropy passphrase, the reliance on the server/HSM as a trusted actor goes away.
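For a rough sense of why the high-entropy case is so different, here's some back-of-the-envelope Python; the guess rate is a made-up assumption for an offline attacker with GPUs, and it ignores any extra slowdown from a memory-hard KDF:

    import math

    GUESSES_PER_SECOND = 1e10  # assumed offline attacker; pick your own figure

    def years_to_search(keyspace):
        return keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)

    pin_space = 10 ** 6           # 6-digit PIN
    passphrase_space = 7776 ** 6  # 6-word Diceware passphrase

    print(f"6-digit PIN:       {math.log2(pin_space):5.1f} bits, "
          f"{years_to_search(pin_space):.1e} years to exhaust offline")
    print(f"6-word passphrase: {math.log2(passphrase_space):5.1f} bits, "
          f"{years_to_search(passphrase_space):.1e} years to exhaust offline")

The PIN space is gone in a fraction of a second once the server can no longer enforce a guess limit; the passphrase space isn't, which is why the reliance on trusted hardware matters so much less there.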
Musk considers Twitter the 'town square', and he wants to bring all of those features for payments and whatnot that apps in China already have to his 'town square'.
I think he has been off the ball with Twitter/X, using it as his own private megaphone rather than building out the features; however, encrypted messaging is going to be the cornerstone of future developments such as a means of payment, or a WhatsApp rival, and so on. I find it hard to believe, but maybe there is a cadre of engineers at Twitter with a vision of what it should be, building out a serious platform.
And I'm out. I don't want every thread about X to degenerate into another debate about Musk but at this point they're kind of inseparable. Do I trust that if Musk decided some day that he doesn't like me for whatever reason that he wouldn't grab that private key and publish my DMs? I can't.
Same, but not even for Musk.. just their employees in general.
I definitely don't trust all of them, in particular the yappy one who was publicly inflammatory on Christmas Day. In a regular corporation there would have been public consequences. If it was overlooked here, what else and who else is being overlooked? It's the culture.
Same thought, probably the same person I'm thinking of.
While I can't speak for Twitter's org as a whole (sorry, anyone who works there), the fact that Elon encourages that racist troll to publicly post, as a known employee of the company, indicates to me the team is probably super immature and not to be trusted.
It's a tab in the drawer called "Chat", I guess to distinguish itself from the legacy "Messages"
But then you click the Chat button and it takes you to a screen called "Messages" that looks visually identical to the old Messages screen. Furthermore, the Chat button icon is a message bubble, so as not to be confused with the envelope icon for Messages. But the compose button in the Chat screen is the envelope with a +, and clicking it brings you to a screen titled "New message". The compose field in the chats themselves is also labelled "Message".
It's X, the company that took the brand "Twitter," valued at multiple billion dollars, and changed it to X because its owner thought X was a pretty cool name, and did it without telling any of the UI designers in advance.
I'm still stunned by that.. definitely a Joker lighting the money on fire scene.
So many products, printed packaging, websites, business cards, games, etc. had the Twitter logo and link on them. It was even integrated into iOS at one point.
This wasn't the first time either.. he tried it with PayPal as well, but they said no, we aren't doing X as the name for PayPal.
The boss can do what they want like all bosses, but this wasn't a decision based on fiduciary value for the shareholders.
And also SpaceX, of course. And the Tesla model X (as part of a series so he could have the models S, E, and X). And his son X. Well, X Æ A-12, but X for short.
I hope that if I ever go crazy, there's someone around me who loves me and that I trust to tell me when something I think is cool is actually incredibly stupid.
The author writes that the encrypted private key (DEK) is susceptible to decryption if the server is compromised, because then there are no more limits on incorrect attempts, allowing an attacker to walk the whole key space (of the KEK). But don't strong password requirements and a proper derivation function provide a large enough key space, making decryption by guessing (through any of various methods) infeasible?
The author only mentions two alternatives for this problem, hardware security modules to prevent the compromise of the DEK from the server in the first place, or "sharding" between independent hosts to minimize the odds of that. Both certainly harden the server, but what about hardening the KEK?
The author mentions PINs for the KEK because they are easy to memorize, which certainly makes for a poor key space, but why not use the same password the user already memorized to log in, which should have strong requirements? Proton Mail, which also stores users' (encrypted) private keys,[1] initially had two passwords, one for login and one for decryption, and now allows users to have a single one, used both for login and decryption but never transmitted to the server, by using SRP for authentication.[2] Yet another approach is taken by Mozilla for Firefox Sync, which does two key derivations on your password on your machine, creating one key for authentication and a separate one for decryption.[3] I wrote more about both approaches; check my submission history if you're interested.
Anyway a nice read, I just missed more discussion about hardening the key in the first place, and how far that gets you in case of server compromise.
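For reference, the single-password, two-derived-keys idea looks roughly like this (a sketch only; the KDF parameters and labels are made up, not Mozilla's or Proton's actual ones):

    import hashlib
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def derive_keys(password: bytes, salt: bytes):
        # Stretch the password first so offline guessing is expensive.
        stretched = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
        # Split into two independent keys: one goes to the server for login,
        # the other never leaves the client and protects the data.
        auth_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                        info=b"auth key").derive(stretched)
        enc_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"encryption key").derive(stretched)
        return auth_key, enc_key

    auth_key, enc_key = derive_keys(b"correct horse battery staple", b"per-user-salt")
    # The server only ever sees auth_key (or a verifier built from it), so
    # authenticating you teaches it nothing that decrypts your messages.

With a genuinely strong password that's a real improvement; the catch is what happens with the PINs and weak passwords most users actually pick.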
The nice thing about these protocols is that you can add a memory-hard hash function like scrypt or Argon2 into the middle of the protocol. This is computed on the client’s side, and will “harden” the key derivation by a bunch: essentially slowing down brute-force attacks by as much as the hash function costs. As best I can tell if you combine this with a very strong password, the problems I mention in the post won’t bother you (but no guarantees.) Unfortunately this still probably won’t save most users who choose short PINs and weak passwords, because offline password guessing is embarrassingly parallel and there’s only so much scrypt you can throw at any real system before everything becomes unusable.
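As a deliberately simplified picture of what that client-side hardening buys you - this is just the "wrap the DEK with a password-derived KEK" idea from the post with scrypt in front, not Juicebox's actual construction:

    import os, hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def wrap_dek(pin: bytes, dek: bytes, salt: bytes) -> bytes:
        # Memory-hard stretch (scrypt here; Argon2id is the other usual choice).
        kek = hashlib.scrypt(pin, salt=salt, n=2**14, r=8, p=1, dklen=32)
        nonce = os.urandom(12)
        return nonce + AESGCM(kek).encrypt(nonce, dek, None)

    def unwrap_dek(pin: bytes, blob: bytes, salt: bytes) -> bytes:
        kek = hashlib.scrypt(pin, salt=salt, n=2**14, r=8, p=1, dklen=32)
        return AESGCM(kek).decrypt(blob[:12], blob[12:], None)  # wrong PIN -> InvalidTag

    dek = AESGCM.generate_key(bit_length=256)  # the key that actually protects messages
    salt = os.urandom(16)
    wrapped = wrap_dek(b"483529", dek, salt)   # this blob is what gets uploaded
    assert unwrap_dek(b"483529", wrapped, salt) == dek

An attacker who steals the wrapped blob still gets to try every 6-digit PIN offline, just more slowly; only a long passphrase changes that picture.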
I find this all moot. Not useless (because it's another layer of defence in depth), but still recoverable.
A real end-to-end encryption is such that the transport intermediary only passes opaque blobs, and won't be able to decrypt them to save the CEO's life. Everything else is sparkling obfuscation.
But even with that level of unbreakable content encryption, the metadata, which has to be accessible to the intermediary in cleartext, could blow enough covers.
> If you can't look at the code doing the encrypting, it's simply encoded.
Not sure it being open source is required to be considered "encryption". Besides, even if you can look at the code you don't know if that's what's running on the server.
I'm telling you that I applied state-of-the-art, uncrackable encryption to that. Why should you believe me? What evidence do you have that I didn't just take your text, throw it in some Caesar Cipher generator, and copy-paste it into this text box?
Well, none. It just happens to look like I did that, and if that were data you wanted to keep secret but that a hacker had obtained without permission, you can bet that they would say "looks like a Caesar Cipher, I'll try a combination of decryption parameters until it makes sense".
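(For a Caesar cipher, "trying decryption parameters until it makes sense" really is just a short loop over the 25 possible shifts - the ciphertext below is a made-up example, not the text above:

    def shift_letters(text, shift):
        out = []
        for c in text:
            if c.isalpha():
                base = ord('A') if c.isupper() else ord('a')
                out.append(chr((ord(c) - base + shift) % 26 + base))
            else:
                out.append(c)
        return "".join(out)

    ciphertext = "Uryyb jbeyq"
    for shift in range(1, 26):
        print(shift, shift_letters(ciphertext, shift))

One of the 25 candidates reads as plain English, and that's the whole "attack".)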
If I can look at the code, decide I trust the implementations of the primitives being used, how they're being used, how identity is established, and how initial key exchange works, I don't need to know what's running on the server. That's sort of the point of end to end encryption.
You mean using the algorithm to verify that the observable input leads to the observable output? That would make sense and would allow you to form an opinion about the "primitives" like you said.
If you don't trust whoever is handling your server-side secret computation, being able to view the code supposedly running there doesn't help either, as you won't have proof that that's what they're actually running.
That's why we have proper end-to-end encryption in the first place: So that you don't have to trust the server.
When Trustico decided to light their whole business on fire, they sent people's private keys to the root CA they were reselling, triggering all the relevant certificates to immediately get revoked.
But if you were like "LOL, use keys you picked instead of my own private keys that I tell no-one? Do I look like a moron?" then no matter how stupid, greedy or incompetent Trustico were, they didn't have your keys and couldn't give them away on purpose or accidentally.
Twitter's new encrypted DMs aren't better than the old ones - https://news.ycombinator.com/item?id=44191591 - June 2025 (204 comments)