That's said often about any tool that has been around long enough. People without experience come around and think they can replace an old tool with a better one, but usually that's just ignorance of the complexity of the task, of how to use the tool properly, or both.
GPG is just not a very good tool. I think 'tptacek explained it quite well in the article I linked.
With that said, Magic Wormhole is also a very good tool for transferring files. It will encrypt in transit. So for many files, using a separate encryption tool is not necessary. (So far I haven’t tried it for large files.)
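For anyone who hasn't tried it, the basic flow is just two commands (the code phrase below is the example one from the docs; the tool generates a fresh one each time, and the exact console output may differ):
$ wormhole send ./big-file.tar.gz
# prints a short human-readable code phrase, e.g. 7-crossover-clockwork
# then, on the receiving machine:
$ wormhole receive 7-crossover-clockwork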
What no one seems to mention is that Magic Wormhole depends on the maintainer's server to negotiate transmission between clients. I don't like that requirement. Better to GPG-encrypt and use BitTorrent for a direct transfer. At least then we're using public trackers instead of some private server.
How would public BitTorrent tracker servers be better than a single rendezvous-server run by the tool's author?
With trackers, you're revealing the fact-of-transmission, transmission-size, & endpoints to any number of unknown remote parties. Potentially, attackers not even on the privileged network-path from origin to destination could tee off a copy of your encrypted data for offline analysis.
With Magic Wormhole's rendezvous-server, only one server, run by the same person whose code you're trusting (& can audit), briefly relays encrypted control-messages. (It might even be limited in its ability to deduce the size of the transfer – I'm not sure.) And if that's still too much, you can run your own rendezvous server.
It seems to me the amount of information leaked in the BT Tracker approach is strictly (& perhaps massively) more, to more entities, than that leaked in using the Wormhole author's server.
It's not a requirement that you use the default Wormhole rendezvous or transit server. It's all open source; you could run your own private servers if you wanted to (and it's easy).
Of course, the nice thing about Magic Wormhole is that its security does not depend on the server components being trustworthy. Use the default servers, use your own, or use a different third-party server; it doesn't matter, your data is still secure.
Edit: If you are worried about privacy, magic-wormhole supports transit over Tor.
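For example, something along these lines should work for pointing at your own servers or routing over Tor (flag names and placement are from memory, so check `wormhole --help`; the hostnames are placeholders):
$ wormhole --relay-url ws://rendezvous.example:4000/v1 --transit-helper tcp:rendezvous.example:4001 send file.bin
$ wormhole --tor send file.bin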
I want to comment on that article you keep referring to, but I don't want to clutter up the top of the thread so I'll do it here.
The author really wants to dislike PGP, but the reason everyone trusts PGP is that it's been around forever. Yeah, there've been deficiencies, just like there've been deficiencies in OpenSSL, but that doesn't make it a bad tool. I could go on, but this xkcd sums it up: https://xkcd.com/2347/
>Absurd Complexity / Swiss Army Knife Design
Git is complex, yet effectively every project ever uses it. The reason is that you can avoid the edge cases and just focus on the main functionality, but that one time you need to do something ridiculously hacky, there's a tool for it instead of having to roll your own solution.
>Backwards Compatibility
Would you rather your software not have backwards compat? GPG has sane defaults, and everyone you talk to using modern versions will be secure by default. Not sure what the author is going on about with weak default password encryption:
$ gpg -vv --symmetric test.txt
...
gpg: using cipher AES256
gpg: writing to 'test.txt.gpg'
>Broken Authentication
I've never heard of any of this. Sign and encrypt, and by default you get AES256 encryption and a SHA512 digest:
$ gpg --sign --encrypt test.txt
...
$ gpg -vv -o /dev/null --decrypt test.txt.gpg
...
gpg: encrypted with 3072-bit RSA key, ID 74588E74DDD483BC, created 2020-09-02
"test"
gpg: AES256 encrypted data
gpg: binary signature, digest algorithm SHA512, key algorithm rsa3072
>Incoherent Identity
Have an identity. Have other people verify it. Trust based on that. It's the same way the PKI works, you know, that thing that runs the entire internet. Except you don't need to trust CAs anymore.
>Leaks Metadata
He's not wrong about this one: normally you can see whose key ID a message is encrypted for. If you're trying to be sneaky, just use symmetric encryption, I guess; it feels like a different use case.
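For what it's worth, gpg can show and hide that key ID itself; roughly (option names from memory, and the recipient address is a placeholder; see the man page for --hidden-recipient / --throw-keyids):
$ gpg --list-packets message.txt.gpg    # the pubkey-enc packet includes the recipient's key ID
$ gpg -e --hidden-recipient alice@example.com message.txt   # writes a wildcard (all-zero) key ID instead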
>No Forward Secrecy
Definitely a different use case. There's no case where I want to decrypt a packet from the middle of a TLS conversation a few years later. But an encrypted attachment in an old email?
>Clumsy Keys
How are GPG keys harder to handle than SSH keys? Both are just blocks of base64 (gpg --export-secret-keys -a)... one is 80 lines while the other is 50, but does it really matter?
>Negotiation
Same argument as Backwards Compatibility, I think.
>Janky Code
The page he linked has 27 CVEs. Over the last 15 years. For comparison, OpenSSL has over 200.
This speaks to the article's complaint that GPG is usually the wrong tool for the job. For example, if you just need to transfer a file securely (and have a fast, reliable internet connection on both ends and don't need to worry about active tracking of metadata), you can use Magic Wormhole (or a similar PAKE system) to do it. Imagine two scenarios, one with GPG and one with a PAKE, and in both cases an adversary captures a ciphertext. With GPG, if they can get your private key 6 months later, you're screwed. With PAKE the keys used to exchange the data are ephemeral, and so this isn't even a possibility.
> > Broken Authentication
> I've never heard of any of this.
I believe this is referring to authenticated encryption (AEAD), which is definitely valuable and GPG does not provide. AGE does.
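For comparison, the age workflow is roughly this (commands as I remember them from the age README; the age1... recipient string stands in for a real public key):
$ age-keygen -o key.txt                       # prints the corresponding age1... public key
$ age -r age1... -o doc.txt.age doc.txt       # encrypt to that recipient (AEAD underneath)
$ age -d -i key.txt doc.txt.age > doc.txt     # decrypt with the identity file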
Most of the other stuff you mention also falls under "wrong tool for the job". If you want a better argument, I'd talk about GPG having a web of trust system built in. On the other hand I think it's an open question whether this has ever brought real value to anyone. We have enough other secure messaging systems that it's no longer necessary for a single program to get there on its own. Usually you can count on some other mechanism to confirm your contact's identity.
>With PAKE the keys used to exchange the data are ephemeral, and so this isn't even a possibility.
AFAIK this isn't really true. If the adversary captures the initial key exchange plus all the data (ie the full transaction), then later discovers your PSK, they'll be able to decrypt. The only case where this helps you is if they capture some packets out of the middle without the initial handshake.
>authenticated encryption
It doesn't matter if your ciphertext is authenticated if it's both signed and encrypted, which is what I was getting at. In a normal TLS-like encrypted conversation, yes AEAD is very useful. But it's not applicable here.
> AFAIK this isn't really true. If the adversary captures the initial key exchange plus all the data (ie the full transaction), then later discovers your PSK, they'll be able to decrypt. The only case where this helps you is if they capture some packets out of the middle without the initial handshake.
Someone can correct me if I'm wrong, but I believe the idea behind a PAKE is that the password only authenticates the key exchange and doesn't contribute to it. So if you record all transmitted data, you still need to break the key exchange, which should have used a bunch of random bytes from both parties that are thrown away after use. The password is only there to prevent MITM, not to derive keys.
I believe magic wormhole uses SPAKE2, which has perfect forward secrecy. When using passwords to secure transmitted files, it's really important to have forward secrecy; otherwise you risk the transmission being recorded and the password being attacked offline, which, depending on your password strength, might lead to trivially decrypting the data.
> Someone can correct me if I'm wrong, but I believe the idea behind a PAKE is that the password only authenticates the key exchange and doesn't contribute to it.
That's right. From memory, the password is only used to authenticate a DH key exchange; the session key itself is entirely ephemeral. Even if the entire ciphertext is captured, and even if the adversary then gets your password, they can't decrypt. To decrypt you'd have to MITM the key exchange, which would require knowing the password before the file is exchanged.
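A minimal sketch of that idea using the python `spake2` package (the same primitive magic-wormhole builds on), with both sides run in one process just to show the API; the API is from memory of the package's README, and the password is illustrative:

from spake2 import SPAKE2_A, SPAKE2_B

password = b"7-crossover-clockwork"   # low-entropy shared secret

alice = SPAKE2_A(password)
bob = SPAKE2_B(password)

msg_a = alice.start()   # blinded ephemeral share, fresh randomness every run
msg_b = bob.start()

# each side feeds in the other's message and derives the same session key
key_a = alice.finish(msg_b)
key_b = bob.finish(msg_a)
assert key_a == key_b

# Recording msg_a/msg_b and learning the password afterwards still leaves a
# passive attacker with a Diffie-Hellman problem; the ephemeral secrets that
# actually determine the key are thrown away after the exchange.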
I wish the downvoters would explain what's wrong with the parent comment, as the counter-arguments raised by parliament32 seem completely reasonable to me. Perhaps it's just the length of the comment, and they would prefer it written up as a blog post somewhere.
Don't worry, this is normal for HN. As a commenter higher up said, "People without experience come around and think they can replace an old tool with a better one". People tend to like thinking along the lines of "surely old things must be bad" and you'll get met with disagree-downvotes anytime you try to explain why you want old software in fields like crypto or internet routing. Try defending BGP in any of the DDOS or hijack threads and you'll be met with the same fate.
> I wish the downvoters would explain what's wrong with the parent comment,
One obvious guess is contradicting Latacora. 'tptacek is well known here; his name alone gives significant weight to anything he writes.
In any case, they have valid points. PGP was written at a time when we didn't understand cryptography as well as we do now. We can do better. Have done better, if half of what I've heard about Age is true.
Absurd complexity: We can definitely do simpler than PGP, at no loss of functionality.
Swiss Army Knife design: I think I disagree with Latacora there. Doing many things doesn't mean you have to do them poorly. There's no material difference between having 3 programs and having one program with 3 options, at least on the command line. If PGP does anything poorly, it's for other reasons.
Mired In Backwards Compatibility: well, that depends. It makes sense that PGP can decrypt old, obsolete ciphers & formats. The ability to encrypt with that same old stuff wouldn't. For instance, PGP should no longer be able to generate RSA keys at all. Then, one year later, once all RSA keys have expired, new PGP versions should no longer be able to encrypt to RSA keys at all. (In an ideal world. More realistically, we should wait a couple more years.) Only the ability to decrypt old messages should be kept until the end of time.
Obnoxious UX: I don't know enough to have an opinion.
Long term secrets: Sure, they're bad, but I don't think we can avoid them. People need your public key to send you anything, so it can't be too short-lived. My guess here is that Latacora is attacking the whole file encryption + web of trust thing, not PGP in particular.
Broken Authentication: If attackers can trick PGP decoders into decrypting forged messages, that's fairly critical, and should be fixed even if it breaks backwards compatibility (we could have an optional `-legacy` flag or something to compensate). Now if you sign and encrypt… well, there are two possibilities. If you sign then encrypt, you run into the cryptographic doom principle: the decoder will decrypt and then verify, which creates the temptation to process unauthenticated data, and many vulnerabilities have been caused by such errors. If you encrypt then sign, you reveal to the entire world that you signed this particular ciphertext, which is not the kind of data most people would like to leak. In my opinion what we really want is authenticated key exchange followed by AEAD. With the Noise X pattern, you'd even hide your identity from snoopers.
Incoherent Identity: Okay, they're clearly attacking the very notion of web of trust, not PGP specifically. They say it doesn't work, but I'd like to know why. First, I'm not sure I want to take their word for it, and second, the causes might be fixable.
Leaks Metadata: that one is clearly avoidable. Noise X for instance uses an ephemeral key to encrypt the transmitted public keys, and the recipient's key is implicit. Can't know who the message is for (nor from) without the recipient's private key.
No Forward Secrecy: Different use case indeed. Again, Latacora is attacking the very notion of file encryption, not PGP specifically.
Clumsy Keys: I'm with Latacora on this one. The 50-line SSH keys are clearly RSA-based, and as such obsolete. Modern keys use elliptic curves, and those take one line, which is more easily copied & pasted in various contexts. Arguably a detail, though.
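A quick illustration of the one-line point (throwaway key, output paraphrased):
$ ssh-keygen -t ed25519 -f demo_ed25519 -N '' -q
$ cat demo_ed25519.pub    # a single "ssh-ed25519 AAAA..." line, short enough to paste anywhere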
Negotiation: It's more than just backwards compatibility. Backwards compatibility can be achieved with a simple version number. If instead we have a range of algorithms to choose from, things get more complicated. Now, you can't avoid the need for different kinds of encryption: you can encrypt to a public key, or you can encrypt with a password. Possibly both. Beyond that, however, it's simpler to have a version number that implies a single public-key encryption scheme and a single password-based encryption scheme.
Janky Code: Can't judge for myself. I can guess, however, that much of it is caused by the (ever-evolving) PGP specifications themselves. Probably more a consequence of all the other issues than a separate problem. Still, I think we can do much better. I mean, I've written an entire crypto library, and people give me hell for a single vulnerability in over 3 years. 27 CVEs, in comparison, would be worth burning in Crypto Hell for a long time.
There are many tools which are way older than gpg but are still in wide use. For example, I have not heard any arguments that people should stop using things like "rsync" or "curl".
I don't see how any of the points in the article are relevant to the symmetric encryption method I posted. Yes, GPG can do a lot, and yes, some parts of it are kinda ridiculous. But symmetric pre-shared-key encryption/decryption is a solved problem, and I'd much rather trust GPG than some random's git repo.
Others have said it better than I can. See e.g. https://latacora.micro.blog/2019/07/16/the-pgp-problem.html