Hi, Eric here, co-creator of evil32. I posted a brief note on our site about this, but here's a little more detail.
I found an old (local) backup of the private keys and used it to
generate revocation certificates for each key. Fortunately, there is no
way for anyone else to access or regenerate the private keys for this particular clone of the strong set, and I have
been very careful with my copy - it is only available on my personal
machine, and I have only used it to generate the revocation
certificates. I will not use these keys to generate any fake signatures
nor to decrypt any messages intended for the original recipients.
We wanted to raise awareness of the dangers of using short key IDs in
the 21st century, since that ID is very easy to fake, and most of the
contents of the key body are not covered by the signature, so they can
be changed at will. However, we feel that the keys uploaded to the
public keyserver are, on balance, more harmful to the usability of
the GPG ecosystem than they are helpful in highlighting security flaws.
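For context on why the short ID is so easy to fake: it isn't an independent identifier at all. For OpenPGP v4 keys, the 32-bit "short" key ID is just the last 8 hex digits of the 160-bit SHA-1 fingerprint (and the 64-bit "long" ID the last 16). A minimal illustration, using a made-up fingerprint:

```python
# For an OpenPGP v4 key, both key IDs are just trailing hex digits of
# the 160-bit SHA-1 fingerprint. The fingerprint below is made up.
fingerprint = "0D69E11F12BDBA077B3726AB4E1F799AA4FF2279"

long_id = fingerprint[-16:]   # 64-bit "long" key ID
short_id = fingerprint[-8:]   # 32-bit "short" key ID -- what evil32 collides

print(long_id)   # 4E1F799AA4FF2279
print(short_id)  # A4FF2279
```

So colliding a short ID only requires finding a key whose fingerprint ends in the same 8 hex digits.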
It's important to realize that anyone could repeat our work pretty
easily. While we did not release the scripts that automated cloning the
web of trust, the whole process took me less than a week. Cloning a
single key is even easier - it could be done with only a few minutes of
effort by someone familiar with GPG. The GPG ecosystem needs to develop
better defenses to this attack.
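To give a sense of why cloning a key is so cheap: the core of the attack is just a brute-force search for a trailing-bits collision on the fingerprint. Here is a toy sketch matching only 16 bits of a SHA-1 digest so it finishes instantly; the real attack varies actual OpenPGP key material and matches 32 bits, which is still well within reach of commodity hardware:

```python
import hashlib
import itertools

# Toy version of the collision search: vary an input until the hash's
# trailing bytes match the target's. evil32 does the same thing with
# OpenPGP key material and the last 32 bits of the SHA-1 fingerprint.
target = hashlib.sha1(b"victim key").digest()[-2:]

for i in itertools.count():
    digest = hashlib.sha1(b"clone-%d" % i).digest()
    if digest[-2:] == target:
        break

print(f"16-bit collision after {i} tries")
```

A 16-bit match takes roughly 2**16 tries; a 32-bit match takes roughly 2**32, which is only a constant-factor scaling of the same loop.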
Our original talk (and previous work) seems to have convinced people to
stop using 32-bit IDs in documentation or on their business cards.
However, there is another common and harmful pattern: users who want to
email someone discover their key by searching the keyserver for that
email, then taking the newest key. This is akin to trust-on-first-use,
and opts out completely from the web of trust or any kind of external
verification.
> users who want to email someone discover their key by searching the keyserver for that email, then taking the newest key. This is akin to trust-on-first-use, and opts out completely from the web of trust or any kind of external verification
Well, yes? What is the alternative, if I want to email someone who exists only in the form of a pseudonymous online identity?
Most of the time there's some at least semi-trusted communication channel. If they have a website, ask them to publish the key or the full fingerprint on their website. If they frequent some IRC channel, ask them on IRC for their key's fingerprint. If they regularly sign their emails you can check mailing lists they participate on to confirm they use the same key there.
If the key is just for their pseudonym, I usually offer to sign the key if they can send me the key through one service of my choice (where their username is public knowledge) and the fingerprint through another (meaning an attacker would have to compromise both accounts I chose). The offer to sign their key often makes people much more willing to jump through hoops, and I get to improve the web of trust.
But for some people I just don't care enough, and I add whatever key turns up first.
I guess you're talking, here, about identities that are at least in some way connected to the "public" social network. Identities that publish things on public websites, etc.
But if this isn't true—if, for example, you are someone who wants to get in contact with a terrorist group (maybe for an interview, maybe because you want to join them, etc.)—then there's not much to do but to trust-on-first-use some channel that seems to be them, no? No public channel can possibly be vouched for as being "the real them", or that channel would have been shut down by the CIA. Which means that any and every channel might just be a honeypot from the CIA or whoever else, trying to either frustrate your efforts or turn you into a double agent.
The bigger terrorist groups all have websites and/or a social media presence.
As you say, any one of those channels could be a CIA operation; that's why asking for verification from two independent channels (i.e. asking for the keyfile on one channel and for the fingerprint on another) is preferable. A terrorist group that actually uses PGP might even entertain the request if you ask on more than two channels for the fingerprint. The more channels you choose, the less likely it is that a single attacker controls all of them.
Another factor is that any public channel that is a front is likely to be called out sooner or later as a non-official channel. Most people and organizations are wary of the dangers of impersonation.
Of course there will always be situations where it's impossible to establish trust, like a leak by a group who tries to stay anonymous to the point of not associating with any previously used pseudonyms. Here you can't do anything but trust the first communication. But I think those cases are extremely infrequent: most groups and individuals try to establish a reputation, which nearly always gives you more points to anchor trust.
Deprecate searching keyservers by name or email address, and only allow searching by fingerprint. Still not a complete fix (the source of the fingerprint may have been compromised) but better than before.
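A sketch of what that restriction might look like on the keyserver side, assuming v4 keys (40-hex-digit fingerprints); the function name and regex are illustrative, not from any real keyserver:

```python
import re

# Accept only a full 160-bit (v4) fingerprint, with or without spaces
# or a 0x prefix; reject emails, names, and 8/16-digit key IDs.
_FPR = re.compile(r"^(?:0x)?[0-9A-Fa-f]{40}$")

def is_searchable(query: str) -> bool:
    return bool(_FPR.match(query.replace(" ", "")))

print(is_searchable("0D69 E11F 12BD BA07 7B37 26AB 4E1F 799A A4FF 2279"))  # True
print(is_searchable("alice@example.org"))  # False: no email search
print(is_searchable("0xA4FF2279"))         # False: short ID rejected
```

This doesn't stop someone handing you a wrong fingerprint, but it does force the fingerprint to come from somewhere other than the keyserver itself.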
> Well, yes? What is the alternative, if I want to email someone who exists only in the form of a pseudonymous online identity?
If you want to communicate with someone specific then presumably there's something that distinguishes that person from other people. Find a way to connect that something with a key fingerprint. E.g. if the point is that that someone is a journalist for the New York Times (as when Snowden was first looking to leak), they should publish their fingerprint in the NYT, or at least something NYT-official (like their website).
There are use cases where trust-on-first-use is adequate, sure. But there are use cases where it isn't.
I think Keybase.io is a pretty good solution to the problem of key ownership. You can confirm the identity of anyone's Keybase key by comparing the fingerprint to one listed in any one of several "public" sources: Twitter, Github, Reddit, and even Hacker News.
Also, I have several invites for Keybase if anyone wants one.
> I think Keybase.io is a pretty good solution to the problem of key ownership. You can confirm the identity of anyone's Keybase key by comparing the fingerprint to one listed in any one of several "public" sources: Twitter, Github, Reddit, and even Hacker News.
Doesn't that undermine the whole decentralized web of trust concept? All those services are operated by US companies - or what if someone simply compromised Keybase itself?
An important part of Keybase is that all proofs are publicly verifiable. When I prove I own a GitHub account, I have to post a public gist. When you get my key from Keybase, your client automatically follows that link and verifies that the gist, and the text within it (which is signed by my key), are valid.
Keybase is just the place that connects all the proofs; the client itself verifies that they are correct. As such, if Keybase were ever compromised, the attackers could only change the link to the gist, which wouldn't do them much good without access to my GitHub account.
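A logic-only sketch of that client-side check. `verify_signature` is a stand-in for real OpenPGP verification (a real client shells out to gpg or uses a PGP library), and all names and data are made up; the point is that both checks happen in the client, so Keybase itself holds nothing worth forging:

```python
# Structural sketch of a Keybase-style proof check. Everything here is
# hypothetical illustration, not Keybase's actual protocol or API.

def verify_signature(statement, signature, fingerprint):
    # Stand-in: pretend a signature is valid iff it was "made" with the
    # matching fingerprint. Real code verifies an actual PGP signature.
    return signature == f"sig({statement!r},{fingerprint})"

def proof_is_valid(statement, signature, fingerprint, keybase_user):
    # 1. The gist text must verify against the key fetched from Keybase.
    if not verify_signature(statement, signature, fingerprint):
        return False
    # 2. The signed text must itself name the Keybase identity, so a
    #    compromised Keybase can't silently repoint the proof elsewhere.
    return keybase_user in statement

fpr = "A4FF2279DEADBEEF"
stmt = "I am alice on keybase, proving I own this github account"
sig = f"sig({stmt!r},{fpr})"
print(proof_is_valid(stmt, sig, fpr, "alice"))    # True
print(proof_is_valid(stmt, "forged", fpr, "alice"))  # False
```

Changing the link Keybase serves fails check 1 (the replacement text isn't signed by the key) unless the attacker also controls the GitHub account, which is the comment's point.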
> Doesn't that undermine the whole decentralized web of trust concept? All those services are operated by US companies - or what if someone simply compromised Keybase itself?
Ideally: nothing. Keybase refers to other sources, i.e. a page on GitHub with your username and key fingerprint. So if Keybase is compromised, those links won't match and the proof fails.
It sounds like they have issued revocation certs for the keys associated with the fake accounts to stop people from sending any more messages to the fake accounts. They are also promising not to decrypt any of the messages that were meant for the attack victims, but encrypted for the fake accounts' public keys and sent to the attacker during the attack.
Proof of identity: https://keybase.io/aftbit