Interesting/educational read but I'm still not convinced that this unintended side effect is a bad thing - it seems like a desirable property to have authenticated emails. Matt argues this might lead to regular folks (as opposed to politicians) getting blackmailed, but:
1) it seems unlikely this cryptographic proof is needed (he acknowledges this criticism in the post), and
2) what seems more likely to me is that politicians would intentionally _not_ opt in to any alternate solution and use that deniability for their own advantage. (Also, as an alternative he proposes GPG, which I know Matt knows is laughable.)
If you had access to a public key for every email address then why stop at authentication - you could encrypt all email on the web. But we don't, so we can't.
Authentication in this context doesn't need to be end-to-end. Instead of a custom protocol, we could probably just use SSL client certificates authenticating the sending domain to achieve the same effect.
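To make the client-certificate idea concrete, here is a minimal sketch using Python's standard `ssl` module of how a receiving server could require the sending domain to present a certificate at the transport layer. The file paths are placeholders, and this is only an illustration of the configuration, not a complete mail server:

```python
import ssl

# Server-side TLS context that demands a client certificate from the
# connecting (sending) domain, so authenticity lives in the transport
# session rather than in a per-message signature like DKIM.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED  # reject senders with no certificate

# Placeholder paths -- in a real deployment you would load your own
# certificate chain and the CA bundle used to vouch for sending domains:
# ctx.load_cert_chain("server.crt", "server.key")
# ctx.load_verify_locations("sender-domains-ca.pem")
```

Because the authentication happens per connection and leaves no signature attached to the message body, a leaked mailbox would carry no cryptographic proof of origin.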
Put the company you work for in your threat model, and this becomes relevant to you.
The corporate world is mired in zero-sum competition, and some of your colleagues are willing to do things that will shock and appall you if it increases their chance of "winning".
Try working in defense, finance, or security as a closet anarchist. Have a few Chomsky books in your Amazon purchase history? Good luck climbing the Amazon corporate ladder.
You might want to validate your emails more than a few months out.
Regardless of whether your emails can be validated, blackmail is still a crime. Not being able to have your emails validated doesn't protect you from blackmail. The power of blackmail is often in the social cost of the accusation itself. The thing that protects you from blackmail is not getting involved in things you can be blackmailed for.
This is like saying don't lock your doors so that nobody can break and enter into your house.
> The thing that protects you from blackmail is not getting involved in things you can be blackmailed for.
This is incorrect, because the set of things someone can be blackmailed for is not the same as the set of immoral or unethical acts. You can be blackmailed for being gay, or for having a serious undisclosed medical condition. Neither of those situations is a "well, just don't do that" kind of thing.
The defense against blackmail is to make blackmail difficult (eg release DKIM keys), severely punish people who engage in blackmail, and guard your secrets effectively.
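The "release DKIM keys" point can be sketched in a few lines. Real DKIM uses RSA signatures over selected headers; the HMAC below is a deliberate simplification that stands in only to illustrate the repudiation logic, not the actual protocol:

```python
import hashlib
import hmac

# Simplified model of a domain-level email signature. Real DKIM signs
# with RSA; HMAC is used here purely to show why publishing old keys
# destroys the evidentiary value of a signature.

def sign(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, message), sig)

# While the key is secret, a valid signature is strong evidence the
# domain really sent the message.
domain_key = b"hypothetical-old-selector-key"
real_email = b"From: alice@example.com\r\n\r\nthe real message"
real_sig = sign(domain_key, real_email)
assert verify(domain_key, real_email, real_sig)

# Once the domain publishes that old key, anyone can sign anything,
# so a verifying signature no longer proves who wrote the email.
forged_email = b"From: alice@example.com\r\n\r\na message alice never sent"
forged_sig = sign(domain_key, forged_email)
assert verify(domain_key, forged_email, forged_sig)  # forgery verifies too
```

After publication, both the genuine and the forged message carry equally "valid" signatures, which is exactly the "their word versus yours" outcome described below.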
> This is like saying don't lock your doors so that nobody can break and enter into your house.
This is like saying the latch on your fence is a security mechanism. Nobody intended that fence latch to keep you safe or secure, and hiding behind it won't make you safer. Rip away the false pretense.
No, because it's not on us to make moral distinctions between someone who is gay and doesn't want to make that information public, and someone who isn't gay and is being falsely blackmailed.
Publishing the DKIM keys makes both cases harder because you no longer have authenticity claims. Neither claim has authenticity value and instead is just a "their word versus yours" situation.
Humans are terrible moral adjudicators, and acting off of universals leads to repugnant ends. The truth can absolutely do terrible damage and still be the truth, but being the truth doesn't remove value from privacy. Put another way: would it be moral to publish your full medical records to the public? After all, they are the truth...
It is on us to make moral choices. You are suggesting we do so by making email repudiable in order to protect a hypothetical person from being outed via stolen email verified with DKIM.
So far as I can tell, this has never happened in history, and logically neither blackmail nor public harm via exposure of sexual orientation particularly requires DKIM verification.
It looks like you are asking us to give up DKIM verification, which can and has helped us verify politicians' leaked emails, in pursuit of a purely hypothetical gain that may never materialize, while suggesting that we both must and must not make moral determinations.
When an angry mob goes after suspected homosexuals, do you think they stop to authenticate DKIM signatures of emails? I'll give you a hint: no. In fact, it's hard to imagine any scenario where anybody goes through the trouble of authenticating e-mails of a "normal person" who is persecuted for something other than crimes.
Nobody is telling you to not be gay.
If you being gay is a secret, then don't send that secret over _plain fucking text email_.
Do you send your social security number to people in emails?
Email has never been privileged communication and the problem isn't one of validation but one of not understanding one's level of privacy and risk. It uses relays without end-to-end encryption and there's no guarantee that what you sent is not totally out in the open.
This is every bit as much about email you receive as email you send, and you are not in control of email you receive.
If your doctor slips up and emails you about your AZT prescription, it doesn't matter how careful you were about not disclosing your HIV status over email you sent.
> Not everyone is law savvy. Not everyone understands legalese. Not everyone makes rational choices all the time.
> Does that mean everybody should suffer the consequences of an arguably unintentional side effect of the technical implementation of law?
Thankfully this is not law, but people should understand the things that can get them in serious trouble. A lot of legal concepts follow from basic principles and history.
Privacy concepts are the same way. The fact that we no longer educate everyone about these things simply isn't good enough. Ignorance isn't going to protect anyone from the fallout of misuse of technology.
The solution isn't to coddle people, it's to provide better technology that does the thing the way people intend to use it.
Okay, but what about a topic that is legal and acceptable in today's society but not in the society 20, 30 or 40 years down the line? What if being gay becomes socially unacceptable again? Or supporting the second amendment? Or [literally anything]?
The problem is that what is socially and legally acceptable changes over time. Just 30 years ago, the standard for socially acceptable commentary was wildly different in the areas of gender identity, sexuality, and race, for instance.
Yes, you should absolutely think about everything that you commit to public record.
Yes, you might be totally fine now. You might be hanging out and get photographed with this creepy billionaire named Jeffrey Epstein, who is just another creepy billionaire at your creepy billionaire parties. Then 20 years from now we find out he's running pedophile island and people start looking into your associations.
We are not teaching people to be cautious about their public data and in fact there's an entire industry out there encouraging everyone to detail their whole lives in public record.
Get off of social media _today_. Yes, it's probably too late.
The other option is to be such a big celebrity that your entire life is public and you have the defense of scrutiny.
Side note: Somebody from my high school class is a famous criminal. I regularly receive requests for interviews on the basis of that association alone despite having nothing to do with the person for decades.
Using the billionaire pedophile example is a disgusting trick. You're trying to set me up for appearing to support that.
Why not use more neutral examples? Like being gay or supporting certain political causes? What if those later become controversial or illegal? What then?
Do you want to live in a world where you have to guard everything you say in semi-private conversations, just in case it one day becomes controversial? That sounds like an oppressive nightmare.
The social media argument is tangential but I do agree with you there.
I'm illustrating the severity of the identifiable public record.
When the Nazis started rounding up people to put in camps, they looked at the _extremely detailed_ Christian Parish records saying who was what and where they lived.
They were thought to be innocuous and important records to keep at the time. Actually I think in the Scandinavian countries the state Church is still responsible for recording all marriage & death records. They stopped tracking births for the previously mentioned reasons. (Hey, we just learned the importance of separation of Church and State, too!)
Nobody knows how important privacy is. Until they do.
>I'm illustrating the severity of the identifiable public record.
This is an argument in favor of emails being non-verifiable, so now I'm confused.
Previously you seemed to be supporting the idea that email should be verifiable. Now you seem to be arguing the opposite. Everything you wrote above aligns with the opinions I've expressed so far.
You also wrote:
> If you live in a [country where homosexuality is illegal] and secretly a homosexual, do not send emails indicating that you're a homosexual.
As if that's an acceptable state of affairs and a reasonable compromise for the purpose of catching the occasional bad actor. It isn't.
a) email being verifiable is fine
b) nobody should be so stupid as to use email for anything personal. it is not privileged communication and is a potentially permanent public record.
c) if you want to use email, you'd better encrypt it and only for recipients that you trust.
As you well know, people use email for things they shouldn't. Why not make email slightly better? What's the harm? Isn't greatly reducing the chance of blackmail unequivocally positive?
So far, you've just been repeating disparaging comments about less technically minded ("stupid") people. You've not presented an argument for why this change would be detrimental.
We can continue educating people about the inherent insecurity of email while still improving it for those that (a) will never get it and (b) simply don't have access to alternatives.
You can argue improving email shouldn't be a priority, but the proposal in the article has zero cost. It's a free improvement.
I have to say you've failed to articulate why making email better (while we work to come up with a better solution) is an inherently bad thing. Especially when we can make it better for free.