I certainly appreciate that the effort is better than nothing; however, how often are those notices served to US/European citizens? It's one thing to stand up to government overreach in foreign countries, but how about the country where you (and a large percentage of your users) reside? They specify "attackers", but I'm assuming this notice to the end user does not apply when US/EU governments request your data and you comply?
Another gripe I have is that TLS has probably been broken by the NSA [1]. Alerting us when the other party isn't using it is better than nothing, but it provides limited protection. PGP/GPG is really the only assurance you have, and the plugins for different desktop apps are nearly always buggy. I end up just manually encrypting/decrypting with GPG, because a buggy encryption integration is not a comforting thought. If they really cared about keeping your privacy safe, they'd have an end-to-end encryption tool/integration.
[1]: http://blog.cryptographyengineering.com/2013/12/how-does-nsa...
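For reference, the manual workflow is only a couple of commands (the recipient address below is a placeholder for someone whose public key is already in your keyring):

    # encrypt to a recipient's public key, ASCII-armored for pasting into mail
    gpg --encrypt --armor --recipient alice@example.com message.txt
    # decrypt on the other end (prompts for the recipient's passphrase)
    gpg --decrypt message.txt.asc

The fragility is all in the mail-client plugins, not in these two steps.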
If you really need secure communications, email+GPG is still a very poor solution. The body of the message is encrypted but a staggering amount of metadata will be transmitted in the open: sender, receiver, crypto used, date, subject, approximate size, etc. Furthermore, it doesn't provide forward secrecy: if somebody gets access to your key, they will be able to open everything you've ever sent.
None of them have verifiable builds. There are some doing reproducible builds with untrustworthy compilers. Verifiable builds are a larger problem they're ignoring because it's not a fad yet. Lots of prior work in CompSci and even industry on that, though. Links below on subversion and build-related stuff.
> Verifiable builds are a larger problem they're ignoring because it's not a fad yet.
Or because... oh, wait. You alluded to it yourself:
> There are some doing reproducible builds with untrustworthy compilers.
I expect that the bar required to verify (or design and build from scratch) a high-performance optimizing compiler [0] is substantially higher than the bar to rework your build system to give the same outputs for the same source code.
[0] And -in the case of systems with VMs- a high-performance VM.
You mean coding the stuff people are doing in LLVM and GCC in ML on CompCert or a similar system? No, it's significantly easier to do that than to get current architectures right in C-like languages. FOSS just doesn't do it for the most part. Rust was an exception: they did theirs in OCaml.
After FOSS compiler types build it, users can get the reproducible source and build the tool. Then that builds the other apps from source. See how easy that is?
Note: Wirth et al. built a safe language, simple ASM, CPU, OS, and apps, all with a few people in a few years. The ASM-3GL-compiler build is WAY easier than you think, and each bootstrap after the first is faster.
> You mean coding the stuff people are doing in LLVM and GCC in ML on CompCert or similar system?
No. I mean exactly what I said:
> I expect that the bar required to verify (or design and build from scratch) a high-performance optimizing compiler (and -in the case of systems with VMs- a high-performance VM) is substantially higher than the bar to rework your build system to give the same outputs for the same source code. [0]
Remember that this addressed your assertion that:
> None of [the projects doing reproducible builds] have verifiable builds. ... Verifiable builds are a larger problem they're ignoring because it's not a fad yet. [1][2]
If you can't see how my statement addresses the quoted claim, then more's the pity.
I may see what you're saying now. The problem is that this problem isn't solved in a vacuum: it's the prerequisites for security plus what SCM adds. They want to be sure the binary does what it's supposed to. That requires the What and How of the source to be correct, with no malicious modifications during development, storage, compilation, and distribution.
That's a large problem. The compiler part is smaller, with lots of work in CompSci, FOSS, and the private sector (e.g. books). There are tools with source available on the net for imperative and functional languages that are safer, too. They're ignored almost entirely by safety- or security-oriented projects in compilers and general FOSS in favor of harder-to-analyze, less secure stuff. However, they'll happily bring up fad-driven stuff like the Thompson attack or reproducible builds as The Solution.
Here's the actual solution. You start with a simple, non-optimizing toolchain designed and documented for easy understanding and implementation. It has an extensive test suite. A user worried about subversion implements that in the tooling of their choice on their own machine. Wirth simplified this with a P-code interpreter that was easy to implement, with the compiler and apps targeting it. Once the first compiler is done, it compiles the HLL source of its own code. Now you can use it to compile a high-performance compiler's source or add optimizations to it. Most of this work is done, so it's a matter of FOSS compiler types or project teams just integrating and using it.
They won't, though, because proven practices and components for secure software engineering are rarely used. They do popular stuff instead, which screws them up in other ways. At least the malicious or buggy source from the potentially subverted server, run through a black box with similar issues, has an identical binary coming out. Take that, NSA! ;)
To make sure that you do: It's obvious that you recognise the difficulty of creating a trustworthy compiler. Given that you have that information, it's rather disappointing that you chose to assert that this problem was being "ignor[ed] because it's not a fad yet".
> The problem is that this problem isn't solved in a vacuum...
That's one problem. Another problem is that the skills, competencies, and interests of one programmer differ (sometimes wildly) from that of another programmer.
Asserting that a randomly selected programmer isn't tackling the Verified Compiler problem just because it's not currently in vogue will likely be wrong as often as asserting that a randomly selected native Korean speaker doesn't also speak Greek because speaking Greek isn't currently in vogue.
"Given that you have that information, it's rather disappointing that you chose to assert that this problem was being "ignor[ed] because it's not a fad yet"."
This is one case among many. I counter it here all the time. There's literally dozens of works, including certification requirements, on securing this aspect of things. Yet, people have ignored all that from academia to industry every time we bring it up. Then, there's two things that pop up in every online discussion: Thompson attack or (recently) reproducible builds. Literally the smallest aspects of the problem with trusted distribution covered under old criteria. Why does that matter now but not before? It's a social phenomenon.
In the '70s and '80s, people designed, assured, and pentested guards with great results. Firewalls were a watered-down version that came later, with features but not assurance. Push guards on firewall proponents, even developers, and you'll just get ignored. They will work on whatever is making the rounds on their favorite IT or INFOSEC sites, though.
Compiler and OS people usually write their stuff in a monolithic style in C despite decades of bad results that way. Showing that even one person (Edison), three (Lilith/Oberon), or a handful (MINIX 3) can do an entire system more safely with fewer people and less time will not change this. Showing them ML or something with C compilation for portability will not change this. They systematically reject it while doing whatever is their tradition or in vogue.
People making secure messaging apps have solid code and endpoint OS's to draw on. They rarely use them. They often use whatever most projects use or roll their own because it's fun. This is true even when shown risks, zero days, or benefits of alternatives.
All of these are social phenomena that have nothing to do with technical difficulty. If anything, they're doing things that are harder, to avoid the more robust option. This is systemic in our industry in compilers, OS's, and certain types of libraries. It's not about what a random Korean- or Greek-speaking programmer tackling a random project won't do. It's about the fact that almost all compiler-related work is ignoring the ways it got done robustly, and more easily, before. Same with SCM. There are exceptions, especially in functional programming, but the rule supports my point.
The most likely explanation for them chasing the same senseless stuff en masse, even when shown evidence to the contrary... alternatives that are easier... is that they chase fads for social reasons.
One interesting, if old, idea is to submit to an anonymous newsgroup where all messages are encrypted. You only see messages encrypted to you, and there's remarkably little metadata, so long as there is regular traffic from multiple sources.
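A minimal sketch of that broadcast idea, using PyNaCl sealed boxes as one assumed implementation (any anonymous public-key encryption would do): every message is posted to the shared channel with no recipient named on the wire, and each reader just tries to open everything, keeping what decrypts.

    # pip install pynacl
    from nacl.public import PrivateKey, SealedBox
    from nacl.exceptions import CryptoError

    alice, bob = PrivateKey.generate(), PrivateKey.generate()

    # Anyone posts to the shared "newsgroup"; a sealed box names no recipient.
    channel = [
        SealedBox(alice.public_key).encrypt(b"for alice"),
        SealedBox(bob.public_key).encrypt(b"for bob"),
    ]

    # Bob attempts every message and keeps only those meant for him.
    unsealer = SealedBox(bob)
    mine = []
    for blob in channel:
        try:
            mine.append(unsealer.decrypt(blob))
        except CryptoError:
            pass  # not ours; indistinguishable from any other ciphertext

    print(mine)  # [b'for bob']

The cost is that every reader does trial decryption of all traffic, which is exactly what buys the metadata resistance.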
Not to mention that blockchain entries (at least for bitcoin) are anonymous but unique to each user. This adds metadata, and could allow an adversary to unravel your connections.
If they had end-to-end encryption, then Google wouldn't be able to read the emails; meaning to gain value, Google would have to charge for the service.
It's not (only?) a question of "gaining value". End-to-end encryption is fundamentally incompatible with many features that Gmail users rely on. I would recommend reading https://moderncrypto.org/mail-archive/messaging/2014/000780.... from an ex-Gmail anti-abuse tech lead. And for 99.99% of Gmail users, protecting automatically against untargeted phishing and malware attacks is a larger security improvement than having e2e encryption.
> protecting automatically against untargeted phishing and malware attacks is a larger security improvement than having e2e encryption.
Not to mention, from what I heard out of the mouths of ordinary folks, that protection is literally one of the leading reasons GMail took over. It was nearly spam-proof, and people loved it. (That and lots of free storage)
Boxbe is one that I have encountered people using: http://www.boxbe.com/help It's a little surprising this was never a standard part of email. It's the same workflow as granting permission to be contacted by IM.
Based on a quick search, you are right that the phrase "permission based email" has been SEO'ed into uselessness by email marketing services. What should it be called? Screening? But the spam filtering services have almost SEO'ed that into uselessness.
Boxbe is an interesting service, thanks. Maybe this 'permission-based' approach was never standard because spam didn't exist back then, and later, when it did exist, the approach didn't play well with the idea of mailing lists. In the latter context it's funny how this search term is SEO'd to return marketing services!
Having just read a former google abuse team member's take on end to end encryption and anti-spam[1], this topic now looks a lot more complex. Regarding IM's, he mentions how spam-free WhatsApp is and argues that spam is a lot easier to fight when you have central control because you can change anything at any time at any point (client or server).
IM is low in spam because spammers would have to wheedle their way onto your contact list to spam you. Same for social networks. You just delete or block spammy "friends."
Even so, they could provide the option, which probably no more than 1-2% of the users would ever use - those who actually need their privacy to be protected, while the rest probably "won't care".
Leave it up to the users to decide whether they want to use end-to-end encryption despite a potential spam problem. It's also not like people couldn't use multiple email addresses.
If you were developing a product, would you invest time, money, and research into a feature that (maybe) one or two percent of your users would utilize?
Google already said in the above article that "only" 0.1% of its users get targeted by state-sponsored attacks (which by the way is about 500,000 users) - so why even bother building that then, by your logic? Clearly, just a waste of resources (probably the same for two-factor auth, Security Key, etc).
How many times have we heard companies "China has a 1 billion people - imagine if we only got 1% of that market with our product!". But we're talking about a feature of a product here, not an entire product that only gets 1% of a market's userbase.
0.1% here, 1% there, another 10% over there - all of these features add-up to create a great product that everyone loves because of the aggregate of features but also because of that "one" feature they love individually.
Another thing to remember is that the enthusiasts are the market-builders. You can't just win with a product that surveys well with 80% of the market. I don't think most of the phone or smartphone customers in 2007 wanted a touchscreen phone. Probably (well, literally, actually) only 1% of the market wanted it then.
Also, we don't know how important this feature could be to gain Google more trust. Telegram for instance has gotten promoted as a private messenger that uses end-to-end encryption - and yet its end-to-end encryption isn't even enabled by default (so same scenario that I was talking about), while its "normal" encryption is probably worse and less secure than what Google uses for Hangouts.
I'd argue that Google implemented it because they don't want their product to be implicated in a high-profile attack; if someone disappeared because their Gmail credentials were phished, it could easily blow back on Google and contribute to a perception that Google services are fundamentally insecure.
It might be a harder sell to implement E2E crypto, although perhaps the same argument might apply some day. The notifications are probably just low-hanging fruit.
The way they word these releases certainly makes it sound like they have users' best interests in mind.
But honestly, as you have highlighted, these announcements should be insulting to users' intelligence.
"Dear Users: Please allow us to store copies of all your sensitive data, including every email ever sent or received, in perpetuity. In return, to the extent the law permits us to do so, we'll let you know (_ex post facto_) when some other third party is having a look at it."
The solution to the problem, if indeed there is a problem here, is not going to come from Google.
The problem _is_ Google.
The only parties who need a copy of an email are the sender and the recipient. If you really care about privacy, security, whatever, then "store and forward" and "POP" via some third party (Google, etc.) is not the proper way to implement email.
Hypothesis: Google does not charge for Gmail because, quite simply, no one would pay.
> The only parties who need a copy of an email are the sender and the recipient.
Emphasis on "need". Most recipients also like a third-party anti-spam service to have a copy of their email.
Assuming the encryption costs are low enough, spammers and virus senders would like nothing better than to cripple anti-spam learning tools by having each recipient receive a cryptographically unique opaque blob. This would force users to develop their own training corpus and react to new spam and virus outbreaks individually.
You may argue that you could still use anti-spam locally, but it wouldn't be as good. While I wouldn't mind sending decrypted spam out to a server and getting updates to my local anti-spam program, no one would want to send legitimate mail, so the service would have no "ham" to train against.
I suppose encryption could help in the fight against spam by requiring CPU time to encrypt the email. 10 seconds per email would be hard to notice for a real human responding to messages, but might make spam unprofitable.
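That's essentially Hashcash. A toy version of the idea (not the real Hashcash stamp format): the sender grinds for a nonce whose hash has enough leading zero bits, and the receiver verifies it with a single hash.

    import hashlib
    from itertools import count

    DIFFICULTY = 20  # leading zero bits; tune so minting takes seconds

    def mint(headers: str) -> int:
        """Find a nonce so sha256(headers:nonce) starts with DIFFICULTY zero bits."""
        for nonce in count():
            d = hashlib.sha256(f"{headers}:{nonce}".encode()).digest()
            if int.from_bytes(d, "big") >> (256 - DIFFICULTY) == 0:
                return nonce

    def verify(headers: str, nonce: int) -> bool:
        d = hashlib.sha256(f"{headers}:{nonce}".encode()).digest()
        return int.from_bytes(d, "big") >> (256 - DIFFICULTY) == 0

    stamp = mint("from:a@example.com;to:b@example.com;date:2016-03-25")
    assert verify("from:a@example.com;to:b@example.com;date:2016-03-25", stamp)

Minting costs roughly 2^20 hashes per message; verifying costs one.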
Google does charge for email services for anybody using Gmail with their own domain, which is a lot of small companies and probably a few medium-sized ones.
I don't see why Gmail has to be all-or-nothing. I would like to see Google continue their free/cheap service as is, and offer a more expensive service with better security. I think Google would be able to greatly improve the interface for end-to-end encryption if they chose to. However, the company seems uninterested in cooperating any less with law enforcement and spy agencies.
I would suspect this is because when 99% of users hear of an upgraded Gmail service that costs money, they will start asking questions as to what's wrong with their current Gmail service. Google gets so much synergy across products from user data that it's probably not worth pointing out current Gmail drawbacks to 100% of customers: even if x% switch to premium, the other (100-x)% are now aware of the drawbacks of using Google across all products, which could unpredictably hurt top-line revenue.
It is just naive or dumb to think Google is left alone when they don't give access to the government. Of course they give access, but in the interest of being a vital source of intelligence and data they put on an act.
They could mine for keywords on the client. They wouldn't need to store those alongside whom you're sending the message to. It would be less intrusive than is the case now, but not as secure as them knowing nothing about what you're sending. And where would draft emails be stored in an end-to-end encrypted system? They'd need to store the information on their servers in a way that they can recover it for subsequent editing.
Everybody claims that GPG is unusable, but I just don't think that is the case. Thunderbird+Enigmail works perfectly hassle-free. You just enter your passphrase and everything else just works. The hard part about using GPG is convincing your peers that it's worth the minimal effort to set up. "I've got nothing to hide" is the common response.
Yes, for the technologically literate echo chamber of HN and most of our immediate peers, GPG may seem trivial to install. But the real people that have something important that needs to be encrypted (journalists, entire countries full of suppressed people, business secrets, etc) will NOT think this is trivial and most people will simply take their chances.
That doesn't mean that GPG is bad or that it isn't a strong tool, it means that for mass adoption then we need to make the tools easier to use.
My 60-year-old dad, who has never managed to work well with computers (and has 4+ toolbars in IE, and likes them), managed to install GPG and Enigmail in his Thunderbird, and actually got them to work.
Your last statement is just not true. My mother struggles to log into Facebook. "It works on the other computer". Yes, because the password is cached on the browser there; it's not on my laptop. "I'm putting the right password in but it's not working". No, you're not. Etc.
No, I'd certainly not compare a web service which nearly everyone uses - and for many is the only reason they have an internet connected device in the first place - with an encryption technology practically no-one uses.
Someone here suggested that "PGP isn't too hard - you just need to spend 30 minutes studying cryptography before you use it" which pretty much makes my case for me!
I'm sad that we've gotten to the point where asking users to make a small investment of time makes a product basically a non-starter. There's such a thing as too much convenience.
Proton mail doesn't support IMAP, 3rd party clients, or email export. If you decide you no longer want to use Proton but you want to keep some of your emails, you have to manually save the contents of each message. It's irresponsible to recommend Proton without mention of the commitment involved.
Oh yeah - lol - email clients. I remember them. No, the solution has to work seamlessly with gmail, yahoo, hotmail, outlook, whatever clients people are using. It needs to be everywhere or it might as well require a different protocol/client altogether.
Sure, if you only ever check your email from that one workstation, it's easy. That's not really how email works today. Most people expect their email to work seamlessly across multiple devices and OS's. Email would be useless to me if I couldn't use it on my phone.
My home workstation and my phone both talk to the same IMAP server (my home server). The phone has K-9 Mail and OpenKeychain on it; no trouble sending and receiving PGP mail at both locations. Not that I have anyone to send PGP mail to, since no one uses it...
Yeah, I've been there too. At the company I worked for a few years ago I tried to get everyone using PGP. One or two actually did. The problem is that while it's usually possible to set it up on a mobile device, it's not trivial, and you lose some important features. For example I very often need to search for an old email on my phone. If that email came in encrypted, then I'm not going to be able to find it if it's not stored on the phone. (Right? Was I just doing something wrong?)
The problem is the security of your GPG key. If my phone gets hacked or stolen, I lose my key. Sure, my master key is not on my phone (or my workstation), but it's still a big deal.
I use a Yubikey smartcard both on my workstation and on my phone. The setup works, but it's expensive, takes a long time to set up, and is fairly difficult to do. Now that I have the setup, I can only use it with a few friends.
Keybase is working on a more scalable solution. Every device has different keys and messages are encrypted for each device. This allows them to revoke individual devices. While deploying this to mostly non-technical people is still quite a challenge, at least it's an attempt at solving the multi-device problem. I can at least imagine how we could get wide use with their approach, while with traditional GPG I can't really see it.
What does the average user do when they forget their passphrase? (Which they will do constantly.)
How do you determine the keys associated with the people you wish to communicate with? (The web of trust and going to key-signing parties are about a thousand miles from 'everything just works'.)
To be honest, I don't even see the point of setting a separate passphrase for your PGP on a machine you own if it adds any hassle for you [1].
I don't see why your PGP key should be so different from every other password you use. Sure, PGP does not have forward secrecy, but neither does Gmail if someone grabs your password.
I agree with you: if you want to actually improve the security of the people who right now aren't using any encryption, make encryption easy to use. Even if this means it might not fit the threat model of those targeted by state actors.
IMHO, PGP could actually be used for this (even if it might not be the perfect fit) by using Trust On First Use, better interfaces and more integration into mail clients. The problem is that it is right now a tool used and made by those who want really strong security. This includes e.g. encouraging you to set a passphrase, which makes it more secure, but also more of a hassle for most people.
[1] This is obviously only true if you don't fear it being compromised / your security requirements are low. But then, if you need strong security, use full disk encryption and Qubes or something similar.
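For what it's worth, the trust-on-first-use part is tiny, the same trick SSH uses for host keys. A sketch, with a made-up local pin store:

    import json
    from pathlib import Path

    PIN_FILE = Path("known_keys.json")  # hypothetical pin store, like ~/.ssh/known_hosts

    def check_tofu(address: str, fingerprint: str) -> str:
        pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
        if address not in pins:
            pins[address] = fingerprint  # first contact: trust and pin
            PIN_FILE.write_text(json.dumps(pins))
            return "pinned"
        if pins[address] == fingerprint:
            return "ok"  # same key as last time
        return "MISMATCH: key changed, verify out of band"

No key servers, no signing parties; the only scary prompt a user ever sees is when a correspondent's key changes.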
What happens if you lose control of your private key? How do you ensure the recipient keeps up to date with which keys have been revoked? Isn't it a risk that unless you keep changing keys you're at risk of someone getting your private key and having access to every message you've ever received?
I think installing a mail application is certainly well above the hassle threshold of most email users. Yes, it has to be that simple. If users have to generate or store a keypair, or install a mail client, then it won't happen.
So what do you think is the reason nobody uses PGP? It can't just be "nothing to hide" because people aren't struggling to disable encryption on the services which currently offer it. WhatsApp, imessage, HTTPS etc.
I don't use it because I don't put into a computer anything I wouldn't mind seeing on the front page of the Times. I lost the game before I even started playing.
Same thoughts. I have been using GPG since the late '90s and can't see the issues everyone has. It's funny how the same people who have no issue discussing some of the most complex things in CompSci (e.g. functional programming, ML, AI), and who write code in Emacs, then say they hate PGP because it is too complex. Come on, guys :-)
I'd rather explain this with the necessity of learning the concepts of public-key cryptography, which takes time and effort to understand. Most people don't know about that and are not interested in knowing it. The interface is a secondary issue: it won't take a lot of time to set up your keys, given you know the crypto.
My girlfriend successfully got through the EFF's guide to PGP in half an hour (with a bit of my guidance), coping well with some inconsistencies in the guide. https://ssd.eff.org/en/module/how-use-pgp-windows
There's "people who are happy reading about maths for half an hour before installing and configuring PGP" and "people who want to download an app as simple as Facebook messenger" and there's very little intersection between them.
It seems like this has an implicitly libertarian slant that not everyone's going to buy into. If you look at the U.S. Constitution it's more nuanced. The fourth amendment forbids searches without a search warrant. If judges are involved and they're doing their jobs properly then it's okay.
If you're not a libertarian, it matters that law enforcement has to get a search warrant, rather than breaking the encryption and snooping on whatever they want.
Currently, it's more along the lines of physics, math, trust issues, and malicious insiders threatening safe escrow or backdoors. A recent paper by top cryptographers, breakers, and policy makers shows there's no known method to apply to digital devices what we have with physical ones:
Note that all the techniques they have allow stealthy, persistent access, with the ability to forge evidence given write access. I don't have to be a libertarian to be concerned about that given all the corruption cases in local and federal law enforcement. You just have to know human nature and the scope of the technical problem.
Not seeing how that paper applies here since it seems to be about peer-to-peer encryption which is entirely different.
In the case of a cloud service provider, putting the company (and a judge) in the loop is a procedural speedbump. The government doesn't have write access unless the company allows it. Of course if they collaborate all bets are off, but that's the best that separation of powers can give you.
It depends on what the company promises its customers about security, privacy, and recoverability. The paper's point is that methods for adding lawful intercept for one party tend to result in intercepts by other parties.
As far as write access or collaboration goes, you should look up the Lavabit case records or summaries. The FBI and the court demanded that he hand over the master key, compromising all users, and then lie to them about it. The FBI also wanted to put a black box in a privileged position that could probably be used to compromise the service. They've used 0-days before in ops.
So, a backdoor with enough privilege to read everything in the system, install updates with their spyware, and be unnoticeable to OS must be both (a) enormous technical risk and (b) have write access that could enable corrupt officials to frame dissidents. I've worked on potential counters to A but B is unacceptable given fed's and spook's track records.
Yes, that's what happens when the government and the company don't work out any compromise. In the end, if you stonewall and the judge agrees, the government gets to use coercion and shut you down. There are worse things (they could raid your datacenter or fine you or put you in jail).
In the happier case (with better lawyers), a legal procedure gets worked out where the company checks that the search warrants are valid, law enforcement gets the info they want, the company keeps their private key, the government doesn't get to install any hardware, and companies publish "transparency reports" showing that mass surveillance isn't happening.
Those are the ones you hear about. Then there are the ECI-level Snowden leaks on US companies "SIGINT enabling" their products. It said the FBI "compelled" those that resisted. SIGINT enabling is backdoors. The method of compelling isn't revealed. And Lavabit was similarly expected to keep it all secret.
So, you are told that it happens as you describe. We can't be assured of that, given the leaks.
And you'll notice it's conveniently not in the Chrome Web Store (where the average user can easily install it). It's better than nothing, but it is a bit transparent to have this tool yet not release it into their Web Store (or even better standard in Chrome).
Maybe I'm too demanding, but it seems like all the main web portal/social network providers have the tools at hand to make email secure against the state actor threat: Protocol standards that enable use of open clients, and a social graph and real-time communication tools that would enable a web-of-trust and key signing to prevent MITM. Their ad revenue per user is small and easily replaced by a subscription fee for using their web-of-trust and storage infrastructure.
But I don't want end to end encryption. Why are you forcing me to use encryption if I don't want it?
I'm getting a little annoyed by the folks demanding I do something I don't think is necessary. I'd rather have all the features Google gives me by not enabling end to end encryption, why must everyone have the highest possible level of security on every single thing they do?
You may think it's stupid, but I genuinely don't care if the government reads my emails (with a warrant), or if Google indexes them. I still fundamentally trust the government, and I believe that any incidental processing of my emails that the NSA might be doing is searching for what we currently consider to be terrorist activity. If the definition of "terrorist" shifts beyond reasonability, then my habits will shift as well, but we're not anywhere near that.
Well, the theme of the comments is: if google adds protection against certain adversaries (coffee shop wifi, MITMs, etc) via stricter TLS, but not all adversaries (nation-state attackers) via end-to-end, then the former protection is useless, I guess. It doesn't make sense to me, either.
You're right; next time you want to send your credit card data, do me a favor and email it to the company to let them know what you'd like to buy. As for warrants, forget email; it's possible your local police are already listening to your calls with a warrant. I'd go on, but I don't see the point. I mean, for someone with nothing to hide, it's unclear why you don't list your full name and zip code, or, for that matter, any affiliation you might have to the topics being commented on.
If there were a way to give you read-only access to my daily life, I'd do it (for the most part). I have stuff to hide, just not a lot of stuff, and none of it is in GMail.
It's a lot like having my Netflix account require a 32 character password -- just not necessary. For my bank? Sure. But Netflix? I just don't care.
We disagree in opinion. I'm advocating for what I feel to be important in security and privacy and you are advocating for ease of use and features. It's not stupid, just we have different priorities.
> It's one thing to stand up to government overreach in foreign countries, but how about the country where you (and a large percentage of your users) reside?
I'm not sure I catch your drift. Users in any country may get targeted by governments of any country. You might say that governments should be allowed to do whatever they like to their own citizens, but in this day and age, that's hardly an easy distinction to make.
You're right. Governments of any country could target any user. My point was, they make a point of saying "attackers" which is not synonymous with the data they release due to government requests (which is likely a more worrisome concern).
They're alerting you of "attacks" to obtain your data, but not necessarily government requests for your data.
Thank you for the link. That's good to know; however, as you alluded to, sometimes it's not possible to alert users.
In that vein, end-to-end would protect those users even if you cannot alert them. Your hands are tied with something like PGP, because you couldn't access the content even if you wanted to. Someone mentioned your End-to-End extension. Will you be releasing it to the Web Store soon? Or some other killer end-to-end solution?
I think the red lock next to the recipient email is more confusing than anything.
It strongly suggests some kind of end-to-end encryption, like PGP, when there is still nothing of the sort. Google, as well as the receiving email providers, still has full access to the plaintext versions of these emails.
It's creating a false sense of security, which can be more damaging than anything.
I'm not sure if it's far-fetched, but instead of creating more security for Gmail users, perhaps this is a deliberate attempt to make people start requesting that "lock" next to their email address (especially businesses), and as such create a surge in providers adopting secure communication. Because that's the biggest impact this will have, and as such, it will mostly indirectly increase security for Google's users.
I agree with much of the others commenting here. The IETF strict transport security draft is ridiculous. If every carrier who passes the message can #1 read it and #2 potentially change the content and #3 promiscuously route messages to each other then why does it really matter if they pass it amongst themselves securely? Line security is easily defeated by other 'features' of SMTP.
End to end encryption is the only thing that will really matter in email security. And even with end to end encryption email is a flawed medium, since it leaks meta data in the process of message delivery. That is kind of a barrier to secure messaging.
Because if you trust your email provider(s) then big baddies can't manipulate it. It's about improving the threat model, not "solving it." Basically it cuts off the easiest attack angles from very sophisticated hackers and governments.
TLS cipher downgrade and DNS weaknesses. SMTP and almost every protocol out there, whether SSL/TLS-enhanced or not, are designed for least-common-denominator interoperability and legacy compatibility.
SMTP and really email as we know it is inherently insecure. Without end to end encryption and relying on hosted mailboxes, we're inviting "service providers", government and hackers alike to read and tamper with our email.
Google has no interest in end-to-end encryption: It would put them out of the loop. It would be contrary to their mission to analyze the customer data to deliver better ads.
Gmail is a big privacy problem. Even if you don't use it yourself, nowadays a large percentage of your emails will end up there. And why? It's all about laziness and low friction.
Computer-literate persons (that's you, right?) should really consider getting off their butts and hosting their own email. It's not hard, it's not expensive, and it's not a lot of work to maintain either. It can even be fun and informative. By sticking to Gmail, you're no longer credible when complaining about the erosion of privacy on the internet.
But not everyone has the time or money to maintain an email server running 24/7. That's why people resort to hosted services. Also, the configuration of mail servers nowadays is a huge pain in the backside. Your mail always gets tagged as spam unless you jump through a ton of hoops. It's flat-out annoying.
These days it's hard to make a mail server an open relay. By default they are no longer open. It's been like that for the last 10 years or so.
The defaults are safe, the various popular guides you'll find on how to setup something like Postfix with Dovecot will result in a safe setup and if you use something like sovereign ( https://github.com/sovereign/sovereign ), you'll end up with a very well configured mail server with very little work.
Why does Google have to be the one to provide secure paid email to you? Gmail obviously operates at a vastly different scale than their similar paid software products. If you really want secure paid email, I'm sure there are plenty of other companies willing to take your money.
>get off their butts and host their own email. It's not hard, it's not expensive and it's not a lot of work to maintain either. It can even be fun and informative.
I'd love to do this -- I ran an email server for a while and it really was informative. But my sent mail got sent straight to the spam bin every time. Ironically enough, Gmail's spam filters are usually so reliable that people don't really check for false positives with any reliable frequency, so I bailed.
I've been running my own mailserver since the late '90s. I haven't had that issue. You do need to use an IP address from a non-dialup pool, i.e. rent a VPS or a dedicated server at an ISP.
When you start with a new IP address, check if it's on one of the spam blacklists. If it is, complain to your ISP (and to the blacklist). Get your IP off the list.
Next, add all the usual anti-spam things such as SPF records, DKIM, SSL certificate and a reverse DNS record.
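Roughly, that amounts to a handful of DNS entries like the following (example.com and the selector name are placeholders; the DKIM key comes from your signing setup):

    ; SPF: which hosts may send mail for the domain
    example.com.                  IN TXT "v=spf1 mx a -all"
    ; DKIM: public key under a selector of your choosing
    mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"

plus a PTR (reverse DNS) record for your server's IP, which you usually set through your VPS provider's panel rather than your own zone.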
> In the 44 days since we introduced it, the amount of inbound mail sent over an encrypted connection increased by 25%.
I'm surprised that it's not more than that. I can imagine executives everywhere asking their IT people "Why do all of our company's emails have this error on them? They are all red and scary!"
Why "state-sponsored" attacks specifically? If any group of attackers is targeting me, then I'd be just as concerned. Introducing that distinction seems like it will force Google to determine whether a group is backed by a government.
There's many other warnings and notifications that Google gives users about phishing and hijacking attempts. This specific warning is exposing the fact that we think someone is being targeted by a government backed attacker. We believe this is information that could be useful for some people and that we shouldn't keep this to ourselves.
My major concern here is that to me, end users are reading into this way too much.
I say this because I caught a developer a few days ago implementing an online payment gateway using a Wordpress "form to email" plugin. The ensuing argument came down to his firm belief that email to gmail is now "encrypted", and thus, this is perfectly safe.
We need to be careful about sending this sort of message.
The mail content is still in the clear inside Google. As long as Google does mail that way, it's not secure. Only end to end encryption can provide any security.
Well, if Gmail were open source and we could self-host it, we could get the same advantages.
Giving all your private data to a foreign company, serving the interests of its investors and acting directly against your and your nation's interests, is NOT acceptable, and should NOT be common.
How is "if GMail was self-hosted, we’d have the benefits without the disadvantages" off-topic to the parent thread, which was about privacy concerns regarding GMail?
Please read the link I posted before replying. Mike Hearn explains that the biggest advantage that large email service providers have in the war against spam is their centralization. Because of the access to large amounts of data (obviously), but also because there is basically no known way of writing decentralized anti-spam computation engines that cannot be gamed by spammers.
Also, would you rather have users willingly giving their data to a foreign company (legally liable, bound to a published privacy policy) or unwillingly to malware authors and credentials phishers? In the current security landscape this is a very real tradeoff to think about.
>Also, would you rather have users willingly giving their data to a foreign company (legally liable, bound to a published privacy policy) or unwillingly to malware authors and credentials phishers? In the current security landscape this is a very real tradeoff to think about.
False dichotomy. As well, keep in mind that those privacy policies are subject to change w/o consent and are overridden by any country's government. So even with your false dichotomy you're just choosing _who_ steals your data, not if someone does. I'd rather live in a world where I can use services and not give away my data.
You realize that almost all the email filtering GMail is doing nowadays is based on networks trained on the content of the mail, not on the domains?
And in fact, if you train your own neural networks to do this same task – as I’m currently doing – you get the same quality of categorization and spam filtering that Google got.
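For a sense of scale: a serviceable content-based filter is a few lines with scikit-learn (the corpus below is a made-up toy; quality obviously depends on real training mail):

    # pip install scikit-learn
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    mails = ["cheap pills buy now", "meeting moved to 3pm",
             "you won a free prize", "lunch tomorrow?"]
    labels = ["spam", "ham", "spam", "ham"]

    # Bag-of-words features + Naive Bayes: the classic content-based filter.
    clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
    clf.fit(mails, labels)
    print(clf.predict(["free pills now"]))  # ['spam'] on this toy corpus

The model itself is commodity; what varies is the training mail you can feed it.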
I consider Google, the NSA, and so on just as trustworthy as a Nigerian Scammer, so I see no difference in giving my data to Google, or giving it to the phisher.
They operate under laws I can’t control, use my data in ways I can’t control, and don’t ask me if they wish to use my data for more other purposes later on.
But how do you get access to large enough sets of training data? Wouldn't that always require plaintext access to other people's mail? Even more, the data would have to be recent, so as to take into account trends in approaches.
The training data is emails. Sharing training data means your email is no longer contained within your end-to-end encryption; it's leaking all over the place. If you can find a way to extract useful training data from emails while also making it so it doesn't identify anything about you or the emails you've been receiving, I'm sure you can make a lot of money with that. I suspect that if it's not impossible, it's extremely hard, and even harder to do right (that is, in a way that we don't later find is susceptible to some partial reconstruction of attributes).
If you share the network, and allow members of the network to use their own data to help train, how do you prevent spammers from joining and submitting garbage, or worse, targeted updates to make specific spam pass?
Well, the idea is that you can check networks against your own set of organised data: if adding network X reduces the overall effectiveness, you just stop using network X, and X's score is reduced.
EDIT: As HN prevents me from adding new comments right now (Seriously, HN, allow us to post more than 3 comments per hour, it’s seriously hard to hold a conversation like this), I’ll answer your comment here:
Users would train networks locally based on their own decisions. Those networks would then be submitted to a repo, and you’d get other networks in return. If a network sorts badly (aka, you always undo its sorts manually), you will not get networks with similar sorting capabilities next time.
The concept would automatically prevent people from adding malicious networks – as they’d end up in the local blacklist of users.
Obviously you wouldn’t blacklist the network itself, but a representation of its concept of sorting.
So, how are these networks getting their data? Users submitting data? That means users are reducing their individual security to increase the group security as a whole. You are then presented with just consuming this data (and staying secure), or contributing, and we're back at the same point, data needs to be shared so it can be trained against.
Let's also look at the incentives for these networks that have data you can subscribe to. How are they supposed to keep spammers out? Any sort of vetting and management of the individual networks will be non-negligible, and if it's not funded will be at a disadvantage to the spammers that are doing this for profit.
Finally, I'm not sure that training sets for data like this can be easily combined without a massive amount of reprocessing, if at all. I'm not familiar enough with the classifying networks involved to know, but I suspect that problem alone ranges somewhere from "non-trivial" to "very-hard", if not already solved.
It sounds good, and in a perfect world we'd have well run and managed shared networks of fully anonymized spam/phishing classification training data that was easy to combine into individual personal classifiers without having to heavily re-process large training sets.
I'm just not sure how feasible the individual parts of that are, much less them combined into a whole.
Google has been completely useless in protecting my data and emails.
Google has shared them with governments all over the place, including the US government and the NSA. Some parts obviously willingly, some parts out of negligence.
If Google can’t protect my emails from being read by any government, or being read by any employee or algorithm from Google that wishes to use my data for profiling, advertisements, or any other purpose that isn’t directly required to fulfill the tasks I gave it, then it is not doing what it’s supposed to do.
Also, accusing someone of being a tin foil hat wearer is just making you sound crazy in the post-snowden world.
Snowden has shown us proof that the NSA has had access to all the data from many US companies, including Google.
If you can prove otherwise, please do so – but currently, Google deserves no trust at all.
Pigeons have very easily exploitable security vulnerabilities. No, you should use end to end encryption if you want security. But the other end is the weak link so you have to trust it.
What's the progress on the End-to-End tool? What's the progress on making Hangouts end-to-end encrypted for that matter?
I feel that these improvements, while useful, are a sideshow to stop privacy enthusiasts from switching to better encrypted services or tools, while Google (and Microsoft, and Yahoo) continues to mine all of your private conversations for advertising purposes.
Indeed, it's surprising that there is no sign of end-to-end encryption on Hangouts, while the competition (with marketshare) is getting there. iMessage had end-to-end encryption since the beginning. Whatsapp has end-to-end encryption on Android based on TextSecure (though it is still hard to see whether it's active). Telegram has support, though it's not the default.
In the meanwhile, Apple also has encrypted notes, phones that are encrypted by default, etc.
Google seems to focus purely on transport security.
No it doesn't. It just makes that a little harder to implement. You can set up a side channel between your own devices to sync that history. Or you can use double-encrypted key and just change the outer layer of that onion when a new device is added.
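A rough sketch of that key-wrapping scheme, using the `cryptography` package's symmetric Fernet for brevity (a real design would wrap to per-device public keys): the history stays under one inner key, and adding a device only re-wraps that key.

    # pip install cryptography
    from cryptography.fernet import Fernet

    inner_key = Fernet.generate_key()              # encrypts the actual history
    history = Fernet(inner_key).encrypt(b"old chat history")

    # The server stores one wrapped copy of the inner key per device.
    devices = {"laptop": Fernet.generate_key(), "phone": Fernet.generate_key()}
    wrapped = {n: Fernet(k).encrypt(inner_key) for n, k in devices.items()}

    # New device: re-wrap the inner key; the history blob is untouched.
    devices["tablet"] = Fernet.generate_key()
    wrapped["tablet"] = Fernet(devices["tablet"]).encrypt(inner_key)

    # Any enrolled device can unwrap and read.
    k = Fernet(devices["phone"]).decrypt(wrapped["phone"])
    assert Fernet(k).decrypt(history) == b"old chat history"

Revoking a device is deleting its wrapped copy and rotating the inner key.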
>You can set up a side channel between your own devices to sync that history.
Then
1. Some device would need to be logged in and online (not always true)
2. You'd need a way to authenticate the new device, which means not only do you need the other device on, but you need it close when first setting it up, and you'd need to manually transfer keys or something of similar bandwidth
3. Either all history would need to transfer, which is a bandwidth hog if the history is large (although if the device is close you could transfer over Wi-Fi Direct or Bluetooth or something), or that other device would need to be kept online whenever I want to query history (currently, Google serves this function)
4. If you lose a device, there's no recovery (this is probably the main reason Google won't do it.)
If you really want it, there are apps that do E2E and support Google Talk. See https://chatsecure.org/
The development on the end-to-end extension seems to be relatively slow if you look at the commit log.
Does anyone with more inside information know what is going on with it? Does it have dedicated team members or is this just an internal side project for a couple people?
If you think that enterprise hosted email customers want end-to-end encryption, you are seriously misled. What most of these customers want more than anything is censorship and compliance, at delivery time. They want mails rejected if they contain naughty words. They want viruses dropped. They want messages containing customer credit card numbers dropped.
Content analysis is the #1 feature required by enterprise email customers. They do _not_ want end-to-end encryption that would prevent content inspection by intermediaries. They do not want that at all.
I guess it all depends on the enterprise. Encryption is a requirement for certain businesses. My partner is required to use it at her law firm when working with certain clients.
This immediately popped up a red warning in Chrome:
Your connection is not private
Attackers might be trying to steal your information from
www.security.googleblog.com (for example, passwords,
messages, or credit cards). NET::ERR_CERT_COMMON_NAME_INVALID
It seems that the SSL certificate is issued to *.googleusercontent.com. Given that we're talking about Google, I expected that the URL would redirect to the non-www https site, but apparently not.
This seems to be an edge case between how wildcard certificates work versus how HSTS works.
Wildcard certs only validate one level of subdomain depth (so a *.foo.com cert does not validate a.b.foo.com). HSTS "includeSubDomains" will require a valid SSL cert for all recursive subdomains.
It's a problem, but I don't think it's a problem worth solving.
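The one-label rule is easy to see in a simplified sketch of the RFC 6125 matching behavior (ignoring partial-label wildcards like baz*.example.net):

    def wildcard_matches(pattern: str, hostname: str) -> bool:
        """'*.foo.com' matches 'a.foo.com' but not 'a.b.foo.com' or 'foo.com'."""
        p, h = pattern.lower().split("."), hostname.lower().split(".")
        if len(p) != len(h):  # '*' covers exactly one label
            return False
        return all(pl == "*" or pl == hl for pl, hl in zip(p, h))

    assert wildcard_matches("*.googleblog.com", "security.googleblog.com")
    assert not wildcard_matches("*.googleblog.com", "www.security.googleblog.com")

So a single wildcard cert can never cover an arbitrarily deep subdomain tree, while includeSubDomains HSTS applies to all of it.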