Someone in the other thread had a genius solution to this:
- All domains should now come free with a domain certificate.
Seriously, this entire problem set is solved by that one single change. It is one of those ideas that, once you hear it, you cannot un-hear it. We already know (as this blog post says) that domain certificates have zero cost associated with creating them, so just bundle one with domain registration; competition in that space will soon force the additional cost to near zero (particularly if it is required).
This is a much better solution than "Let's Encrypt" because it scales better, and we don't have all of our eggs in one basket.
So who do we lobby to make this happen?
PS - Domain registrars won't fight this too hard, as their customers will be "forced" to buy a "bundled" domain cert. I am just hopeful that even if initial prices are higher, the additional cost will eventually be absorbed into the domain registration itself.
Most people don't understand how certificates work and get themselves in all sorts of trouble, so part of the problem is technical and educational.
"Let's Encrypt" is much more than just a certificate authority; in fact, other CAs can adopt the Let's Encrypt protocol (ACME).
ACME is arguably solving the bigger part of the problem which is automatic configuration and renewal for the vast majority of people who have no clue how any of this works and don't want to have to think about it.
It's pretty easy to prove that a given domain was registered by a given registrar, so it's pretty easy to say that e.g. Hover is responsible for example.com, so Hover can request one (!) certificate for *.example.com (or re-request it and revoke the old one).
Then the only problem is when your registrar is a bad actor, and that's rough because they lose all* their business when people find out that they're screwing over their customers.
This post has an incomplete and somewhat dismissive take on certificate pinning.
It's true that pinning involves a degree of trust on the certificate presented on the first connection, and that is a weakness.
But that weakness is mitigated. Browsers also rely on the CA signatures for that certificate (pinning augments CAs, but doesn't replace them).
The potency of pinning is subtle, because nobody trusts CAs (and shouldn't!). You have to think beyond just your browser, and you have to grok that CAs are a finite resource for your adversaries. You trust the CA-signed pinned cert for an HPKP site on the first connection. But other browsers have had that pin cached, and when the pin is tampered with, they don't trust it. When they see the broken pin, they can do more than just not trust the connection: they can also relay the evidence that a CA is implicated in signing a certificate that breaks a pin.
Google won't say so specifically, but it's not unlikely that some of the last few CAs to get burned for improperly signing certificates for Google sites were caught because of pinning.
Pinning protects more than individual browsers: it uses the installed base of pinning browsers to protect everyone, even users of non-pinning browsers, by turning them into a global surveillance system for compromised CAs.
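For reference, an HPKP pin is just the base64 of the SHA-256 hash of the certificate's Subject Public Key Info, which can be computed with stock openssl tooling (a sketch; `cert.pem` is a placeholder filename):

```shell
# Extract the public key from a certificate, DER-encode it, hash it
# with SHA-256, and base64 the digest. The result is the "pin-sha256"
# value that goes into a Public-Key-Pins response header.
openssl x509 -in cert.pem -pubkey -noout \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary \
  | openssl base64
```

A site would typically pin its own key plus a backup key, so a routine certificate rotation doesn't lock out returning visitors.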
The realist in me says this will just frustrate developers: staunch advocates of Firefox will pester for working services while higher-ups refuse to justify the cost of suiting a possible minority userbase, forcing those users to either switch browsers or move service providers.
IMO, before http is deprecated, we need support for public keys in DNS, bypassing the CA system. It would possibly offer a lower level of security than a CA cert, but it would be good enough for many sites.
That's kind of the issue. There's basically two circumstances where I want to connect with a remote site:
1. I don't care who they are, I just want to read their content (any site I'm not going to log into, e.g. blog posts, etc)
2. I care who they are, I need to know they're them (banks, HN, Twitter, etc.)
The current CA system provides the second one, but fundamentally it would be nice if, with the lack of a CA-verified certificate, the server/browser would just encrypt the connection anyway.
> I do not believe that there is data that is "not important enough to encrypt".
I do believe this. When I visit a static web page over SSL people sniffing my connection are almost always going to have a good idea of which domain I'm looking at (either by IP, or the DNS requests I just fired off, etc.), so the attackers know what content I'm seeing. So why encrypt it? The benefit is that we can trust that static content really did come from that domain and wasn't changed by a MitM, but this can be solved by simply including a signature of the content rather than encrypting everything.
SSL is really not cheap CPU wise; it can limit the usefulness of a small, personal, cheap VPS for hosting even moderately popular content. Signing static content is essentially free, since the signatures can be pre-computed.
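As a sketch of the "pre-computed" part: digests for a whole static tree can be generated once at deploy time and checked in a single pass, with no per-request crypto (directory and file names here are hypothetical):

```shell
# Pre-compute SHA-256 digests for every static asset once, at deploy
# time. A real scheme would additionally sign the manifest so clients
# can trust it; this only illustrates the cheap, cacheable part.
find static/ -type f -exec sha256sum {} + > manifest.sha256

# Later (or on a mirror holding the same tree), verify everything:
sha256sum --check --quiet manifest.sha256
```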
If the writer really does believe everything should be encrypted (this necessarily includes metadata), then I assume he would advocate that Mozilla deprecate support for non-Tor connections? :)
> I do believe this. When I visit a static web page over SSL people sniffing my connection are almost always going to have a good idea of which domain I'm looking at (either by IP, or the DNS requests I just fired off, etc.), so the attackers know what content I'm seeing.
The domain is not the same as the content. Far from it. Aside from that, a site being 'static' is not a meaningful data point here.
> SSL is really not cheap CPU wise; it can limit the usefulness of a small, personal, cheap VPS for hosting even moderately popular content.
Being somebody who runs an HTTPS-only PDF hosting site off such a cheap VPS, I disagree. It does not make a meaningful impact on resource usage.
I'm quite sure you could easily run a static site HTTPS-only from even a LowEndSpirit VPS - these are €3 per year.
> If the writer really does believe everything should be encrypted (this necessarily includes metadata), then I assume he would advocate that Mozilla deprecate support for non-Tor connections? :)
No, I do not. Tor is yet another dependency that needs to be available, and there are significant usability (and privacy!) concerns with routing all traffic over Tor.
"Everything should be encrypted" does not come without cost. There are real-world problems that need solving before this can be made a reality. Just disabling non-TLS connections does not cut it.
> The domain is not the same as the content. Far from it. Aside from that, a site being 'static' is not a meaningful data point here.
For a lot of web pages this is true, but what about sites like restaurant webpages? They have very few actual pages on their site, and they don't change depending on who is making the request. If I request the site over HTTPS, anybody that is sniffing my traffic will know that I'm visiting that domain and thus can request the content themselves. The encryption here is not providing anything.
If you are sending/receiving personalized data, e.g. logins, search requests, etc., then you should be encrypting those requests. But you would still be encrypting huge amounts of data for no reason: the CSS and static images used on the site are not transmitted with, or based on, any personalized data, so any sniffer already knows that these resources would have been downloaded by you anyway (and I don't care if an attacker sees the CSS of a page I'm looking at).
Requesting these static resources over plain HTTP, but authenticated via hashes or signatures, doesn't provide an attacker with any information they wouldn't already have.
> No, I do not. Tor is yet another dependency that needs to be available, and there are significant usability (and privacy!) concerns with routing all traffic over Tor.
And yet Tor is the only way to actually protect against sniffers tracking which domains you visit. If we really care about encrypting all the content - including non-personalized, static content - then I don't see why we wouldn't also care about protecting the domains we visit.
> "Everything should be encrypted" does not come without cost. There are real-world problems that need solving before this can be made a reality. Just disabling non-TLS connections does not cut it.
I totally agree with this. However I think I come to a different conclusion.
TLS is usable precisely because of its insecurity: the fact that we have to trust so many CAs. The key problem with any encryption architecture is key distribution, and more often than not, the more secure the key distribution, the less usable the end product. It's (relatively) easy to get a TLS certificate for your domain that everyone else will trust, but precisely because it's so easy, TLS is very vulnerable to malicious certificates.
For the most part TLS is fine; it offers a certain level of security, but is still vulnerable. I'm happy to rely on TLS when browsing Google search, online shopping, etc.; however, I'm not particularly comfortable using it for online banking, and I certainly wouldn't trust it for secure person-to-person communication.
So instead of trying to fix TLS, I think we should have more choice for the different levels of security I desire:
1. No personalised data is being transmitted either way => I only need to authenticate the remote content, not encrypt it.
2. Personalized, but not particularly confidential, information is being transmitted, e.g. Google search, shopping, logins for sites where it's not the end of the world if they get stolen => TLS; there's a chance stuff will get intercepted, but it is convenient.
3. Highly confidential information is being transmitted => Some other protocol or cert distribution mechanism, e.g. for online banking I might only trust a certificate given to me directly by my bank IRL.
> For a lot of web pages this is true, but what about sites like restaurant webpages? They have very few actual pages on their site, and they don't change depending on who is making the request. If I request the site over HTTPS, anybody that is sniffing my traffic will know that I'm visiting that domain and thus can request the content themselves. The encryption here is not providing anything.
The value of encryption as a whole increases when everything is encrypted, because it is harder for an adversary to distinguish "important" traffic from "unimportant" traffic. It may not matter for that domain alone, but it certainly matters in the bigger picture. It significantly increases adversary cost.
> If you are sending/receiving personalized data, e.g. logins, search requests, etc., then you should be encrypting those requests. But you would still be encrypting huge amounts of data for no reason: the CSS and static images used on the site are not transmitted with, or based on, any personalized data, so any sniffer already knows that these resources would have been downloaded by you anyway (and I don't care if an attacker sees the CSS of a page I'm looking at). Requesting these static resources over plain HTTP, but authenticated via hashes or signatures, doesn't provide an attacker with any information they wouldn't already have.
False. Assets can leak very easily, disclosing what content you are looking at. Just identify which assets are not loaded on every page.
> And yet Tor is the only way to actually protect against sniffers tracking which domains you visit. If we really care about encrypting all the content - including non-personalized, static content - then I don't see why we wouldn't also care about protecting the domains we visit.
It's not. The exit node still sees your traffic - and this is also why routing everything over Tor is a terrible idea (and incidentally, the same reason devices like the Anonabox are fundamentally broken). If you tunnel personally identifying traffic along with "anonymous" traffic, you're "contaminating" the anonymous traffic with your identity.
> I totally agree with this. However I think I come to a different conclusion. [...]
Saying that TLS is "not entirely useless" is a very poor argument for not working on making it better.
The "different levels" you suggest are pretty much already implemented as such, except there is no "authenticate but don't encrypt" level, because it's not a useful or desirable level to have.
> The value of encryption as a whole increases when everything is encrypted, because it is harder for an adversary to distinguish "important" traffic from "unimportant" traffic.
Except you can classify a lot of traffic as unimportant by domain. If someone is trying to steal my bank account information, encrypting all my other traffic isn't going to help. The easiest attack against TLS is to get a valid cert and then MitM, at which point it doesn't matter how much traffic you're sending to that domain.
I would also be interested in knowing whether sending more encrypted data down a TLS channel actually makes it harder to brute force; given browsers' tendency to pool connections, there will probably be (relatively) few distinct TLS connections.
> False. Assets can leak very easily, disclosing what content you are looking at. Just identify which assets are not loaded on every page.
Assets are cached by the browser, making it impossible to know whether a newly downloaded page included those assets or not. Downloading in the clear only those assets that appear on the root page, or on the majority of pages, mitigates this attack.
(Also, irrespective of whether you used TLS: if you requested the HTML over TLS and it included hashes for all of its static content, you would have a much stronger guarantee of the authenticity of assets downloaded from CDNs. Hashing also makes caching much nicer and friendlier.)
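The hash-in-the-HTML scheme described in that parenthetical exists today as Subresource Integrity: the page, fetched over TLS, embeds a digest of each asset, and the browser refuses a CDN-served file that doesn't match. The digest is computed like so (`app.js` is a placeholder filename):

```shell
# Compute a Subresource Integrity value for an asset served from a
# CDN: a SHA-384 digest, base64-encoded on a single line (-A).
hash=$(openssl dgst -sha384 -binary app.js | openssl base64 -A)
echo "integrity=\"sha384-$hash\""
```

The resulting `sha384-...` string goes into the `integrity` attribute of the `<script>` or `<link>` tag referencing the asset.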
This feels more like an argument along the lines of "it's not really worth having a separate method for doing authentication without encryption; it complicates things and probably won't be usable very often", which is perfectly valid and an argument I'm sympathetic to. It just doesn't support the notion that all data is important.
> It's not. The exit node still sees your traffic - and this is also why routing everything over Tor is a terrible idea (and incidentally, the same reason devices like the Anonabox are fundamentally broken). If you tunnel personally identifying traffic along with "anonymous" traffic, you're "contaminating" the anonymous traffic with your identity.
But you can tunnel TLS through Tor(?) The idea behind using Tor here is to stop attackers from being able to trivially tell which domains you're looking at.
> Saying that TLS is "not entirely useless" is a very poor argument for not working on making it better.
My (probably badly worded) point is that it's not as easy as saying "let's make it better". Making TLS more secure means coming up with a way of issuing certificates in a more trusted fashion, and I don't see how you do that without making it harder to get a cert.
The only other tech I'm aware of that is trying to address this is Perspectives, but even that is not perfect.
> The "different levels" you suggest are pretty much already implemented as such...
Browsers are getting better at supporting pinning, but I don't think any allow you to manually add domains? It's not something that has been advertised and encouraged. My bank certainly doesn't advertise its certificates' fingerprints in its branches.
> ...except there is no "authenticate but don't encrypt" level, because it's not a useful or desirable level to have.
And I disagree with you, obviously.
Unless there are actual, provable, benefits to enforcing encryption everywhere, I don't like the idea of anyone removing the ability for me to make the choice. If you think all data should be encrypted, by all means encrypt all your data.
Personally, I don't care if an attacker knows what BBC articles I read. Yes, I know all the dangers that might befall me, but I've made an informed choice based on a risk analysis of my current situation and, well, I just really don't care one way or the other.
The quotes are getting very long, so I'm just going to respond directly to points without quoting here.
You seem to be oversimplifying the notion of "privacy" to "stuff like bank details". That is incorrect. Any kind of browsing data that a user does not want exposed to third parties falls under this banner. For some that's just their bank details, for others that's every single site they visit.
The point is that you can't decide for other people what is "private" to them. Therefore, the only acceptable solution is to make privacy opt-out - and that is done by encrypting everything by default.
Asset caching depends heavily on the site, and on whether different pages use unique assets. A cache is not a security feature, was not designed as such, and should not be treated as such.
Yes, you can tunnel TLS over Tor. It doesn't afford you any additional confidentiality. Your domains are still being leaked, just in a different place - and routing everything over Tor still exposes you to the same traffic correlation issues, just now it's domains that are being correlated rather than all request/response data.
Making TLS more secure entails removing the requirement of 'trust' as much as possible. That does not necessarily translate to it being harder to obtain certificates. A good example of this is hidden services - it's trivial to obtain an .onion identifier, yet since it's self-authenticating, it does not require trusting a third party.
The real problem with your argument shows itself in your very last paragraph - "Personally, I don't care [...]". You are extending your own personal point of view to everybody else on the internet, and it doesn't work that way. Others will have different privacy requirements, and those should be accommodated.
Just because you don't care, that doesn't mean you get to decide that nobody else cares either.
EDIT: Also, just to emphasize this: I am not arguing that encryption should be forced. I'm arguing that it should be default. That is something very different.
SSL has become significantly more efficient on newer CPUs with hardware support. And even if the static page itself is not worth stealing, there may be other things valuable to a sniffer, such as your identity (e.g., cookies) or your browsing habits. Moreover, without an SSL connection, a MitM can easily insert ads into the page.
I once interviewed a guy from an ISP contractor (not in the US) whose business was to mine search queries from user URLs and inject targeted ads into static pages sent to users.
> ...there may be other things valuable to a sniffer, such as your identity (e.g., cookies) or your browsing habits.
And SSL should be used in those cases. I'm not saying you should never use SSL; I just don't buy the idea that you need to encrypt everything. Due to all the unencrypted bits like the IP address and DNS (and SNI?), sniffers can already guess what domains you're visiting anyway. Encrypting in those cases gives nothing that signing the plaintext wouldn't.
> ...without an SSL connection, a MitM can easily insert ads into the page
Not if the response is also signed. (Assuming clients actually check the signatures, of course.)
So, under this scheme, what is the best practice for dealing with all the web servers in devices such as printers, routers, copiers, embedded systems, etc.? Quite a lot of these have no provision for https and, in quite a few cases, not the CPU power to do it.
Most printers/routers/copiers offer HTTPS from what I've seen. Sometimes it's on by default, usually not. They just use self-generated, self-signed certificates. There are problems with that, but CPU power isn't one of them.
My experience with large numbers of smart powerstrips is that they support SSH and HTTPS, but not reliably. Their telnet and HTTP are reliable. I don't know why this is the case, but there you have it.
Smart powerstrips are still a minority of connected devices.
Printers, routers, etc — anything that can afford a $5 ARM or MIPS core — have plenty enough power to allow TLS access.
Getting a certificate for each of them to provide a Web interface is another story.
In a corporate environment, the IT department will probably install its own certificates, automatically trusted by corporate browsers. Home-oriented devices will probably use massively-copied certificates instead of unique ones. That's not as secure as a per-device unique certificate, but definitely more secure than no encryption at all.
But it mandates that you click through an SSL warning, which no user should ever have to do unless they are actually testing SSL-related stuff. Otherwise, it's just teaching everyone bad practices.
If self-signed certs are accepted silently and shown as "not secure", the way plain HTTP is accepted and shown (per https://news.ycombinator.com/item?id=9472037 proposal), the user won't need to click through anything.
Self-signed HTTPS is in no way less secure than unencrypted HTTP.
These powerstrips are expensive enough to have a $5 ARM or MIPS core driving their software. And yes, I'm aware that they are a tiny minority of connected devices - I just wanted to point out that there's a class of devices that have problems with encryption.
There's one solution that the author didn't cover: Start treating self-signed certs as unencrypted. Then, deprecate http support over a multi-year phase-out. That way, website owners who want to keep the status quo can just add a self-signed cert and their users will be none the wiser.
For https there are two major objectives. 1) Prevent MITM attacks. 2) Prevent snooping from passive monitoring. Self-signed certs can prevent #2, which the IETF has adopted as a Best Current Practice (https://tools.ietf.org/html/rfc7258). I'm much more in favor of trying to at least do one of the two objectives of https, rather than refusing to do anything until we are able to do both objectives.
Here's a proposed way of phasing this plan in over time:
1. Mid-2015: Start treating self-signed certificates as unencrypted connections (i.e. stop showing a warning; the UI would just show the globe icon, not the lock icon). This would allow website owners to choose to block passive surveillance without any cost to them or any problems for their users.
2. Late-2015: Switch the globe icon for http sites to a gray unlocked lock. Self-signed certs would still get the globe icon. This would incentivize website owners to at least start blocking passive surveillance if they want to keep the same user experience as before. Also, this new icon wouldn't be loud or intrusive to the user.
3. Late-2016: Change the unlocked icon for http sites to a yellow icon. Hopefully, by the end of 2016, Let's Encrypt has taken off, with frameworks like WordPress including tutorials on how to use it. This increased uptake of free authenticated https, plus the ability to still use self-signed certs for unauthenticated https (remember, this still blocks passive adversaries), would give website owners enough alternative options to start switching to https. The yellow icon would push most over the edge.
4. Late-2017: Switch the unlocked icon for http to red. After a year of yellow, most websites should already have switched to https (authenticated or self-signed), so now it's time to drive the nail into the coffin and mark any remaining http-only production site with a red icon.
5. Late-2018: Show a warning for http sites. This experience would be similar to the self-signed cert experience now, where users have to manually choose to continue. Developers building websites would still be able to choose to load their dev sites, but no production website in its right mind would choose to use http only.
Won't treating self-signed certificates as not-errors make 1 (prevent MITM attacks) worse though? You could then MITM an https site, serve up a self-signed cert, and anyone but the most observant users won't notice that something that used to be secure https now isn't.
It isn't about refusing to do anything but both at once, it's that doing only one of them makes the other worse.
No, the idea would be that http becomes https with a self-signed cert. It shows as 'not protected' to the user, the way http does now. It only turns green or gets the lock icon when it's a CA-signed cert.
Very few people are going to notice that something that used to be green now isn't.
In an attempt to increase security against passive monitoring (a laudable aim) this scheme heavily neuters https' ability to increase security against MITM attacks.
HSTS is designed to protect against downgrading from HTTPS to HTTP. Best I can tell, it doesn't protect against downgrading from an HTTPS with a validated certificate chain to HTTPS without a validated certificate chain.
To do that, you would need something like certificate pinning. Once there is a widely deployed mechanism for detecting when a certificate has erroneously changed it would be more feasible to stop treating self-signed certificates as errors.
We're talking about changing how self signed certificates are handled. You can't imagine the obvious solution of having self signed certificates not count as HSTS connections?
Showing no warning for self-signed certs would make it much easier to MitM https. For example: Let's say Bob has an https site with a valid (CA-signed) cert. Alice tries to connect to Bob. Mallory intercepts the connection, connects to Bob, and responds to Alice with a self-signed cert. Alice sees no warning, but her communications are being intercepted and possibly tampered with.
No, they would get a warning if it's a self-signed "google.com" certificate which doesn't match the one in the Certificate Transparency registry, or OCSP fails, or HPKP fails to match, or DANE/TACK fails to match.
At least that's what a sane implementation would do.
Aren't all of those vulnerable to denial of service attacks? If the "man in the middle" just drops all OCSP requests on the floor, for example, the default case is to accept everything.
Certificate Transparency is meant to detect compromised or malicious CAs, and typically only lets people know about problems after the fact. How would OCSP help with a self-signed certificate? What CA would the OCSP request go to? And cert pinning only works if you've visited the site before while not being MitMed.
A lot of these extensions to TLS help secure popular sites like Google, but they don't scale to the many small-but-important sites such as your local credit union. Instead of adding so much complexity, it's much easier and safer to keep current behavior and warn on self-signed certs.
>There's one solution that the author didn't cover: Start treating self-signed certs as unencrypted.
Firefox recently added support for this, in a way, through opportunistic encryption! HTTP connections can include a header telling the browser to contact the server over HTTPS and ignore certificate errors. The connection is shown as HTTP to the user, but you get the benefit of a self-signed certificate that blocks passive eavesdroppers.
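If memory serves, the mechanism in question is the `Alt-Svc` header: the plaintext HTTP response advertises a TLS endpoint for the same origin, and the browser quietly upgrades subsequent requests to it. A response might carry something like (values illustrative):

```
HTTP/1.1 200 OK
Alt-Svc: h2=":443"; ma=86400
```

Here `ma` is a max-age in seconds for how long the browser may remember the alternative service.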
Honest question: why would I show a warning for an http-only site which only displays photos from my latest vacation? (assuming said photos are hosted on my blog) Or random thoughts of mine about tramway-spotting? (again, assuming said ramblings happen on my blog)
* Your website can be manipulated to contain anything: it could be a script attacking another website (Baidu vs GitHub), it could be a sneaky redirect to a phishing page (tab-nabbing), or just a ton of nasty tracking ads injected by an ISP.
* There are still sites that should use HTTPS, but don't. Browsers can't know whether these are just holiday photos, or photos that are privacy sensitive or can even cause a visit from the secret police in some countries. It's better to err on the side of security.
Would you enjoy having your weblog overlaid with ads, probably inappropriate ones? Some free wi-fi spots do just that.
A self-signed certificate, which takes 10-30 minutes to generate and install, including searching the web for a step-by-step guide, would prevent that.
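For reference, the generation step is a single openssl invocation (the CN and filenames here are placeholders; installing the result is web-server-specific):

```shell
# Generate a 2048-bit RSA key and a self-signed certificate valid for
# one year. -nodes leaves the key unencrypted so the web server can
# read it without a passphrase prompt.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout key.pem -out cert.pem -days 365 \
    -subj "/CN=blog.example.com"
```

You then point the server's TLS config at `cert.pem` and `key.pem`; most of the "10-30 minutes" goes into that server-specific wiring, not the command above.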
The author mentions letsencrypt, but their description of why letsencrypt is not a solution makes me think they have a less than complete understanding of modern SSL.
Also, the configuration issue with regard to not supporting wildcard certs is not a problem IMO. If you're big enough to require multiple subdomains, the fact that you're managing multiple certificates instead of one need not be a logistical issue. Automation is our friend.
I'd be a lot more willing to consider letsencrypt.org as valid support for changing the fundamental requirements for communication on the rest of the web once it is a proven service whose practicality (and any pitfalls) for real users, across a wide range of cases including the less common ones, is well-established, rather than an upcoming one.