Hacker News

Is it me or...

> If they’re willing to convert all their customers to ESNI at once

Why does it seem like this is over-engineering at its finest? Not only are CDNs now part of the problem/solution space, but they are now dictating terms.

It is now that much harder to diagnose issues when they do crop up. Instead of checking with ping or nslookup, you've now got to see whether the DNS-over-HTTPS layer, the DNS record itself, the host, the client, or any number of other steps is broken.

We've completely removed the ability for a poweruser to diagnose before calling their resident IT professional.
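To be fair, the pre-DoH hops can still be poked at by hand: a plain DNS query is small enough to build yourself and fire at any resolver over UDP/53, which is roughly what nslookup does. A rough Python sketch (the function name is made up, and the transaction ID and flags are hardcoded for brevity):

```python
import struct

def build_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query (RFC 1035) for the given name.
    qtype=1 is an A record; send the result over UDP to port 53."""
    header = struct.pack(">HHHHHH",
                         0x1234,        # transaction ID
                         0x0100,        # flags: standard query, RD=1
                         1, 0, 0, 0)    # 1 question, no other sections
    # QNAME: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QTYPE, class IN
```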




It's even worse. To use ESNI you need DOH. To use DOH you need a resolver with a server certificate, which is kindly offered by the same cloud providers. So now all your base are belong to cloudflare.


None of this is tied to Cloudflare though. Or really to using a cloud provider at all. Of course, if you're not using a cloud provider, then the IP address can be used to figure out what the site is, but the point remains that you can pick from any cloud provider that supports ESNI (and, separately, pick from any DNS provider that supports DOH).

At the moment if you want ESNI it looks like you have to use Cloudflare, but the solution to that is to encourage other cloud providers to support ESNI rather than to decry the notion of ESNI.


> (and, separately, pick from any DNS provider that supports DOH).

But why do I have to? I already have a trusted DNS resolver, operated by myself, wired to my OS. Why require the whole DoH Rube Goldberg machinery to let me try ESNI?


DNS is plaintext, like HTTP, so running your own resolver does nothing to prevent your internet provider from selling the list of domains you resolve - in aggregate or in specific - to other companies for revenue.

There are three well-known trusted public DNS resolvers, run by Cloudflare, Verizon, and Google.

Which of those three would you encrypt your DNS traffic to, if those were the only three options available other than plaintext for all to see?


DNS does not need to be plaintext and DoH is not the only alternative. It's the alternative that the advertising engines and CDNs prefer because it extends control. The privacy of DNS argument is a major red herring.

DNSCrypt (or DNS over TLS or DTLS) is a wonderful alternative that works in-band and works with DNSSEC.

People are also ignoring the consequences of the switch from UDP to TCP.


Didn't DNS over HTTPS just land recently, in the generally available Firefox 62?


I think the specific question is: Why prefer `DNS over HTTPS` over e.g. `DNS over TLS`. The relevant matter is that DNS requests are encrypted. How exactly it is encrypted should not matter.


AFAIK it doesn't matter. DNS over TLS would work fine too (though I'm not sure if Firefox supports it). The important thing is that you're not using plaintext DNS, as that would defeat the purpose of using ESNI in the first place.
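Indeed, the two differ only in framing, not in the messages carried: DoT (RFC 7858) sends the unmodified DNS wire-format message over TLS to port 853 with a 2-byte length prefix, while DoH (RFC 8484) makes the same bytes the body of an HTTP exchange. A sketch of just that difference (function names are illustrative):

```python
import struct

def frame_for_dot(dns_msg: bytes) -> bytes:
    """DNS over TLS (RFC 7858): a 2-byte big-endian length prefix,
    then the unmodified DNS message, sent over TLS to port 853."""
    return struct.pack(">H", len(dns_msg)) + dns_msg

def frame_for_doh(dns_msg: bytes):
    """DNS over HTTPS (RFC 8484): the same bytes become the body of
    an HTTP POST; HTTP itself carries the length."""
    headers = {"Content-Type": "application/dns-message",
               "Content-Length": str(len(dns_msg))}
    return headers, dns_msg
```

Either way the resolver sees an identical RFC 1035 message; only the transport wrapper changes.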


What about applications that communicate over protocols other than HTTP(S)?


If your browser is talking to a DNS resolver wired to your OS, that doesn't change what any upstream network observer sees, since they can observe DNS requests generated by your OS-level DNS resolver exactly as they would observe DNS requests generated by your browser.

DOH isn't about trust. It's about preventing network observers from figuring out what sites you visit by observing the DNS requests you make.


Not _only_ about trust. One of the things DoH gets you for free is that it means that your ISP doesn't get to touch DNS requests, which was not true previously. You can get Internet service from whatever bunch of money-grabbing assholes are available where you live, and get DNS from somebody else without that being tampered with since it's encrypted on the wire. You do still have to trust the DoH provider (outside of DNSSEC).

In the UK for example, the current government proposals for yet more Internet censorship assume that they can "just" order ISPs to censor DNS. This, their white paper says, is relatively cheap and so ISPs might even be willing to do it at no extra cost, which is convenient for a supposedly "small government / low tax" party that keeps thinking of expensive ways to enact their socially regressive agenda...

But DoH and indeed all the other D-PRIVE proposals kill that: to censor users of D-PRIVE you're going to have to operate a bunch of IP-layer stuff, maybe even try to do deep packet inspection, which TLS 1.3 already made problematic and ESNI skewers thoroughly.

So there's a good chance this sort of thing for _ordinary_ users (the white paper already acknowledges that yes, people can install Tor and it can't do anything about that) makes government censorship so difficult and thus expensive as to be economically unpalatable. "Won't somebody think of the children" tastes much better when it doesn't come with a 5% tax increase to pay for it...


Conveniently, the same will also apply to me trying to block apps from talking over the network to things they should not talk to.


You could always point your DNS at your own DNS-over-HTTPS server, then configure that server to forward requests over an encrypted connection to another DNS-over-HTTPS server.

Don't know if there are any tools available right now that will do that for you, but there's no technical reason why it isn't possible.
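As a sketch of the wrapping step such a forwarder would perform (the endpoint URL is hypothetical; the POST semantics follow RFC 8484, where the request body is the DNS wire-format message itself, byte for byte):

```python
import urllib.request

DOH_URL = "https://doh.example.net/dns-query"  # hypothetical upstream

def to_doh_request(dns_query: bytes) -> urllib.request.Request:
    """Wrap a raw DNS wire-format message as an RFC 8484 POST.
    The HTTP body is the DNS message itself, unmodified."""
    return urllib.request.Request(
        DOH_URL,
        data=dns_query,
        headers={"Content-Type": "application/dns-message"},
        method="POST",
    )

# A real forwarder would recvfrom() on 127.0.0.1:53, pass each packet
# through to_doh_request(), urlopen() it, and sendto() the response
# bytes back to the client unchanged.
```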


I think the problem will be apps with a DoH service hardwired. There wouldn't be anything for me to point anywhere short of patching the app.

Yes, apps could theoretically already do this today if the developers are willing to run their own endpoints. However, my guess is this will become vastly easier to do when there are already public DoH endpoints available to connect to.


There isn't anything that would prevent the local resolver from talking upstream using DoH.

Same privacy, minus cloud companies that try to insert themselves as middleman.


DOH does not rely on cloud companies, and the requirement for DOH has nothing to do with cloud companies "try[ing] to insert themselves as middleman". You could certainly argue that ESNI should be usable even if DNS isn't done over DOH, in case you trust the network path to your recursive resolver and you trust the recursive resolver itself to be using DOH, but that also has nothing to do with cloud companies.

I admit I'm not sure why ESNI requires DOH (it's not particularly useful without DOH, but that's not an argument to disable it); my guess is it's simply due to the additional resource usage necessary to process ESNI requests, and so Firefox doesn't want to put this additional load on the server if there's no practical benefit to doing so.

Ideally, using an OS-level resolver would come with a way to tell the browser that the recursive resolution was encrypted (whether via DOH or something else doesn't really matter), but I'm not aware of any way to do that at the moment.


DoH by itself does not, but the way it is getting implemented, most users will just happen to be dependent on them. That's why I wrote that.

Instead of pushing this functionality into OS resolvers and standalone resolvers used by networks, it is being pushed into commonly used applications, with cloud companies providing the other end by default.

ESNI doesn't require DoH, but there's no point in using it without it: your network can check the DNS records you are asking for and then check the encrypted SNI against them (it will have the same key you are using to encrypt, so it can do the same and match).

Some resolvers (systemd-resolved, ducks and hides) do use a custom API to attach properties to responses. That was the only way to get DNSSEC status, so it could be reused to indicate the upstream protocol too. However, not many applications use that; they rely on the standard gethostbyname(), which doesn't provide anything similar.


Most users will just happen to be dependent on cloud providers for DoH not because of anything inherent to DoH, but because at the moment only cloud providers are offering DoH-enabled resolvers. If you don't like this, then instead of decrying the usage of DoH, you should be pushing for more DNS providers to offer DoH.

I do agree that pushing DNS functionality into apps instead of the OS level is suboptimal, and I certainly hope that, if Firefox proves that DoH works well, it will be adopted by the major OS's (along with a way to query the OS resolver to check if it's using DoH or not) so that all apps can benefit from it instead of just web browsers that reimplement DNS.

Of course, IIRC Chrome at least (not sure about Firefox) has already been implementing DNS resolution itself for a long time rather than relying on the OS resolver, so the idea of a web browser doing DNS directly instead of relying on the OS is not a new one. I'm not sure why Chrome does this though.


> Ideally, using an OS-level resolver would have a way to tell the browser that the recursive resolution was encrypted

A simple flag to configure this would have done the job. I don't like how browsers are pretending that they have security needs that are special compared to any other application and thus need to pull in the whole network stack and bypass the OS on everything.

It causes duplicate effort if you want to secure your whole network instead of only the browser. It also limits technology choice. I'm forced to use DoH even though there are other options.


I am perfectly capable of having my DNS resolver use a different, encrypted route than the HTTPS traffic.

This is not mozilla's decision to make.


But if you're using CloudFlare through Firefox, Mozilla is doing collective bargaining on your behalf.

It's a different world, true, but technology can't be stopped. If Mozilla succeeds in being an agent negotiating on behalf of users, all your base might be governed by reasonable contracts.


Thing is, it does bargain, and trusts a third party's privacy policy, but I, for example, do not trust Cloudflare.

"We’ve chosen Cloudflare because they agreed to a very strong privacy agreement" [0]. Like, legally agreed? With regular audits and full access for Mozilla people?

Where does that leave me, if it gets baked into my browser?

[0] https://blog.nightly.mozilla.org/2018/06/01/improving-dns-pr...


The quote you make there from your reference [0] has a link to the legal agreement with Cloudflare. It's here: https://developers.cloudflare.com/1.1.1.1/commitment-to-priv... So you can read the legal agreement.

Of course, if you still don't want to use Cloudflare for DoH you can just configure your favourite resolver in Firefox itself. The blog you refer to as [0] contains detailed instructions on how to do that.

So, where are you left? Right where you are today: you control the DNS resolver on your machine today. With Firefox Nightly you also control the DoH resolver (and can disable it entirely).


My concern is whether this integration with CF will make its way into default FF install.


But if you do DNS resolution yourself you'll lose privacy.

You'll probably always be able to run your own. If you so desire.


I use DNS over TLS via multiple resolvers, so not my case.

Privacy is a fragile thing. Is it better to put all my lookups in one basket (CF)? For hiding from the ISP, nothing beats a VPN, and in that case there's no need for ESNI. My point is it's not up to Mozilla to hand my data to a third party.

And, frankly, if choosing between the ISP and CF (or Google), leaks to the ISP impact your privacy much less. The ISP has no global dataset to run ML over your history, no analytics cookies, no clear-text traffic access.


You seem to complain, but I hear of no alternatives.

The world is moving towards more cloud computing, Mozilla can't stop the centralization of the internet. But if they can use collective bargaining to protect consumers that might do a lot of good.


Installing Unbound, which supports DoT and multiple upstream resolvers, on your home router, for example.


> To use ESNI you need DOH

This is a Firefox decision, not something required by the standard:

https://tools.ietf.org/html/draft-ietf-tls-esni-01#section-7

ESNI is best combined with DoH to prevent snooping (hence Firefox's apparent decision to tie the two features together), but obtaining the ESNI key does not strictly require DoH.
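Concretely, in the early drafts the ESNIKeys structure is published as a base64-encoded DNS TXT record at an "_esni." prefix of the hostname, so any resolver, DoH or not, can serve it. A trivial sketch of constructing the lookup name (helper name is made up):

```python
def esni_query_name(host: str) -> str:
    """Per the early draft-ietf-tls-esni versions, the ESNIKeys
    structure is published as a TXT record at the "_esni." prefix
    of the server name. The TXT payload is base64-encoded ESNIKeys."""
    return "_esni." + host.rstrip(".")
```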


It's even worse because they talk like ESNI is some kind of standard, but there's only been a single draft at the IETF written by a Mozilla employee, and that draft is still at version 1. Calm down Mozilla, maybe other people would like to comment on the design before you go and implement it?

Doing it like this is a great way to end up in interoperability hell down the road when different parties have implemented different versions. I'm not saying they have to wait until it's an RFC, but at least wait for a couple more versions of the draft and let the IETF discuss it a bunch first. This is a big change.


> but there's only been a single draft at the IETF written by a Mozilla employee

I can see why it might look that way, but actually draft-ietf-tls-esni-01 is the third draft of this document, and has been co-written by at least four named authors including Chris Wood at Apple. Also that "Mozilla employee" was one of the Working Group chairs.

draft-ietf-tls-esni-01 was preceded by draft-ietf-tls-esni-00 (it is usual for early drafts to start at version -00)

draft-ietf-tls-esni-00 was preceded by draft-rescorla-tls-esni which was Eric Rescorla's first write-up of this idea

Finally, though this document didn't exist twelve months ago, the "issues and requirements" document did. This document imports the thinking behind that one; it just provides an implementation, and now Firefox is testing it.

The reason for the name change is a thing called "adoption". The TLS Working Group agreed by consensus to adopt this piece of work, rather than it just being independent stuff by a handful of people who coincidentally were working group members. When that happens the draft's name changes, to reflect the adoption (removing a single person's name) and sometimes to use more diplomatic naming (e.g. the "diediedie" draft got a name that didn't tell TLS 1.0 to "die" any more when it was adopted).


Why does Chrome get to do this but not Mozilla? Why is field testing such an implementation a bad thing? What do more comments add over real, concrete data (up to a point)?


IETF standards require prior implementation and testing.


Not really. Use whatever DoH provider you like. Nothing here says "You can only use Cloudflare".


It's not you. It's a combination of startups and incumbent tech behemoths attempting to operate outside of the formalized process for internet standards by using their market power to push for the change they deem appropriate.

There are benefits (censorship circumvention) to be reaped, but also great peril.


I'm very confused - are you saying that the announcement of experimental ESNI support is an example of companies "attempting to operate outside of the formalized process for internet standards"? If so, I really don't see how that is true - they implemented a draft IETF spec - https://datatracker.ietf.org/doc/draft-ietf-tls-esni/ - If that isn't working with a standards body, I'm not sure what is.


Funny, I think we've had exactly the opposite problem. See, for instance, Heartbleed, which is pure product of IETF standardization of a feature no mainstream commercial entity asked for.


It was not required to implement the TLS heartbeat feature, and IIRC most TLS implementations did not implement it - except for OpenSSL. The real problem there was OpenSSL (and it was a big problem, being both the widespread default choice and too hairy for most competent engineers to bother digging into, at the same time...)


Again, nobody was asking for the Heartbeat feature. I went back and read through the (IIRC?) tls-wg posts about it. The same is true of extended-random† --- 4 different proposals! None of them were really pushed back on. It was just sort of assumed that if nobody strongly objected, it was going to become part of the standard.

DNSSEC is another great example. Look around. Nobody in the industry is asking for it (try the "dnssec-name-and-shame.com" site to confirm this), except the IETF and a very short list of companies with a rooting interest, like Cloudflare. In the very short time it's been around, DNS over HTTPS has done more to improve DNS security than 25+ years of DNSSEC standardization ever did. The cart has been dragging the horse here for a long time.

https://sockpuppet.org/blog/2015/08/04/is-extended-random-ma...


I don't disagree there are problems with not involving commercial stakeholders in the standardization process, and your Heartbleed example is poignant. I feel that there is a middle ground that would be more beneficial to all stakeholders in the long run. I'm just asking for some balance. The implementations of today evolve into the legacy systems that will need to be supported and maintained for years, if not decades.


Yes, and I think what you're looking at now is balance. The way standards are supposed to work is that companies (among other users) come up with features that they want, and get them working, and then the IETF is supposed to hammer out agreement on how to make those features interoperate. And that's it.

It was never the idea that IETF was meant to be an Internet legislature adjudicating what features can and can't be supported in protocols. But that's exactly what it has become.


Let's take TLS as an example. Nalini and co. wanted at first to put back RSA in TLS 1.3, they wanted that feature, the TLS Working Group felt that their charter effectively ruled it out. In your opinion was this working group acting as an "Internet legislature" by not having RSA in TLS 1.3?

Gradually Nalini's lot discovered a very important thing about the IETF: It is not a democracy. They tried sending more and more people, attempting the same thing that made Microsoft's Office into an ISO standard - pack the room with people who vote how you tell them. But there aren't any votes at the IETF, you've just sent lots of people half way around the world to at best get recruited for other work and at worst embarrass themselves and you.

After they realised that stamping their feet, even if in large numbers, wouldn't get RSA back in TLS 1.3, they came up with an alternate plan for what was invariably named "transparency" (when you have a bad idea, give it a name that sounds like a good idea, see also: most bills before US Congress) but is of course always some means to destroy forward secrecy or to enable some other snooping.

Now, IMNSHO the Working Group did the right thing here by rejecting these proposals on the basis that (per IETF best practice) "Pervasive Monitoring is an Attack". Was this, again, the "Internet legislature" since Nalini and co. wanted to do it and they'd expected as you've described that if they wanted to do it the IETF should just help them achieve that goal?

Well if you're sad for Nalini there's a happy ending. The IETF, unlike a legislature, has no power whatsoever to dictate how the Internet works.

ETSI (a much more conventional standards organisation) took all the exciting "Transparency" work done by Nalini's group and they're now running with it. They haven't finished their protocol yet, but in line with your vision it enables all the features they wanted, re-enables RC4 and CBC and so on. They've published one early draft, but obviously ETSI proceedings (again unlike the IETF) happen behind closed doors.

You are entirely welcome to ignore TLS 1.3 and "upgrade" to the ETSI proposal instead. Enjoy your "freedom" to do this, I guess?


After Heartbleed, a lot of things in the TLS ecosystem got better. CFRG is now chaired by Kenny Paterson, and he and others ran interference for an academically-grounded rebuild of TLS for 1.3. Google beat the living shit out of OpenSSL, and expedited the deployment of 1.3.

I agree: the 1.3 process is better than what came before it. But it's the exception that proves the rule: the 1.3 process was a reaction to the sclerotic handling of security standards at IETF prior to it.

My point is simple and, I think, pretty obviously correct: you can't look back over the last 10-15 years of standards group work and assume that either IETF approval or multi-party cooperation within IETF is a marker of quality. And that's as it should be: it's IETF's job to ensure interop, not to referee all protocol design. More people should work outside of the IETF system.


> There are benefits (censorship circumvention) to be reaped, but also great peril.

There is no way this encrypted SNI could enable censorship circumvention.


My understanding was that encrypted SNI was to avoid the sniffing of the host header that travels unencrypted with SNI, and that in combination with a large host (Cloudflare, Google, AWS), an encrypted SNI prevents censorship (unless you're dropping all traffic to the IP block or AS). Is this understanding inaccurate?


If you are a state-level actor wanting to censor, you will politely ask the large host (Cloudflare, Google, AWS) to cooperate and drop any traffic for the censored domain arriving at their IP ranges.

Only if the large host does not cooperate will the respective ISPs block its entire IP range (if they want to keep operating in the given jurisdiction).


All roads lead to layer 8.


This was their public reasoning, but it ignores all the research and ideas on censorship circumvention from Tor, Signal, and Telegram: why domain fronting didn't work, and why collateral freedom doesn't work when there is a corporation to pressure, etc. At this point I believe it was entirely PR, if not deception, to centralize DNS.


I hope they add DNS resolution to the network activity tab.


DNS resolution is visible in one of the about: pages, IIRC about:networking. But yeah, in the dev tools would be much more convenient.


They can't, because that's handled at the OS level, not the application level.

If a browser starts (purposefully) subverting the hosts file or not adhering to resolv addresses, then we've got a bigger problem.

Think of a fat client resolving an address differently than a browser; that opens all sorts of Pandora's boxes.


DNS over HTTPS is still handled at the OS level?

Related, it should be possible to have "correct" DNS in userland that behaves as you describe, sans falling back to the system resolver. In my understanding the whole point of DNS over HTTPS is to avoid the DHCP-assigned DNS address (and of course to encrypt).

Finally, I’m pretty sure Firefox at least does its own dns caching. I’ve had to force reload to pick up dns changes already visible to the system resolver.


I think the typical way to do DNS over HTTPS is to run a DoH client/DNS proxy and then point your nameservers at localhost.

I'm not really sure what benefit there is to doing this compared to DNS over TLS with a resolver like Unbound but I suppose that's a different discussion.

What Firefox seems to be doing, unless I'm mistaken, is running their own resolver that implements DoH/connects to Cloudflare and bypasses OS settings.[1][2]

I haven't dug into the details yet to see how it interacts with the hosts file.

It does sound like it falls back to the OS if it fails to resolve with DoH but this solution at first glance appears unideal.

Wouldn't it be best if Microsoft/Apple/*nix distros/ISPs/third party nameservers used resolvers and nameservers that support DNS over TLS?

Then end users/administrators could choose who they trust and everything would still be encrypted.

[1] https://wiki.mozilla.org/Trusted_Recursive_Resolver

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1434852


DoH isn't done by the OS. But that's my point. In order to use DoH, you have to (purposefully) use an extension/browser addon/browser setting.

As a system admin myself; if user applications started overriding the DHCP DNS that I give them, not only could intranet sites be broken, but I'd start having fights with users about it.

Edit: Rather, not overriding but querying the DoH instead of the provisioned DHCP DNS. I'm no expert in DoH, or how any of that works under the hood.

Further, when/if browsers turn on DoH by default, then I can't really fight users, because they did nothing wrong but use a browser. Suddenly, I can't support a browser or two because of it.

DNS caching by the application is fine, because it made the request to the OS and got the response. That being said, the TTL might be violated, since the record has its own TTL and the application cache uses whatever TTL it chooses.
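A cache that wants to respect both limits can simply expire entries at the earlier of the record's TTL and its own cap. A toy sketch (class and parameter names are made up; time is passed in explicitly so the logic is easy to follow):

```python
import time

class DnsCache:
    """Toy per-application DNS cache that honours the record TTL,
    capped by an application-level maximum."""
    def __init__(self, app_max_ttl=60):
        self.app_max_ttl = app_max_ttl
        self._entries = {}  # name -> (address, expiry timestamp)

    def put(self, name, address, record_ttl, now=None):
        now = time.time() if now is None else now
        # Never cache longer than the record allows.
        ttl = min(record_ttl, self.app_max_ttl)
        self._entries[name] = (address, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(name)
        if entry is None or now >= entry[1]:
            return None  # missing or expired: re-query the resolver
        return entry[0]
```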


See dnscrypt-proxy. That's what I use on my network.

It's the default DNS on the network. Computers don't need to know that it gets encrypted past that point.


If you're not using a CDN, you can just enable it for your own site. They explained in the article why they didn't think enough sites would do this to make it worthwhile.




