In a big company blog post like this, I wish they'd call them onion services instead of hidden services, especially since they are even using a v3 onion service.
Also, curious if they wrote any code to support this since OnionBalance doesn't support v3 yet. I know they use Go a lot and I wrote a Tor control client myself recently [0].
0 - https://github.com/cretz/bine
I knew the Tor Project has been shifting away from "hidden services" for a while, but I missed the email where teor clarified things for blog posts and such in late April [0]. Also, I wanted to avoid using "onion resolver" as it would be a worse misnomer than "hidden resolver."
Re. OnionBalance: we're working on a few ideas for this, but nothing conclusive yet.
> I knew Tor Project has been shifting away from "hidden services" for a while
I read the article after your correction, and on reading the `What are Tor onion services?` section, I thought “How nice, they’re using the new terminology to go with the new v3 service address”.
This is a v3 onion service, so it should be easier to find prefixes. Plus CF (and other companies like FB) have done pretty long prefixes via brute force (they have a lot of computers).
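For the curious, the brute force is conceptually just "generate keys until the base32 name starts with your prefix." Here's a naive Go sketch of the v3 address math per rend-spec-v3; this is not necessarily how CF did it, and real tools like mkp224o are far smarter about reusing key material instead of generating a fresh key per attempt:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/base32"
	"fmt"
	"log"
	"strings"

	"golang.org/x/crypto/sha3"
)

// onionAddress derives the v3 .onion name from an ed25519 public key:
// base32(PUBKEY || CHECKSUM || VERSION), where
// CHECKSUM = SHA3-256(".onion checksum" || PUBKEY || VERSION)[:2] and VERSION = 0x03.
func onionAddress(pub ed25519.PublicKey) string {
	const version = 0x03
	h := sha3.New256()
	h.Write([]byte(".onion checksum"))
	h.Write(pub)
	h.Write([]byte{version})
	checksum := h.Sum(nil)[:2]

	blob := append(append(append([]byte{}, pub...), checksum...), version)
	return strings.ToLower(base32.StdEncoding.EncodeToString(blob)) + ".onion"
}

func main() {
	const prefix = "dns" // every extra base32 character multiplies the expected work by 32
	for tries := 1; ; tries++ {
		pub, priv, err := ed25519.GenerateKey(rand.Reader)
		if err != nil {
			log.Fatal(err)
		}
		if addr := onionAddress(pub); strings.HasPrefix(addr, prefix) {
			fmt.Printf("found %s after %d tries\n", addr, tries)
			_ = priv // this is the key you'd keep (tor stores it on disk in expanded form)
			return
		}
	}
}
```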
For OnionBalance, on the mailing list they mentioned that while it is more difficult on v3, they are implementing it via HSFETCH/HSPOST [0]. I'm not sure of any other load-balancing approaches outside of this. In the meantime I figure y'all have this all going through one hidden service endpoint, or maybe share some private keys, or wrote some custom code, or something.
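For anyone who hasn't poked at the control port: HSFETCH and HSPOST are just text commands on it (HSFETCH asks tor to fetch a service's descriptor from the HSDirs, HSPOST uploads one; roughly, that's what lets something like OnionBalance pull descriptors from backend instances and publish a combined one). A rough Go sketch using only the standard library, assuming ControlPort 9051 with no authentication and a tor recent enough to accept v3 addresses in HSFETCH:

```go
package main

import (
	"fmt"
	"log"
	"net/textproto"
)

func main() {
	// Tor's control protocol is line-based text, so net/textproto is enough.
	conn, err := textproto.Dial("tcp", "127.0.0.1:9051")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	send := func(cmd string) {
		if err := conn.PrintfLine("%s", cmd); err != nil {
			log.Fatal(err)
		}
		if _, msg, err := conn.ReadResponse(250); err != nil { // 250 == OK
			log.Fatalf("%q failed: %v", cmd, err)
		} else {
			fmt.Printf("%s -> %s\n", cmd, msg)
		}
	}

	send(`AUTHENTICATE ""`) // NULL auth; cookie/password setups need more here
	send("SETEVENTS HS_DESC HS_DESC_CONTENT")

	// Placeholder: the 56-character v3 address, without the ".onion" suffix.
	send("HSFETCH replace_with_the_56_char_v3_address")

	// Results come back asynchronously as 650 HS_DESC / HS_DESC_CONTENT events.
	for {
		line, err := conn.ReadLine()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(line)
	}
}
```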
Great work! Especially with Daniel Stenberg adding momentum to the DoH movement recently. Even though I wish you huge success, let's hope that many other 'alternative' DNS providers jump on the bandwagon in order to make any (future) traffic correlation attacks less simple. Having just a single watering hole makes it too easy for predators to catch their prey - and I don't assume you would cooperate with that (as you already stated in other parts of this thread).
From "hidden" to "onion" is just a terminology change. For the v2/v3 change, not sure there is an in-depth blog post on the differences except for comparing the specs of v2[0] vs v3[1]. There's a wiki page with a high-level overview [2].
> That makes privacy worse than the default setup with Tor since there's no stream isolation. With the standard Tor Browser you get a different circuit for each first-party domain, that's not something you'd have with this.
So in the Tor browser, DNS resolution for non-onion addresses creates a new circuit each time? (not sarcasm, I really don't know) Because I consider this new onion service (aka a front for 1.1.1.1) to be a single first-party domain. How is this any worse than contacting any other onion service repeatedly? Or are you arguing they should provide a rotating list of onion addresses for this service?
> So in the Tor browser, DNS resolution for non-onion addresses creates a new circuit each time?
You can try it for yourself: open a tab in the Tor Browser with foo1.com and look at the circuit in Torbutton. Then open another tab with foo2.com, look at the circuit, and compare it with the earlier one.
The DNS resolution is done by the exit node. Once you go to foo1.com or foo2.com after the resolution, you are still going to separate IPs and getting the same benefits. There's really nothing to compare this CloudFlare service to except any other HTTP API exposed as an onion service (or BIND, or whatever).
There is safety in numbers in accessing the CloudFlare service. It helps reduce traffic analysis attacks that could otherwise occur at exit nodes and the exit node's possibly non-authoritative resolver, because correlating the circuit used for CloudFlare's resolution with the exit node's site access becomes a bit harder. Granted, once you enter the HTTP world and ask that second-level domain to start the TLS handshake, they can see where you're going anyway, so it might not matter. But it can prevent you from being poisoned by the exit node's DNS resolver. It just moves one layer of trust away from the exit node.
Let's say you set this up with the Tor Browser: all your DNS resolutions are done using this one onion, so they're all linkable--whereas with the default they're unlinkable, since different circuits are used for different first-party domains. That's the point.
I don't think that's much of a problem, considering it's simply the DNS resolver. Linkability might be a problem if the names you resolve reveal the sites you want to visit.
On the other hand, I'm sure this is fixable in certain ways, just needs patches in the browser/proxy.
Right, if you don't trust CloudFlare or you think there is a flaw in Tor. Let's say you didn't set this up, the exit node can lie about the DNS resolution. Granted, to your point, unless I was worried about exit nodes, I probably would want my resolution on the same circuit/session as my access.
I don't think there's any reason you can't have both: a new circuit for each site, with the DNS resolution for that site done via Cloudflare on that circuit or a separate one.
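Something like that is already doable outside the browser. A rough sketch, assuming Tor's SOCKS port on 9050 with the default IsolateSOCKSAuth behavior, and assuming the onion endpoint speaks the same JSON API as 1.1.1.1's /dns-query (which I haven't verified); the onion name below is a placeholder, substitute the real one from the announcement:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"

	"golang.org/x/net/proxy"
)

// clientFor returns an HTTP client that tunnels through Tor on a circuit
// isolated per label: different SOCKS username/password pairs get different
// circuits because IsolateSOCKSAuth is on by default.
func clientFor(isolationLabel string) *http.Client {
	dialer, err := proxy.SOCKS5("tcp", "127.0.0.1:9050",
		&proxy.Auth{User: isolationLabel, Password: "x"}, proxy.Direct)
	if err != nil {
		log.Fatal(err)
	}
	return &http.Client{Transport: &http.Transport{
		Dial: dialer.Dial, // the .onion name is resolved inside Tor, never locally
	}}
}

func main() {
	// Placeholder onion name; use the real resolver address from the blog post.
	const resolver = "https://exampleonionresolveraddressxxxxxxxxxxxxxxxxxxxxxxxx.onion/dns-query"

	c := clientFor("example.com") // one isolation label per first-party site

	req, err := http.NewRequest("GET", resolver+"?name=example.com&type=A", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Accept", "application/dns-json")

	resp, err := c.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```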
That’s the problem with anthropomorphising companies.
As far as I can tell Cloudflare single-handedly destroyed the usability of Tor Browser. It was just getting pretty fast when Cloudflare put literally half the Internet behind a spywall.
So should I be angry at them? Should I dismiss this valuable service to then remain consistent with my anger? Is Microsoft now “good” or “bad”?
Every action needs to be evaluated on its own. Our evolutionary social adaptation just doesn’t work in this case.
In the end all Cloudflare did is expose how centralized the Internet has become. The immediate emotion is anger because that is how you react when you’re suddenly awakened out of blissful ignorance and forced to face reality.
Just as your Tor browsing experience was becoming faster, it was becoming a more and more viable tool for DoS attackers. Someone has to protect the sites enough that they can stay up for traffic, Tor or otherwise.
There are many ways to do that which are more efficient, less intrusive, and provide a better UX than Cloudflare's gatekeeper approach. However, no one forced website owners to use Cloudflare, so it doesn't matter.
> Someone has to protect the sites enough that they can stay up for traffic, Tor or otherwise.
That’s true for HTTP. You need a big corporate sponsor to allow you to host your website. Too bad when they don’t like what you have to say, right?[1]
Regardless of whether you agree or disagree with their "arguments", I'm pretty sure dismissive name calling has no place in an adult discussion. That was my point.
Thank you, it looks like you have your moral compass pointed to the right direction :-)
While I applaud the things above, I'm concerned about Cloudflare's (growing) size. If it handles so many websites' traffic, it's an interesting target for the NSA, hackers, and other malicious actors. I assume that most of your users use the free SSL certs, meaning Cloudflare possesses their private keys.
The more Cloudflare grows, the faster and the more encrypted "the internet" becomes. But the more Cloudflare grows, the bigger the single point to attack gets (I'm even assuming Cloudflare is and always will be a good actor).
What's your stance on this? Could you comment on this?
I/we worry about hackers and malicious actors all the time. One of the reasons we're greatly expanding our infosec department and hired Joe Sullivan [1] is to help keep us safe. We're doing a lot of work with memory-safe languages (hello, Rust!) to help stop Cloudbleed from repeating itself. [2] We're doing stuff around physical location of private keys [3]. And so on and so on.
We're open about government requests [4] and we've been pretty robust with stuff like NSLs; we went to court to be able to release NSLs [5] and were able to release two. [6]
Call me a tinfoil hatter but I've always assumed that the likes of Cloudflare, knowingly or not, are a key part of the Internet surveillance state.
It would be relatively easy for the likes of the NSA to infiltrate DDoS protection companies, then DDoS dark target sites until they choose cheap DDoS mitigation and bring their users' traffic into the clear.
>" One of the reasons we're greatly expanding our infosec department and hired Joe Sullivan [1] is to help keep us safe."
I am assuming this is the same Joe Sullivan, the former CSO at Uber who was fired for failing to disclose the 2016 data breach to regulatory officials or to notify the 600K drivers and 57 million customers that were affected? [1][2][3]
And keeping it secret for more than a year? I am not sure that association instills confidence.
Then he should have blown the whistle, no? I mean, his title was CSO. Wasn't there a moral imperative there to notify millions of users who were affected? I don't think it's a stretch to say that by participating in a cover-up you are complicit, even if the original decision wasn't yours.
I don't believe we ever wrote it up; it was just an internal algorithm change made in 2016. I can see the internal pull request but don't think we blogged about it.
Seems like an oversight not to promote this change. The way Cloudflare completely crippled the user experience of using Tor, plus the subsequent condescending and poorly handled PR responses I saw on HN and elsewhere, was the reason why I completely stopped using Cloudflare and stopped recommending it to people.
> And who was it that changed their algorithm for handling TorBrowser traffic so that there's no need to show those CAPTCHAs? Oh. Cloudflare.
If you're checking for a custom user agent, you're doing it wrong. Not all people using Tor to try and browse the web limit their browser choice like that.
I still have the terrible experience of having to train Google's ANNs every 5 minutes when using regular Firefox and Chromium over a Tor SOCKS proxy and I blame CloudFlare for single-handedly destroying web browsing over Tor.
I just wanted to thank you for dnscrypt-proxy. I had seen it mentioned in another post so I had it saved in an open tab for later. Seeing it mentioned here again prompted me to actually install it. Very much worth the ~5 minutes it took to get it up and running!
I... get the idea, and support it. I don't understand the implementation.
What is the point of creating an onion address and then publicizing it? Why not just use Tor to get to 1.1.1.1 in the first place? Onion URLs are for services that don't want to reveal themselves.
Basically, what does this enable that generic Tor does not?
Just because they're called hidden services doesn't mean they have to be hidden. Tor hidden services offer a lot that the clear web and the normal domain system do not. For one, you own your domain rather than lease it at the whim of some institution that can be easily pressured to kick you off (like cloudflare and registrars did against stormfront, for example). And of course going completely within Tor is a great speed-bump for preventing massive surveillance.
I run all of my clear web sites as Tor hidden services too, and publish the addresses for both publicly on both.
But I wouldn't trust Cloudflare not to censor anything controversial. They've already proven themselves an enemy of free speech and an enemy of Tor by their behavior. Words mean little.
> But I wouldn't trust Cloudflare not to censor anything controversial. They've already proven themselves an enemy of free speech and an enemy of Tor by their behavior. Words mean little.
Preferably you wouldn't have to trust any single provider at all, regardless of whether you deem them trustworthy.
Why can't we have Alt-Svc for DNS that points to a blockchain?
It's perfectly suited for a name directory, however. In fact it would be faster, because every resolver would have a global view of the entire name space. Anyway, DNS propagation is not exactly instant either, so it's not like you can make a worse system very easily.
Perhaps, and that already exists with Namecoin, but if I had to use it for _all_ of my DNS records like the GP suggested I'd quickly shoot my computer.
If you're using DNSSEC then they can't mess with your DNS either. However, it is very readable by the exit node - just like the Host header in HTTP or server_name in TLS.
Sure they can. Virtually nothing is DNSSEC-signed. And if you're using someone else's DNSSEC server, and not running your own server on your own machine, the link between you and your DNS server is completely unprotected.
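To make the "link is unprotected" point concrete: even when you ask for DNSSEC, what a typical stub setup actually gets back is the resolver's word for it. A small illustration using the third-party github.com/miekg/dns package (my choice here, not anything mentioned in the thread):

```go
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn("cloudflare.com"), dns.TypeA)
	m.SetEdns0(4096, true) // DO=1: ask the resolver for DNSSEC processing

	c := new(dns.Client)
	// Plain UDP to the resolver: readable and spoofable by anyone on-path.
	r, _, err := c.Exchange(m, "1.1.1.1:53")
	if err != nil {
		log.Fatal(err)
	}

	// AD=1 only means the *resolver* claims it validated the answer; the flag
	// itself travels over that unprotected last mile, so an on-path attacker
	// can strip or forge it unless you validate locally or wrap the transport
	// (DoT/DoH/Tor).
	fmt.Println("AD (authenticated data) flag:", r.AuthenticatedData)
	for _, rr := range r.Answer {
		fmt.Println(rr)
	}
}
```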
Your circuit presumably terminates at CloudFlare infrastructure and not at a random exit node where the packets need to be routed again over the open internet to CloudFlare.
In order to use this, you no longer need to go out over the regular internet via an exit node. Many people don't like running exit nodes, but would run a relay node.
Strange to see a company which deems a bunch of unpopular idiot white supremacist trolls to be too extreme of speech supporting a network which has allowed child pornography to flourish online at a scale never before seen. Bold move.
First sentence of the second paragraph: "As it was mentioned in the original blog post, our policy is to never, ever write client IP addresses to disk and wipe all logs within 24 hours. "
For the love of god, if you care about privacy... don't use this. You are crippling the stream isolation of the Tor Browser... how can you not get this? It's so obviously a stupid idea.
Also, how does that website have SSL? Are there Certificate Authorities that can supply certificates for .onion domains now? CloudFlare did the same trick for https://1.1.1.1 too so perhaps they are just able to do things most people can't.
To be even clearer: v2 service names are just part of the RSA key hash, whereas v3 service names are a full ed25519 public key plus a couple of other bytes.
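Concretely, those "couple of other bytes" are a 2-byte checksum and a 1-byte version: the 56 base32 characters decode to exactly pubkey(32) || checksum(2) || version(1). A tiny sketch that unpacks one (placeholder input; a real decoder would also recompute and verify the SHA3-256 checksum per rend-spec-v3):

```go
package main

import (
	"encoding/base32"
	"fmt"
	"log"
	"strings"
)

func main() {
	// Placeholder: any 56 base32 characters; substitute a real v3 address
	// (the part before ".onion").
	addr := strings.Repeat("a", 56)

	raw, err := base32.StdEncoding.DecodeString(strings.ToUpper(addr))
	if err != nil || len(raw) != 35 {
		log.Fatalf("not a v3 address: err=%v, got %d bytes", err, len(raw))
	}

	pubkey, checksum, version := raw[:32], raw[32:34], raw[34]
	fmt.Printf("ed25519 pubkey: %x\nchecksum: %x\nversion: %d\n", pubkey, checksum, version)
}
```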
> Are there Certificate Authorities that can supply certificates for .onion domains now
Yes, DigiCert does (maybe others too, haven't checked). Facebook was famously the first to have such a certificate. Currently needs to be an EV certificate though.
Getting a certificate for an IP also isn't a "trick"; it's generally available, although I believe the IP needs to be in your own address space, so you can't just get one for any random IP you got from your provider.
"currently" sounds like that might change in the future. I thought the CA/B forum was pretty much opposed to non-EV certs for .onions. Is that not the case? What is the potential use case for non-EV certs on a .onion?
The EFF's representative to CA/B (Seth Schoen) suggested that DV certs for .onion make sense in the v3 onion service world.
There wasn't exactly rapturous support, but there were constructive comments from some of the usual suspects on the browser side of the equation. So far as I know/remember, this discussion died out without anybody producing an actual ballot that could be voted on.
Most notably there wasn't a fierce backlash of CA reps yelling that this was an awful idea and they wanted no part in it. So either they're OK with it, or they've decided it won't go anywhere and they don't care. Without a ballot changing the actual Baseline Requirements you'd never know for sure.
Personally I'm not convinced DV .onion certificates are a great idea, but I'm not opposed to them either, and it seems some people who own .onion services that aren't legal, or need to be anonymous for whatever reason, would like to have such certificates.
EDIT: and she said: "We had this very mysterious 1.1.1.1 white on black theme when we were just sort of trying to build hype guerilla-style and then once the announcement was made we flipped it into the colorful "here it is, it's great!" Sort of thing"
Thank you. I am specifically wondering if there's a technical reason that the URL appears twice: once as a hyperlink, and once as that gif. Or if it's just for fun.
Thanks for these services and your work generally at Cloudflare.