Encrypted traffic interception on Hetzner and Linode targeting Jabber service (valdikss.org.ru)
731 points by f311a on Oct 20, 2023 | 306 comments



This is almost certainly related to some sort of Russian cybercrime investigation. If you ever read Krebs or peruse some of the seedier Russian forums, xmpp.ru will sound familiar to you, because it is. Not imputing anything to the operator, that's just the nature of operating an anonymous service.

https://ddanchev.blogspot.com/2021/03/exposing-currently-act...

https://ddanchev.blogspot.com/2019/07/profiling-currently-ac...

https://blog.talosintelligence.com/picking-apart-remcos/

https://flashpoint.io/wp-content/uploads/Plea-Agreement-USA-...

Really interesting writeup though. I guess it's a practical example of why everyone should get a CT monitoring service!


Drug gangs widely use XMPP as a secure communication channel in combination with Tor. Hydra (the darknet market) was taken down by German police in 2022 after trying to extend its "service" to the EU; something similar might or might not have happened here.


[flagged]


I'd like to add that MTProto is impossible to MITM because there are no trusted third parties. The handshake that happens before the user logs into their account includes a step with RSA. The public keys for that step are hardcoded into clients.

To anyone who wants to complain about Telegram not encrypting chats by default and using "homebrew crypto" I'd like to say that it's XMPP we're discussing here. Telegram offers marginally better security than XMPP over TLS. The "homebrew crypto" still hasn't been broken in 10 years, and not for lack of trying.


> Telegram offers marginally better security than XMPP over TLS.

I think you should compare apples to apples, that is, end-to-end encrypted XMPP using OTR/OMEMO/PGP. However, I agree that many XMPP clients were a UX disaster when using E2E.


End-to-end encrypted XMPP should be compared to Telegram's secret chats then. Both are opt-in and both aren't very popular among users of these services.


A lot of clients have OMEMO on by default now. You can't enforce it for all clients & across the entire network tho, as XMPP is a 'simple' protocol without a lot of bells & whistles, meant to be eXtended with things like OMEMO/OTR/PGP built atop it. With the newer compliance suites tho, to be considered 'modern', OMEMO & other e2ee options are expected to be supported. Gajim makes the UI easy, & Conversations isn't bad. Profanity is a bit more obtuse to use since you need to trust your own keys manually too & they're not autocompleted, but all the tools are there even for a TUI client.


XMPP over TLS is secure though. Of course that covers only the transport, as the name implies. The gap between that and e2e is the same as with Telegram. Although if one party of any e2e exchange is compromised, you would have similar problems.


> The public keys for that step are hardcoded into clients.

Yeah, and the private keys are shared with Roskompozor.



And if you call someone, they can get your phone number!

Your IP address is public. Stop treating it like a secret.


Why don't you post yours? ;) Open up a TCP server on port 13729 for verification.

And no, your IP should not be made public in any modern chat app, that's ridiculous. The fact that Discord protects your IP from leaks is at least half of the reason it's so successful.


Good luck, I'm behind like 4 NATs.


And behind at least 7 proxies, you say?


> The popular messaging app Telegram can leak your IP address if you simply add a hacker to your contacts and accept a phone call from them.

Peer-to-peer communication reveals people's IP addresses to each other, what a sensational revelation! By the way, water is wet.


Since Telegram claims to be a secure messaging app while leaking private details, and since you used to work at Telegram, maybe stfu.

All voice can be E2E thru servers, like Signal does.


I actually think it depends on what your type of threat is. If secure means talking to trusted people, and no one else should know what you say or to whom, Telegram is still great and who cares if my confederate knows my IP. He doesn’t care.

If you’re using it to communicate to a hostile party who would love to unmask you, then very not so much. Personally I’m not in the second category but I’m sure some are.


Signal is peer to peer by default as well unless you go into advanced settings and enable "always relay calls"


You can disable peer-to-peer connections in settings, as well as disable calls from everyone or specific contacts.


Telegram is as secure as it can possibly be without the security requiring compromises and leaking into the UX. I thought that was clear to anyone who has ever used it. If you require 100% bulletproof security and are willing to compromise on convenience, it's not for you.

Since this thread is about XMPP, Telegram's security is effectively the same as XMPP over TLS, maybe a bit better because there isn't a trusted third party (the CA).

> All voice can be E2E thru servers, like Signal does.

I know that. In fact, I built the first VoIP implementation for Telegram, libtgvoip, myself from scratch. Relay servers add delay, can run out of capacity, and cost money in bandwidth. Of course relays are still used if a P2P connection cannot be established (libtgvoip always started the call through the relays and only switched to P2P if pings went through and RTT was lower; that took at least several seconds before enough statistics were gathered).


Has Telegram been audited? Does it encrypt by default?


Did an audit help Let's Encrypt prevent issuing fake certificates? Such audits are useless. SSL infrastructure security is in a much worse state than Telegram's, yet nobody seems to notice it. If any large ISP can issue fake certificates by doing MitM, then this system is compromised completely. What's the point of having SSL if basically anyone can MitM it?

To be honest, Telegram using homemade crypto instead of relying on standard approaches like CA certs turned out to be a good solution in the end. Apps should stop trusting CAs and should hardcode public certificates instead.


Apples and oranges? Let's Encrypt is a bandaid for a (possibly inevitable) architectural weakness, one serving browsers which must communicate with a world of unfamiliar servers. Telegram is a messaging app primarily meant for communicating among people and groups one already knows.

Regardless, using open and verified cryptographic primitives is a best practice for a reason. As are audits. The likelihood any company can start from scratch and produce a flawless solution is a number approaching zero.

I'd feel much more comfortable with something using Signal protocol. Ideally built from independently audited source.


> They should have used Telegram

It requires a phone number.


Telegram? Not Signal or Matrix?


Signal has massive usability issues: requires a phone number, requires a primary Android or iOS device, can't register multiple Android or iOS devices, requires Google Play services for notifications unless you get the APK, which drains your battery (a fork, Molly, now supports UnifiedPush tho).


> Signal has massive usability issues

I'd say these are security issues


> requires a phone number

so does Telegram

> requires a primary Android or iOS device

so does Telegram


Telegram is also bad


Yes, I tend to believe that as well. My guess is it's either related to the Genesis Market or Qakbot takedowns.


[flagged]


This criticism/argument was, in fact, made against domain validation (DV) when it was first introduced by earlier CAs. It's not an unreasonable criticism, but the economics of PKI mean that we would have much, much less encryption and authentication on the web today without DV.

DV represents a security trade-off, and Let's Encrypt took this and ran with it, with the net effect of much much much much less web traffic interception overall, albeit with known non-Let's-Encrypt-specific vulnerabilities to attackers who can manipulate infrastructure sufficiently.

As other people in this thread have noted, there are also other mechanisms that can help mitigate those vulnerabilities. Maybe we can come up with more over time!


what? nothing to do with Let's Encrypt; if someone gets to large-scale MITM your traffic, then they can get any CA to issue a DV cert.


So for example, if traffic to Google goes through a Huawei router somewhere, then Huawei can issue a Let's Encrypt certificate for Google, right? And any large national ISP can use MitM to issue fake certificates for any site hosted within that country?

To me it looks like SSL cert infra is completely compromised and unreliable.


> then Huawei can issue a Let's Encrypt certificate for Google, right?

Google has CAA records set (https://www.entrust.com/resources/certificate-solutions/tool...) and I guess CAs will have denylists of "popular" domains, so no.
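
You can check this with dig (output illustrative, from when I last looked):

    $ dig +short CAA google.com
    0 issue "pki.goog"

Since CAs are required to check CAA before issuance, Let's Encrypt would refuse to issue for google.com because it isn't listed there.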

> And any large national ISP can use MitM to issue fake certificates for any site hosted within that country?

Yes. You can fix this by CAA and ACME-CAA.

I can't think of a way to validate DV certificates in a better way that will resist this kind of attack.


Multiperspective validation, if the attacker is far enough away from the target site. :-)

I think some of the researchers who wrote about BGP spoofing attacks against Let's Encrypt may have suggested something about logging BGP changes and delaying DV issuance if a network's BGP announcements are too recent, or something? I don't think Let's Encrypt currently checks that, but it could be an interesting data source in the future.


Let's Encrypt uses "multiperspective validation" to prevent a single backbone router or backbone network from being able to do this attack in many cases.

This doesn't help much if the attacker is sufficiently close on the network to the target, or if the attacker can perform a successful wide-scale BGP spoofing attack.

I'm not sure if that will reassure you, since it's not a complete mitigation in all cases, but the multiperspective validation was explicitly created in response to exactly this kind of concern about attacks on, or by, ISPs!


> This shows that Let's Encrypt security is a joke because now any large national ISP can use same MiTM to issue a certificate for any site hosted within a country. The SSL infrastructure is completely compromised.

the information available right now is too vague to come to a conclusion this bold.

Instead, I find something like the following more plausible:

jabber.ru and xmpp.ru seem to use "exotic" DNS servers (at least as I checked right now).

https://uk.godaddy.com/whois/results.aspx?itc=dlp_domain_who...

All it then takes is an exploit in that DNS server, or a badly set-up ACME DNS-01 challenge there, for Let's Encrypt to grant an SSL certificate.

https://letsencrypt.org/docs/challenge-types/#dns-01-challen...

The moment you're able to write a (TXT) record for some domain name, you have proven to be eligible for getting an SSL certificate for that domain name.
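
Roughly, the DNS-01 flow looks like this (hypothetical domain, token shortened):

    ; the CA gives you a challenge; you publish a value derived from it and your ACME account key...
    _acme-challenge.example.com.  300  IN  TXT  "gfj9Xq...Rg85nM"

    ; ...and the CA looks it up before issuing:
    $ dig +short TXT _acme-challenge.example.com
    "gfj9Xq...Rg85nM"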


Both you and the grandparent are correct in that both propose viable attacks; it is a known fact (and not news to any expert in the space) that "domain validation" certificates are vulnerable to "global" MitM in which an attacker can intercept all traffic to a domain (and therefore intercept the validation probes). A situation in which a service's hosting company is sitting on their "front door" (so to speak) and MitMing all traffic that goes their way is exactly such a situation (hence my recommended mitigation).


The hosting company is not the only one who can do MitM; any ISP through which the traffic passes can do that as well; and if there are backdoors in foreign network equipment, then the manufacturer of that equipment can do MitM too.


This is false, because Let’s Encrypt uses servers in multiple places to get a mix of routing paths to eliminate it as an attack vector.


ns2.jabber.ru is hosted at Akado (an ordinary Russian ISP) in Moscow; ns1.jabber.ru, on the other hand, looks like it is hosted at Linode. So maybe ns1 was compromised as well; as for ns2, I doubt that.


So because one is hosted in Moscow, you find it less likely to be hacked? What a humorous conclusion.


Yes, because it is highly unlikely that the Russian police or a Russian ISP will cooperate with US or European police.


I agree with most of the mitigation suggestions.

For high risk targets, consider layering an additional auth mechanism that doesn't rely on trusted CAs: Tor onion services, SSH, or Wireguard.

> All issued SSL/TLS certificates are subject to certificate transparency.

+1. crt.sh has RSS feeds for this.

> Limit validation methods and set exact account identifiers

Using CAA is a good idea in general, but would it help in this case? The attacker would just request the exact cert configuration that is permitted by CAA. Maybe this helps if you can strengthen one validation method?

> Monitor SSL/TLS certificate changes on all your services using external service

+1. High-risk targets should be aware of what certs are valid at any time, and be checking for those.

> Monitor MAC address of default gateway for changes

A more sophisticated attack could preserve the MAC address.

> "Channel binding" is a feature in XMPP which can detect a MiTM even if the interceptor presents a valid certificate.

TIL.


>Using CAA is a good idea in general, but would it help in this case? The attacker would just request the exact cert configuration that is permitted by CAA. Maybe this helps if you can strengthen one validation method?

Author of ACME-CAA (RFC 8657) here. ACME-CAA can mitigate this because you can put a unique identifier for your ACME account in the CAA record, so it is not possible for an attacker to do this unless they can get your ACME account private key (or coerce the ACME service). This assumes you have DNSSEC-secured nameservers, of course, otherwise DNS requests can potentially also be intercepted when queried by the CA.
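
For the curious, such a record looks roughly like this (account URI hypothetical):

    example.com.  IN  CAA  0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345678; validationmethods=dns-01"

With that in place, a CA implementing RFC 8657 will only issue if the request comes from that exact ACME account and uses that validation method.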

See RFC 8657 for a full list of security caveats. The RFC is designed to be more readable than most.

Blogpost by me with more background: https://www.devever.net/~hl/acme-caa-live


Small update: I've just written a blogpost with my thoughts on the incident. https://www.devever.net/~hl/xmpp-incident

I usually brood on blogposts for days before publishing them since I care a lot about getting things accurate, but this is a bit of the moment, so here goes. Always happy to get feedback by email or IRC: https://www.devever.net/~hl/contact


Is it just me, or would a recurring RIPE Atlas measurement be a great way to detect fuckery like this?

https://atlas.ripe.net/


not at all, since there were no Internet-visible routing changes.

atlas would be good for detecting the time Pakistan announced 0/0 or whatever.


I think atlas allows tls/ssl probes, so that could still be used to track unexpected changes in certificates?


Wuuut, I missed they finally enabled it. Everyone who has infra MitM in their threat-model should look into and enable this.

Also, thank you for your work as well as popping in here.

Am I overstating it by seeing it as one of the most important milestones for Internet security in general over the past few years?

Do you have a take on DLT-based systems and if this is something that is (or could be) seriously discussed? It seems to me that the issues we have with PKI and certificate transparency could actually be mitigated very well if blockchain was seriously considered.


So I assume DLT is a generic/more general term for blockchain technology.

There's this thing called Zooko's triangle (https://en.wikipedia.org/wiki/Zooko%27s_triangle), basically the premise that an identifier can only have two of the following properties: secure, decentralized, human-readable.

But a blockchain can actually be used to square this triangle and get all three. The original example of this is the Namecoin project, a fork of Bitcoin using a lightly modified Bitcoin codebase which can be used as a key-value store.

The idea is that a blockchain can be used to create a decentralized database mapping keys (domain names) to values (which can be things like IP addresses, but also PKI trust anchors, similar to DANE). Thus this can eliminate the need for a CA. You can also use it to map human-readable names to .onion addresses. There is also root-of-trust transparency built in since the contents of the database is public and any changes are also public. The right to change a key-value entry belongs to the public key of the person who registered that key. Nobody can override this. Namecoin domain names use the .bit (unofficial) TLD.

It's a really neat technology and also makes things like censorship via domain suspensions, your registrar getting hacked, etc. infeasible.

Full disclosure, I previously worked on the Namecoin project, I authored the DNS resolution daemon and the technical specification for the format of DNS data in the KV store. Unfortunately public interest in the project has waned, and deployment is the real issue - you have to have the client software to be able to resolve these domain names. There is at least some conceivable possibility Tor might ship it in their bundle in the future though, so people could type example.bit in Tor Browser to get to a hidden service. The project remains active though.


Interesting and I can see some of the benefits, particularly in preventing DNS controllers from going to other CAs and making new keys, but... this all seems like a weird run-around for the MITM part.

Couldn't you exchange public keys with LetsEncrypt (in the web UI) and encrypt the response so you can't be MITMed? Why is http even an issue?


Because the attacker can simply request new keys. There’s nothing stopping them from going “hey LE, I need a new key! This is my domain, here is the challenge, give me my cert!” And LE will oblige, because as far as it can tell, they are you.

Edit: To be clear, this is a problem with a solution. But you asked why simply throwing a LE cert into the mix wouldn’t prevent the issue.


Ownership is already handled by checking DNS (and this thread covers a way to make that even more secure, which LE supports), and as far as I can tell neither has anything to do with preventing MITM between LE and your servers.

And no, I don't mean throwing LE certs around to prevent MITM - this whole article is about the difficulties before having an LE cert, so that's necessarily excluded.

I'm wondering "why not client certificates". They're a well established way to stop MITM, seems like a simple choice for the ownership validation step.


Have you ever tried to explain key distribution/management to a normal person?

You should try it sometime.


Sure, but this is not particularly relevant when talking about a security product that you use to do key distribution and management (LetsEncrypt).

Explaining and guiding people through that is the whole point.


Could you please elaborate on the reasons to implement this outside of the DANE (RFC 7671) framework?


CAA is about preventing certificate mis-issuance, which is what happened in this attack. DNSSEC and CAA could have prevented this attack from being performed the way it was, by thwarting the MITM on ACME.

DANE is about changing the way certificates are authenticated. DANE makes it possible to authenticate certificates without getting them issued by a well-known CA. So CAA records are not particularly relevant to DANE. You can use DANE with certificates issued by a CA, which gives you two ways to authenticate the certificate; in this situation CAA secures one path and DANE the other.

I am one of the co-authors of the DANE SRV RFC https://www.rfc-editor.org/rfc/rfc7673 which is what XMPP would use. I don’t follow XMPP development so I don’t know if it has been deployed. I would like it if DANE were more widely used, but it’s not pertinent to this attack.


Yeah. I used to be 100% in on DANE and against CAs. I'm still 100% for DANE but I now think DANE using existing CAs is the better option in many cases because it means things get CT logged. We don't have a DNSSEC transparency situation right now. OTOH there is one undersung issue with CAs, which is that Let's Encrypt isn't as universally available as people think (see the US embargo list) and that does potentially make access to the internet harder for some.

There are some use cases where DANE is actually winning real victories and is actually more viable than the existing CA infrastructure - site-to-site SMTP, for example.


Yeah, Viktor Dukhovni has been impressively energetic and persistent at improving the security of email.


I feel like packet size was and continues to be a major obstacle for DNSSEC. Do you know why the DNSSEC/DANE world hasn't simply acknowledged this and switched to requiring ECC?

It is trivial to fit several compressed curve points (i.e. signatures) in a single packet, whereas you can't even fit two RSA signatures in a minimum-safe-to-assume DNS UDP reply packet after accounting for padding and ASN.1 overhead.
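(Concretely: an RSA-2048 signature is 256 bytes, so two RRSIGs alone already hit the classic 512-byte UDP limit before you count names and headers, while Ed25519 and ECDSA P-256 signatures are 64 bytes each.)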

I get the feeling that there is some faction that really hates UDP and they sort of hijacked the DNSSEC situation to use as a lever to force people to allow DNS-over-TCP.

That seems to be backfiring, however, and DNSSEC has wound up taking a bullet for the UDP-haters.

Many very-large networks simply can't afford for their DNS traffic to be exposed to TCP's intrusion-detection malleability and slowloris (resource exhaustion) attacks. These networks appear to be simply ignoring the "thou must TCP thine DNS" edict. DNSSEC is not a good enough carrot for them. I think ditching RSA would have been a more pragmatic choice than ditching UDP or skipping DNSSEC.


Dunno why there are so many foot-draggers failing to deploy better DNSSEC algorithms. I’m grumpy about SHA-1 in particular https://datatracker.ietf.org/doc/html/draft-fanf-dnsop-sha-l...

When I query vjhv.verisign.com I get a response containing four 2048 bit RSA-SHA-2 signatures in 1049 bytes which is well within the EDNS MTU for unfragmented UDP, so I’m not convinced the problem is as bad as you paint it. There have been problems with EDNS trying to use fragmented UDP, but that has been reduced a lot by newer software being more cautious about message size limits for DNS over UDP.

DNS needs TCP even in the absence of DNSSEC, because there are queries you cannot resolve without it. Some operators might convince themselves they can get away without it, but they will probably suffer subtle breakage.


> four 2048 bit RSA-SHA-2 signatures in 1049 bytes which is well within the EDNS MTU for unfragmented UDP

I was referring to the non-EDNS 512-byte limit.

Yes, you get ~2.5 times more with EDNS. Still, four records is not a lot.

> DNS needs TCP even in the absence of DNSSEC, because there are queries you cannot resolve without it.

Theoretically? Perhaps. Some would argue that connectionless DNS is valuable enough that people should not create those resource records. Before DNSSEC that was a working consensus. And with ECC it could be once again.


That's the opposite of the direction Internet cryptography is going, given hybrid PQC and classical systems.


The bloaty key/signature size is only a problem with the PQ encryption systems.

For signing only there are much more efficient PQ cryptosystems, with signatures around the same size as ECC. If DNSSEC ever adopts PQC it will be one of those systems.

Here are two of the earliest, and easiest to understand. There are much better ones now.

https://en.wikipedia.org/wiki/Lamport_signature#Short_keys_a...

https://en.wikipedia.org/wiki/Merkle_signature_scheme


Mostly, it's because very few serious engineering organizations deploy DNSSEC at all, so the best practices and tooling support aren't there.


Unfortunately the DANE SRV RFC is kind-of mismatched with how SRV and TLS work in practice. It requires the server to serve a certificate matching its own hostname (the hostname of the SRV target) rather than a certificate matching the expected host (the hostname that the SRV record was on). This is fine and secure if you use only DANE, but if you want to use DANE with CA-issued certs it makes it somewhere between hard and impossible.


Note the owner of an SRV record is a service name, not a host name.

There are a few reasons for this oddity: partly so it matches with DANE for MX records, partly to support large scale virtual hosting without reissuing certificates.

You should be able to get a cert with subject names covering the server host name(s) and the service name(s).
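
A rough sketch of how the records line up (names hypothetical):

    ; the SRV owner is the service name...
    _xmpp-client._tcp.example.com.      IN SRV  0 5 5222 xmpp1.hosting.example.
    ; ...and per RFC 7673 the TLSA record hangs off the SRV target host and port
    _5222._tcp.xmpp1.hosting.example.   IN TLSA 3 1 1 <sha256 of the server's public key>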


Why not? You could use "certificate usage" value 1 and (if the implementation does not neglect it) immediately notice that validation by CA disagrees with validation by DNS. That should be good enough, no?


DANE assumes we can successfully deploy this to the entire Internet. It is unclear that's ever possible, and it's certainly not possible today. Lots of things would be great if you can deploy them, for example you wouldn't build QUIC on top of UDP since you can "just" deploy a new transport protocol - except nope, for the foreseeable future that's undeployable.

A public CA generally has a more sophisticated relationship with their network transit provider or (hopefully) providers and can get DNSSEC actually working as intended for them.

So this means mything.example's DNS services and some public CA both need working DNSSEC, but the visitors to mything.example, such as your mom's sister or some guy who just got into mything but isn't entirely clear on whether Apple makes Windows - they do not need DNSSEC; for them everything works exactly as before, yet the certificate acquisition step is protected from third parties.

Would that help? It depends.


Yep! This is one way of preventing this type of attack.


> All issued SSL/TLS certificates are subject to certificate transparency.

Until the nation state tells a CA within their borders to issue a cert without publishing it to the CT logs, along with a gag order. If they don't comply they go to jail. You think Billy Bob Sysadmin wants to go to jail to protect a Russian jabber server?


The malicious certificate either won't have a SCT (which in itself would be highly suspicious and cause some clients to reject the certificate directly), or it will have one but it won't be recorded in the log.

In the latter case, the malicious certificate + any CT log issued after that certificate should have been included are evidence of the attack that is easily verifiable and would likely cause browsers to drop the (edit) CT log unless provided with a very plausible excuse that isn't "we complied with a court/gag order".

Also, this would require the police etc. to compel 2-3 different entities: The CA and two log operators (I think one of them could be the CA itself).

Using different logs than normal would stand out, so it'd typically be two specific log operators that would have to be compelled to hide this. The US may get lucky and have jurisdiction over all of them, other countries are much less likely to.


> and would likely cause browsers to drop the (edit) CT log unless provided with a very plausible excuse that isn't "we complied with a court/gag order"

a) Browsers aren't the only clients that matter. Will any XMPP client even look at the CT log? Probably not.

b) Browsers are not going to drop all US-based certificate authorities. And if they tried they might just get their own national security letter (reminder: all major browsers are US-based).

You can't work around the government using centralized infrastructure.


Even if the US got lucky, those SCTs wouldn’t match the CT chain and would still be discoverable misissuance.


Yes, assuming someone captured the malicious certificate and checked. Which would have happened in this case now (after someone forgot to renew), but likely not before.

Browsers would happily accept it and IIRC they don't check against the logs.


Yep - this is all because of the delayed insertion into the CT tree :(


If the certificate isn't logged in CT, it's not supposed to be trusted. That's the whole point. If it is logged, then the site owners would notice (Cloudflare, for example, has a service that emails when certificates are issued for your domains).


Browsers will check if a certificate is in the transparency log, and alert the user if it isn't if I am not mistaken.


But XMPP clients do not, as far as I'm aware? and browsers aren't connecting to XMPP server ports.


Yeah usually the TLS libraries used by XMPP clients don't check SCTs. Not even all browsers do it. Chrome does it, Firefox does not for example.


Source?


https://developer.mozilla.org/en-US/docs/Web/Security/Certif...

As the other commenter already pointed out, Firefox does not require this. Safari and Chrome do. This indeed is not directly applicable to this situation, since XMPP doesn't involve browsers. But for websites, the parent's attack scenario is not applicable.


Just to be clear (with better details already given by other comments), addressing this is specifically part of the threat model for Certificate Transparency. You could argue that CT policies should be strengthened in various ways, but the threat model does intend to address detecting the compelled misissuance scenario.


The CA being in one country and the hosting in another greatly complicates this. Ideally the two countries shouldn't be part of Five Eyes or some such.


If they have physical access to your server, they'll just go for that.

With mild difficulty you can mirror the disk, net, and RAM of any "commodity" hardware.


[flagged]


How would you implement “strict checking” at scale and for free?


Implement a proper Proof of Possession system (PoP) and make Let's Encrypt opt-in instead of opt-out.

Given that in the end, Let's Encrypt/ACME relies on PoP of a DNS domain, DNSSEC setup on the domain should have been a prerequisite.

In addition, an explicit opt-in should be required, for example, with a requirement for a CAA record pointing to LE.

---

As a side note, this whole situation leaves me with a really weird feeling.

On one end, I will always be grateful for LE enabling tls/https generalization.

On the other, I kind of feel betrayed and/or ashamed that LE/ACME almost willingly introduced protocol flaws weakening the whole CA infrastructure and that we (me included) didn't challenge it more when LE was introduced.


If they used DNS validation it would be better because there would be only one point of failure (DNS) rather than many (DNS and every ISP between CA and a site).


The thread above (https://news.ycombinator.com/item?id=37958831) elaborates on how to force DNS validation and/or how to tie your private key with Let's Encrypt via DNS.


Everyone needs to opt in to use more secure methods, and by default non-secure validation methods, which make issuing fake certificates easy, are allowed. This is wrong.


So, what should they do? No certificate without DNS record? Would this really help the overall state of affairs, or would most sites just not use HTTPS at all because it's "too complicated"?


The purpose of using HTTPS is to make connections more secure. Giving away SSL certificates to anyone does not serve this purpose.


It absolutely serves this purpose in a world in which there unfortunately is no TOFU/unauthenticated encryption for TLS (i.e. ours).

Thanks to widely available HTTPS certificates, "evil hackers stealing your cookies on public Wi-Fi" is not a thing anymore.

We should definitely have a discussion about whether it's made active attacks more feasible, but I think the goal of making passive sniffing less trivial than it was before can be considered achieved.


And DNS does not traverse several ISPs?


Perfectly pulling off an actual MitM attack and then forgetting to renew the certificate is certainly a very German thing :-)


I wonder if someone didn't "forget" on purpose, so that people learn about it.

Or, someone very diligently followed the orders - there was an order to set up a cert, but there was no requirement that it has to auto-renew :)


It was claimed to have been running for 6 months so it must have renewed certs at least once - LE certs are good for 90 days.


> 6 months [...] 90 days.

So they were told to renew the certificate, but not how many times to renew it?


They were told to renew the original certificate, but weren't told to also renew the renewed one.


Shouldn't the certificate transparency logs have it?


I suspect it's not trivial to distinguish between the legit and fake ones just based on CT logs. Unless Let's Encrypt publicly logs the account used to issue the certificate (I think they don't), only the logs held at Let's Encrypt will reveal this information. I expect their security team to be looking at those logs right now.


Why? Purely out of curiosity? Domain validation was successfully performed, which enabled a certificate (or certificates) to be issued.


A certificate was issued to someone who isn't the domain owner. Just because the CA can't be blamed - the requester was able to spoof domain validation in a way that the CA can't be expected to detect - doesn't mean that a good CA isn't interested in what happened and whether it can somehow be prevented in the future.

One obvious possibility could be e.g. sending a notification to the previous ACME account: "hey, a new ACME account requested a certificate for your domain".


Any sanely designed covert transparent proxy software will automatically stop proxying when the certificate expires.

I wonder why this didn't.


No, the German thing would have been to have the printout of the telefaxed scan of the court order collecting dust because the scanner in the receiving department is broken...


How is that possible while Letsencrypt keeps on sending reminder emails?


You expect German institutions to accept electronic mail?


Being Germany, perhaps the paperwork expired and the new court order didn't arrive in time.


Archive [1]

Their theories are interesting and would explain how they obtained the certs if true. Perhaps the take-away here is to have multiple probes on multiple providers checking all of one's TLS fingerprints, alerting on unknown fingerprints, and then checking the certificate transparency log or lack thereof.

    openssl s_client -servername news.ycombinator.com -connect news.ycombinator.com:443 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin
    SHA1 Fingerprint=7E:49:BA:40:86:87:B3:39:66:93:94:9E:9C:45:71:85:3C:8D:95:16
[1] - https://archive.ph/C0jYJ [updated]


I've added a "Could you prevent or monitor this kind of attack?" section.

There are several indications which could be used to discover the attack from day 1:

* All issued SSL/TLS certificates are subject to certificate transparency. It is worth configuring certificate transparency monitoring, such as Cert Spotter (source on github), which will notify you by email of new certificates issued for your domain names

* Limit validation methods and set the exact account identifier which can issue new certificates with Certification Authority Authorization (CAA) Record Extensions for Account URI and Automatic Certificate Management Environment (ACME) Method Binding (RFC 8657), to prevent certificate issuance for your domain using other certificate authorities, ACME accounts or validation methods

* Monitor SSL/TLS certificate changes on all your services using external service

* Monitor MAC address of default gateway for changes


Nice addition. Me personally being the paranoid type, I don't trust the transparency log for monitoring but rather for writing up the root cause analysis. The reason being that if someone can legally compel Hetzner and/or Linode and/or LetsEncrypt to do or not do something then the same entity can compel the certificate transparency site to ignore something. But you covered what I would do, and that is to have multiple nodes doing active monitoring of TLS changes using an external service. That service being openssl s_client in my case.
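
Something like this in cron from a few vantage points is roughly what I mean (untested sketch; host names and the alerting part are made up):

    EXPECTED="AA:BB:...:99"   # pinned fingerprint, updated whenever you rotate certs yourself
    SEEN=$(openssl s_client -servername xmpp.example.org -connect xmpp.example.org:5222 -starttls xmpp \
        </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha256 | cut -d= -f2)
    [ "$SEEN" = "$EXPECTED" ] || echo "cert changed on xmpp.example.org: $SEEN" | mail -s "TLS alert" ops@example.org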

In the case of Jabber it might be interesting to add some monitoring in the application that uses a cryptographically signed payload to list all the valid fingerprints and sends an alert message to the server if something odd is happening - like public key pinning, but without a hard fail. That list could be pre-loaded with new certificate fingerprints prior to being deployed. If the oddness is confirmed then perhaps add some way to tell the clients that the certificate is likely forged. That way both the server operator and the client are aware that something evil this way comes.

[Edit] - Looks like, based on the comment from MattJ100, some Jabber servers and some clients can already do something like this.


> if someone can legally compel Hetzner and/or Linode and/or LetsEncrypt to do or not do something then the same entity can compel the certificate transparency site to ignore something

the major certificate transparency logs are operated by several independent global companies, Apple and Google for LE. it's unlikely that they will agree to forge their global CT logs for a single government. more importantly, SCTs allow cryptographic proof of anyone lying, making such an action very dangerous for their continued participation in the WebPKI ecosystem.


This sounds like the methodology for blockchain and multiple ledgers. Ultimately however crt.sh is hosted somewhere, and while there may be multiple controllers that have access to logs on the front-end, someone hosting that site could be compelled or blackmailed into tinkering with the levers behind the scenes to exclude activity on a domain. I'm not suggesting that is what is happening, just that it could, and Apple, Google and others would have plausible deniability. On the other hand, having active probes distributed around the world on multiple ISPs looking for fingerprint changes would be much harder to hide, though more expensive to operate - along similar lines to archive.is, or using distributed Nagios NRPE agents, or using ThousandEyes probes.


> Ultimately however crt.sh is hosted somewhere

So is it possible to run my own copy of crt.sh? How demanding is it (e.g. data size)?


The crt.sh code is all open on GitHub, so can be hosted yourself. Last I checked a few months ago, the main 'certificates' table and indexes etc were close to 20TB, and there's more than just that. It's big, but has everything. A slimmed down database of just some lightweight info on the certs and issuer (notBefore, notAfter, SANs, serials etc) runs maybe a TB in BigQuery.


A log monitor? Or specifically the crt.sh codebase?

Yes, the logs are public and I ran equivalent monitoring for a previous employer before the pandemic. You will need to consume records at the rate they're created to keep up; Let's Encrypt have stats you can look at. If you can cope with twice their typical daily throughput you'll be fine on average, but peaks will swamp you temporarily, so design for that.

You can choose whether to store everything, or just stuff you consider interesting, and you can choose whether to care forever or only until expiry (so 398 days)

If you want everything (full certificates), indefinitely, that's a lot of data. Um, several terabytes per year maybe?
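
The consumption side is just the RFC 6962 HTTP API (log URL is a placeholder; pick real ones from the browser CT log lists):

    $ curl -s "https://<log-url>/ct/v1/get-sth"                        # current tree size
    $ curl -s "https://<log-url>/ct/v1/get-entries?start=0&end=31"     # fetch entries in batches

then you parse the leaf certificates out of each entry and match the SANs against the domains you care about.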


this is a very very silly take. Apple and Google have armies of lawyers who spend all day telling cops with subpoenas from all around the world to fuck off.


The problem is that Let's Encrypt issued a fake certificate without proper validation.

UPD: I was initially wrong. It looks like many CAs will issue a valid certificate to an attacker capable of doing MitM.


why do you keep making this incorrect claim?

it was a DV cert, and it was considered validated because someone was MITMing the traffic. that would have worked against any CA, nothing at all to do with Let's Encrypt.


Corrected the comment to reflect that SSL cert infrastructure is ridiculously easy to hack.


I mentioned this in the operators channel just now, but also worth noting is that channel binding (e.g. SCRAM-*-PLUS in combination with RFC 9266) also mitigates this attack. Essentially the connecting client is able to detect the mismatch between the certificate or handshake the server thinks it is presenting, and the one the client actually sees.

ejabberd and Prosody support it, and a number of clients too.


> I mentioned this in the operators channel just now, but also worth noting is that channel binding (e.g. SCRAM-*-PLUS in combination with RFC 9266) also mitigates this attack.

Doesn't that require the server to persistently store user passwords in plaintext?


Not at all. If you mean SCRAM, it was actually pretty novel when it was introduced because it allows the server to store a hash, the client to store a hash, and only a hash is exchanged over the wire.

SCRAM also allows the client to verify that the server possesses the password (or a hash of it), so a MITM that just says "yep, your credentials are correct - go ahead!" can be detected.

Channel binding is an addition to SCRAM (it's usable outside of SCRAM too) that allows securely checking the TLS stream as well. Specifically it allows verification that the TLS stream is terminated by the same entity that has (a hash of) your password.
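
As a rough illustration (values shortened/made up), with SCRAM-SHA-256-PLUS the client's messages carry the binding like this:

    C: p=tls-exporter,,n=alice,r=<client-nonce>
    S: r=<client-nonce><server-nonce>,s=<salt>,i=4096
    C: c=<base64 of "p=tls-exporter,," + 32 bytes exported from this TLS session per RFC 9266>,r=<nonces>,p=<proof>

The proof is computed over the whole exchange including that c= value, so a MITM terminating its own TLS session ends up with different exporter bytes and authentication fails.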


no, scram is all about not exchanging and not storing plain text passwords. the plus variants add tls channel binding.


DANE (with DNSSEC) may be an option to look at too, with public keys in DNS. though probably client support isn't universal (i have no (recent) experience using jabber). the CA was one weak point here.

checking certificates for new public keys (if you keep the same private key between renewals) would also help.

i suppose any storage and possibly ram should also be considered compromised? e.g. private keys stored on disk without encryption, but also by being read into memory. especially on the vm.


DNSSEC operators can be strong-armed the same way. You will also lose out on transparency logs.

In the end law enforcement can also walk up to the machine/hypervisor and steal/monitor interesting things from that as well. This is funnily the exact evil maid threat scenario so many (especially Linux people) find unrealistic.


you could run your own dnssec servers (not too hard). and monitor the DS records for your domain at the TLD. have there been any reported cases of law enforcement changing DNSSEC (e.g. adding a DS record)?

i agree there's probably no way to keep a machine hosted at a company secure from law enforcement. also why i suggested storage and anything in ram on the machine can be considered compromised. this attack (swapping out network connection for a while to get a certificate through let's encrypt) was probably easiest/least intrusive. if it wasn't an option, the next easiest option would be taken. perhaps the options that are harder to execute are more likely to be detected, or less likely to be worthwhile.


Law enforcement routinely manipulates the DNS; famously so.


> have there been any reported cases of law enforcement changing DNSSEC (e.g. adding a DS record)?

It's not like anybody is watching or using DNSSEC like that. Also at best you might be able to detect a change but it won't prevent the attack and neither will it leave a long-term mark like CT would.


>keep a machine hosted at a company secure from law enforcement

Your own hardware + continuous video monitoring is probably good enough. The idea is not to keep it secure, but to know when a breach has happened.


> DNSSEC operators can be strong-armed the same way. You will also lose out on transparency logs.

Keeping the DNS registry, CA, and hosting in different jurisdictions could be a noticeable improvement...


You don’t need DANE to thwart this particular attack: DNSSEC and CAA could have prevented the MITM certificates from being issued in the way they were. If the xmpp.ru and jabber.ru domains used DNSSEC and CAA, it would not have been enough for the MITM to strong-arm Linode and Hetzner: they would also have to strong-arm the domains’ DNS providers. I don’t think the attackers could have done it stealthily (unnoticed for 3 months) without the co-operation of the .ru registry and a MITM attack on all of the Let’s Encrypt validation vantage points.

I am one of the co-authors of the DANE SRV RFC https://www.rfc-editor.org/rfc/rfc7673 which is what XMPP would use. I don’t follow XMPP development so I don’t know if it has been deployed. I would like it if DANE were more widely used, but it’s not pertinent to this attack.


I think it's a great example why DNSSEC would be bad: at least here, we had transparency logs and there was a simple method to get an attack notification. With DNSSEC, there would be no such thing.


I disagree. Forging DNSSEC requires LE to strong arm the domain registrar to modify the DS keys per domain which would also require them pointing to an adversary DNS server to answer those requests with a valid signature. That action would definitely be noticed. Without DNSSEC you are back to square 1 of unsigned DNS that can be modified in-flight, a much worse situation. I personally believe we should be advocating for more DNSSEC, not less.

What I think you're advocating for is that DNSSEC should have its own transparency log that shows when domains update or cycle their DNSSEC KSKs/ZSKs, which is a great idea. We would just need to get all the domain registrars on board as well, and as we can already see with the transparency project, you can't get buy-in from everyone.


Well, with the transparency project we _did_ get buy-in from everyone because browsers (like Mozilla and Google) forced it. And there are severe penalties for not obeying the transparency log, as well as technical measures to enforce it (for example, certs with no CT log entry would not work in Chrome).

This of course only works because there is competition. If TrustCor fails to behave, then this CA is dropped. Customers complain, but not very strongly - there are plenty of CAs around, it's annoying to have to switch to another one but still eminently doable. And TrustCor is out of business completely, all their investment is permanently gone.

With DNSSEC.. what do you do if you domain registrar does not play by the rules, say is caught double-issuing DS keys?

Answer is: likely nothing. Most orgs are not going to move from .com to .io because of that -- there are links, SEO, even things like business cards. People would complain, _maybe_ the registrar would get a fine for 0.0001% of their annual income (but I would not count on this). And that's it: the system will stay insecure and they will continue to double-issue DS records.

The separation of CA from registrar is a _feature_, and a very helpful one. Competition is good, and we should not make our existing natural monopolies even more useful.


> LE to strong arm the domain registrar to modify the DS keys per domain

It is important to note that a law enforcement agency with this capability would just legally take ownership of the domain instead. If the domain is under their control, getting domain-validated certificates is trivial. There are many legal precedents with domain takedowns.

It is equally important to note that only TLDs under law-enforcement jurisdiction are ever an issue. It is not clear whether this specific attack under .ru would have ever been remotely possible with any type of authenticated domain.


good point. had me looking for a transparency log for dnssec, arriving at https://www.huque.com/2014/07/30/dnssec-key-trans.html. it seems there hasn't been activity on that topic in years.

an attack would involve a tld operator being compelled to cooperate, or their signing keys being compromised (and then probably adding a DS record for your domain). if that were found out, it would be bad news for trust in dnssec. have there been any known case of this happening? it would be hard (impossible?) to fully detect without transparency log (like for ca certificates).


There's also a DNS CAA record that sets up a whitelist of authorities that can issue certificates for the domain.


At least for Let's Encrypt, you can also use CAA to require a specific account or validation method as well. This could prevent your hosting provider from issuing on your behalf if they don't control DNS.

https://letsencrypt.org/docs/caa/#the-accounturi-parameter


I use those as well. I am curious if there has ever been a case where LetsEncrypt was legally compelled to create exemptions for domains - as in ignoring CAA and not logging to the transparency log - or to just outright issue certs to an agency for specific names. CAA account and method restrictions would be negated at that point.

The more I think about it I would wager that Linode and Hetzner were just law enforcement having good taste in VPS providers. It's more likely to me that LE was the compelled target.


Let’s Encrypt publishes legal transparency reports on what law enforcement asks us.

https://letsencrypt.org/documents/ISRG-Legal-Transparency-Re...

We’ve never been ordered to issue a certificate.

While I am a Let’s Encrypt employee, this message is not an official communication of Let’s Encrypt and shouldn’t be interpreted as such.


I appreciate that. I assume that would not cover an NSL, which is also a gag order, but it's nice that you have something for all other above-board legal requests. I see you have a column for NSLs but have no idea how that could ever increment without violating the NSL.

For some background, I've spent a good deal of time with lawyers and C-levels playing devil's advocate trying to find a way to indirectly notify customers, but it's just not legally possible in the United States of America and there is nobody that would risk violating one. People here on HN often bring up canaries, but they are not compatible with an NSL.


Other entities that publish transparency reports and that have received NSLs have reported them in bucketed ranges. I forgot the exact granularity.

It does not appear that NSLs compel arbitrary actions or require the recipient to actively lie about having received one.


they could go the extra mile there and put a canary page about not having accepted any gag orders (which they could remove when they do)


It's a scary world we're living in...


Let's Encrypt (and other CAs) should not allow validation using insecure schemes like HTTP. Especially for sites that have a valid certificate from another CA.


I was a bit curious about the MAC address since it isn't locally administered which indicates to me that someone likely deliberately chose this range of addresses.

https://www.google.com/search?q=%2290%253Ade%253A01%22+linod...

Googling for '"90:de:01" linode' and '"90:de:00" linode' indicates that addresses from these blocks have been assigned to other linode VMs recently. I sadly don't have a linode VM of my own to compare with right now but it would seem like the traffic has been routed to another VM on linode infrastructure


This is baseless speculation, but I'm assuming Jabber is being targeted as it's famously used on darknet markets for drug trades (or other illicit activity). Goes to show that you should never just trust "it's encrypted, bro". You need to PGP your messages at the very least. Is PGP crackable by quantum computers? Will there be hardening against those kinds of attacks in the future? Since, if the messages have been hoovered up in encrypted form, it's just a matter of time until they get decrypted. And this appears to be done for just about all web traffic they can get their hands on... see https://en.wikipedia.org/wiki/Utah_Data_Center


Jabber/XMPP has had e2e encryption for at least like 10-15 years. I used to use it with even my normie friends back when Facebook/Google Talk supported XMPP and you could use pidgin, kopete, etc.

Obviously securely exchanging keys with an anonymous drug dealer over the Internet is error-prone though...


This MITM may be part of the Genesis Market takedown, but that's just a guess out of the blue.

https://therecord.media/genesis-market-takedown-cybercrime


A second layer of encryption would help, but I don't recommend PGP in particular.

If you haven't heard, it has lots of problems and a lot of people recommend avoiding it (for example https://www.latacora.com/blog/2019/07/16/the-pgp-problem/ / https://news.ycombinator.com/item?id=20455780)


"The PGP Problem" is generally misleading and is straight out wrong in some places. I ended up writing an article to save time:

* https://articles.59.ca/doku.php?id=pgpfan:tpp

PGP certainly has its problems, but isn't really special compared to other similar things. The big advantage that PGP has is that it is a stable and well known standard. There is a tendency to imply that it is insecure in some way, but no real evidence seems to exist to that effect.


> isn't really special compared to other similar things.

If you define "similar thing" as "kitchen-sink thing that tries to do everything like PGP does", then this is true, as no full alternatives exist, nor should they.

But for all practical applications? Pretty much every "other similar thing" that I have tried is _vastly_ more simple and more reliable and easier to debug and infinitely easier to script. For example, "seccure", "minisign", "age", even "ssh-keygen -Y". Especially cool are "seccure" which uses passphrases as private keys (no more private key files ever!) and "ssh-keygen -Y" which uses ssh keys which everyone already has anyway.

If you are writing a new software and thinking about integrating PGP, do yourself a favor and look for alternatives. If this is something developer-oriented, I recommend using something based on SSH keys, like git does.
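
For anyone curious, the ssh-keygen route is a one-liner each way (file names made up):

    $ ssh-keygen -Y sign -f ~/.ssh/id_ed25519 -n file doc.txt
    $ ssh-keygen -Y verify -f allowed_signers -I alice@example.com -n file -s doc.txt.sig < doc.txt

where allowed_signers is just lines of "alice@example.com ssh-ed25519 AAAA...".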


Most clients also support OMEMO now


doesn't OMEMO have the problem that you have to verify every session from all your sessions, which is practically infeasible?


If you are serious about needing e2ee, using few sessions (well, devices) and actually verifying fingerprints OOB is a must, and that's true for all E2EE methods AFAIK?


> and actually verifying fingerprints OOB is a must, and that's true for all E2EE methods AFAIK?

most E2EE messaging services (e.g. Matrix, Signal, WhatsApp) enable verifying other people instead of devices, reducing the required verifications for one person to 1 instead of 1 per session


More like every device from all your devices. That comes from the Signal protocol. If you want one verification per user then that would be PGP.


> More like every device from all your devices.

no, you could have multiple sessions per device, e.g. desktop client and browser tab

> That comes from the Signal protocol.

no, Signal doesn't require this


>End-to-end encrypted communications, such as OMEMO, OTR or PGP, are protected from the interception only if both parties have validated the encryption keys. The users are asked to check their accounts for new unauthorized OMEMO and PGP keys in their PEP storage, and change passwords.

The attacker would need to set up a separate MiTM for the particular E2EE scheme used. Some of the XMPP clients I have encountered will not let you use a particular cryptographic identity unless you have explicitly claimed to have verified it.

Still a good reminder. If you have not seen and dealt with a ridiculously long number (or equivalent) you have not achieved end to end.


There's one thing I can't understand in this story: if that's lawful interception, why did Hetzner and Linode bother to set up MitM interception with a different LE certificate and key, rather than extract the TLS private key directly from the RAM and/or storage device of the VPS? Even if this is a physically dedicated server, they can extract the private key from the RAM by dumping the RAM contents after an unscheduled reboot. Extraction of the private key isn't visible in CT logs - much stealthier, practically undetectable.


Because it was easier, most likely.

There's also a possibility that one would be a "search" and the other would be an "interception" with different levels of approvals requested, but I don't know what the current legal situation in Germany is right now.


Likely because it's 'more illegal'. I'd bet they are not allowed to hack into the server if it's not directly involved in the cybercriminal activity.


On a physical server, couldn't you just hotplug a PCIe card in there and DMA out any data you are interested in? Something like a network card with firmware specifically for the purpose should do it. It sounds so much like a standard thing for law enforcement that I imagine such equipment should be available off the shelf?


A user reported this on Reddit 3 days ago: https://www.reddit.com/r/hetzner/comments/17ankoh/does_hetzn...


I wonder if there is a legal way to demand Hetzner/Linode comments on this situation. Likely, the entity behind the interception is some government agency or police.


For Hetzner, as a German company: if it is legal interception, no. You can't demand an answer without going to a lawyer first.

On the other hand, if it is not lawful interception, I doubt Hetzner would allow it, because that's also against the law.


there isn't even any guarantee that the wiretapping was done through them instead of e.g. the carriers, which in pretty much any country have for decades been forced to help with lawful wiretapping...


Carriers meaning the interconnect providers, e.g. Level3, Cogent etc.? How would this intercept be implemented in practice? Surely it'd be much easier to add a node as close as possible to the origin host, i.e. within the Hetzner network, rather than redirecting traffic from the outside with some sort of BGP hijack?


any carrier on any level

but like others have pointed out, this seems to have been in the Hetzner network

though wiretapping laws also extend to datacenter-internal interconnects; I mean, servers belonging to different people can communicate with each other without the traffic ever leaving the datacenter, so it kinda makes sense


Yeah, agreed. But if all you need is to control a response from an IP to a verification query from LetsEncrypt, then it would be easier to just ask the entity controlling that IP space (in this case Hetzner) to setup the route for you. If you do it at the BGP level then you need the cooperation of all the peers.


I think the observed TTL 64 means the interceptor is on the same segment? (of course unless they have set it to e.g. 66 at the interceptor that is 2 hops away, but I guess if they were to mangle TTL, they would set it to the original value to avoid detection)


This really makes me want to re-evaluate all the times I have gracefully TOFU'd while SSHing into a machine or even logging onto a website.


Consider including SSH known hosts in your cfgmgmt.

I have all my machines' public keys, and all the major git forges' SSH host keys, in my git-managed cfgmgmt repo.

For NixOS users: https://search.nixos.org/options?channel=23.05&show=programs...
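
A rough sketch of the same idea without NixOS (paths and hostnames are placeholders): keep a known_hosts file in the repo and point ssh at it on every managed machine.

    # a line in the repo-tracked file is just a normal known_hosts entry:
    #   git.example.org ssh-ed25519 AAAA...
    ssh -o GlobalKnownHostsFile=/etc/cfgmgmt/ssh_known_hosts \
        -o StrictHostKeyChecking=yes admin@git.example.org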


one issue is that most hosting providers don't display SSH host keys anywhere in the control panel, so unless you have a... lot of fun (and spend time) with the virtual console... (or maybe a custom cloud-init script?) you're SOL for the very very first connection
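
One workaround, assuming the provider gives you any out-of-band console at all (the IP is a placeholder): compare fingerprints before trusting the first connection.

    # on the fresh server, via the provider's virtual console:
    ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub

    # on your workstation, before the first real connection:
    ssh-keyscan -t ed25519 203.0.113.10 2>/dev/null | ssh-keygen -lf /dev/stdin
    # the two fingerprints should match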


Isn't it more important that the known key does not change unless you made the change, than having the "right" key to begin with?


hmmmm... if someone MITMs you, couldn't they 'in theory' just change the key on the host to match the MITM one as the first thing they do?


Given that the attack subverted the TLS certificates, I'd say key-based TOFU is still a better alternative.

Of course, checking the server key is better than not doing so, but any knee-jerk response to TOFU will probably make your security worse.


What does TOFU mean in this context? I haven’t heard that word related to tech before OpenTofu just recently!


Trust On First Use. No relation to OpenTofu, where Tofu is a word play on Terraform.



Trust On First Use


At DEFCON, fifteen years ago, MITM'ing TOFU ssh connections was what you did when you were bored in between talks. If you wanted to capture a new first connection, just keep dropping connections until they try a different host or device.

I don't really get why people use ssh at all if they aren't using certs. Might as well use telnet.


An unknown key when connecting to a known server will raise all kinds of alarms in my head.


I've always thought a solution like LetsEncrypt would be an ideal backdoor for LEAs wanting to circumvent SSL/TLS.

The same thing applies to VPN services. Why bother trying to develop tools etc that can decrypt traffic when you can just have people send their traffic to you?


Why are your concerns specific to LetsEncrypt? They're just another CA, no?


Wouldn't be surprised if this happened in Russia, but not in Europe. Also, Let's Encrypt security is a joke - they issued a fake certificate without serious checking. Using unencrypted HTTP for confirmation is a vulnerability - doesn't this mean that large national ISPs can easily obtain certificates for any site hosted within the country?


> Also, Let's Encrypt security is a joke - they issued fake certificate without serious checking. Using unencrypted HTTP for confirmation is a vulnerability

They can’t use HTTPS if you don’t have a certificate yet


This doesn't mean you should use insecure methods instead and issue certificates to anyone capable of doing a MitM.


How could Letsencrypt even verify a server setup if not via DNS/HTTP? Also, verify against what? The servers are basically random strangers without identity when they first talk to LE.


In this case there has been a valid certificate for the site; this alone should raise suspicion.

Also, if they cannot do secure validation then maybe they should stop issuing certificates for sites that already have a proper certificate.


This happens all the time when a server is rebuilt from scratch - same cert using a different keypair.


So should we conclude that SSL cert infrastructure is completely compromised and now any country can issue fake certificates?


No, there is no reason to jump to such extremes.


There are approximately 10 Tier-1 ISPs through which the majority of Internet traffic passes, and unless I misunderstood something, they can get valid certificates issued for almost any domain. To me it looks like "completely compromised".


Every CA can issue valid certificates for every domain? And it has always been that way.


A CA risks getting its root cert removed from browsers; an ISP doesn't risk anything, especially when asked by the govt.


They risk having their peerings cancelled. Also it might be a crime in some countries.


Over the DNS challenge, of course.


This is exactly what all other CAs do... for DV certificates you basically place a key/special file on the webserver, or receive a verification code via (plaintext) email.

For EV certs there might be more validation, but users will never see the difference between EV and DV certificates.


So SSL certificates are completely unreliable; we need only wait until Russian or Chinese comrades find a good use for this attack (e.g. temporarily redirecting Western traffic using BGP to validate a Let's Encrypt cert for a Western site).


yes? this is a well-known problem, which is why CAA (including ACME account binding) and certificate transparency logs exist.


Police raiding / intercepting / modifying servers does happen here too. Last big case: https://www.bleepingcomputer.com/news/security/encrochat-tak...


Could this be a blue pill attack? A vulnerability in the xmpp server exploited to inject a rootkit, which then hides itself inside the kernel?

Or creates network/pid namespaces and puts you in them, while leaving the mitm server in the original one?

If so, the mitm could be on the same host, and wouldn't need the cooperation of the hosting provider.

I'm not sure how to check for either of these without restarting (which the admin does not seem to want to do, as it is a live service).

https://en.wikipedia.org/wiki/Blue_Pill_(software)


If this had happened, the attacker would likely have stolen the server's TLS certificate and keys.

Whereas this attack generated new keys (and was detected!), suggesting the attacker didn't compromise the server itself.


No, because the traffic redirection occurred in the provider's infrastructure, not on the server.


Let's say you are blue-pilled and what you see as your eth0 interface is actually a virtual interface controlled by the rootkit.

In that case, where the redirection happened is no longer something you would be able to tell, right?


>The attacker managed to issue multiple SSL/TLS certificates via Let’s Encrypt for jabber.ru and xmpp.ru domains since 18 Apr 2023

Why is it even possible to issue more than 1 certificate on the same domain via Let’s Encrypt? Shouldn't the previous certificate be revoked when a new one is issued?


It's fairly common for people to obtain multiple certificates for different machines or services, so they can be selectively revoked and they don't have to share keys across machines.

More use-cases:

- You might obtain a new certificate, but deploy it gradually, so you want the old one to remain valid while you do that.

- One certificate may cover different sets of domain names. If you have a certificate for "example.com, foo.example.com" and then request a certificate for only "foo.example.com", should the earlier one be revoked? (leaving "example.com" without a certificate).


> Why is it even possible to issue more than 1 certificate on the same domain via Let’s Encrypt?

it's commonly used in "normal" ways all the time

- e.g. when there are multiple data centers for the same domain (e.g. using geolocation-based routing) it's good practice to give them different certs, so that if you need to revoke one, operation in the other regions is unaffected

- or when rolling over from one cert to another

- or when moving certs into hardware security modules (HSMs) you preferably have one per HSM (so that if e.g. the hardware breaks and gets replaced you can just revoke the cert for the affected HSM, not all of them); you also normally do not keep backups, to make sure the key can't be leaked at all (as long as the HSM isn't hacked, which is normally quite hard)

- or losing access to a cert (e.g. in the case above, when an HSM breaks)

Lastly, the whole CA system is in the end designed to provide good security for the industry while leaving legal authorities the backdoor of having certs issued, to allow the police some degree of wiretapping (oversimplified, it's slightly more complex than that).


You should always have more than a single certificate for your domain, honestly.

Cloudflare, for example, tries to optimize certificate delivery (and has backup certificates available for you just in case a CA needs to revoke theirs).

Also, on distributed systems it's less safe to share private keys between the various frontends.


This is actually a great suggestion, and ACME providers should offer it as an opt-in feature via a CAA record. Not even a provider with access to system memory could issue a MITM cert without you noticing.


The provider having access to system memory can copy the private key and use your original key+cert for MITM, unless you are using some fancy HSM.


provisioning a 2nd machine into your webserver cluster before activating it?


You could sync certificates across hosts for this purpose, though. The advantage of multiple certificates is being able to revoke a subset of certificates if you can determine only a subset of your hosts have been compromised.


you could, but unfortunately the LE certs have a very short lifetime, and renewals are a thing

so you need a master server to handle the renewals, periodic sync, and to handle the case when the master goes away

this would be considerably more complicated than having a second independent certificate (assuming you've automated the entire frontend provisioning process)


Did that, can confirm.

For other more sensible reasons but still.


> Why is it even possible to issue more than 1 certificate on the same domain via Let’s Encrypt? Shouldn't the previous certificate be revoked when a new one is issued?

First, you want to have some leeway so you don't need to rotate certs at the exact second the old one expires

Second, you might want cert-per-server rather than cert-per-domain, as that's frankly easier to implement than having a common store for certs+keys


Are there any services that monitor the certificate transparency logs and send an email when a new certificate is issued for your domain? That would alert you to this kind of MITM


As mentioned, there are services for doing this (I've been using, for example, the free service from Facebook [1]).

Since certificates are renewed quite often nowadays, it's very easy to just end up ignoring these notifications. To really get the benefit, I think you should somehow combine these with logs from your legitimate cert renewals and only alert when something strange pops up.

[1] https://developers.facebook.com/tools/ct/search/
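
For a quick manual spot check (no substitute for a real monitoring service), something like this against crt.sh's JSON endpoint works; jq is optional:

    curl -s 'https://crt.sh/?q=xmpp.ru&output=json' \
      | jq -r '.[] | "\(.not_before)  \(.issuer_name)  \(.common_name)"' \
      | sort -u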


Indeed, a CT monitor which sends alerts about legitimate certificates is almost useless due to noise. My service, Cert Spotter, provides an API endpoint[1] which you can upload your CSRs to, so you don't get alerted about certificates using the same key as the CSR. The open source version of Cert Spotter can invoke a script[2] when a certificate is discovered, and the script can cross reference against a list of legitimate certs.

[1] https://sslmate.com/help/reference/certspotter_authorization...

[2] https://github.com/SSLMate/certspotter/blob/master/man/certs...


Could it check against public key for the certificate?


Yes, the script can consult the $PUBKEY_SHA256 environment variable to get the hash of the certificate's public key.
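
A rough sketch of such a hook script; the allowlist path and the mail-based alert are placeholders, and only $PUBKEY_SHA256 comes from Cert Spotter as described above:

    #!/bin/sh
    # ignore certificates whose public key hash matches one we generated ourselves
    ALLOWLIST=/etc/certspotter/known-pubkeys.txt   # one sha256 hash per line
    grep -qx "$PUBKEY_SHA256" "$ALLOWLIST" && exit 0
    echo "Unexpected certificate (pubkey $PUBKEY_SHA256)" \
      | mail -s "CT alert" hostmaster@example.com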


Yes. Several. They're called CT "Monitors".


There are, but this assumes that services like the transparency log servers and Let's Encrypt are not being legally compelled to do, or not do, something.


If the CA doesn't log a certificate, the certificate won't work in CT-enforcing clients (Chrome, Safari, Edge) because it lacks receipts (SCTs) from CT logs.

If a log fails to publish a certificate despite issuing a receipt for it, Chrome's SCT auditing infrastructure can detect that, as it did recently with the Nessie 2023 log: https://groups.google.com/a/chromium.org/g/ct-policy/c/5x1A6... (this was also detected by monitors operated by myself and Cloudflare)

But this is all moot when it comes to XMPP, since XMPP clients don't check for SCTs, and there is no requirement for CAs to log certificates if they don't need to work in non-CT-enforcing clients. Some CAs (e.g. DigiCert) will sell you unlogged certificates - no need to compel them.

tldr: situation is good and improving with browsers, not so much with non-browser clients


XMPP has some adoption of channel binding (it could be better, but it's heading in the right direction) which mitigates these kinds of attacks in a different way. It is able to do that because the client and server already share a secret (the user's credentials), unlike most HTTP clients/servers (at least as far as the protocol is concerned).

But SCT validation would indeed be something we should investigate for the ecosystem, it could be beneficial for certain use cases.


It may be better for the XMPP ecosystem to focus on increasing adoption of channel binding rather than CT enforcement. I wrote briefly about the challenges faced by non-browser clients at <https://news.ycombinator.com/item?id=36436303> but the tldr is that all SCT-checking clients have to be able to respond to ecosystem changes very quickly. This is so important that Chrome and Safari both disable SCT checking entirely if the browser hasn't been updated in more than 10 weeks.

Feel free to contact me (email in profile) or ask here if you want to talk more about this. I use XMPP (thank you for Prosody!) and have been involved in the CT space for many years now.


> but the tldr is that all SCT-checking clients have to be able to respond to ecosystem changes very quickly.

This is fine. If you're using a tool that's connecting to the internet, you probably need to monitor for updates anyway.

I'd like to see a separate package, similar to Mozilla's `ca-certificates`, but for CT that can be updated independently of the actual useragent.


> This is fine. If you're using a tool that's connecting to the internet, you probably need to monitor for updates anyway.

People should do this, but in practice it often doesn't happen - as witnessed by the Appmattus library fiasco earlier this year.

> I'd like to see a separate package, similar to Mozilla's `ca-certificates`, but for CT that can be updated independently of the actual useragent.

Considering how slowly updates to the various ca-certificates packages propagate, this would be an absolute disaster for CT.


> Could you prevent or monitor this kind of attack? There are several indications which could be used to discover the attack from day 1:

> All issued SSL/TLS certificates are subject to certificate transparency. It is worth configuring certificate transparency monitoring, such as Cert Spotter (source on github), which will notify you by email of new certificates issued for your domain names


> All issued SSL/TLS certificates are subject to certificate transparency. It is worth configuring certificate transparency monitoring, such as Cert Spotter (source on github), which will notify you by email of new certificates issued for your domain names

This seems like the kind of thing domain registrars should do for you.


Who could in theory also be told "don't notify your client about this specific one". You're best off self-hosting if you think you'd be a target for this kind of attack.


It seems that trust is a highly variable quantity, maybe similar to intuition.


Cloudflare does that


Cloudflare is a MITM you voluntarily set up yourself, innit?


You don't have to set up anything


Kerberos right?


As an end user, how can I monitor whether I'm being MITM'd? If everything I do is proxied through an attacker's network, nothing I do online could be trusted to properly test whether I'm being proxied, right? I know apps can do cert pinning, but as an end user how can I validate, when I connect to anything, that it's the real thing and not a middled request?


One can get fingerprints of TLS endpoints ahead of time and/or from a known-clean network, then compare them to what you see from the location you suspect might be MITM'd. I believe there are browser addons that display certificate fingerprints and alert if a certificate changes; however, one can also click on the lock symbol and drill down to the same info.

    openssl s_client -servername news.ycombinator.com -connect news.ycombinator.com:443 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin
    SHA1 Fingerprint=7E:49:BA:40:86:87:B3:39:66:93:94:9E:9C:45:71:85:3C:8D:95:16
A higher-friction method would be to use a browser addon that pins a certificate to a site. Not useful in the case of the Jabber client, but useful if one was visiting Jabber's website to validate something. Some Jabber clients and servers have their own method, described in another part of this thread [1], to protect against interception.

The openssl s_client method is likely the most versatile for testing from different locations, ports and applications.

[1] - https://news.ycombinator.com/item?id=37956911



@dang: .org.ru should probably be treated as TLD


Next time someone in a Matrix discussion asks "why do we need Matrix when there is jabber/xmpp", show them this:

> All jabber.ru and xmpp.ru communications between these dates should be assumed compromised. Given the nature of the interception, the attacker have been able to execute any action as if it is executed from the authorized account, without knowing the account password. This means that the attacker could download account's roster, lifetime unencrypted server-side message history, send new messages or alter them in real time.


The same attack could be mounted against a Matrix server tbh. But the fact Matrix is E2EE by default would mitigate it a bit, and if folks verified who they were talking to then the attack would be defeated.


Matrix clients typically complain very loudly when keys change or unverified sessions get added to your chats.

I'd say it would mitigate it A LOT. Though I have to agree that there is still a lot of room for improvement.

I personally prefer the approach taken by briar: your public key is your address. Pretty cut and dry, not much room for shenanigans.


> The attacker managed to issue multiple SSL/TLS certificates via Let’s Encrypt for jabber.ru and xmpp.ru domains since 18 Apr 2023

> We tend to assume this is lawful interception Hetzner and Linode were forced to setup based on German police request.

> Another possible, although much more unlikely scenario is an intrusion on the internal networks of both Hetzner and Linode targeting specifically jabber.ru — much harder to believe but not entirely impossible.

And what if the attacker somehow tricked the Let's Encrypt challenges?

Or this is supposed to be impossible?


Even after obtaining certificates allowing you to MITM, you have to actually find a TM you can be the MI in. In a targeted attack, this could be as easy and discreet as spoofing the target's hotel WiFi. In a country without plentiful cross-border connections or a diverse backbone, a tap in the right IX could also work. (Famously, the NSA exploited the oligopoly the major consumer ISPs have in the US. Somewhat less famously, Roskomnadzor had to embark on a multi-year boiling of the frog to make Internet censorship in Russia even remotely workable, due to the diversity of the market and of the interconnects left over from the late nineties and early oughts, culminating in a requirement for every ISP in the country to patch MITM hardware into their network.)

But for this kind of thing to happen on every connection to the server being impersonated, you either have to bring a very big and publicly noisy hammer like a BGP hijack, or have the Internet upstream of the server cooperate. If the traceroute info in the post is to be trusted, in this particular case Linode and Hetzner themselves—or perhaps their datacenter operators—seem to be performing the intercept.


> If the traceroute info in the post is to be trusted, in this particular case Linode and Hetzner themselves—or perhaps their datacenter operators—seem to be performing the intercept.

Ok, now I get it. Thanks.


The attacker effectively controlled the IP the domain was pointed to. If you have this, getting a cert issued from any CA is trivial - you've proved to them you control the domain in question.
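
For HTTP-01 in particular, validation boils down to roughly this exchange (the token is a placeholder), so whoever answers for the A record's IP can pass it:

    # the CA fetches the challenge over plain HTTP from whatever answers for the IP:
    curl -s http://jabber.ru/.well-known/acme-challenge/<TOKEN>
    # an interceptor in the path can serve the expected <TOKEN>.<account-key-thumbprint>
    # response itself and obtain a certificate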


As mentioned elsewhere in the thread, RFC 8657 can prevent this.

https://news.ycombinator.com/item?id=37958831
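
A rough example of what such a record could look like, assuming a Let's Encrypt ACME account whose ID is a placeholder here; it restricts issuance to that account (and, with validationmethods, to DNS-01 only):

    xmpp.ru.  IN  CAA  0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/<ACCOUNT_ID>; validationmethods=dns-01"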


They don't need to, they controlled the domain IP and trivially got the certificates. This is not a novel technique, see this Twitter thread:

https://twitter.com/billmarczak/status/1710348549794185279

We need to go back to snail mail or something; this whole .well-known thing just stinks. We added layers on top like CT, and while they're sound ideas, they don't tend to do anything unless you are Google or FB.

> Yes, the fraudulent certificate is memorialized in Certificate Transparency (CT) databases (the indelible, publicly accessible records of ~all issued TLS certs). But, most website owners don't know what CT is, have no idea how to check it, and wouldn’t know what the results meant


CAA account binding is the response to this.


If that were the case, they wouldn't have seen weird connectivity issues/behaviors.


why not? They discovered the issue only because the attacker failed to renew the certificates.


Yeah, but just getting a hold of a valid certificate doesn't automatically mean a MITM. For that the network connections need to be intercepted too.


Using DANE or similar (CAA/TLSA resource records) can help in this case (where the provider reroutes traffic to a specific port), but it does not solve the problem completely. If the DNS records are compromised, the attacker can set up a "correct" TLSA/CAA record so that their fake certificate is trusted. So the problem is only really solved if we have a reliable DNS subsystem where compromising the DNS provider or infrastructure is impossible.
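
For completeness, a sketch of how a DANE-EE ("3 1 1") TLSA record is typically derived from an existing certificate; the file name is a placeholder and 5222 is the usual XMPP client port:

    openssl x509 -in cert.pem -noout -pubkey \
      | openssl pkey -pubin -outform DER \
      | openssl dgst -sha256
    # publish the resulting hex digest as:
    #   _5222._tcp.xmpp.ru. IN TLSA 3 1 1 <hex-digest>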

I think emerDNS could be used in such critical applications: https://emercoin.com/en/documentation/blockchain-services/em...


And this is why, kids, you need to enforce certificate pinning on your critical infrastructure!


Public key pinning has been mostly abandoned on popular websites due to the high risk of prolonged outages should someone make a mistake, and mistakes do happen.

There are browser addons that replicate this behavior, however, and some Jabber clients/servers apparently have their own mechanism for this [1], but one can only hope everyone is using a combination of servers and clients that support the feature.

[1] - https://news.ycombinator.com/item?id=37956911


We are using self-developed application-layer security. It requires specialized client code and is terminated at a gateway that proxies requests to our normal applications.

I really, really hate the idea of this kind of eavesdropping.



Let's Encrypt is a public non-profit, who would definitely assist in finding out how domain control verification (DCV) happened - somehow that obvious line of pursuit was missing from the investigation. The mystery of the MAC 90:de:01:49:76:ae is interesting, though a CIA-custom-made NIC might be a bit superfluous as an idea. Linode (acquired by Akamai) might use something in-house designed and OEM-manufactured, so I wouldn't treat it as strong evidence. Talk to Let's Encrypt - that will likely unveil what happened.

Interestingly, search "90:de:01" in this page: https://www.cyberciti.biz/faq/howto-linux-configuring-defaul...

^^ looks like another victim VM of the alleged mysterious interceptor :-)


What a nefarious move by Hetzner and Linode. How to trust them after this?


first, the article jumped the gun when saying it was done by Hetzner/Linode, as it could just as well have been done by the carrier the datacenter connects to, and from what I know about German espionage/police wiretapping law that would be far more likely

second, pretty much _every_ country has laws which require carriers to help them wiretap in case of an investigation with the appropriate court orders

thirdly, even if it went through Hetzner/Linode instead of the carriers, it wasn't "a move by them" but something they were legally ordered to silently tolerate

lastly, if, as unlikely as it seems, it was not lawful interception by the police or similar, then they (or a carrier) would have been hacked; there is absolutely no chance that server providers would do such an attack of their own volition, especially Hetzner (they also don't have the legal means to get the necessary certificates)


That "lawful" interception allows certificate issuance to be a means of wiretap completely undermines any trust one should have in CAs. It seems that an alternative is greatly needed.


The fact that they were legally required to MITM their customers does not make them more trustworthy (in the sense of unlikely to do it in the future), just the opposite! Of course that applies equally to any other cloud host (modulo jurisdiction games), but that does little to restore my interest in running my software on other people's computers.


It's not about "other people's computers" (there is no evidence of the system itself being backdoored). If you are running on your own hardware in your own house, you still need an ISP, which can do exactly this kind of MITM.


Or, presumably, using other people's internet peering, since the MITM was outside of the XMPP server host.

I think jurisdiction games is all you have, because outside of that there's going to be _someone_ close to you network-wise who will fold when faced with a lawful intercept order.


yes, it applies to pretty much any datacenter and carrier in the world

but that's also why running things at home doesn't help that much, because it applies to carriers, too

and when it comes to seizing data it tends not to make much difference whether they physically seize disks at your home or in the datacenter; actually, if a legal order for a seizure like that is confirmed by a judge, it's normally applied to all the computers such a person has, both at home and in datacenters


If it's a "lawful intercept" then they most likely have no choice.


I'm wondering, if mTLS (aka zero-trust) is used, would that prevent this kind of MITM attack? My understanding is that it should, since the certs are self-signed and used by both ends.


A reminder that e2ee is a must.


so is jabber.ru still secure now? what needs to be done on the user side to make it so? can one reissue/request new server certificate from the server and not from let's encrypt? thanks


If you have used OMEMO or OpenPGP to encrypt your messages, the content of your messages is secure, provided you have verified the public keys of your contacts. If you didn't verify the keys, you should do it now and check that no suspicious keys have been injected.


Given the way certbot works with Let's Encrypt, the only surprise is that this does not occur more often. We have monitoring on several certs used for TLS. We should probably add an alert if an A record changes as well.

I assume the root cause is DNS tricks.


> I assume the root cause is DNS tricks.

Maybe read the article


well, I must admit monitoring certificate issuance with Let's Encrypt is quite boring even if you HAVE alerts set up (I do)

so… not surprised. Still cool. What a time to be alive


So, if I rent a server from those services, it can all potentially be compromised and there is no way to make sure things are secure?..


? did you not read the article?


I briefly read the article; maybe you can explain what your point is?


The point is probably the heading of the final chapter "Could you prevent or monitor this kind of attack?" which enumerates the ways how you could detect this attack.


let's say I opened an account with Hetzner, rented a server, they sent me an email with a password, I SSH-ed into it and the connection got wiretapped. What do you think would help me detect this?


Well, you could check that the SSH server fingerprint matches what the SSH client sees. In principle, if the SSH connection is MitM'd, they could alter the output from the shell so that it looks unchanged. But ultimately they created the keys to begin with, so they could just keep another copy for the purpose of MitM. I doubt they do this nowadays, but they _could_.

edit: I actually thought this story was about virtual machines, but I noticed it was about physical hosts, so those have a bit more hope; but ultimately you're still trusting devices that are not physically under your control, so they cannot be trusted.

But in the case of virtual hosts: there is basically no way to ensure you have a non-wiretapped connection to a virtual machine provided by a hosting service. They don't even need to MitM it; they can just look at the memory of the virtual machine. The only attempt at doing this properly has been AMD's "Secure Encrypted Virtualization" https://www.amd.com/en/developer/sev.html, but I think it was broken by some researchers and I don't know whether it can currently provide a way to do this safely. I suspect even then it might be challenging to install the initial operating system so that it doesn't contain the MitM functionality in itself.

And even then you would be trusting AMD's implementation of the security layer.

Frankly, that the wiretap was discovered this time is just a learning experience for the provider to do better next time. I doubt there are good reasons why it should be possible to detect it if implemented competently.


> Basically there is no way to ensure you have non-wiretapped connection to a virtual computer provided by hosting service.

so, I relied on market forces here; I hoped that a major provider would protect their brand and not do, or allow, such things, but it looks like that is not the case for Hetzner and Linode.


what are you talking about?

the suggestion in these posts is that the German cops made them do it, which is exactly what happens to every provider in every country - they get a subpoena, they may fight it, and if they lose they do whatever it demands. Every hosting (and transit and peering, I assume) provider in every country has an elaborate "lawful intercept" system already set up for secretly copying traffic to the feds / cops / intelligence services.

a hijacking of said LI system is what caused a huge scandal / deaths in Greece ~twenty years ago: https://en.wikipedia.org/wiki/Greek_wiretapping_case_2004–05


> which is exactly what happens to every provider in every country - they get a subpoena

do we know of cases of silent wiretapping in the Google, MS, or Amazon clouds?


Absence of proof isn't proof of absence.

Do we know cases of Google/MS/Amazon fighting a targeted wiretapping subpoena and winning it? Do you think they would not have been served one?


> Absence of proof isn't proof of absence.

absence of proof is proof that your statement about "any company in any country" is just speculation without much ground.

> Do we know cases of Google/MS/Amazon fighting a targeted wiretapping subpoena and winning it?

we know of a somewhat similar case with Apple: https://en.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption_d...


Is it not a fair assumption that companies would try to follow local laws, though? It seems rather perilous for businesses to do otherwise.

It looks like in these cases either the FBI dropped the case or the request was found to be unlawful in the first place. Arguably Apple was indeed following the law even when opposing it within the framework of the law, but what the FBI asked them to do was not the law.

We might be looking at a different story had the case fallen to the side of the government, or perhaps no story at all had that request also contained a gag order, which I understand is quite common in these cases and could be the key reason why people hardly ever hear of them.

I also wouldn't say the case of basically cracking a customer's device is at all similar to providing wiretapping of a service the company operates. And even in this case it looks like the FBI would have been given access to the iCloud storage, but the problem was that the latest data was not backed up there; it needed to be retrieved directly from the device.

There were 42 digital wiretaps (not including combinations) in the year 2023 in the US according to https://www.uscourts.gov/statistics-reports/wiretap-report-2... "Types of Surveillance Used, Arrests, and Convictions for Intercepts Installed". I wasn't able to find how many of them were in datacenters from that data, though.


> There were 42 digital wiretaps

my understanding is that courts are entitled to order wiretaps under current law, but forcing a service provider to break e2e encryption is not covered, which is why Google, Apple, Telegram etc. can fight such cases.

Your link says that in 180 out of 190 cases the government couldn't decrypt the traffic despite the wiretaps.



This is a deal breaker for me. Which is a shame because Hetzner has such great pricing. Oh well.


In fairness to them, if it's a lawful intercept then they have no choice. Non-compliance can shut down a business.

Your only alternatives in that case would be the underground-style hosters sometimes marketed as bullet-proof hosting, though they are just shady resellers with bold claims. They do not typically last long. There are a couple I know of in Amsterdam that are right down the street from The Hague that have been around for a while, so I suspect they are just honeypots.


It's likely legally enforced, and any other provider would be subject to the same.


Fair enough


> We tend to assume this is lawful interception Hetzner and Linode were forced to setup based on German police request.

This seems to be somewhat jumping the gun, I think.

Given the certs and the target, assuming it's a lawful interception seems reasonable.

But there is nothing there which requires Hetzner or Linode complying with or even knowing about this.

Given the nature of the attack, the interception could be done at the carrier level, and carriers being forced to comply with lawful wiretapping is pretty much standard anywhere in the world; many laws are built around that approach, much less so around approaches involving datacenters.


according to the article, the added hop visible in the traceroute to the Linode machine is after two 10/8 hops, so it's likely an internal machine. More importantly, the affected VM has a strange gateway MAC address, which can only be controlled by the local network administrator, not an intermediate carrier.


Getting a valid cert for a domain you don't own is getting close to trivial. All of the mitigation strategies can be defeated in a number of ways, and have been for years. I've commented on incidents like these for years on HN. I'm not some internet big wig, and any blog I write isn't going to trend, so this never gets traction. But anyone who understands the tech can find a half dozen ways to get a cert. If you really need to trust your connection, don't rely on internet PKI / public certs. It was secure enough for general purposes in the aughts; it's not anymore.


What are these trivial steps to get a valid certificate for news.ycombinator.com?


This would not be trivial and would require successful phishing. I've sent an email requesting to make this even harder to do. Leaving out those details on purpose.


this is left as an exercise for the reader


Bold claim lacking substantiation. Care to share? Maybe it will get traction this time.


It's really not that bold. Ask anyone who works for a CA, or an ISP, or a registrar, or a mail hoster, or a browser manufacturer, or DNS provider... If you just read up on how PKI works, on how certificates get issued, how the network protocols work, etc, it's all just right there. The holes are very well known and understood and not fixed because they're in the design. Every so often somebody comes up with another mitigation, which of course you have to opt into, and still doesn't solve all the holes.

It's like SS7, or SWIFT. It's not bold to claim that all public phone networks and bank transfers are insecure, it's just a fact. People who know what they are know they're vulnerable.

As an aside to the design issues, social engineering any of the thousands of organizations involved with issuing certs (not just CAs) is trivial. A bored teenager could get certs issued for most domains by using nothing more than a telephone or e-mail. But that's cheating, so I don't count it.


Fair enough and this HN submission[0] goes into some details.

[0]https://news.ycombinator.com/item?id=37961166


If you are the hoster and can reroute requests, you can get all kinds of valid SSL certs for any domains that you host.

Just grab the IP packets when the CA comes to validate that you own the domain. Perhaps EV could solve that to some extent, but it is never mandated.

Even if you tried to put stuff into WHOIS to mitigate this, your hoster can serve any bullshit on that channel too.

It does look very bad, and the SSH approach to keys is just infinitely better. If Jabber had used SSH-style keys instead, users would have been alerted immediately.

Come to think of it, your hoster can also find ways to steal keys directly from the hardware, though.


Step 1: Become a hoster popular enough that interesting targets are being hosted there


If you're a government you may just bully existing ones.



