What's the end game here? I agree with the dissent. Why not make it 30 seconds?
Once we cross the threshold of "I absolutely have to automate everything or it's not viable to use TLS anymore", why do we care about providing anything beyond ~48 hours? I am willing to bet money this threshold will never be crossed.
This feels like much more of an ideological mission than a practical one, unless I've missed some monetary/power advantage to forcing everyone to play musical chairs with their entire infra once a month...
I'm on the team at Let's Encrypt that runs our CA, and would say I've spent a lot of time thinking about the tradeoffs here.
Let's Encrypt has always self-imposed a 90 day limit, though of course with this ballot passing we will have to reduce that to 47 days or less in the future.
Shorter lifetimes have several advantages:
1. Reduced pressure on the revocation system. For example, if a domain changes hands, then any previous certificates spend less time in the revoked state. That makes CRLs smaller, a win for everyone involved.
2. Reduced risk for certificates which aren't revoked but should have been, perhaps because a domain holder didn't know that a previous holder of that domain had it, or an attack of any sort that led to a certificate being issued that wasn't desired.
3. For fully short-lived certs (under 7 days), many user-agents don't do revocation checks at all, because that's a similar timeline to our existing revocation technology taking effect. This is a performance win for websites/user-agents. While we advocate for full certificate automation, I recognize there are cases where that's not so easy, and doing a monthly renewal may be much more tractable.
Going to shorter than a few days is a reliability and scale risk. One of the biggest issues with scale today is that Certificate Transparency logs, while providing great visibility into what certs exist (see points 1 and 2), will have to scale up significantly as lifetimes are cut.
Why is this happening now, though? I can't speak for everyone, and this is only my own opinion on what I'm observing, but: one big industry problem over the last year or two is that CAs have found themselves in situations where they need to revoke certificates because of issues with those certificates, but customers aren't able to respond on an appropriate timeline. So the big motivation for a lot of the parties here is to get these timelines down and really drive a push towards automation.
When I first set up Let's Encrypt I thought I'd manually update the cert once per year. The 90 day limit was a surprise. This blog post helped me understand (it repeats many of your points): https://letsencrypt.org/2015/11/09/why-90-days/
It's a decision by Certificate Authorities, the ones that sell TLS certificate services, and web browser vendors. One benefits from increased demand on their product, while the other benefits by increasing the overhead on the management of their software, which increases the minimum threshold to be competitive.
There are security benefits, yes. But as someone that works in infrastructure management, including on 25 or 30 year old systems in some cases, it's very difficult to not find this frustrating. I need tools I will have in 10 years to still be able to manage systems that were implemented 15 years ago. That's reality.
Doubtless people here have connected to their router's web interface using the gateway IP address and been annoyed that the web browser complains so much about either insecure HTTP or an unverified TLS certificate. The Internet is an important part of computer security, but it's not the only part of computer security.
I wish technical groups would invest some time in real solutions for long-term, limited access systems which operate for decades at a time without 24/7 access to the Internet. Part of the reason infrastructure feels like running Java v1.3 on Windows 98 is because it's so widely ignored.
It is continuously frustrating to me to see the arrogant dismissiveness which people in charge of such technical groups display towards the real world usage of their systems. It's some classic ivory tower "we know better than you" stuff, and it needs to stop. In the real world, things are messy and don't conform to the tidy ideas that the Chrome team at Google has. But there's nothing forcing them to wake up and face reality, so they keep making things harder and harder for the rest of us in their pursuit of dogmatic goals.
It astounds me that there's no non-invasive local solution for going to my router's (or any other appliance's) web page without my browser throwing warnings and calling it evil. Truly a fuck-up (purposeful or not) by all involved in creating the standards. We need local TLS without the hoops.
Simplest possible, least invasive, most secure thing I can think of: QR code on the router with the CA cert of the router. Open cert manager app on laptop/phone, scan QR code, import CA cert. Comms are now secure (assuming nobody replaced the sticker).
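Roughly what the label-printing side could look like, as a sketch: this assumes the third-party `qrcode` Python package and an existing PEM-encoded root cert small enough to fit in a single code (a typical EC root is well under the ~3 KB QR limit).

```python
# Sketch: encode the router's root CA certificate into a QR code that
# could be printed on the device label. Assumes the third-party
# "qrcode" package (pip install qrcode[pil]) and an existing PEM file.
import qrcode

with open("router-root-ca.pem", "r") as f:
    ca_pem = f.read()  # a ~800-byte EC root cert fits comfortably in one QR code

img = qrcode.make(ca_pem)          # version/error-correction chosen automatically
img.save("router-label-qr.png")    # print this on the sticker

# On the client side, a cert-manager app would scan the code, parse the
# PEM, and add it to a trust store scoped to this one device.
```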
The crazy thing? There are already two WiFi QR code standards, but neither includes the CA cert. There's a "Wi-Fi Easy Connect" standard that is intended to secure the network for an enterprise, and there's a random Java QR code library that made its own standard for just encoding an access point and WPA shared key (and Android and iOS both adopted it, so now it's a de-facto standard).
End-user security wasn't a consideration for either of them. With the former they only cared about protecting the enterprise network, and with the latter they just wanted to make it easier to get onto a non-Enterprise network. The user still has to fend for themselves once they're on the network.
This is a terrible solution. Now you require an Internet connection and a (non-abandoned) third party service to configure a LAN device. Not to mention countless industrial devices where operators would typically have no chance to see a QR code.
The solution I just mentioned specifically avoids an internet connection or third parties. It's a self-signed cert you add to your computer's CA registry. 100% offline and independent of anything but your own computer and the router. The QR code doesn't require an internet connection. And the first standard I mentioned was designed for industrial devices.
Not only would that set a questionable precedent if users learn to casually add new trust roots, it would also need support for new certificate extensions to limit validity to that device only. It's far from obvious that would be a net gain for Internet security in general.
It might be easier to extend the URL format with support for certificate fingerprints. It would only require support in web browsers, which are updated much faster than operating systems. It could also be made in a backwards compatible way, for example by extending the username syntax. That way old browsers would continue to show the warning and new browsers would accept the self signed URL format in a secure way.
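A minimal sketch of the verification side, assuming a made-up `https://sha256-<hex>@host/` convention for the userinfo part; the URL syntax is hypothetical, the check itself uses only the standard library.

```python
# Sketch: verify a self-signed cert against a fingerprint carried in the
# userinfo part of the URL, e.g. https://sha256-ab12...@192.168.1.1/
# The URL convention is hypothetical; the check itself is standard.
import hashlib
import socket
import ssl
from urllib.parse import urlsplit

def fingerprint_matches(url: str) -> bool:
    parts = urlsplit(url)
    expected = parts.username.removeprefix("sha256-").lower()

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False          # identity comes from the pinned hash,
    ctx.verify_mode = ssl.CERT_NONE     # not from the web PKI

    with socket.create_connection((parts.hostname, parts.port or 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=parts.hostname) as tls:
            der = tls.getpeercert(binary_form=True)

    return hashlib.sha256(der).hexdigest() == expected

# Old browsers would reject the unknown userinfo; new ones would run a check like this.
```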
Your router should use ACME with a your-slug.network.home domain name (a communal one would be nice, but more realistically some vendor-specific domain suffix that you could CNAME) and then you should access it via that, locally. Your router should ideally run split-brain DNS for your network. If you want, you can check a box and make everything available globally via DNS-SD.
All my personal and professional feelings aside (they are mixed) it would be fascinating to consider a subnet based TLS scheme. Usually I have to bang on doors to manage certs at the load balancer level anyway.
I’ve actually put a decent amount of thought into this. I envision a Raspberry Pi-sized device with a simple front-panel UI. This serves as your home CA. It bootstraps itself with a generated key and root cert and presents on the network using a self-issued cert signed by the bootstrapped CA. It also shows the root key fingerprint on the front panel. On your computer, you go to its web UI and accept the risk, but you also verify the fingerprint of the cert issuer against what’s displayed on the front panel. Once you do that, you can download and install your newly trusted root. Do this on all your machines that want to trust the CA. There’s your root of trust.
Now for issuing certs to devices like your router, there’s a registration process where the device generates a key and requests a cert from the CA, presenting its public key. It requests a cert with a local name like “router.local”. No cert is issued but the CA displays a message on its front panel asking if you want to associate router.local with the displayed pubkey fingerprint. Once you confirm, the device can obtain and auto renew the cert indefinitely using that same public key.
Now on your computer, you can hit local https endpoints by name and get TLS with no warnings. In an ideal world you’d get devices to adopt a little friendly UX for choosing their network name and showing the pubkey to the user, as well as discovering the CA (maybe integrate with dhcp), but to start off you’d definitely have to do some weird hacks.
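To make the flow concrete, here is roughly what the CA box would do internally, sketched with the Python `cryptography` package; the names, lifetimes, and key types are placeholders, not a spec.

```python
# Sketch of the home-CA flow described above: bootstrap a root, then
# issue a short-lived leaf for a device that presented its public key.
# Uses the "cryptography" package; all names/lifetimes are placeholders.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

now = datetime.datetime.now(datetime.timezone.utc)

# 1. Bootstrap: generate the root key and a self-signed root cert.
root_key = ec.generate_private_key(ec.SECP256R1())
root_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Home Root CA")])
root_cert = (
    x509.CertificateBuilder()
    .subject_name(root_name)
    .issuer_name(root_name)
    .public_key(root_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    .sign(root_key, hashes.SHA256())
)
# Shown on the front panel so the user can verify before trusting:
print("root fingerprint:", root_cert.fingerprint(hashes.SHA256()).hex())

# 2. Enrolment: the device presents its public key; after the user confirms
#    on the front panel, issue a leaf cert for its local name.
device_key = ec.generate_private_key(ec.SECP256R1())   # stand-in for the router's own key
leaf_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "router.local")]))
    .issuer_name(root_name)
    .public_key(device_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=30))   # auto-renewed indefinitely
    .add_extension(x509.SubjectAlternativeName([x509.DNSName("router.local")]), critical=False)
    .sign(root_key, hashes.SHA256())
)
```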
What can I say, I am a pki nerd and I think the state of local networking is significantly harmed by consumer devices needing to speak http (due to modern browsers making it very difficult to use). This is less about increasing security and more about increasing usability without also destroying security by coaching people to bypass cert checks. And as home networks inevitably become more and more crowded with devices, I think it will be beneficial to be able to strongly identify those devices from the network side without resorting to keeping some kind of inventory database, which nobody is going to do.
It also helps that I know exactly how easy it is to build this type of infrastructure because I have built it professionally twice.
Why should your browser trust the router's self-signed certificate? After you verify that it is the correct cert you can configure Firefox or your OS to trust it.
Because local routers by definition control the (proposed?) .internal TLD, while nobody controls the .local mDNS/Zeroconf one, so the router or any local network device should arguably be trusted at the TLS level automatically.
Training users to click the scary “trust this self-signed certificate once/always” button won’t end well.
Honestly, I'd just like web browsers to not complain when you're connecting to an IP on the same subnet by entering https://10.0.0.1/ or similar.
Yes, it's possible that the system is compromised and it's redirecting all traffic to a local proxy and that it's also malicious.
It's still absurd to think that the web browser needs to make the user jump through the same hoops because of that exceptional case, while having the same user experience as if you just connected to https://bankofamerica.com/ and the TLS cert isn't trusted. The program should be smarter than that, even if it's a "local network only" mode.
I wonder what this would look like: for things like routers, you could display a private root in something like a QR code in the documentation and then have some kind of protocol for only trusting that root when connecting to the router and have the router continuously rotate the keys it presents.
Yeah, what they'll do is put a QR code on the bottom, and it'll direct you to the app store where they want you to pay them $5 so they can permanently connect to your router and gather data from it. Oh, and they'll let you set up your WiFi password, I guess.
I wonder if a separate CA would be useful for non-public-internet TLS certificates. Imagine a certificate that won't expire for 25 years issued by it.
Such a certificate should not be trusted for domain verification purposes, even though it should match the domain. Instead it should be trusted for encryption / stream integrity purposes. It should be accepted on IPs outside of publicly routable space, like 192.168.0.0/16, or link-local IPv6 addresses. It should be possible to issue it for TLDs like .local. It should result in the usual invalid certificate warning if served off a public internet address.
In other words, it should be handled a bit like a self-signed certificate, only without the hassle of adding your handcrafted CA to every browser / OS.
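The address-scoping half of that is easy to state precisely. A sketch of the rule such a browser could apply, using only the standard library; the certificate policy itself is the hypothetical part.

```python
# Sketch: the "only honour this cert class on non-public addresses" rule.
# The certificate policy is hypothetical; the address test is stdlib.
import ipaddress

def local_only_cert_acceptable(peer_ip: str) -> bool:
    ip = ipaddress.ip_address(peer_ip)
    # RFC 1918, ULA, link-local and loopback all count as non-public here.
    return ip.is_private or ip.is_link_local or ip.is_loopback

print(local_only_cert_acceptable("192.168.1.1"))    # True  -> accept without warning
print(local_only_cert_acceptable("93.184.216.34"))  # False -> usual invalid-cert warning
```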
Of course it would only make sense if a major browser would trust this special CA in its browser by default. That is, Google is in a position to introduce it. I wonder if they may have any incentive though. (To say nothing of Apple.)
But what would be the value of such a certificate over a self-signed one? For example, if the ACME Router Corp uses this special CA to issue a certificate for acmerouter.local and then preloads it on all of its routers, it will sooner or later be extracted by someone.
So in a way, a certificate the device generates and self-signs would actually be better, since at least the private key stays on the device and isn’t shared.
The value: you open such an URL with a bog standard, just-installed browser, and the browser does not complain about the certificate being suspicious.
The private key of course stays within the device, or anywhere the certificate is generated. The idea is that the CA from which the certificate is derived is already trusted by the browser, in a special way.
Compromise one device, extract the private key, have a "trusted for a very long time" cert that identifies like devices of that type, sneak it into a target network for man in the middle shenanigans.
A domain validated secure key exchange would indeed be a massive step up in security, compared to the mess that is the web PKI. But it wouldn't help with the issue at hand here: home router bootstrap. It's hard to give these devices a valid domain name out of the box. Most obvious ways have problems either with security or user friendliness.
Frankly, unless 25 and 30 year old systems are being continually updated to adhere to newer TLS standards then they are not getting many benefits from TLS.
Practically, the solution is virtual machines with the compatible software you'll need to manage those older devices 10 years in the future, or run a secure proxy for them.
Internet routers are definitely one of the worst offenders because originating a root of trust between disparate devices is actually a hard problem, especially over a public channel like wifi. Generally, I'd say the correct answer to this is that wifi router manufacturers need to maintain secure infrastructure for enrolling their devices. If manufacturers can't bother to maintain this kind of infrastructure then they almost certainly won't be providing security updates in firmware either, so they're a poor choice for an Internet router.
Or, equivalently, it's being pushed because customers of "big players", of which there are a great many, are exposed to security risk by the status quo that the change mitigates.
It might in theory but I suspect it's going to make things very very unreliable for quite a while before it (hopefully) gets better. I think probably already a double digit fraction of our infrastructure outages are due to expired certificates.
And because of that it may well tip a whole class of uses back to completely insecure connections because TLS is just "too hard". So I am not sure if it will achieve the "more secure" bit either.
It makes systems more reliable and secure for system runners that can leverage automation for whatever reason. For the same reason, it adds a lot of barriers to things like embedded devices, learners, etc. who might not be able to automate TLS checks.
Putting a manually generated cert on an embedded device is inherently insecure, unless you have complete physical control over the device.
And as mentioned in other comments, the revocation system doesn't really work, and reducing the validity time of certs reduces the risks there.
Unfortunately, there isn't really a good solution for many embedded and local network cases. I think ideally there would be an easy way to add a CA that is trusted for a specific domain, or local ip address, then the device can generate its own certs from a local ca. And/or add trust for a self-signed cert with a longer lifetime.
This is a bad definition of security, I think. But you could come up with variations here that would be good enough for most home network use cases. IMO, being able to control the certificate on the device is a crucial consumer right
There are many embedded devices for which TLS is simply not feasible. For remote sensing, when you are relying on battery power and need to maximise device battery life, then the power budget is critical. Telemetry is the biggest drain on the power budget, so anything that means spending more time with the RF system powered up should be avoided. TLS falls into this category.
Unless I misunderstood, GP mentions that the problem stems from WebPKI's central role in server identity management. Think of these cert lifetimes as forcefully being signed out after 47 days of being signed in.
> easier for a few big players in industry
Not necessarily. OP mentions, more certs would mean bigger CT logs. More frequent renewals mean more load. Like with everything else, this seems like a trade-off. Unfortunately, for you & me, as customers of cert authorities, 47 days is now the agreed cut-off (not 42).
I think a very short lived cert (like 7 days) could be a problem on renewal errors/failures that don't self correct but need manual intervention.
What will Let's Encrypt be like with 7-day certs? Will it renew them every day (6 days of reaction time), or every 3 days (4 days of reaction time)? Not every org is staffed 24/7, some people go on holidays, some public holidays extend to long weekends, etc. :) I would argue that it would be a good idea to give people a full week to react to renewal problems. That seems impossible for short lived certs.
There's no such thing as an ideal world, just the one we have.
Let's Encrypt was founded with a goal of rapidly (within a few years) helping get the web to as close to 100% encrypted as we could. And we've succeeded.
I don't think we could have achieved that goal any way other than being a CA.
Sorry was not trying to be snarky, was interested in your answer as to what a better system would look like. The current one seems pretty broken but hard to fix.
In an ideal world where we rebuilt the whole stack from scratch, the DNS system would securely distribute key material alongside IP addresses and CAs wouldn't be needed. Most modern DNS alternatives (Handshake, Namecoin, etc) do exactly this, but it's very unlikely any of them will be usurping DNS anytime soon, and DNS's attempts to implement similar features have been thus far unsuccessful.
People who idealize this kind of solution should remember that by overloading core Internet infrastructure (which is what name resolution is) with a PKI, they're dooming any realistic mechanism that could revoke trust in the infrastructure operators. You can't "distrust" .com. But the browsers could distrust Verisign, because Verisign had competitors, and customers could switch transparently. Browser root programs also used this leverage to establish transparency logs (though: some hypothetical blockchain name thingy could give you that automatically, I guess; forget about it with the real DNS though).
.com can issue arbitrary certificates right now, they control what DNS info is given to the CAs. So I don't quite see the change apart from people not talking about that threat vector atm.
This is a bad faith argument. Whatever measures Google takes to prevent this (certificate logs and key pinning) could just as well be utilized if registrars delegated cryptographic trust as they delegate domains.
It is also true that these contemporary prevention methods only help the largest companies which can afford to do things like distributing key material with end user software. It does not help you and me (unless you have outsourced your security to Google already, in which case there is the obvious second hand benefit). Registrars could absolutely help a much wider use of these preventions.
There is no technical reason we don't have this, but this is one area where the interest of largest companies with huge influence over standards and security companies with important agencies as customers all align, so the status quo is very slow to change. If you squint you can see traces of this discussion all the way from IPng to TLS extensions, but right now there is no momentum for change.
It's easy to tell stories about shadowy corporate actors retarding security on the Internet, but the truth is just that a lot of the ideas people have about doing security at global Internet scale just don't pan out. You can look across this thread to see all the "common sense" stuff people think should replace the WebPKI, most of which we know won't work.
Unfortunately, when you're working at global scale, you generally need to be well-capitalized, so it's big companies that get all the experience with what does and doesn't work. And then it's opinionated message board nerds like us that provide the narratives.
This is a great point. For all of the "technically correct" arguments going on here, this one is the most practical counterpoint. Yes, in theory, Verisign (now Symantec) could issue some insane wildcard Google.com cert and send the public-private key pair to you personally. In practice, this would never happen, because it is a corporation with rules and security policies that forbid it.
Thinking deeper about it: Verisign (now Symantec) must have some insanely good security, because every black hat nation state actor would love to break into their cert issuance servers and export a bunch of legit signed certs to run man-in-the-middle attacks against major email providers. (I'm pretty sure this already happened in the Netherlands.)
This isn't about the cert issuance servers, but DNS servers. If you compromise DNS then just about any CA in the world will happily issue you a cert for the compromised domain, and nobody would even be able to blame them for that because they'd just be following the DNS validation process prescribed in the BRs.
> every black hat nation state actor would love to break into on their cert issuance servers and export a bunch of legit signed certs to run man-in-the-middle attacks
I might be misremembering but I thought one insight from the Snowden documents was that a certain three-letter agency had already accomplished that?
This was DigiNotar. The breach generated around 50 certificates, including certificates for Google, Microsoft, MI6, the CIA, TOR, Mossad, Skype, Twitter, Facebook, Thawte, VeriSign, and Comodo.
That only works for high profile domains, a CA can just issue a cert, log it to CT and if asked claim they got some DNS response from the authoritative server.
Then it's a he said she said problem.
Or is DNSSEC required for DV issuance? If it is, then we already rely on a trustworthy TLD.
I'm not saying there isn't some benefit in the implicit key mgmt oversight of CAs, but as an alternative to DV certs, just putting a pubkey in dnssec seems like a low effort win.
It's been a long time since I've done much of this though, so take my gut feeling with a grain of salt.
This isn't a contest of two vibes, one pro-DNSSEC and one anti-. You can just download the Tranco list and run a for loop over it checking for DS records. DNSSEC adoption in North America has actually fallen sharply since 2023 (though it's starting to tick back up again), and every chart you'll find shows a pronounced plateauing effect, across all TLDs --- damning because the point at which the plateau flattens is such a low percentage.
Right. 'You can't "distrust" .com.' is probably not true in that situation. (If it were actually true, then "what happens" would be absolutely nothing.) I think you're undermining your own point.
A certificate authority is an organisation that pays good money to make sure that their internet connection is not being subjected to MITMs. They put vastly more resources into that than you can.
A certificate is evidence that the server you're connected to has a secret that was also possessed by the server that the certificate authority connected to. This means that whether or not you're subject to MITMs, at least you don't seem to be getting MITMed right now.
The importance of certificates is quite clear if you were around on the web in the last days before universal HTTPS became a thing. You would connect to the internet, and you would somehow notice that the ISP you're connected to had modified the website you're accessing.
> pays good money to make sure that their internet connection is not being subjected to MITMs
Is that actually true? I mean, obviously CAs aren't validating DNS challenges over coffee shop Wi-Fi so it's probably less likely to be MITMd than your laptop, but I don't think the BRs require any special precautions to assure that the CA's ISP isn't being MITMd, do they?
Nobody has really had to pay for certificates for quite a number of years.
What certificates get you, as both a website owner and user, is security against man-in-the-middle attacks, which would otherwise be quite trivial, and which would completely defeat the purpose of using encryption.
Much as I like the idea of DANE, it solves nothing by itself: you need to protect the zone from tampering by signing it. Right now, the dominant way to do that is DNSSEC, though DNSCurve is a possible alternative, even if it doesn't solve the exact same problem. For DANE to be useful, you'd first need to get that set up on the domain in question, and the effort to get that working is far, far from trivial; even then, the process is so error-prone and brittle that you can easily end up making a whole zone unusable.
Further, all you've done is replace one authority (the CA authority) with another one (the zone authority, and thus your domain registrar and the domain registry).
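For reference, the DANE binding itself is just a hash of the key published in DNS. A sketch of deriving the common "3 1 1" TLSA record (DANE-EE, SPKI, SHA-256) from an existing cert with the Python `cryptography` package; DNSSEC-signing the zone, the hard part, is out of scope here.

```python
# Sketch: derive a DANE TLSA "3 1 1" record (DANE-EE, SPKI, SHA-256)
# from an existing certificate. The zone still has to be DNSSEC-signed
# for any of this to be trustworthy, which is the hard part.
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

with open("server-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
digest = hashlib.sha256(spki).hexdigest()

print(f"_443._tcp.example.com. IN TLSA 3 1 1 {digest}")
```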
Certificate pinning is probably the most widely known way to get a certificate out there without relying on live PKI. However, certificate pinning just shifts the burden of trust from runtime to install time, and puts an expiration date on every build of the program. It also doesn't work for any software that is meant to access more than a small handful of pre-determined sites.
Web-of-trust is a theoretical possibility, and is used for PGP-signed e-mail, but it's also a total mess that doesn't scale. Heck, the best way to check the PGP keys for a lot of signed mail is to go to an HTTPS website and thus rely on the CAs.
DNSSEC could be the basis for a CA-free world, but it hasn't achieved wide use. Also, if used in this way, it would just shift the burden of trust from CAs to DNS operators, and I'm not sure people really like those much better.
How relevant is that since we don't live in such a world? Unless you have a way to get to such a world, of course, but even then CAs would need to keep existing until you've managed to bring the ideal world about. It would be a mistake to abolish them first and only then start on idealizing the world.
What alternatives come to mind when asking that question? Not being in the PKI world directly, web of trust is what comes to mind, but I'm curious what your question hints at.
I honestly don’t know enough about it to have an opinion, have vague thoughts that dns is the weak point anyway for identity so can’t certs just live there instead but I’m sure there are reasons (historical and practical).
I love the push that LE puts on industry to get better.
I work in a very large organisation and I just don't see them being able to go to automated TLS certificates for their self-managed subdomains, inspection certificates, or anything else for that matter. It will be interesting to see how short-lived certs are adopted in the future.
Could you explain why Let's Encrypt is dropping OCSP stapling support entirely, instead of keeping it only for must-staple certificates and letting those of us who want must-staple deal with the headaches? I believe that resolving the privacy concerns raised about OCSP did not require eliminating must-staple.
Must-staple has almost zero adoption. The engineering cost of supporting it for a feature that is nearly unused just isn’t there.
We did consider it.
As CAs prepare for post-quantum in the next few years, it will become even less practical as there is going to be pressure to cut down the number of signatures in a handshake.
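A rough back-of-the-envelope on why, with assumed sizes (ECDSA P-256 roughly 72 bytes per DER signature, ML-DSA-44 2420 bytes per FIPS 204) and a typical count of five signatures per handshake (leaf, intermediate, two SCTs, one OCSP response); the exact counts vary by deployment.

```python
# Rough arithmetic on handshake signature bytes, classical vs post-quantum.
# Sizes are approximate; the per-handshake signature count (leaf, intermediate,
# two SCTs, one OCSP response) is a typical configuration, not a rule.
ECDSA_P256_SIG = 72        # bytes, DER-encoded, approximate
ML_DSA_44_SIG = 2420       # bytes, per FIPS 204

signatures_per_handshake = 5   # leaf + intermediate + 2 SCTs + OCSP staple

print("classical   :", signatures_per_handshake * ECDSA_P256_SIG, "bytes")  # ~360 B
print("post-quantum:", signatures_per_handshake * ML_DSA_44_SIG, "bytes")   # ~12 KB
```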
> Must-staple has almost zero adoption. The engineering cost of supporting it for a feature that is nearly unused just isn’t there.
> We did consider it.
That is unfortunate. I just deployed a web server the other day and was thrilled to deploy must-staple from Let's Encrypt, only to read that it was going away.
> As CAs prepare for post-quantum in the next few years, it will become even less practical as there is going to be pressure to cut down the number of signatures in a handshake.
Please delay the adoption of PQAs for certificate signatures at Let's Encrypt as long as possible. I understand the concern that a hypothetical quantum machine with tens of millions of qubits capable of running Shor's algorithm to break RSA and ECC keys might be constructed. However, "post-quantum" algorithms are inferior to classical cryptographic algorithms in just about every metric as long as such machines do not exist. That is why they were not even considered when the existing RSA and ECDSA algorithms were selected before Shor's algorithm was a concern. There is also a real risk that they contain undiscovered catastrophic flaws that will be found only after adoption, since we do not understand their hardness assumptions as well as we understand integer factorization and the discrete logarithm problem. This has already happened with SIKE and it is possible that similarly catastrophic flaws will eventually be found in others.
Perfect forward secrecy and short certificate expiry allow CAs to delay the adoption of PQAs for key signing until the creation of a quantum computer capable of running Shor's algorithm on ECC/RSA key sizes is much closer. As long as certificates expire before such a machine exists, PFS ensures no risk to users, assuming key agreement algorithms are secured. Hybrid schemes are already being adopted to do that. There is no quantum Moore's law that makes it a foregone conclusion that a quantum machine that can use Shor's algorithm to break modern ECC and RSA will be created. If such a machine is never made (due to the sheer difficulty of constructing one), early adoption in key signature algorithms would make everyone suffer from the use of objectively inferior algorithms for no actual benefit.
If the size of key signatures with post quantum key signing had been a motivation for the decision to drop support for OCSP must-staple and my suggestion that adoption of post quantum key signing be delayed as long as possible is in any way persuasive, perhaps that could be revisited?
Finally, thank you for everything you guys do at Let's Encrypt. It is much appreciated.
All of that in case the previous owner of the domain would attempt a mitm attack against a client of the new owner, which is such a remote scenario. In fact has it happened even once?
Based on that Wikipedia article, no. This is just more of the same friendless PKI geeks making the world unnecessarily more complicated. The only other people that benefit are the certificate management companies that sell more software to manage these insane changes.
Realistically, how often are domains traded and suddenly put in legitimate use (that isn't some domain parking scam) that (1) and (2) are actual arguments? Lol
Domain trading (regardless if the previous use was legitimate or not) is only one example, not the sole driving argument for why the revocation system is in place or isn't perfectly handled.
The whole "customer is king" doesn't apply to something as critical as PKI infrastructure, because it would compromise the safety of the entire internet. Any CA not properly applying the rules will be removed from the trust stores, so there can be no exceptions for companies who believe they are too important to adhere to the contract they signed.
How would a CA not being able to contact some tiny customer (surely the big ones all can and do respond in less than 90 days?) compromise the safety of the entire internet?
And if the safety of the entire internet is at risk, why is 47 days an acceptable duration for this extreme risk, but 90 days is not?
> surely the big ones all can and do respond in less than 90 days?
LOL. Old-fashioned enterprises are the worst at "oh no, we can't do that, we need months of warning to change anything!", while also handling critical data. A major event in the CA space last year was a health-care company getting a court order forcing a CA not to revoke a cert that, under the rules for CAs, it was required to revoke. (In the end they got a few days' extension, everyone grumbled, and the CA was told to please write its customer contracts more clearly, but the idea is out there, and nobody likes CAs doing things they are not supposed to, even under external force.)
One way to nip that in the bud is making sure that even if you get your court order preventing the CA from doing the right thing, your certificate will expire soon anyway, so "we are too important to have working IT processes" doesn't work anymore.
I have a feeling it'll eventually get down even lower. In 2010 you could pretty easily get a cert for 10 years. Then 5 years. Then 3 years. Then 2 years. Then 1 year. Then 3 months. Now less than 2 months.
The "end game" is mentioned explicitly in the article:
> Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
Shorter-lived certificates make OCSP and other revocation mechanisms less of a load-bearing component within the Web PKI. This is a good thing, since neither CAs nor browsers have managed to make timely revocation methods scale well.
(I don't think there's any monetary or power advantage to doing this. The reason to do it is because shorter lifetimes make it harder for server operators to normalize deviant certificate operation practices. The reasoning there is the same as with backups or any other period operational task: critical processes must be continually tested and evaluated for correctness.)
I don't see why it would; the same basic requirements around CT apply regardless of certificate longevity. Any CA caught enabling this kind of MITM would be subject to expedient removal from browser root programs, but with the added benefit that their malfeasance would be self-healing over a much shorter period than was traditionally allowed.
I think their point is that a hypothetical connection-specific cert would make it difficult/impossible to compare your cert with anybody else to be able to find out that it happened. A CA could be backdoored but only “tapped” for some high-value target to diminish the chance of burning the access.
> I think their point is that a hypothetical connection-specific cert would make it difficult/impossible to compare your cert with anybody else to be able to find out that it happened.
This is already the case; CT doesn't rely on your specific served cert being comparable with others, but all certs for a domain being monitorable and auditable.
(This does, however, point to a current problem: more companies should be monitoring CT than are currently.)
the power dynamic here is that the CAs have a "too big to fail" inertia, where they can do bad things without consequence because revoking their trust causes too much inconvenience for too many people. shortening expiry timeframes to the point where all their certificates are always going to expire soon anyways reduces the harm that any one CA can do by offering bad certs.
it might be inconvenient for you to switch your systems to accommodate shorter expiries, but it's better to confront that inconvenience up front than for it to be in response to a security incident.
> Once we cross the threshold of "I absolutely have to automate everything or it's not viable to use TLS anymore", why do we care about providing anything beyond ~48 hours?
Well you see, they also want to be able to break your automation.
For example, maybe your automation generates a 1024-bit RSA key, and they've decided that 2048-bit keys are the new minimum. That means your automation stops working until you fix it.
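If you would rather have that automation fail loudly than silently, a small pre-flight check is cheap. A sketch with the Python `cryptography` package; the 2048-bit floor reflects the current baseline requirement for RSA, and the file name is just a placeholder.

```python
# Sketch: a pre-flight check so renewal automation fails with a clear error
# when the certificate key no longer meets CA policy (e.g. RSA >= 2048 bits).
from cryptography.hazmat.primitives.serialization import load_pem_private_key
from cryptography.hazmat.primitives.asymmetric import rsa

MIN_RSA_BITS = 2048  # current Baseline Requirements floor for RSA keys

with open("cert-key.pem", "rb") as f:
    key = load_pem_private_key(f.read(), password=None)

if isinstance(key, rsa.RSAPrivateKey) and key.key_size < MIN_RSA_BITS:
    raise SystemExit(f"RSA key is {key.key_size} bits; regenerate at >= {MIN_RSA_BITS}")
```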
Doing this with 2-day expiry would be unpopular as the weekend is 2 days long and a lot of people in tech only work 5 days a week.
If the service becomes unavailable for 48 straight hours then every certificate expires and nothing works. You probably want a little more room for catastrophic infrastructure problems.
> 48 hours. I am willing to bet money this threshold will never be crossed.
That's because it won't be crossed and nobody serious thinks it should.
Short certs are better, but there are trade-offs. For example, if cert infra goes down over the weekend, it would really suck. TBH, from a security perspective, something in the range of a couple of minutes would be ideal, but that runs up against practical constraints:
- cert transparency logs and other logging would need to be substantially scaled up
- for the sake of everyone on-call, you really don't want anything shorter than a reasonable amount of time for a human to respond
- this would cause issues with some HTTP3 performance enhancing features
- thousands of servers hitting a CA creates load that outweighs the benefit of ultra short certs (which have diminishing returns once you're under a few days, anyways)
> This feels like much more of an ideological mission than a practical one
There are numerous practical reasons, as mentioned here by many other people.
Resisting this without good cause, like you have, is more ideological at this point.