TPM provides zero practical security (gist.github.com)
76 points by osy on Sept 8, 2023 | 108 comments



Unfortunately this sounds like a typical pro-Linux rant with the usual scare words such as "Microsoft", "UEFI", "secure boot", etc. To be clear, I am attacking the piece itself, not the author.

The reason there is no explicit threat model defined in the TPM specs is that they define a general-purpose hardware security module. It is up to the integrator to define the threat model (a TPM's security properties also depend on the rest of the system) and the application.

Even if a TPM is not perfect and depends on other pieces of the puzzle to also be secure, it at least opens the possibility of making it secure in the future once those vulnerabilities are discovered & fixed. Furthermore, even in this vulnerable state, it still increases the effort required for a successful attack.

Support for TPM-backed full disk encryption means you can now have FDE on by default for everyone with no usability impact at all. Even if it's not secure and a dedicated attack will still break it, it means a casual attacker can't just pull a drive or reboot the machine and run chntpw or steal sensitive data from discarded drives that haven't been properly wiped.
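To make the sealing mechanism concrete, here's a toy Python model of PCR measurement and key release (a sketch of the concept only; a real TPM keeps the key and the policy comparison inside the chip, and the function and stage names here are invented for illustration):

```python
import hashlib, hmac, os

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style PCR extend: new = SHA-256(old || SHA-256(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot(stages):
    pcr = bytes(32)  # PCRs start at all-zeroes on reset
    for stage in stages:
        pcr = extend(pcr, stage)
    return pcr

# Seal the disk key to the PCR value of a known-good boot chain.
disk_key = os.urandom(32)
policy = measure_boot([b"firmware", b"bootloader", b"kernel"])

def unseal(current_pcr: bytes) -> bytes:
    # A real TPM keeps disk_key internal and enforces this check in hardware.
    if not hmac.compare_digest(policy, current_pcr):
        raise PermissionError("PCR mismatch: boot chain was modified")
    return disk_key

# An unmodified boot reproduces the same measurements, so the key is released...
assert unseal(measure_boot([b"firmware", b"bootloader", b"kernel"])) == disk_key

# ...while a tampered bootloader changes the PCR and the key is withheld.
try:
    unseal(measure_boot([b"firmware", b"evil bootloader", b"kernel"]))
except PermissionError as e:
    print(e)  # PCR mismatch: boot chain was modified
```

The user never types anything, which is exactly why FDE can be on by default; the trade-off (discussed below) is that the key is released to any OS that boots with matching measurements.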

I like TPMs. I like the fact that a rogue datacenter employee or intruder can't just pull one of my servers' drives out and get sensitive data. I like not having to worry about having sensitive keys on the filesystem somewhere because every secret is in memory and is ultimately derived from the TPM doing remote attestation at boot and handing ephemeral keys. I like not having to worry about unattended reboots or entering LUKS passphrases remotely.


While that's a very good use case, the desired one where you're not allowed to use the Internet unless you're using a big three approved device that can attest you're not using an ad blocker isn't so much.


> the desired one where you're not allowed to use the Internet unless you're using a big three approved device that can attest you're not using an ad blocker isn't so much.

There's no reason to believe this will require a TPM or depend on the presence of one. As far as I know, Widevine and similar DRM schemes successfully achieved this without any hardware assistance. Yes, bypasses exist and all the major piracy groups have them, but the objective of preventing the masses from having access to a working bypass is clearly achieved and doesn't require hardware.


Widevine and similar DRM schemes pretty much require hardware assistance; the lower levels that don't will only give you 720p, which is exactly what you get on Linux. 4K requires a TEE application or a similar mechanism that's not within reach of mere mortals.

The early bypass of Widevine meant burning an Nvidia Shield (invalidating its keys) for each and every rip.


Do the DRM schemes interact with HDCP at the hardware level? I know HDCP is necessary, but my understanding has always been that the decrypted video data is available to the OS (at the kernel level), and the "requirement" that it only be output to an HDCP-enabled sink was enforced purely in software through layers of obfuscation?


Higher-resolution ones definitely do. That's why you only get Widevine L3 on PCs and Macs, which most content providers limit to 720p or below.

You need something else (like Apple's FairPlay or Microsoft PlayReady) beyond that, and these definitely check your HDCP version. I believe 4k output commonly requires HDCP 2.2.

FairPlay on macOS might be based on obfuscation still (there was an interesting article on that here some days ago), but high-resolution playback on Windows definitely does involve the GPU driver somehow.


> Widevine and similar DRM schemes successfully achieved this

Do you have any references to back that statement up? Software-only DRMs are ultimately always either plain obfuscation or some variant of white-box cryptography, which is also anything but proven to actually work.


Widevine and other schemes are trivially defeated as far as manipulating the results of what you see on the screen. The best they've been able to do is sometimes protect the compressed original stream, but they also routinely fail at that, and that's not the kind of security that can defeat an adblocker. The kind of security you're talking about would require some kind of TPM-like solution to attest you're running approved software and don't have root.


Touché, not a TPM; those are usually separate hardware, whereas TEEs are integrated.


A core argument the post makes is that TPMs are insufficient for verifying full stack integrity and thus ineffective for FDE. (Eg by exploiting vulnerable drivers, an attacker can dump the disk encryption key from kernel memory.)

But in such a scenario, an attacker can also use such an attack to bypass any remote attestation/DRM/etc!

I guess you could argue that such attacks are too much work for consumers, and that low fences control big dumb animals…but I think, fundamentally, the same argument applies to consumer security functions like FDE!

Tl;dr: I think it’s hard to argue that TPMs are both useless for practical user security and a threat to free computing. It’s gotta be one or the other!


What's the point of TPM-backed full disk encryption with no usability impact (meaning password/pin-less) for the average user who is more likely to get their device stolen vs some covert disk image shenanigan?


If the device is stolen it can still enforce OS-level authentication (including potentially phoning home, invalidating its access to remote resources, or erasing itself), except now you can't bypass it by rebooting and running chntpw.

Will this stop a dedicated attacker? Probably not, although an fTPM with an up-to-date OS would require the attacker to find an exploit for this machine's early boot firmware (UEFI, etc.) or burn a Windows zero-day, both of which are very costly.

It does however prevent your casual thief from watching a YT video "how to reset windows password using linux live cd" and then getting access to your sensitive data (browser's saved passwords, etc), so it's a major improvement.


I prefer to require entering a master password on boot manually and then configuring the OS to auto login to my non root user (with a different password than the disk). The longer and more complex your dependency chain for security, the more opportunity for it to be compromised. The encrypted “password on boot” partition then contains the keys to mount the other disks.

I’d really like Apple’s model on my machine where the root image is just the stock OS image unencrypted and the co-processor owns the responsibility of managing IO (and done efficiently) using my master key. TPM seems like it misses the mark from that perspective.


Using a decryption password on boot is less secure than TPM + measured boot/secure boot. Specifically, it’s vulnerable to a two-touch attack. In the first touch, the attacker replaces your bootloader with one that looks identical but steals your password. On the second touch, they now use the password to steal your data.


If the attacker can install a custom boot loader the system is already defective by design.


If the attacker can replace your bootloader, why can't they just get the decryption key from the kernel later? And if you did have Secure Boot, then using a password with encryption at rest is just as secure: you can't change the bootloader and you can't change the OS (since it's encrypted), so you can't exfiltrate the password. The end result is that the TPM doesn't have a practical benefit.


The bit about "two touches" seems to imply physical access, so in absence of TPM the attacker can replace your bootloader with little effort vs with TPM they'd need to break TPM.


You can fix this by asking for the password before letting the attacker replace the bootloader.


Sorry, I missed the bit about Secure Boot.

Yes, with Secure Boot and password your data is safe. But you have to type the password to boot your system, which is impractical for remote and headless systems, or even local systems that need to be available remotely.


You would still use the TPM to verify the software chain. But don't use the TPM to auto-unlock disks. That's the part that feels like a bad idea.


The issue is that data disks and system disks get conflated. For the system disk (anything outside of /home) you generally only care about signing, which FDE does as a side effect. Each user should have their own disk/partition/subvolume with a distinct key that is retrieved via PAM.

This achieves two things: I know that I am typing my password into the OS that I or a trusted third party compiled (not one planted by a hacker), and my home directory gets decrypted as part of my normal login routine.
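One existing way to wire up the second half of this on Linux is pam_mount, which mounts a per-user encrypted volume as part of the normal login flow, keyed off the login password (the device path and username below are illustrative, not prescriptive):

```xml
<!-- /etc/security/pam_mount.conf.xml (fragment); paths are illustrative -->
<volume user="alice"
        fstype="crypt"
        path="/dev/disk/by-partlabel/home-alice"
        mountpoint="/home/alice" />
```

systemd-homed offers a similar model (a LUKS-encrypted home unlocked by the user's password at login), if you'd rather not maintain PAM configuration by hand.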


An attacker still needs to use some kind of semi-advanced attack in the boot chain or DMA to steal the user's data, instead of just plugging in a LiveUSB and going to town.

Yes, there are a lot of vulnerabilities in the Secure Boot process on most devices, because the surface area is huge, but the attacker still needs _some sort_ of vulnerability to gain a foothold.

I agree with the frustration in the gist - Secure Boot and TPM-sealed disk encryption aren't nearly as good as they could be, because the surface area is gigantic and sure to get exploited. But this is a classic Security Nerd vs Reality scenario: while it is absolutely _possible_ to pwn Secure Boot + TPM-sealed encryption in almost any scenario, using it still makes it _much harder_ for an attacker to do so, and most will give up.


For the typical user, losing their data is a greater risk than someone with physical control over their machine being able to access it. The logic board in your computer fails or you forget your password and all your data is gone.

And the default way of mitigating it is an even worse security risk. Now all your data is on some cloud somewhere, waiting for that vendor to get breached or your account to get phished which is now possible without physical control over your device. Plus, if you couldn't get into your computer because you lost access to your account, you also lost access to the data in the cloud.

Whereas if you really do have sensitive data, you still don't need a TPM and get better security without one. You keep a Yubikey in your pocket or memorize a strong passphrase and then the key physically isn't stored on your device.


If your data is this valuable, surely you do backups? I believe something like cloud backup is now built into Windows and would save your Documents (and maybe more) by default.


We're talking about ordinary people here. Their data is valuable to them because it's their pictures of their grandkids and their draft of the Great American Novel and their recipe collection. They're not backing it up themselves, they don't even know how.

But it's also their copy of all their bank statements that include their routing number, which nobody who is physically in their house is going to use against them but is a serious fraud risk if it can be accessed remotely on some cloud.


Windows backups are subpoenable by half the governments on the planet, who have bad actors in them, and may also have exploits for dedicated attackers because they present a huge target.


If your threat model includes state-level actors, I wonder why you consider running Windows at all, or at least not in a highly secured transient VM.


I hate this. People are claiming "state-level" actors are all the same. Microsoft backups are subpoenable by local cops, hell, by your ex-wife in a divorce proceeding in some jurisdictions.

Yes, if the NSA has a decent reason to think you're going to nuke a sports game you'll still have a problem with very, very good security measures.

That doesn't mean there isn't a very large in-between zone where you're fine with better security measures.


You can store the full disk encryption key in the TPM and rate-limit PIN attempts using its secure non-volatile storage, as far as I know. That's very useful in case of loss/theft, given that users don't like typing long passwords or PINs for every login.

I'm not sure if this is what Windows actually does, though, or if the TPM just hands over the disk encryption key after Windows passes system attestation and then verifies the screen unlock PIN/password in software – that would be significantly less secure.
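The anti-hammering idea can be sketched as a toy model (illustrative only; real TPMs enforce this in hardware with persistent counters and configurable lockout timeouts, and the class and parameter names here are invented):

```python
import os

class ToyTPM:
    """Toy model of TPM dictionary-attack protection. A real TPM keeps the
    counter in non-volatile storage, so power-cycling doesn't reset it."""

    def __init__(self, pin: str, max_tries: int = 8):
        self._pin = pin
        self._key = os.urandom(32)      # sealed key, never readable directly
        self._max_tries = max_tries
        self._tries_left = max_tries

    def unseal(self, pin: str) -> bytes:
        if self._tries_left == 0:
            raise RuntimeError("TPM in lockout: wait or use a recovery key")
        if pin != self._pin:
            self._tries_left -= 1
            raise ValueError("wrong PIN, %d tries left" % self._tries_left)
        self._tries_left = self._max_tries  # successful auth resets the counter
        return self._key

tpm = ToyTPM("4921")
for guess in ("0000", "1111", "2222"):
    try:
        tpm.unseal(guess)
    except ValueError as e:
        print(e)
key = tpm.unseal("4921")  # the correct PIN still works and resets the counter
```

Because guessing is throttled by the chip rather than by software, even a short PIN becomes expensive to brute-force, which is the usability win being described above.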


Why go PIN-less? I like the fact that the TPM can restrict retry counts, so I don't need a ridiculously long password.

The only thing I don't get is why this is not done with a simple SIM card, like in any mobile phone; then one could choose one's own TPM. Even more: I don't get why I can't encrypt my Android phone with my SIM card.


I have many machines, some headless.


> This sounds like the rant of a typical Linux fanboy

Hi, it's me the Linux fanboy whose entire personality is making Hackintosh and VM apps for iOS. Just a friendly reminder that attacks on the author's credentials have no bearing on the weight of the arguments.

> The reason there is no explicit threat model defined in the TPM specs is because it defines a general-purpose hardware security module

It sounds like you have zero experience in security :)

> it defines a general-purpose hardware security module

No it doesn't. I think you're hinting at HSMs, which are another beast I may write another fanboy FUD piece about at some point. But no, HSMs are not the same as TPMs. And TPMs are not HSMs. For one thing, an HSM defines something called a trust boundary that keys should never leave. TPMs will happily hand you the keys when you meet a certain condition. HSMs support key migration and provide a secure way to transfer keys from one HSM to another without leaving the trust boundary. I can go on and on...

> TPM's security properties also depend on the rest of the system

The argument isn't TPM versus no security. The argument is TPM versus the existing security you have on Windows. (Passwords, FDE, etc). Of course all this depends on the system. TPM doesn't add anything (* with the exception already listed in the article).

> it at least opens the possibility of making it secure in the future once those vulnerabilities are discovered & fixed

Nope. Architecturally flawed. But I'd just be repeating the argument from the article.

> means a casual attacker can't just pull a drive or reboot the machine and run chntpw or steal sensitive data from discarded drives that haven't been properly wiped

They can with an $80 FPGA. Read the appendix.

> I like the fact that a rogue datacenter employee or intruder can't just pull one of my servers' drives out and get sensitive data.

They can with an $80 FPGA. (Unless your datacenter uses Intel TXT and tboot and the other prerequisites that were talked about in the article.)

> I like not having to worry about having sensitive keys on the filesystem somewhere

If you use BitLocker, they are always in kernel memory

> derived from the TPM doing remote attestation at boot

That's not what "remote attestation" means :)

> I like not having to worry about unattended reboots or entering LUKS passphrases remotely

If you like that, just disable your password and you'll get the same result


> TPMs will happily hand you the keys when you meet a certain condition. HSMs support key migration and provides a secure way to transfer keys from one HSM to another without leaving the trust boundary.

You can create non-exportable keys on TPMs, and there are mechanisms to securely transfer keys between devices.

Granted, doing so is kind of a mess, but nonetheless possible.


> Just a friendly reminder that attacks on the author's credentials have no bearing on the weight of the arguments.

> It sounds like you have zero experience in security :)

Seems like you're countering an ad hominem with an ad hominem here...?

I don't know the TPM specifications in detail myself, but I do know that TPMs are in fact quite general-purpose HSMs, of which assisting in attestation/measurements for the purpose of trusted computing is only one (although certainly the most controversial) subfeature.

If I can store my SSH keys on my TPM and just don't use trusted computing at all... How is that "zero practical security"?


I lol'ed pretty hard when I read the appendix. What an amateur job. All this time I was sure someone thought of the possibility of attaching a logic analyzer, or an $80 FPGA, to one of the pins.


> whose entire personality is making Hackintosh and VM apps for iOS

Congrats and thanks as I'm fairly sure I must've used your work at some point.

> Just a friendly reminder that attacks on the author's credentials have no bearing on the weight of the arguments.

I didn't check nor care about the author or their credentials, because my comment was purely about the piece itself and what it sounded like to me, not an ad hominem against the author. It did, after all, contain the usual scary terms such as "Microsoft", UEFI and Secure Boot, and it dismisses an entire concept just because of some flaws that can be rectified incrementally.

> It sounds like you have zero experience in security :)

I never claimed to be a security expert, but maybe my layman's approach allows me to overlook the pedantry and avoid dismissing something entirely just because it doesn't perfectly conform to some ideals? (I think the TPM's threat model will be up to the integrator to determine, as it depends on other things such as discrete vs firmware TPM, UEFI/Option ROMs and their security flaws, etc).

> No it doesn't.

I used "HSM" to mean "dedicated hardware device that does security-related things", rather than a 1-to-1 equivalent of a commercial HSM. But to the best of my knowledge a TPM can also act as a (low-throughput) actual HSM if you so desire, allowing operations with a secret key without ever disclosing it?

> The argument is TPM versus the existing security you have on Windows. (Passwords, FDE, etc)

My argument is that the TPM enables frictionless FDE for the masses without any change in user experience and without even relying on a password (which would often be weak and thus useless in practice).

Tell me how this is the same level of security as no FDE or FDE with weak password. Even if it can be broken using various methods (some of which you've described), surely you see that it still significantly increases the barrier to entry and cost of a successful attack?

> They can with a $80 FPGA. (Unless your datacenter uses Intel TXT and tboot and other prerequisites that were talked about in the article)

Those machines use fTPM which isn't vulnerable to this attack, but regardless, $80 is still more expensive than the $1 a Linux live-CD/USB costs, not to mention the requirement for lengthy physical access and ability to solder/connect wires onto the mainboard.

I'm not arguing that TPM is unbreakable or will resist sophisticated, prepared, targeted attackers. But it raises the bar by at least $80 (and in practice by a lot more on modern machines with fTPM), with zero additional effort from the user (thus it can even be used where conventional passworded FDE is impractical, such as unattended servers). It's literally free security, and yet you chose to shit on it just because it's not perfect (even though the flaws would get patched up over time, as with any product).

I think it would be good if this level of security could become the baseline (even if it's not perfect) and would rather not have FUD getting in the way. You are of course welcome to use something stronger depending on your requirements, but this becoming the baseline is still an improvement over no FDE at all (still seems to be the norm on PCs).

> If you use BitLocker, they are always in kernel memory

Yes, I understand; it would still mean you'd need to either be root already or have a privilege escalation exploit to extract them.

I'm not necessarily talking about FDE keys here though (for FDE keys, if you can execute code just read the filesystem directly, no need to even care about the FDE).

A TPM allows a machine to prove (with reasonable levels of security, requiring at least $80 to break) to another machine that it's in a given state, and to obtain ephemeral credentials based on that claim, thus avoiding the need to persist those anywhere.

> That's not what "remote attestation" means :)

See above.

> If you like that, just disable your password and you'll get the same result

Well no, because then any guy with a Linux live CD can get the data (or someone at the recycler, if the drives are swapped out and discarded without being sanitized), whereas now they'd at least need to shell out 80 bucks plus a soldering iron and lengthy, suspicious-looking physical access to the machine.


I might be a bit ignorant towards the topic but so far everything I've seen about TPM has no actual benefits for the home user. For cloud, sure, there are benefits.

It seems like a sunk cost fallacy: big tech spent a lot of money on it, and they are trying to get it back by convincing average Joe that a TPM is good for your PC.


Home users take their PC to airports, hostels and cafes. Criminals who break into homes steal laptops. Of course there are benefits.


You are right that TPM and such technologies are a step in the right direction; you have to start somewhere (I know it didn't start at TPM, of course). The author is also right that some claims are too markety while real-world devices are still vulnerable. The piece also goes on to say there are further efforts such as TXT taking place, so in essence for me it reads like you mostly agree. Microsoft, UEFI etc. aren't scare words here, IMHO; they are very closely related to the claims made and the technologies involved.

I thought the piece was OK, but it doesn't add anything new. It's another piece pointing at a puzzle a lot of people, including you, already know exists. It's not aimed at you, for that matter. (So fair comments, but perhaps a bit harsh?)


There are two issues. One is a false sense of security. You think you have the same level of security as full disk encryption, but you don't. With full disk encryption, only someone who knows the password can access your data. With this system the disk is automatically decrypted at boot, so any Windows flaw that permits privilege escalation on that PC can give access to your data.

If somebody who wants your data steals your PC, they will likely find a way to access your personal data. It's a protection against a casual thief who is probably not interested in your data and probably can't even figure out how to bypass the Windows password screen.

But if this does no harm, why not have it? Because having disk encryption enabled by default for a user who doesn't know it's enabled is not necessarily a good thing. Let's face it: users don't do backups. I even know companies that have all their data on a single server with no backups.

Now if the motherboard breaks and you don't have a backup... you can't just take the disk out of that computer, connect it to another PC, and recover the data. You have lost your data!

But wait, you say, Microsoft thought about that: indeed, if you signed in with a Microsoft account you can recover your BitLocker encryption key from the Microsoft portal... wait, what? Exactly. No security at all! Microsoft knows your encryption keys and stores them on its servers... again: a false sense of security is worse than no security at all!

Finally, even if this system were 100% secure: do you trust the hardware? The same hardware from manufacturers whose products have a big security flaw discovered nearly every year? The same hardware in which we know the NSA, and probably other government agencies, placed backdoors?

Whatever. Is typing a password when booting up the computer (once a day) really such a big deal?


> But wait, you say, Microsoft thought about that: indeed, if you signed in with a Microsoft account you can recover your BitLocker encryption key from the Microsoft portal... wait, what?

Can't you alternatively also export a copy of the actual disk encryption key and write that on a piece of paper? The last time I used Windows, that was possible, at least (but I think I didn't use the TPM back then).

On macOS, you can do either, for example, and it uses a similar construction (although using Apple's proprietary secure element and hardware encryption engine rather than a TPM and secure boot).


My argument isn't that the TPM provides bulletproof security equivalent to a strong FDE passphrase, but that the TPM allows effortless, passwordless FDE with reasonable levels of security to users who otherwise wouldn't use FDE at all.


What type of password are users going to choose if they can't use a password manager at boot time? It's no more secure against a DMA attack than a TPM-protected drive. Attackers are getting both types of passwords from memory.


> indeed if you signed in with a Microsoft account you can recover your Bitlocker encryption key from the Microsoft portal. ... No security at all! Microsoft knows your encryption keys and it stores it on their servers.

This is complete disinformation. Microsoft gives you the OPT-IN OPTION to save your Bitlocker encryption key to your account.

It's clearly labeled and you need to click on it to activate it:

https://allthings.how/content/images/wordpress/2021/11/allth...


Likewise, I am not too concerned about the NSA breaking into my laptop. I just want that if some kid finds it and tries to plug the SSD into his computer, my files aren't all there to be read.


This "middle-brow dismissal" should be downvoted and flagged on the first sentence alone, not be the top comment.

The datacenter use case sounds useful—should have led with that.


I've edited my comment to hopefully clarify that I am talking about the piece and not accusing the author himself.

My problem with the piece is that it reads like the usual knee-jerk pro-Linux FUD that typically originates around scare words like "Microsoft", "secure boot" and "UEFI".


I didn’t read it that way. Was spirited, but no insults that I noticed.

And it’s not like those “scare words” didn’t become scary for a reason. Your response starts with reverse FUD.


> You can also use the TPM + PIN as a sort of Yubikey

That's not zero. In my mind that's the main thing a TPM is really useful for. It's a secure enclave for a private key used for U2F/WebAuthn style attestation. I agree that the threat model not being explicitly discussed is a huge miss. But to that point, a TPM is still useful because it prevents someone who has hacked into my computer from commanding the TPM's authentication factor.

The other useful application is to prevent block device data extraction without knowing the passkey. And the author's argument there hinges on the notion that Microsoft won't patch OS security vulnerabilities that enable key extraction from memory. Which, OK, third-party drivers suck, but Microsoft's effort to patch is also not zero, and the most common (OS+browser/sandbox) threat model requires a chain of vulnerabilities that are hard to come by.


This is not the way TPMs are used by most of the industry. For example, Microsoft and now Canonical are advertising it as a way to do FDE which Microsoft has known to be broken since 2006. They are requiring it for Windows 11 because of "security" and have provided no software feature on Windows for this kind of use case. It is only done by the OSS community.

> The other useful application is to prevent block device data extraction without knowing the passkey.

Nope, read the appendix. Since 2006, BitLocker without a PIN has been vulnerable to physical extraction with $80 worth of equipment. And to enable enhanced PINs for BitLocker you have to jump through a lot of hoops that most people don't even know about.


> This is not the way TPMs are used by most of the industry. [...] It is only done by the OSS community.

So some industry stakeholders are doing bad things with an inherently neutral technology. Does that mean we need to get rid of the entire thing, thereby also killing the OSS use cases?

Yes, trusted computing can be used in user-hostile ways, but the solution here seems to be to not use OSes and applications using it in that way, rather than throwing out the technology as a whole.


The trouble is we keep conflating two different things.

Something that works like a hardware security module, where it stores your keys and tries to restrict who can access them, has some potential uses. The keys are only in your own device, so someone can't break an entirely different device or a centralized single point of failure to get access. And this can't be used against the user because both the device and the key itself are still fully in their control -- they could put a key in the HSM and still have a copy of it somewhere else to use however they like.

Whereas anything that comes with a vendor's keys installed in it from the factory is both malicious and snake oil. Malicious because it causes the user's device to defect against them and some users aren't sophisticated enough to understand this or bypass it even if malicious attackers can, and snake oil because you can't rely on something for actual security if a break of any device by anyone anywhere could forge attestations, since that is extremely likely to happen and has a long history of doing so.
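The first, user-controlled model can be sketched in a few lines (a toy using HMAC as a stand-in for an asymmetric signature; a real device would generate a keypair and export only the public half):

```python
import hashlib, hmac, os

class ToyEnclave:
    """Toy model of an HSM-like device: the key is generated inside and no
    method ever returns it; only signatures (and a handle) leave the object."""

    def __init__(self):
        self.__key = os.urandom(32)          # lives only inside the "device"

    def key_handle(self) -> str:
        # Stand-in for an exported public key / key identifier.
        return hashlib.sha256(self.__key).hexdigest()

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self.__key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, signature: bytes) -> bool:
        return hmac.compare_digest(self.sign(message), signature)

enclave = ToyEnclave()
sig = enclave.sign(b"login challenge")
assert enclave.verify(b"login challenge", sig)
assert not enclave.verify(b"forged challenge", sig)
```

Note what's absent: no vendor key baked in at the factory, so nothing here lets a third party attest anything about the device to someone else. That's exactly the line the comment above is drawing.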


> Anything that comes with a vendor's keys installed in it from the factory is both malicious and snake oil.

I don't agree that all trusted computing use cases are inherently user-hostile. DRM is a well-known example, but e.g. Signal used to do interesting things server-side using (now no-longer trusted, ironically) Intel SGX/TXT, like secure contact matching or short PIN/password security stretching for account recovery.

Android Protected Confirmation [1] is also trusted computing at its core, but can be used to increase security for users (although I could also see that usage encourage a device vendor monoculture, since every app vendor needs to select a set of trusted device manufacturers).

> snake oil because you can't rely on something for actual security if a break of any device by anyone anywhere could forge attestations

Attestation keys are usually per-device, so if indeed only one device gets compromised at great attacker expense, it's usually possible for a scheme to recover. If all devices just systematically leak their keys as has certainly happened in the past, that won't help, of course.

[1] https://android-developers.googleblog.com/2018/10/android-pr...


> e.g. Signal used to do interesting things server-side using (now no-longer trusted, ironically) Intel SGX/TXT

Because this is the "snake oil" prong of its failure -- and why it's no longer trusted.

> Android Protected Confirmation

This could be implemented without any vendor keys. You associate the user's own key with the user's account.

> Attestation keys are usually per-device, so if indeed only one device gets compromised at great attacker expense, it's usually possible for a scheme to recover.

That's assuming it matters at that point. The attacker doesn't care if you revoke the keys after they steal your money.

And once they extract a key from one device, they have a known working procedure to get more. For non-software extraction most of the expense is the equipment which they'd still have from the first one.

> If all devices just systematically leak their keys as has certainly happened in the past, that won't help, of course.

And is likely to happen in the future, so any design that makes the assumption that it will not happen is clearly flawed.


> This could be implemented without any vendor keys. You associate the user's own key with the user's account.

But how would you bootstrap this? How do you make sure the initial key was actually created in the secure execution environment and not created by MITM malware running on the main application processor?

If this was that easy, FIDO authenticators wouldn't need attestation either.

> That's assuming it matters at that point. The attacker doesn't care if you revoke the keys after they steal your money.

If attacking a single device costs a few million, it definitely does matter, since you'd need to expend that effort every single time (and you'd be racing against time, since the legitimate owner of the device can always report it as stolen and have it revoked for transaction confirmation, transfer their funds to another wallet etc.)

> And is likely to happen in the future, so any design that makes the assumption that it will not happen is clearly flawed.

How do some implementations falling apart imply that all possible implementations are insecure? Smartcards are an application of trusted computing too, and there have been no successful breaches there to my knowledge. The fact that the manufacturers specialize in security, unlike general-purpose computing companies such as Intel, which only occasionally dabble in security, probably helps.


> But how would you bootstrap this? How do you make sure the initial key was actually created in the secure execution environment and not created by MITM malware running on the main application processor?

The device comes with no keys in it, but includes firmware that will generate a new key, put it in the HSM and provide the corresponding public key. The public key is authenticated to the service using whatever means is used to authenticate the user rather than the device, because what you're doing here is assigning the key in this device to this user, so it's the user and not the device you need to authenticate.

But now if the user wants to they can use a different kind of device.

> If this was that easy, FIDO authenticators wouldn't need attestation either.

They shouldn't.

> If attacking a single device costs a few millions, it definitely does matter, since you'd need to expend that effort every single time (and you'd be racing against time, since the legitimate owner of the device can always report it as stolen and have it revoked for transaction confirmation, transfer their funds to another wallet etc.)

You're talking about the HSM case where the user's own key is in the device and you need to break that specific device. In that case you don't need to prove that the device is a specific kind of device from a specific manufacturer (remote attestation), you need to prove that it is that user's device regardless of what kind it is (user's key in the HSM).

> How does some implementations falling apart imply all possible implementations being insecure?

Because for remote attestation the attacker can choose which device they use for the attack, so if there is any insecure implementation the attacker can use that one.

And if you deploy a system relying on it and then a vulnerability is discovered in millions of devices you're screwed, because you now have a security hole you can't close or you have to permanently disable all of those devices and have millions of angry users. But this is historically what has happened, so relying on it not happening again is foolish.

> Smartcards are an application of trusted computing too, and there have been no successful breaches there to my knowledge.

Smartcards don't require any kind of a third party central authority. You know this is Bob's card because Bob was standing there holding it in his hand while you scanned it and assigned it to Bob in the system. Bob could have made his own card and generated his own key and it works just as well. It's a completely different thing than remote attestation.


> In my mind that's the main thing a TPM is really useful for.

Unfortunately, it's not much good for that either.

A yubikey has a button to confirm the user's presence - so even if a remote attacker has completely compromised the machine, because they can't press the button, they can't get anything out of the key.

The TPM has no button, so it has to rely on the OS to keep your pin safe from keyloggers. If your OS is that trustworthy, you might as well just store your secrets in the OS keyring.

The TPM is also about 50x more complicated than a yubikey, to support things like multi-user systems. This means there's a much bigger attack surface.


The button of a Yubikey doesn't add as much security as you might think: Since you don't know what you are actually confirming (due to the lack of a display), what prevents an attacker with control over your OS to just wait until you want to confirm something legitimate and then front-run that request?


I don't agree with this. Yes, any TPM is necessarily possible to bypass, but it's not easy. I know I could bypass normal password-based FDE with physical access to a machine without any special hardware or software, but not TPM-based. I assume, by the Pareto principle, that there are lots of people with my ability but exponentially fewer who could bypass a TPM. So it's definitely more secure than password-based FDE, and it's good enough for me.


>I know I could bypass normal password-based FDE with physical access to a machine without any special hardware or software

How?


Change the bootloader and /bin/init so that it captures the disk encryption password as you enter it. It could then send it to me, or add a second password that I know so that I can decrypt it with later physical access.

Actually, on second thought, it's secure boot that protects against this attack, which doesn't require or use a TPM? So maybe I'm wrong


Not a great take. The TPM provides the primitive of "non-extractable keys"; it's not supposed to magic up secure boot.

Even then, the argument that a TPM is worthless because it can't guarantee that software is free of vulnerabilities just belies an un-seriousness of the post. Like okay, that argument applies to every threat model ever.

A boot chain can be secure with or without a TPM. The TPM just says "I'll record what your boot chain told me and spit it back out with a signature that is verifiable by public key cryptography, so that you can tell it's what your boot chain told me. How much you trust your boot chain is up to you."
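The "record what your boot chain told me" part is just a hash chain. A minimal Python sketch of how PCR "extend" works (simplified: a real TPM hashes event data per the TCG event log format and supports multiple PCR banks and algorithms):

```python
import hashlib

def extend(pcr: bytes, event: bytes) -> bytes:
    # PCR extend: new = H(old || H(event)). You can only append to the
    # chain, never rewind it, so the final value commits to the whole
    # boot sequence, in order.
    return hashlib.sha256(pcr + hashlib.sha256(event).digest()).digest()

pcr = bytes(32)  # PCRs reset to all zeros at platform reset
for stage in (b"firmware", b"bootloader", b"kernel"):
    pcr = extend(pcr, stage)

# A quote is then a signature over `pcr` with the TPM's attestation key.
# The verifier can trust that the TPM saw these measurements -- but only
# as far as it trusts the code that submitted them.
```

Note that swapping any two stages yields a completely different final value, which is why the chain commits to ordering, not just contents.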


TPM relies on every link in the chain up to your OS being free of vulnerabilities. If any part has a bug, then the TPM is broken. For this kind of model, why not just put the data in one of those layers then? You've said that it's secure already.

(Most other threat models go "ok we trust some part of this is secure, and that means we can guarantee x, y, z; if that part is not secure then we cannot do this.)


> the signature of the BIOS is checked against a public key whose hash is stored in fuses

> Each of dozens (up to hundreds) of UEFI drivers written by various OEMs with varying levels of competence and care are loaded

Doesn't the BIOS signature encompass those drivers? Put another way, isn't the BIOS vendor attesting those drivers are non-malicious with their signature?

I think the TPM will turn out to be a net negative for consumers since it's going to get used for attestations users can't control (ie: against the will of the user), but there are some benefits. Having a BitLocker key unlocked via a PIN where the TPM can protect against brute force attacks is useful for me. That alone covers most of my threat model, which is having my data extracted from a lost or stolen PC.


Ask Google exactly how they enforce their zero trust, VPN-less remote work environment. Hint: it has to do with the TPM. DRTM + Device Certificates + TLS Token Binding is a huge deal for proving that the endpoint is trusted, and that the principal actually logging in is using an approved device. DRTM prevents boot time tampering by assuring that the measured boot state is consistent with what the network expects.


Yes, when implemented correctly (I've never seen Google's implementation so I can't comment), D-RTM + Secure Boot is good. If Microsoft would give us this before shoving TPM down our throats, it would be good :) But they haven't even fixed the weaknesses they identified on their own in 2006.


> D-RTM + Secure Boot is good. If Microsoft would give us this before shoving TPM down our throats, it would be good

D-RTM requires TPM.


None of my machines when I was at Google implemented this. The attestation was a bunch of scripts running on my computer that cobbled together the output of various things they cared to validate.


In the one usage scenario that benefits a PC user, the TPM makes for a really bad yubikey. You can't carry it between computers, you can't back it up, and you are certain to lose it at some point when the computer breaks or gets outdated.

That means it either requires a second protocol for authentication, or that you will lose your accounts with all kinds of services all the time.


The TPM covers cases where you want to authenticate the machine, not the user (who'd have a Yubikey they'd carry with them between machines).

There are plenty of valid use-cases where you'd want the machine to authenticate itself to services (VPN to enterprise network?) before anyone logs in (or ever logs in, as in the case of servers who operate unattended).


> There are plenty of valid use-cases where you'd want the machine to authenticate itself to services (VPN to enterprise network?)

This one is huge: always-on VPNs mean patching and other remote management tasks aren't delayed just because someone is on vacation or sick, and that stuff can happen at 3am on Sunday rather than when they start work. No more “please leave your computer on overnight” messages.


Depends on the implementation? For many services, I register my Yubikey, but also the Android fingerprint authentication (as well as TOTP as another fallback).

So for example, if I login to Gitlab on my phone, I can use my fingerprint (lockscreen auth). It's more convenient than using the TOTP app.

Similarly, I could register a TPM from my desktop that could be the same as using the fingerprint auth? It would only work from that desktop, but it's the same logic as my phone, and in a sense, that's a nice benefit.


Every fallback method adds risk. Realistically though, I don't think any of it really matters. By far the weakest link everywhere is SMS/Email based account recovery and it's almost impossible to avoid those.

Sometimes I think the average person would be better off with a highly secured email account and magic links for everything else. Even for me, I have YubiKeys, TPMs, etc. configured for everything, but if I forget to lock my laptop and someone walks off with it, they have access to my email, which is basically my entire digital life due to account recovery via email.


> the TPM makes for a really bad yubikey. You can't … you can't back it up

Technically speaking, the exact same restrictions apply to a Yubikey.

That’s what makes it secure.


TPM isn’t about security, it’s about DRM


I’ve seen many widely deployed applications of TPM for security, never for DRM.


You've never used Widevine? If you ever tried to use a streaming website, you almost certainly did.

EDIT: To clarify, Widevine doesn't actually use the TPM, but Widevine L2 uses a TEE for key exchange and decryption, which are all things that modern TPMs support. The use of a crypto coprocessor for key exchange and decryption is widespread.


Are you sure Widevine uses the attestation functionalities of a TPM on Windows?

I thought Widevine on computers (whether Windows, macOS or Linux) is always L3, i.e. software only, and L1 needs a TEE on Android or an embedded OS such as on a set top box or streaming dongle.


Indeed, it doesn't use it on Windows. Widevine L2 however uses the TEE in exactly the same way as a TPM is used, for attestation and for cryptography (you can do AES decryption using a TPM).


A TPM isn't the same thing as a TEE at all. You can't run copy protection logic in a TPM, for example. The two are complementary, though:

A TEE can run "trusted" logic, such as DRM decryption code, and you can use a TPM-like device to hold the attestation keys and measurement functionalities for the TEE. I'm saying "TPM-like" because some TEEs have their own proprietary or embedded secure elements and don't need a TPM proper. (I'm actually not sure if TPMs are "TEE-aware", which would be required to e.g. only let some keys be used from the secure context, as otherwise storing DRM keys in the generally-accessible portion defeats the purpose.)

Without a TEE (and a TPM itself does not imply one on x86), what you can do is declare your entire system a TEE, and then use the TPM's measurements as an assertion over that system's untampered state.

This is pretty infeasible to do securely though, given the size of the codebase of most OSes, which is why the "DRM in TEE" approach is much more common. That's what Android does, for example.


Please stop spreading FUD.


"DRM" like full disk encryption, passkeys, etc.? What DRM actually uses the TPM? Pretty sure Widevine or any other common DRM tech doesn't use it to wrap the encryption of the stream. Why would it need to?


No, TPM can be about both – and more.


In theory, yes. In practice, TCBs are so big that DRM solutions based on it are nearly meaningless for general purpose computers.


So it boils down to "we shouldn't attempt to build new security stuff because what it's built on could have vulnerabilities"?

Time to go back to kernel mode everything I guess. Just run everything as root, get rid of sudo.


The exact argument was made in the article

> There are a plethora of attacks on TPM in the past but we need to be clear that a system that is widely attacked does not necessarily mean it is fundamentally insecure but only that there are many implementation issues. Most of these implementation issues do not touch upon the points raised in this article (it doesn't matter if the gate to your garden is strong or weak if there is no fence around the garden). Nevertheless, many of the attacks demonstrate the lack of care and consideration in the TPM ecosystem.

The issue is that TPM is being heavily pushed while it provides no security value. When you have Secure Boot (no additional hardware required), you get everything that Microsoft promises. The entire idea of TPM is that it gives you an extra level of security and I argue that it doesn't.


Ring-0 only, and 640x480 as God intended.


That's why I use TempleOS exclusively.


There's an argument to be made that security vulnerabilities the user can't fix are worse than ones they can, but this article doesn't make it.


Should the title at least be "the trusted computing/measurement functionality of TPMs provides..." rather than "TPM provides..."?

TPMs can do other useful things besides performing attestation measurements for trusted computing, including acting as a secure element to safeguard and rate-limit keys used for SSH, disk encryption and much more.
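As an illustration of the secure-element use, "sealing" binds the release of a secret (e.g. a disk encryption key) to the measured platform state. A toy Python model of the policy binding (this is not the TPM 2.0 API — a real TPM keeps the wrapping key internal and enforces dictionary-attack rate limiting; the names here are invented for the sketch):

```python
import hashlib

def wrap_key(pcrs: list[bytes]) -> bytes:
    # Simplified stand-in for a TPM PCR policy: a key derived from the
    # expected PCR state at seal time.
    return hashlib.sha256(b"seal-policy" + b"".join(pcrs)).digest()

def seal(secret: bytes, pcrs: list[bytes]) -> bytes:
    assert len(secret) <= 32
    return bytes(s ^ k for s, k in zip(secret, wrap_key(pcrs)))

def unseal(blob: bytes, pcrs: list[bytes]) -> bytes:
    # Yields the original secret only when the PCRs match seal-time state;
    # any other boot measurement produces garbage instead of the key.
    return bytes(s ^ k for s, k in zip(blob, wrap_key(pcrs)))

good = [hashlib.sha256(b"trusted-bootloader").digest()]
tampered = [hashlib.sha256(b"evil-bootloader").digest()]
blob = seal(b"luks-volume-key", good)
```

The rate limiting is what makes a short PIN viable on top of this: the TPM, not the OS, counts failed attempts.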


> The Trusted Platform Module(TPM) requirement enables Windows 11 to be a true Passwordless operating system

Good luck trying to remote (RDP) into a Windows box with a passwordless account or to access a fileshare.

While passwordless Microsoft accounts are very convenient, it is only according to the MS Marketing department that Windows can be a true passwordless system. In reality it is not. There are several components in Windows that do not work with a passwordless account. The RDP and network issues have been known for many years and are a PITA for home networking.


Wouldn't the threat model be "prevent 80% of normies from watching Disney on their HDMI monitor without paying"?


The threat model is "corporation wants hardware attestation so they can implement zero-trust models" but these articles always ignore that, because they don't have any alternatives to suggest.


Been wondering if I should enable these things in the firmware for several years, so the discussion is welcomed.

I do have a travel laptop and recently installed LUKS to it. I like having my long password, but being able to tie unlocking to the hardware sounds like a good idea too. Is there a way to have both? A long password and require the local TPM?


Yes you can set up your LUKS to require both. Have a look at systemd-cryptenroll.
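For reference, a sketch of that setup (the device path and PCR selection are placeholders; check `man systemd-cryptenroll` for the flags available in your systemd version):

```shell
# Add a TPM2 slot that ALSO requires a PIN at unlock time ("something
# you have" + "something you know"); the existing passphrase slot stays
# as a fallback. /dev/nvme0n1p2 is a placeholder for your LUKS partition.
systemd-cryptenroll --tpm2-device=auto \
    --tpm2-pcrs=7 \
    --tpm2-with-pin=yes \
    /dev/nvme0n1p2
```

The PIN can be an arbitrary passphrase, so this gives "long password AND this machine's TPM", with the original LUKS passphrase kept as a recovery path if the hardware dies.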


Zero is a stretch. I think they have largely failed to serve their purpose in the consumer device realm, beyond decent integration with BitLocker.

Despite the shortcomings, I think they are very useful devices from the perspective of running data centers. I consider it useless against evil maid attacks though.


I feel like BitLocker is a great feature. So great it's one of the few things keeping me on Windows. Would not call it "largely failed".


I think they failed to serve their purpose because until relatively recently they've only been present on premium-grade machines, so software couldn't rely on the presence of one.

Nowadays with Microsoft making it a requirement (as well as fTPM which means the TPM no longer requires dedicated hardware) we might see more use-cases.


All the hardware based attacks require opening up the laptop and doing something with the motherboard.

I'd have to check if bottom cover tampering on my Lenovo actually requires me to put in the BitLocker keys again.


The absolute dumbest shit here gets up voted regarding Linux.

No dang, I don't care about the spirit of the site when absolute ludicrous mindrot garbage is up voted here constantly. You'll note on the Wayland thread that, despite being the 30th Wayland thread, the only substantive reply agreed with me. It's a joke.

Don't worry, I'm changing my password to a random guid, you'll be free of me in 45 seconds.


A lot of this is plain naive and wrong.


How so?


well sure, TPM is mostly about limiting what the average Joe can run on his computer. It isn't meant to stop adversaries.


It's just more garbage DRM. It's Sony telling you that you don't own your PS3 all over again, except it's the PC you built and Microsoft telling you what you can and can't do with it. Try to crack your CPU key to fake the TPM and Microsoft will sue you out of existence just like Sony did with Geohot. Thanks to the DMCA and its anti-circumvention laws, we no longer own any hardware; we're just borrowing it.


At this point I don't understand why hardware vendors can't just do it like Apple. Put a small ARM SoC with some firmware in ROM onto the mainboard that starts before the main CPU and initializes it, ensuring that the system is in a known state before any components boot.


That's actually how modern Intel and AMD CPUs work.


Indeed, and TPM + secure boot broadly define how these built-in firmware TPMs (fTPMs) implement verification and validation of system components.

TPM is the specification and standard for a predictable way this is implemented, and most modern CPUs do this as you say, with option ROM validation, UEFI firmware integrity checking, etc.


The author claims that this is not the case:

> Note that at any point in this process, if the attacker is able to control code execution, there is no way for TPM to know that the measurement it was just handed wasn't a lie. Now let's assume you are an attacker trying to get the BitLocker keys, what can you do? [0]

[0] https://gist.github.com/osy/45e612345376a65c56d0678834535166...



