> This is not the way TPMs are used by most of the industry. [...] It is only done by the OSS community.
So some industry stakeholders are doing bad things with an inherently neutral technology. Does that mean we need to get rid of the entire thing, thereby also killing the OSS use cases?
Yes, trusted computing can be used in user-hostile ways, but the solution here seems to be to not use OSes and applications using it in that way, rather than throwing out the technology as a whole.
The trouble is we keep conflating two different things.
Something that works like a hardware security module, where it stores your keys and tries to restrict who can access them, has some potential uses. The keys are only in your own device, so someone can't break an entirely different device or a centralized single point of failure to get access. And this can't be used against the user because both the device and the key itself are still fully in their control -- they could put a key in the HSM and still have a copy of it somewhere else to use however they like.
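To make that concrete, here's a minimal sketch of that model in Python using the `cryptography` library (a plain key object stands in for the HSM, which would really generate the key internally behind something like PKCS#11); the point is just that the key is created on the user's own hardware and the user can keep their own copy:

```python
# Sketch of the "keys belong to the user" model. A software key stands in
# for the HSM here; a real HSM would generate the key internally and merely
# let the user export a backup if they choose to.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

# The key is generated on the user's own device...
private_key = ed25519.Ed25519PrivateKey.generate()

# ...and the user can keep a backup copy anywhere they like, because the key
# is theirs, not the vendor's.
backup_pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"user-chosen-passphrase"),
)

# Only the public key ever leaves the device.
public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

signature = private_key.sign(b"operation authorized by the user")
```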
Whereas anything that comes with a vendor's keys installed in it from the factory is both malicious and snake oil. Malicious, because it causes the user's device to defect against them, and some users aren't sophisticated enough to understand or bypass this even though malicious attackers can; and snake oil, because you can't rely on something for actual security if a break of any device, by anyone, anywhere, could forge attestations -- which is extremely likely to happen and has a long history of doing so.
> Anything that comes with a vendor's keys installed in it from the factory is both malicious and snake oil.
I don't agree that all trusted computing use cases are inherently user-hostile. DRM is a well-known example, but e.g. Signal used to do interesting things server-side using (now no longer trusted, ironically) Intel SGX/TXT, like secure contact matching or short PIN/password security stretching for account recovery.
Android Protected Confirmation [1] is also trusted computing at its core, but can be used to increase security for users (although I could also see that usage encouraging a device-vendor monoculture, since every app vendor needs to select a set of trusted device manufacturers).
> snake oil because you can't rely on something for actual security if a break of any device by anyone anywhere could forge attestations
Attestation keys are usually per-device, so if indeed only one device gets compromised at great attacker expense, it's usually possible for a scheme to recover. If all devices just systematically leak their keys as has certainly happened in the past, that won't help, of course.
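As a rough illustration of what that recovery looks like on the verifier's side (the names below are made up, not any particular vendor's API): because each device carries its own attestation key, the verifier only has to blacklist the one that leaked.

```python
# Illustrative only: per-device attestation keys mean revoking one
# compromised key doesn't invalidate the rest of the fleet.
revoked_serials = {"0x1F2E3D4C"}  # serials published after a known break

def accept_attestation(device_cert_serial: str, signature_valid: bool) -> bool:
    if device_cert_serial in revoked_serials:
        return False  # this specific device is known-compromised
    return signature_valid
```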
> e.g. Signal used to do interesting things server-side using (now no longer trusted, ironically) Intel SGX/TXT
Because this is the "snake oil" prong of its failure -- and why it's no longer trusted.
> Android Protected Confirmation
This could be implemented without any vendor keys. You associate the user's own key with the user's account.
> Attestation keys are usually per-device, so if indeed only one device gets compromised at great attacker expense, it's usually possible for a scheme to recover.
That's assuming it matters at that point. The attacker doesn't care if you revoke the keys after they steal your money.
And once they extract a key from one device, they have a known working procedure to get more. For non-software extraction, most of the expense is the equipment, which they'd still have from the first one.
> If all devices just systematically leak their keys as has certainly happened in the past, that won't help, of course.
And is likely to happen in the future, so any design that makes the assumption that it will not happen is clearly flawed.
> This could be implemented without any vendor keys. You associate the user's own key with the user's account.
But how would you bootstrap this? How do you make sure the initial key was actually created in the secure execution environment and not created by MITM malware running on the main application processor?
If this was that easy, FIDO authenticators wouldn't need attestation either.
> That's assuming it matters at that point. The attacker doesn't care if you revoke the keys after they steal your money.
If attacking a single device costs a few million dollars, it definitely does matter, since you'd need to expend that effort every single time (and you'd be racing against time, since the legitimate owner of the device can always report it as stolen and have it revoked for transaction confirmation, transfer their funds to another wallet, etc.)
> And is likely to happen in the future, so any design that makes the assumption that it will not happen is clearly flawed.
How does some implementations falling apart imply that all possible implementations are insecure? Smartcards are an application of trusted computing too, and there have been no successful breaches there to my knowledge. It probably helps that their manufacturers specialize in security, unlike Intel, which does general-purpose computing and only occasionally dabbles in security.
> But how would you bootstrap this? How do you make sure the initial key was actually created in the secure execution environment and not created by MITM malware running on the main application processor?
The device comes with no keys in it, but includes firmware that will generate a new key, put it in the HSM, and provide the corresponding public key. The public key is then registered with the service using whatever means the service already uses to authenticate the user rather than the device, because what you're doing here is assigning the key in this device to this user -- so it's the user, not the device, you need to authenticate.
But now, if the user wants to, they can use a different kind of device.
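A rough sketch of that enrollment flow (the function and names are made up for illustration), where the server binds the key to the authenticated user rather than to a vendor certificate chain:

```python
# Sketch: enrolling a user-generated public key without vendor attestation.
# The server binds the key to the *user* (via an already-authenticated
# session), not to a particular make of device.
from cryptography.hazmat.primitives.serialization import load_pem_public_key

registered_keys = {}  # user_id -> public key object

def enroll_key(user_id: str, session_is_authenticated: bool, public_key_pem: bytes) -> None:
    """Called over a channel authenticated as the user (password + 2FA, in person, etc.)."""
    if not session_is_authenticated:
        raise PermissionError("authenticate the user first")
    # No check of *what* generated the key: it may live in an HSM, a TPM,
    # a smartcard, or a plain file -- that's the user's choice.
    registered_keys[user_id] = load_pem_public_key(public_key_pem)
```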
> If this was that easy, FIDO authenticators wouldn't need attestation either.
They shouldn't.
> If attacking a single device costs a few million dollars, it definitely does matter, since you'd need to expend that effort every single time (and you'd be racing against time, since the legitimate owner of the device can always report it as stolen and have it revoked for transaction confirmation, transfer their funds to another wallet, etc.)
You're talking about the HSM case, where the user's own key is in the device and you need to break that specific device. In that case you don't need to prove that the device is a specific kind of device from a specific manufacturer (remote attestation); you need to prove that it's that user's device, regardless of what kind it is (the user's key in the HSM).
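In other words, the check the service actually needs is a challenge-response against the key it enrolled for that user, not a vendor certificate chain. Continuing the illustrative enrollment sketch above:

```python
# Sketch: verify "this is the key we enrolled for this user",
# not "this is a genuine device from manufacturer X".
import os
from cryptography.exceptions import InvalidSignature

def make_challenge() -> bytes:
    return os.urandom(32)  # fresh nonce per transaction

def verify_response(user_id: str, challenge: bytes, signature: bytes) -> bool:
    public_key = registered_keys[user_id]  # enrolled earlier, see above
    try:
        # Assuming an Ed25519 key as in the earlier sketch; verify() raises on failure.
        public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False
```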
> How does some implementations falling apart imply that all possible implementations are insecure?
Because for remote attestation the attacker can choose which device they use for the attack, so if there is any insecure implementation the attacker can use that one.
And if you deploy a system relying on it and then a vulnerability is discovered in millions of devices, you're screwed: either you have a security hole you can't close, or you have to permanently disable all of those devices and deal with millions of angry users. But that is what has historically happened, so relying on it not happening again is foolish.
> Smartcards are an application of trusted computing too, and there have been no successful breaches there to my knowledge.
Smartcards don't require any kind of third-party central authority. You know this is Bob's card because Bob was standing there holding it in his hand while you scanned it and assigned it to Bob in the system. Bob could have made his own card and generated his own key, and it would work just as well. It's a completely different thing from remote attestation.