> But how would you bootstrap this? How do you make sure the initial key was actually created in the secure execution environment and not created by MITM malware running on the main application processor?
The device comes with no keys in it, but includes firmware that will generate a new key, put it in the HSM and provide the corresponding public key. The public key is then authenticated to the service using whatever means is used to authenticate the user rather than the device, because what you're doing here is assigning the key in this device to this user; it's the user, not the device, you need to authenticate.
But now if the user wants to they can use a different kind of device.
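A minimal sketch of that enrollment flow, in Python. Everything here is a stand-in: `MockHSM`, `Service`, and the names are hypothetical, and the hash-based "public key" only simulates asymmetric key derivation (a real device would generate an asymmetric keypair inside the secure element and never expose the private half):

```python
import hashlib
import secrets


class MockHSM:
    """Stand-in for a hardware security module. In a real device the
    private key is generated inside the secure element and never leaves it."""

    def __init__(self):
        self._private_key = None

    def generate_key(self) -> str:
        self._private_key = secrets.token_bytes(32)
        # Hash as a stand-in for deriving a public key from the private key.
        return hashlib.sha256(self._private_key).hexdigest()


class Service:
    """Stand-in for the server side: it binds keys to users, not devices."""

    def __init__(self):
        self.keys_by_user = {}

    def enroll(self, authenticated_user: str, public_key: str):
        # The binding is user -> key. Whatever authenticated the user
        # (password, email link, in-person check) is what vouches for the
        # key; no device attestation is involved, so any device works.
        self.keys_by_user[authenticated_user] = public_key


# Enrollment: the user authenticates however they normally would, then the
# device's freshly generated public key is registered to that user.
hsm = MockHSM()
service = Service()
pubkey = hsm.generate_key()
service.enroll("alice", pubkey)
assert service.keys_by_user["alice"] == pubkey
```

The point the sketch illustrates: the service only ever learns "this public key belongs to alice", never "this key lives in an approved device", which is why the user is free to swap in a different kind of device later.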
> If this was that easy, FIDO authenticators wouldn't need attestation either.
They shouldn't.
> If attacking a single device costs a few millions, it definitely does matter, since you'd need to expend that effort every single time (and you'd be racing against time, since the legitimate owner of the device can always report it as stolen and have it revoked for transaction confirmation, transfer their funds to another wallet etc.)
You're talking about the HSM case, where the user's own key is in the device and you need to break that specific device. In that case you don't need to prove that the device is a specific kind of device from a specific manufacturer (remote attestation); you need to prove that it is that user's device, regardless of what kind it is (user's key in the HSM).
> How does some implementations falling apart imply all possible implementations being insecure?
Because for remote attestation the attacker can choose which device they use for the attack, so if there is any insecure implementation the attacker can use that one.
And if you deploy a system relying on it and a vulnerability is then discovered in millions of devices, you're screwed: you now have a security hole you can't close, or you have to permanently disable all of those devices and face millions of angry users. This is what has historically happened, so relying on it not happening again is foolish.
> Smartcards are an application of trusted computing too, and there have been no successful breaches there to my knowledge.
Smartcards don't require any kind of third-party central authority. You know this is Bob's card because Bob was standing there holding it in his hand while you scanned it and assigned it to Bob in the system. Bob could have made his own card and generated his own key and it would work just as well. It's a completely different thing from remote attestation.