Which is what the provider themselves have, by definition. The people who run these services are literally sitting next to the box day in and day out... this isn't "provably" anything. You can trust them not to take advantage of the fact that they own the hardware, and you can even claim it makes it ever so slightly harder for them to do so, but this isn't something where the word "provably" is anything other than a lie.
yeah, for a moment I was reading it as being a homomorphic encryption type setup, which I think is the only case where you can say 'provably private'.
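For anyone who hasn't run into it: with homomorphic encryption the server computes directly on ciphertexts, so the privacy guarantee comes from the math rather than from trusting whoever owns the box. A toy Paillier sketch in Python (additively homomorphic; the parameters are comically small and purely illustrative):

    # Toy Paillier cryptosystem: additively homomorphic encryption.
    # Tiny demo parameters -- NOT secure; real keys use ~1024-bit primes.
    import math, random

    p, q = 17, 19
    n = p * q
    n2 = n * n
    g = n + 1                          # standard simplification for the generator
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)               # modular inverse of lambda mod n

    def encrypt(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:     # r must be coprime to n
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, lam, n2) - 1) // n * mu) % n

    a, b = encrypt(5), encrypt(7)
    # Multiplying ciphertexts adds the plaintexts: the server computes a
    # sum over data it cannot read.
    assert decrypt((a * b) % n2) == 12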
It's better than nothing, I guess...
But if you placed the server at the NSA, and said "there is something on here that you really want, it's currently powered on and connected to the network, and the user is accessing it via ssh", it seems relatively straightforward for them to intercept and access it.
If you trust the provider, then this architecture doesn't buy you much. If you don't, then at a minimum execution should happen inside a confidential-computing system, so that even soldering probes onto the hardware wouldn't get you to the data.
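For concreteness, the property you want from a confidential-computing system is that DRAM only ever holds ciphertext under a key that never leaves the CPU die. A rough sketch of that idea in Python, using the pyca/cryptography package (a toy model, not how any real TEE's memory controller is implemented):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    cpu_key = AESGCM.generate_key(bit_length=128)  # generated on-die, never exported
    memory_engine = AESGCM(cpu_key)

    def write_to_dram(plaintext: bytes) -> bytes:
        # What actually lands on the memory bus is ciphertext; a probe
        # soldered onto the traces captures only this.
        nonce = os.urandom(12)
        return nonce + memory_engine.encrypt(nonce, plaintext, None)

    def read_from_dram(stored: bytes) -> bytes:
        # Only the CPU holding cpu_key can turn it back into plaintext.
        return memory_engine.decrypt(stored[:12], stored[12:], None)

    blob = write_to_dram(b"user secrets")
    assert b"user secrets" not in blob             # the bus sees ciphertext
    assert read_from_dram(blob) == b"user secrets"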
1) If you were GCP (as they are the attacker in this scenario), you'd attach the analyzer to ANY (!) ONE (!) server and then migrate the workload of whichever user you wanted to snoop on (or were required to snoop on by the FBI) onto your evil server. Like, you are clearly trying to say this makes the attack harder (though even if that were true, it wouldn't make anything at all "provable")... but if you support migration, you've actually made it EASIER for you (aka, GCP) to abuse your privileged position.
2) These attacks are actually worse than what I'm pretty sure you are assuming (which is why I started my response where I did): you really only need one hacked server, and from there you can simulate working servers on other hardware that isn't hacked, by stealing either a key that was attested or the attestation key itself. At that point you often don't even need the hacked server anymore; see the sketch below.
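To make the key-theft point concrete, here's a toy attestation protocol in Python using the pyca/cryptography package (my own simplification, not any vendor's actual quote format; make_quote and verify_quote are made-up names). Once the per-device key leaks from one box, a "quote" forged on ordinary hardware verifies just like a real one:

    # Toy attestation: a "quote" is a signature over (measurement, nonce)
    # with a per-device key whose public half the vendor vouches for.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()  # supposed to never leave the chip
    vendor_pub = device_key.public_key()       # what remote verifiers trust

    def make_quote(key, measurement: bytes, nonce: bytes) -> bytes:
        return key.sign(measurement + nonce)

    def verify_quote(pub, measurement: bytes, nonce: bytes, quote: bytes) -> bool:
        try:
            pub.verify(quote, measurement + nonce)
            return True
        except InvalidSignature:
            return False

    # Honest server attests its trusted software stack:
    assert verify_quote(vendor_pub, b"trusted-os-hash", b"nonce-1",
                        make_quote(device_key, b"trusted-os-hash", b"nonce-1"))

    # An attacker who extracted device_key from ONE hacked box can now run
    # ANY software on ordinary hardware and still "attest" the trusted
    # measurement -- the verifier cannot tell the difference:
    assert verify_quote(vendor_pub, b"trusted-os-hash", b"nonce-2",
                        make_quote(device_key, b"trusted-os-hash", b"nonce-2"))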
Apple’s PCC has an explicit design goal to mitigate your point 1, by designing the protocol such that the load balancer has to decide which physical server to route a request to without knowing which user made the request. If you compromise a single physical server, you will get a random sample of requests, but you can’t target any particular user, not even if you also compromise the load balancer. At least that’s the theory; see [1] under the heading “non-targetability”. I have no idea whether OpenPCC replicates this, but I have to imagine they would. The main issue is that you need large scale in order for the “random sample of requests” limitation to actually protect anyone.
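A toy model of that non-targetability property (my own simplification of the idea, not Apple's actual protocol): requests reach the balancer through an oblivious relay as opaque blobs, so there is nothing user-specific to route on.

    import random

    NODES = ["node-A", "node-B", "node-C", "node-evil"]  # one is compromised

    def route(encapsulated_request: bytes) -> str:
        # OHTTP-style encapsulation: the balancer sees only an opaque blob,
        # so any routing decision is necessarily user-blind.
        return random.choice(NODES)

    # Even a balancer colluding with "node-evil" can't steer a *specific*
    # user's traffic there; the compromised node gets a random sample:
    hits = sum(route(b"opaque-blob") == "node-evil" for _ in range(10_000))
    print(f"compromised node saw ~{100 * hits / 10_000:.0f}% of traffic")

With four nodes the compromised one sees ~25% of requests, which is exactly the last point above: the "random sample" limitation only protects users when the fleet (and user base) is large.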
There are many things one can do to mitigate the (weaker) point 1, including simply not supporting any kind of migration at all. I only bothered to go there to demonstrate that the ability to live migrate is a liability here, not a benefit.
> targeting users should require a wide attack that’s likely to be detected
Regardless, Apple's attacker here doesn't sound like Apple: the "wide attack that's likely to be detected" would have to be detected by Apple itself. We even seemingly have to trust Apple that this magic hardware has the properties they claim it does.
This is way worse than most of these schemes: if I run one of them on Intel hardware, you are inherently dealing with multiple parties (me and Intel), so subverting it requires us to collude. With Apple, a single party controls both the service and the hardware.
Trusting Apple not to be lying about the entire scheme (so that they could see the data they claim not to be able to see) is thereby doing the heavy lifting.
[1] https://news.ycombinator.com/item?id=45746753