The disclaimer in that issue seems pretty clear -- yes, this can be bypassed; it's not production ready; it's a proof of concept. They explain that they could attain 'first runner' status in Chrome, but that ultimately the protection belongs in the browser itself.
What's the use case for this? Is it for crypto-sensitive code or password matching that's vulnerable to timing attacks and such? Or is it for avoiding things like Spectre in a more general sense?
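(For context, the kind of primitive these policies are aimed at: even with performance.now() coarsened, a script can build its own high-resolution clock, e.g. with a counting thread over shared memory. Rough sketch below -- assuming SharedArrayBuffer is available at all, which in current Chrome means the page has to be cross-origin isolated.)

    // Counting-thread timer: a worker spins, incrementing a shared counter,
    // and the main thread reads that counter as a timestamp.
    const sab = new SharedArrayBuffer(4);
    const ticks = new Uint32Array(sab);
    const src = `onmessage = (e) => {
      const c = new Uint32Array(e.data);
      for (;;) Atomics.add(c, 0, 1);   // spin forever
    };`;
    const worker = new Worker(
      URL.createObjectURL(new Blob([src], { type: 'text/javascript' }))
    );
    worker.postMessage(sab);

    // Elapsed "ticks" around an operation; resolution is limited only by
    // how fast the worker can spin, not by any timer-coarsening policy.
    function measure(fn) {
      const t0 = Atomics.load(ticks, 0);
      fn();
      return Atomics.load(ticks, 0) - t0;
    }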
https://github.com/IAIK/ChromeZero
Note this closed issue from karthikbhargavan pointing out some of the ways a malicious page could get access to unprotected javascript features:
https://github.com/IAIK/ChromeZero/issues/2
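For anyone who hasn't looked at how this class of extension works: the protection boils down to a script injected before any page script runs, overwriting or removing sensitive APIs on the page's globals. A rough illustration of the style (made-up policy values, not ChromeZero's actual code; note the script has to end up in the page's main world, since Chrome content scripts normally run in an isolated world):

    // Injected at document_start, before any page script runs.
    (() => {
      const realNow = Performance.prototype.now;
      Performance.prototype.now = function now() {
        // Coarsen timestamps to 0.1 ms so they're less useful for
        // cache/timing measurements (the rounding here is illustrative).
        return Math.round(realNow.call(this) * 10) / 10;
      };
      // A stricter policy might remove an API outright:
      delete window.SharedArrayBuffer;   // harmless no-op if it's already absent
    })();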
misc0110 suggests those are implementation details, but I'm a little skeptical -- I bet it's pretty dang hard for a Chrome extension to close off every way to get access to a given function in javascript.
So I think it's probably best to look at this as a user interface testbed -- basically a test of how annoying or effective it would be if browsers asked users to opt into these things, and which set of policies would be least annoying for the maximum protection. I suppose it also sets a ceiling on the performance impact, but it's not obvious the impact would be the same if the same rules were set at the browser level.
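For instance (a made-up sketch; I haven't checked whether this exact route is among the ones raised in that issue), even if the extension patches performance.now on the top-level window, a page can often recover an untouched copy from a freshly created realm, unless the extension also wins the race to patch every new frame before any script can reach it:

    // Create a same-origin about:blank iframe; its contentWindow is a
    // brand-new realm with pristine built-ins.
    const f = document.createElement('iframe');
    f.style.display = 'none';
    document.documentElement.appendChild(f);
    const freshNow = f.contentWindow.performance.now.bind(f.contentWindow.performance);
    console.log(freshNow());   // full-resolution timestamp, wrapper bypassed
                               // (assuming the extension didn't patch this frame in time)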