1. Decide when (and when not!) to respond to requests by examining incoming headers, paying special attention to the Origin header on the one hand, and various Sec-Fetch- prefixed headers on the other, as described in [resource-isolation-policy].
2. Restrict attackers' ability to load your data as a subresource by setting a cross-origin resource policy (CORP) of same-origin (opening up to same-site or cross-origin only when necessary).
3. Restrict attackers' ability to frame your data as a document by opting into framing protections via X-Frame-Options: SAMEORIGIN or CSP's more granular frame-ancestors directive (frame-ancestors 'self' https://trusted.embedder, for example).
4. Restrict attackers' ability to obtain a handle to your window by setting a cross-origin opener policy (COOP). In the best case, you can default to a restrictive same-origin value, opening up to same-origin-allow-popups or unsafe-none only if necessary.
5. Prevent MIME-type confusion attacks and increase the robustness of passive defenses like cross-origin read blocking (CORB) / opaque response blocking ([ORB]) by setting correct Content-Type headers, and globally asserting X-Content-Type-Options: nosniff.
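The five steps above can be sketched as framework-agnostic server-side Python. This is a minimal illustration, not code from the article: the function name `allow_request` and the `SECURITY_HEADERS` dict are my own, and the `Sec-Fetch-*` logic loosely follows the resource isolation pattern the list references (step 1), with "same-site" subtleties glossed over.

```python
def allow_request(method: str, headers: dict) -> bool:
    """Step 1 (resource isolation, simplified): reject cross-site
    requests unless they are plain top-level navigations."""
    site = headers.get("Sec-Fetch-Site")
    # Older browsers don't send Sec-Fetch-* at all (site is None);
    # same-origin, same-site, and user-initiated ("none") requests pass.
    if site in (None, "same-origin", "same-site", "none"):
        return True
    # Allow cross-site top-level GET navigations so ordinary links still work,
    # but not <object>/<embed> loads.
    if (headers.get("Sec-Fetch-Mode") == "navigate"
            and method == "GET"
            and headers.get("Sec-Fetch-Dest") not in ("object", "embed")):
        return True
    return False

# Steps 2-5: restrictive defaults, loosened per-resource only when needed.
SECURITY_HEADERS = {
    "Cross-Origin-Resource-Policy": "same-origin",        # step 2 (CORP)
    "X-Frame-Options": "SAMEORIGIN",                      # step 3
    "Content-Security-Policy": "frame-ancestors 'self'",  # step 3 (CSP variant)
    "Cross-Origin-Opener-Policy": "same-origin",          # step 4 (COOP)
    "X-Content-Type-Options": "nosniff",                  # step 5
}

print(allow_request("GET", {"Sec-Fetch-Site": "same-origin"}))   # True
print(allow_request("POST", {"Sec-Fetch-Site": "cross-site",
                             "Sec-Fetch-Mode": "cors"}))         # False
```

A real deployment would attach `SECURITY_HEADERS` to every response and return a 403 when `allow_request` fails, logging rejections first to catch false positives.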
This is... annoying. This sounds like Chromium team throwing up their hands, "We can't keep attackers out, so don't be giving out any data attackers can use." Which is basically going to mean, "don't give out data". This is going to cause roadblocks that frustrate legitimate users.
Cross-origin policies are a clusterfuck of bad design with defaults that assume things that are frequently not true. They seem more like a stalking horse for enforcing DRM rules than true security features.
I mean, what's really going to happen? A legitimate user, using a browser app to try to link to a resource, can't access the resource because of CORS, then blames the app for being broken. An attacker, finding they can't access a resource because of CORS, writes a script to spoof the HTTP headers and gets access regardless.
Maybe I'm wrong. Someone explain it to me. I miss the weird web, CORS broke the weird web, but I've yet to see any explanation for how it actually prevents malicious behavior.
> This is... annoying. This sounds like Chromium team throwing up their hands, "We can't keep attackers out, so don't be giving out any data attackers can use."
It's actually the correct approach to realize that side-channels on shared processors are inevitable, and it's much easier to reason through the model of "anything your process can access, a malicious script running in that process can access too," instead of "well, maybe we'll patch HighResTimer to give a reduced precision, and change SharedArrayBuffer a little, to prevent the one known Spectre-like attack we've seen, and keep spaghetti-and-meatballing patches when more attacks surface."
> A legitimate user, using a browser app to try to link to a resource, can't access the resource because of CORS, then blames the app for being broken.
Yes, if your web app can't load a resource for a legitimate user, it is broken.
> An attacker, finding they can't access a resource because of CORS, writes a script to spoof the HTTP headers and gets access regardless.
CORS rules are enforced by the browser, not the server, so that isn't a possible attack.
> Maybe I'm wrong. Someone explain it to me. I miss the weird web, CORS broke the weird web
You'll have to be more specific. What's broken?
> but I've yet to see any explanation for how it actually prevents malicious behavior.
Things like CORS and X-Frame-Options prevent, for example, a malicious site from embedding your bank's transfer page beneath an innocent-looking button and clickjacking you into authorizing a transfer with your logged-in session. There's a lot more information about these measures readily available on MDN etc.
Yes, but his point is that when you write a server (any server), you still have to consider non-browser connections on top of browser connections, thus any security policy that handles the former can subsume the latter.
If you don't have any data that needs to be shared only with some users, then none of this advice applies.
But let's say you have https://social.example/moron4hire/photos/1234 which should only be visible to you and your contacts. If an attacker sends a manual HTTPS request, outside the browser, they won't have your credentials and so won't be able to view it -- good!
Now let's say https://evil.example puts in <img src="https://social.example/moron4hire/photos/1234">. This succeeds, because the browser automatically sends credentials, but it wouldn't normally allow evil.example to read the image contents. Unfortunately, Spectre etc. changes this: since the image is getting loaded into an address space controlled by JS from evil.example, the contents can probably be read.
Can someone knowledgeable on the topic comment if the general premise is sufficiently plausible that an attacker can access all browser memory in one process? Have Spectre-class attacks been observed in the wild?
Is this all solved by just spawning a new process for each browser tab?
If so, do it, and spawn a single process for the browser window UI. Communicate between the browser window and tab processes with whatever flavor of IPC you desire, and secure it with whatever security layer you trust (TLS 1.3+ for example). We've basically reinvented the operating system desktop, which isn't that far off from how we treat browsers and the web today anyways...
From reading the threat model though, I think the issue they're bringing up is that even browsers that use per-tab processes (Chrome, Firefox, etc.) don't ubiquitously use per-iframe processes within each tab. And there's a non-trivial amount of work needed to get to a point where you can rely on that.
As an aside, you don't have to protect local IPC with TLS. Processes that can tap IPC generally have the tools to do brain surgery on the processes and pull the secrets out of their memory anyway. Your time is better spent understanding the local IPC security mechanisms that the kernel will enforce.
Yeah it's more just defense in depth. The browser is already going to have a super optimized and battle-hardened TLS engine, so might as well use that IMHO. Today we can reasonably assume process IPC is secure... but who knows what exploits will surface tomorrow.
It also opens up an interesting possibility where the tab process isn't on the same machine. That could be handy for power users, or even for interesting things like browser testing apps on different, remote devices.
I'm all about defense in depth, but for speculative security stuff like that I like to see a well thought out threat model where you actually protected against anything. Otherwise you have unbounded amounts of work (both developer human/cognitive and compute time) dedicated to security that isn't actually helping your users in a meaningful way.
The issue with not trusting that the kernel can protect IPC like it's supposed to is that so many ways to get root are ultimately protected by that same IPC protection scheme. If you don't trust the kernel, in the general case you've already lost with any scheme like TLS that ultimately relies on the kernel provided primitives to do stuff like hide the secrets in the first place.
You're also opening a new can of worms around bootstrapping local certificate distribution to make TLS meaningful, which brings you right back to the original problems.
Yep that's a great point, and really the security of TLS isn't so much algorithms and bits over the wire but all of the certs, processes, machinations etc. around generating and trusting them. There definitely would be a lot of headaches around building that chain of trust with dozens and hundreds of tab processes.
Meh, my three year old phone has 8GB of DDR4 RAM and 8x cores clocked at over 2 GHz. In the time it takes my brain to send signals to move my muscles to move my mouse to a different tab, my computer has lived multiple lifetimes of operations. For handling unbelievable load at scale, sure, processes will kill scaling. But for handling a user clicking and touching their inputs across dozens or hundreds of tabs... I doubt anyone will notice on today's machines.
Processes use RAM, and a lot of people use a lot of tabs. This isn't to say that each website couldn't have a separate process (and this is true if you browse different websites already), but memory usage isn't something you can just freely ignore even on today's hardware.
Wait, what? You mean, you're claiming that browsers are aggressively written to minimize memory usage for a given unit of functionality? Because that would be a hard case to make.
I just opened example.com (a tiny, static site) in Chrome. Chrome's task manager says that tab is using 16 MB. You're saying that the minimal overhead that a new process would add is significant compared to that? Because I see numerous processes in the OS task manager right now with a significantly smaller memory footprint.
If you just dropped the /s, and I didn't get the joke, consider me whooshed :-p
I am a bit confused, the only thing I've claimed is that making processes isn't necessarily something you can do because they use up non-negligible amounts of RAM. Sure, for a small static site this amount may not be a lot, but for most websites a hundred MB or more is the norm per tab. When browsers first started going multi-process memory usage ballooned because of the overhead, and while improvements are continually being made keeping everything in the same process would still be of significant benefit to memory usage.
Yes, a tab might use 100s of MB of RAM. The question is how much more a tab would use by virtue of being a separate process vs not. Do you have a figure for this, and a reason spawning a process inherently adds a significant amount as a percentage of what browsers are using anyway?