Really, different trust domains cannot seriously benefit from a common cache: their datasets are by definition disjoint.
So it would be safest to execute them on separate CPUs not sharing a common cache, e.g. pinning them to different CPU sockets on a multi-socket machine, or to different physical machines altogether.
This may still be faster than running on old ARMs.
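For concreteness, here's a minimal sketch (Linux-only, my own illustration rather than anything from the thread) of pinning the current process to the cores of one socket, reading the socket layout from sysfs and applying it with os.sched_setaffinity; the other trust domain would be pinned to the other socket the same way:

```python
import os

def cores_of_socket(socket_id):
    """Collect the CPU ids whose sysfs topology reports the given physical package (socket)."""
    cores = set()
    for entry in os.listdir("/sys/devices/system/cpu"):
        if not (entry.startswith("cpu") and entry[3:].isdigit()):
            continue
        path = f"/sys/devices/system/cpu/{entry}/topology/physical_package_id"
        try:
            with open(path) as f:
                if int(f.read()) == socket_id:
                    cores.add(int(entry[3:]))
        except FileNotFoundError:
            pass  # offline CPUs have no topology directory
    return cores

# Pin this process (pid 0 = self) to socket 0; the other trust domain gets socket 1.
os.sched_setaffinity(0, cores_of_socket(0))
```

The same effect can be had from the shell with taskset or numactl; once the affinity masks don't overlap across sockets, the two domains never share an L1/L2 (and usually not an L3) cache.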
I wonder if dedicated cloud boxes, where all the VMs you spin up on a particular physical machine belong only to your account, will become available from major cloud providers any time soon. In such a setup you don't need all the ZombieLoad / Meltdown / Spectre mitigations if you trust (have written) all the code you're running, so you can run faster.
Scaleway is one such provider. They call it 'physicalisation' rather than virtualization.
I should add that AWS has "dedicated instances" which are exclusive to one customer. They are more expensive than standard instances but they are used by e.g. the better financial services companies for handling customer data.
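Whichever provider you pick, it's easy to see which of those mitigations your kernel is actually applying before deciding whether a dedicated box buys you anything; a small sketch reading the standard Linux sysfs files (the exact set of files varies by kernel version):

```python
from pathlib import Path

# Each file under this directory reports one vulnerability and the mitigation in effect,
# e.g. "Mitigation: PTI" or "Not affected".
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name:20s} {entry.read_text().strip()}")
```

Booting with mitigations=off turns them off wholesale, which is only sane in exactly the trusted-code-only setup described above.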
- Same OS, server: you likely control all the processes in it anyway, no untrusted code.
- Different OSes in different VMs, server: they likely run different versions of Linux kernel, libc, and shared libraries anyway. They don't share the page caches for code they load from disk.
- Browser with multiple tabs, consumer device: they might well share common JS library code at the browser-cache level. The untrusted code must not be native anyway, and won't load native shared libraries from disk.
- Running untrusted native code on a consumer device: well, all bets are off if you run it under your own account; loading another copy of msvcrt.dll code is inconsequential compared to the threat of it being a trojan or a virus. If you fire up a VM, see above.
There's also "Same OS, operating-system-level virtualization", like Docker; it's the less expensive default at https://www.ramnode.com/vps.php, for example. I'm not at all familiar with this technology, but from scanning Wikipedia: if you used Docker, would the libcontainer library be shared between container instances?
Likely it would be. But really Docker is more about convenience of deployment and (much) less about security. I would not run seriously untrusted code in merely a (Docker) container; I don't know much about the isolation guarantees of OpenVZ.
In any case, containers share the OS kernel, the OS page cache, etc. This can be beneficial even for shared hosting, as a way to offer a wide range of preinstalled software ro-mounted into each container's file tree. The code pages of software started this way would likely also be shared.
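To check that sharing in practice, here's a rough sketch (mine, with placeholder PIDs) that compares the file-backed executable mappings of two processes via /proc/<pid>/maps; mappings with the same device:inode pair are backed by the same page-cache pages, so the library code is resident in RAM only once:

```python
def mapped_exec_files(pid):
    """Return (device, inode, path) for each file-backed executable mapping of a process."""
    mappings = set()
    with open(f"/proc/{pid}/maps") as f:
        for line in f:
            fields = line.split()
            # address perms offset dev inode path -- keep executable, file-backed entries only
            if len(fields) >= 6 and "x" in fields[1] and fields[5].startswith("/"):
                mappings.add((fields[3], fields[4], fields[5]))
    return mappings

pid_a, pid_b = 1234, 5678  # placeholder PIDs of processes in two different containers
for dev, inode, path in sorted(mapped_exec_files(pid_a) & mapped_exec_files(pid_b)):
    print(dev, inode, path)
```

(You need to be root or the owner of both processes to read their maps; and two containers built from different image layers will usually see different inodes for "the same" library, in which case nothing is shared.)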