Sorry for being naive. Are these kinds of CPU security vulnerabilities new? Why is it that in the past 20 years we had close to zero in the news (at least I wasn't aware of any), and ever since Spectre and Meltdown we get something new every few months?
And as far as I am aware they are mostly Intel-only. Why? And why not AMD? Did something go wrong in Intel's design process? And yet all the cloud vendors are still buying Intel and giving very little business to AMD.
Running many instances of various untrusted code on the same server is "new": it came with the cloud infrastructure.
Running many instances of various untrusted code on the same client machine is "new": it came with web apps, and with mobile apps.
Until a few years ago, it was sort of a non-issue, because to exploit such a vulnerability one would need to write a virus or a trojan, and with that approach there are many easier ways of escalating privileges.
Something like "cloud" existed likely on IBM mainframes under OS/VM [1] but System/370-compatible CPUs likely lacked all these exploitable speculative execution features.
Time sharing was very big in the 1970s, and non-OS/VM methods of sharing mainframes for batch processing were also big at times I'm less sure of.
Inviting complete randoms to routinely run untrusted code in your own security domain, as we do with browsers, that's "new". And thus the popularity of NoScript and uMatrix.
Indeed! Though time-sharing was more like a terminal server, or shared hosting, while OS/VM was more like a modern VM host.
It's interesting, though, that cross-process data exfiltration based on speculative execution was never pulled off with any success in the shared-hosting environment of the 1990s and early 2000s. I suppose it has something to do with the use of non-JITted interpreted languages, like PHP, Perl, or SQL, on such hosting; you could not run an arbitrary native executable the way you can in the cloud.
Another factor is that although speculative execution was first implemented in the 1950s [1], it appeared either in mainframes or in RISC machines, and neither was used in the Intel-dominated shared-hosting environment.
> It's interesting, though, that cross-process data exfiltration based on speculative execution was never pulled off with any success in the shared-hosting environment of the 1990s and early 2000s.
According to several of the researchers who found Meltdown and/or Spectre, they'd always assumed Intel et al. were too careful to let this happen, at least at useful data rates. But when they finally looked, for reasons I forget, it was Katie bar the door!
A lot of reasons - one, we only recently (in academic research time) started using single servers to host services from multiple customers, so the value of these sorts of attacks only recently became apparent.
Second, as I understand it, Spectre and Meltdown really started this whole parade because prior to those vulnerabilities, speculative execution attacks were something only academics ever talked about - everyone assumed it would be too difficult to pull off in the real world. When that received wisdom was proved wrong, it probably opened the floodgates for researchers - both in terms of intellectual interest and money.
Also, re: why Intel and not AMD... I think Intel is probably a higher-dollar target due to their dominance in the server market, but also probably because they have been neglecting QC for years... see, e.g., http://danluu.com/cpu-bugs/
Dan Luu didn't note that Meltdown goes all the way back to their first out-of-order speculative execution design, the Pentium Pro in 1995. I note that ARM, and both of IBM's architectures, POWER and mainframe, also had Meltdown issues, and everyone including AMD "enjoys" Spectre bugs, so named because they'll be haunting us for a very long time.
I think it is definitely worth reflecting on the history. It has been known for over 20 years that sharing pretty much anything creates side channels, but nobody knew how to reliably exploit them, and it was assumed they might never be exploitable in practice. In recent years there has been massive progress in practical data extraction using side channels.
you sure about that link? he's talking about a core that didn't have SMT and is ranting, in general, about errata existing and wildly misrepresenting their impact
never mind that most errata are conditional until the ucode patch load, but that particular rant has nothing to do with HT
It has always been known how to exploit them, but doing so used to be slower, and there used to be fewer opportunities for attack. OS kernels used to have Big Locks (AFAIK, OpenBSD still does), which significantly deterred programs from messing with kernel code and CPU caches.
Things have changed a lot since then: OS kernels became faster by eliminating a lot of unnecessary (?) cross-process overhead; browser makers made a number of potentially problematic decisions ("let's allow JavaScript to create CPU threads, what could possibly go wrong?"); Linux kernel developers made a few potentially problematic decisions of their own ("let's allow unprivileged processes to invoke arbitrary BPF bytecode; that worked for Java, so what could possibly go wrong?").
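To unpack the "CPU threads" jab: shared-memory threads let untrusted code build its own high-resolution clock, which is the main ingredient of any cache-timing attack (and part of why browsers disabled SharedArrayBuffer right after Spectre went public). Here's a rough sketch of the counter-thread trick in C; names are made up for illustration, and a real attack would average many measurements:

    /* A do-it-yourself high-resolution timer: one thread spins on a
     * shared counter, another reads it before and after a memory access.
     * Build with: cc timer.c -pthread */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    static atomic_uint_fast64_t ticks;   /* the homemade clock */
    static atomic_int running = 1;

    static void *clock_thread(void *arg) {
        (void)arg;
        while (atomic_load_explicit(&running, memory_order_relaxed))
            atomic_fetch_add_explicit(&ticks, 1, memory_order_relaxed);
        return NULL;
    }

    /* Time one read of *p in "ticks". Cached reads take visibly fewer
     * ticks than reads that miss to DRAM; that gap is the side channel. */
    static uint64_t time_read(volatile const char *p) {
        uint64_t t0 = atomic_load_explicit(&ticks, memory_order_relaxed);
        (void)*p;
        return atomic_load_explicit(&ticks, memory_order_relaxed) - t0;
    }

    int main(void) {
        static char buf[4096];
        pthread_t t;
        pthread_create(&t, NULL, clock_thread, NULL);

        (void)*(volatile char *)buf;   /* warm the cache line */
        printf("cached read: %llu ticks\n",
               (unsigned long long)time_read(buf));

        atomic_store(&running, 0);
        pthread_join(t, NULL);
        return 0;
    }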
A lot of small security lapses added up until it became viable to use CPU flaws to actually target ordinary users. To add insult to injury, certain corporations started spreading the myth that well-known insecure practices, such as knowingly running local software from questionable authors, are "safe enough" for the general population. The web page under discussion even talks about running untrusted Android software, as if Android had some kind of impenetrable security boundary around untrusted apps.
> Why is it that in the past 20 years we had close to zero in the news (at least I wasn't aware of any), and ever since Spectre and Meltdown we get something new every few months?
It's a new vulnerability class. Prior to Spectre, nobody thought that code which didn't execute (and couldn't execute) could leave observable traces behind: speculation was supposed to be rolled back cleanly, but it leaves footprints in microarchitectural state such as the caches. It's hard to overstate how bizarre the Spectre family is from a software point of view: it leaks data through code that not only hasn't executed yet, but can never execute, and in some cases doesn't even exist! It's like receiving a packet your future self sent to the past, except that your future self had been dead for two years when he sent the packet, and for some reason he's actually a parrot.
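To make that concrete, here is roughly what the classic Spectre v1 "bounds check bypass" gadget looks like. This is a minimal C sketch, not a working exploit (a real PoC also has to mistrain the branch predictor and recover the byte with a flush+reload timing loop), and the names array1, array2, and victim are invented for illustration:

    /* Spectre v1 sketch: a bounds check the CPU may speculate past. */
    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];           /* attacker-reachable data               */
    size_t  array1_size = 16;
    uint8_t array2[256 * 4096];   /* probe array: one page per byte value  */

    void victim(size_t x) {
        if (x < array1_size) {
            /* For an out-of-bounds x this branch is never taken
             * architecturally. But if the predictor guesses "in bounds",
             * the CPU speculatively reads the secret byte array1[x] and
             * touches a line of array2 indexed by its value. The
             * speculation gets rolled back, yet that line stays cached,
             * so timing loads of array2 (flush+reload) recovers the
             * byte. */
            volatile uint8_t tmp = array2[array1[x] * 4096];
            (void)tmp;
        }
    }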
Once a new vulnerability class is discovered, researchers start looking for new bugs in and around that class. That is why we have lately seen so many issues disclosed around speculative execution and data leaked through shared microarchitectural state.
This is a common pattern for new bug classes. Nobody thought to look at this, and when they did, the rabbit hole went deep. We likely haven’t seen the bottom.
AMD are not better. They’re probably worse. They’ll be looked at when the Intel tree stops bearing fruit. But finding an Intel bug is higher impact, so that’s what researchers want to look at.
Intel and AMD don't have to share the same bugs for AMD to be worse.
Consider you've got two sets of vulnerabilities: [1, 2, 3] and [2, 4, 5, 6, 7, 8].
If I label set 1 Intel and set 2 AMD, then doing your research on Intel first turns up Intel's three bugs, while trying them on AMD only finds the one shared bug (2). That makes it seem like Intel has 3x the vulnerabilities of AMD, even though it actually has half as many.
Trying Intel attacks on AMD just in case is cheap and easy, and in this case fruitless. It doesn’t shed any light on how much effort is being put into finding AMD’s own specific screw ups.