
Part 1: 50% applies — RAM controllers are now part of the CPU, which deprecates half of what's written, and some CPUs also have eDRAM. Part 2: almost completely applies. Part 3: 70% applies — hardware SLAT (Intel EPT / AMD RVI) deprecated what's written about virtualization. Part 5: 90% applies. The rest of them — don't know.

P.S. What I dislike most about the article is that it fails to explain why L3 cache is 10-20 times slower than L1 cache, while both are made from SRAM.




> it fails to explain why L3 cache is 10-20 times slower than L1 cache, while they both made from SRAM.

Why is it? Is it because L3 is usually shared?


Even when the cache line is unshared, L3 is still 10 times slower than L1.

The best explanation I saw is this: https://fgiesen.wordpress.com/2016/08/07/why-do-cpus-have-mu...


I think generally, the larger the cache, the higher the latency. Contributing factors include the available power budget, increased signal-propagation delay across a larger array, and the more complicated lookup logic needed for a larger search space.


Without having read the article, I assume it has something to do with its associativity and size?



