
I don't think this holds up. Historically, memory sizes have increased exponentially, but access times have gotten faster, not slower. And since the access time comes from the memory architecture, you can get 8 GB of RAM or 64 GB of RAM with the same access times. The estimated values in the table are not an especially good fit (30-50% off) and get worse if you adjust the memory sizes.

Theoretically, it still doesn't hold up, at least not for the foreseeable future. PCBs and integrated circuits are basically two-dimensional. Access times are limited by things like trace lengths (at the board level) and parasitics (at the IC level), none of which are defined by volume.





Not true, because then in theory you could build just L1/L2/L3 cache at 64 GB instead of 1-2 MB. SRAM, as used for L1/L2/L3, needs 6 transistors per bit, while DRAM needs 1 transistor and 1 capacitor. That means a chip running at those speeds would become very big, and the travel time of signals through the wires would start to matter: at the semiconductor level it makes a difference whether a signal has to cover 1 inch or 10 inches billions of times per second. This creates an effective upper bound on how big your SRAM can be, depending on chip size (and other factors such as thermal effects).

Source: "What every Programmer should know about memory" https://people.freebsd.org/~lstewart/articles/cpumemory.pdf
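To put rough numbers on the wire-delay point, here is a back-of-envelope sketch. It assumes signals propagate at about half the speed of light; real on-chip wires are RC-dominated and considerably slower, so this understates the effect:

    #include <stdio.h>

    int main(void) {
        const double c_mps   = 3.0e8;        /* speed of light, m/s                   */
        const double v_mps   = 0.5 * c_mps;  /* assumed signal velocity (optimistic)  */
        const double clk_ghz = 3.0;          /* assumed core clock, GHz               */
        const double inches[] = { 1.0, 10.0 };

        for (int i = 0; i < 2; i++) {
            double meters   = inches[i] * 0.0254;    /* inches -> meters              */
            double delay_ns = meters / v_mps * 1e9;  /* one-way propagation time, ns  */
            printf("%5.1f in: %5.2f ns  (~%.1f cycles at %.0f GHz)\n",
                   inches[i], delay_ns, delay_ns * clk_ghz, clk_ghz);
        }
        return 0;
    }

At 1 inch the one-way delay is a fraction of a cycle; at 10 inches it is already several cycles per traversal, before accounting for real RC delay, repeaters, or fan-out.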


You’re cheating and I don’t think you realize it.

Why didn’t computers have 128 terabytes of memory ten years ago? Because the access time would have been shit. You’re watching generation after generation of memory architectures compromise between access time and max capacity and drawing the wrong conclusions. If memory size were free we wouldn’t have to wait five years to get twice as much of it.


There are also economic considerations, power use, etc.

On the whole I agree, but the details keep circling back to my point. Power use was kept in check by Dennard scaling until fairly recently. So again you just wait for the next hardware generation and then trade a little time for more space.

Memory access times have not significantly improved in many years.

Memory bandwidth has improved, but it hasn't kept up with memory size or with CPU speeds. When I was a kid you could get a speedup by using lookup tables for trig functions - you'd never do that today, it's faster to recalculate.
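A minimal sketch of that lookup-table-versus-recompute comparison (a hypothetical micro-benchmark, not from the thread; the table size, iteration count, and timing method are assumptions, and results will vary with cache sizes and compiler flags; build with something like gcc -O2 -lm):

    #include <math.h>
    #include <stdio.h>
    #include <time.h>

    #define TWO_PI     6.28318530717958647692f
    #define TABLE_BITS 20
    #define TABLE_SIZE (1u << TABLE_BITS)   /* 1M floats, 4 MB: larger than typical L1/L2 */
    #define ITERS      20000000u

    static float table[TABLE_SIZE];

    static double now_sec(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    int main(void) {
        for (unsigned i = 0; i < TABLE_SIZE; i++)
            table[i] = sinf(TWO_PI * i / TABLE_SIZE);

        /* Same pseudo-random angle sequence for both variants. */
        unsigned x = 12345u;
        float sum_lut = 0.0f, sum_calc = 0.0f;

        double t0 = now_sec();
        for (unsigned i = 0; i < ITERS; i++) {
            x = x * 1664525u + 1013904223u;            /* 32-bit LCG                 */
            sum_lut += table[x >> (32 - TABLE_BITS)];  /* top bits as table index    */
        }
        double t1 = now_sec();

        x = 12345u;
        for (unsigned i = 0; i < ITERS; i++) {
            x = x * 1664525u + 1013904223u;
            float angle = TWO_PI * (x >> (32 - TABLE_BITS)) / TABLE_SIZE;
            sum_calc += sinf(angle);                   /* recompute instead          */
        }
        double t2 = now_sec();

        printf("table lookup: %.3f s   recompute: %.3f s   (checksums %g %g)\n",
               t1 - t0, t2 - t1, sum_lut, sum_calc);
        return 0;
    }

On a modern machine the recompute loop is usually competitive with or faster than the 4 MB table once the accesses stop being cache-resident, which is the commenter's point; shrink the table to a few KB and the result typically flips.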

2D vs 3D is legit, I have seen this law written down as O(sqrt N) for that reason. However, there's a lot of layer stacking on memory chips these days (especially flash memory, or HBM for GPUs), so it's partially 3D.


> Memory access times have not significantly improved in many years.

We could say that it has actually gotten worse, not better, if we put it into context. For example, 90 ns latency coupled with a 3 GHz core is "better" than 90 ns latency coupled with a 5 GHz core: in the latter case the CPU core ends up stalled for 450 cycles, while in the former it is only 270 cycles.
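The arithmetic is just latency times clock rate; a trivial sketch (the 90 ns figure is the example from the comment above, not a measured value):

    #include <stdio.h>

    int main(void) {
        const double latency_ns = 90.0;           /* example DRAM access latency */
        const double clocks_ghz[] = { 3.0, 5.0 };

        for (int i = 0; i < 2; i++)
            /* cycles stalled = latency in ns * cycles per ns */
            printf("%.0f GHz core: %.0f cycles stalled per miss\n",
                   clocks_ghz[i], latency_ns * clocks_ghz[i]);
        return 0;
    }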


Yeah, some years back I took a look at the Sieve of Eratosthenes--much, much faster to simply calculate.

While in absolute terms memory access has gotten faster, in relative terms it is MUCH slower today, compared to CPU speeds.

A modern CPU can perform hundreds or even thousands of computations in the time it takes to read a single word from main memory, and it gets another order of magnitude (or more) worse if the data has to come from an SSD. This used to be much closer to 1:1 on old machines, say in the Pentium 1-3 era or so.

And regardless of any speedup, the point remains as true today as it has always been: the more memory you want to be able to access, the slower accessing it will be. Retrieving a word from a pool of 50 PB will be much slower than retrieving a word from a pool of 1 MB, for various fundamental reasons (address resolution alone has an impact, even if we ignore the physics).
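One way to observe this directly is a pointer-chasing micro-benchmark over growing working sets (a sketch with assumed sizes and iteration counts; each load depends on the previous one, so the measured time per hop approximates the access latency at that pool size):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static unsigned long long rng_state = 88172645463325252ull;

    static size_t xrand(void) {                      /* xorshift64 PRNG */
        rng_state ^= rng_state << 13;
        rng_state ^= rng_state >> 7;
        rng_state ^= rng_state << 17;
        return (size_t)rng_state;
    }

    /* Chase a single random cycle through n_elems slots; returns ns per hop. */
    static double ns_per_hop(size_t n_elems, size_t hops) {
        size_t *next = malloc(n_elems * sizeof *next);
        if (!next) return -1.0;

        for (size_t i = 0; i < n_elems; i++) next[i] = i;
        for (size_t i = n_elems - 1; i > 0; i--) {   /* Sattolo's shuffle: one big cycle */
            size_t j = xrand() % i;
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        struct timespec t0, t1;
        volatile size_t idx = 0;                     /* volatile keeps the loop alive */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t h = 0; h < hops; h++) idx = next[idx];
        clock_gettime(CLOCK_MONOTONIC, &t1);

        free(next);
        return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / (double)hops;
    }

    int main(void) {
        /* Working sets from ~32 KB (fits in L1) up to ~128 MB (well past the LLC). */
        for (size_t kb = 32; kb <= 128 * 1024; kb *= 4)
            printf("%8zu KB: %6.1f ns/hop\n",
                   kb, ns_per_hop(kb * 1024 / sizeof(size_t), 20u * 1000 * 1000));
        return 0;
    }

The ns/hop figure typically climbs by well over an order of magnitude between the smallest and largest working sets, which is the "bigger pool, slower access" effect in miniature.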


You are correct in that the OP used the wrong metric. He should have written a big omega, not a big O.

> PCBs and integrated circuits are basically two-dimensional.

Yes, which pushes the complexity to Ω(n^1/2), and that fits the original claim.

> Access times are limited by things like trace lengths

Again, Ω(n^1/2)

> and parasitics

And those are Ω(n)

So, as you found, in practice it's much worse, but the lower bound from the article is still there.
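For completeness, the geometric argument behind the Ω(√n) bound, under the idealised assumptions that bits are laid out in a plane with some constant area c per bit and that signals travel at a bounded speed v:

    \[
      \pi r^2 \;\ge\; c\,n
      \;\Longrightarrow\;
      r \;\ge\; \sqrt{\tfrac{c\,n}{\pi}}
      \;\Longrightarrow\;
      t_{\text{access}} \;\ge\; \frac{2r}{v} \;=\; \Omega\!\left(\sqrt{n}\right)
    \]

That is, n bits at constant area per bit must spill out to radius at least r, so some bit sits that far from the reader and a round trip to it takes time proportional to r. Stacking k layers, as mentioned upthread for flash and HBM, only replaces n with n/k under the same argument: a constant-factor improvement, not a change in the exponent, unless the number of layers itself grows with n.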



