
Not shits and giggles, it's access to the array: subarray and word line decoding plus sense amp time. Find me a technology that does not need all these and I'll bow to you (and invest in your startup :D).
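
To make "access to the array" concrete, here is a minimal sketch of where a DRAM access spends its time. The step names and the ~14 ns figures are illustrative assumptions in the ballpark of DDR4-class parts, not datasheet numbers for any specific device:

    # Illustrative sketch of DRAM access latency components, in nanoseconds.
    # All values are assumptions, not measurements of a particular part.
    tRCD = 14.0  # row activate: word line decode/drive + cell sensing
    tCL  = 14.0  # column access: column decode + sense amp read-out
    tRP  = 14.0  # precharge: close the current row before opening another

    print(f"row hit      (right row already open):  {tCL:5.1f} ns")
    print(f"row miss     (bank idle, must open row): {tRCD + tCL:5.1f} ns")
    print(f"row conflict (wrong row open):           {tRP + tRCD + tCL:5.1f} ns")

Under these assumed numbers the decode/sense/precharge steps alone dominate the access time, which is the point above.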



SRAM?

I'm only half kidding. As a complete industry outsider it doesn't seem ridiculous to think that SRAM could do to DRAM what SSDs did to HDDs. Alas, I have no idea what the economics + trends are and I never did more than dabble in semiconductor engineering/physics so I will only ever find out after the fact.


>>As a complete industry outsider it doesn't seem ridiculous to think that SRAM could do to DRAM what SSDs did to HDD

SRAM needs 6 transistors per bit, DRAM needs 1 transistor + 1 capacitor. SRAM just doesn't scale, and it's very expensive.
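
The per-bit gap is also bigger than the raw device count suggests once you look at cell area. A back-of-envelope sketch, using commonly quoted ballpark cell sizes (treat both numbers as assumptions, not foundry data):

    # Area per bit in units of F^2 (F = minimum feature size).
    # Both cell sizes are rough, commonly quoted ballpark figures.
    DRAM_CELL_F2 = 6.0    # 1T1C commodity DRAM cell, ~6F^2
    SRAM_CELL_F2 = 140.0  # 6T SRAM cell, often quoted around 120-150F^2

    print(f"SRAM bit cell ~ {SRAM_CELL_F2 / DRAM_CELL_F2:.0f}x the area of a DRAM bit cell")

Before peripherals, that is already well past the naive 6-vs-2 device count.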


Thanks for trying to be helpful, but this is both the most common and least believable explanation among those that I keep hearing. A constant factor between 3 and 6 kills SRAM's viability? DRAM is already over-provisioned by a factor of 2-4 even at the middle of the consumer spectrum just to support the "use case" of someone who can't be bothered to close their tabs. Going back to the hand-wavey analogy, SSDs stormed the scene with a ~50x constant factor disadvantage:

http://www.kitguru.net/components/ssd-drives/anton-shilov/sa...

If the only thing standing between SRAM and DRAM were a constant factor of <6, DRAM would already be history.

The most convincing explanation I've heard is that caches are so damn good at hiding latency that getting rid of row open/close just doesn't matter. A few minutes of googling suggests that they often run at a 95% hit rate on typical workloads and a 99% hit rate on compute workloads. You would still need a cache even with main memory as SRAM to hide transit-time, permission checking, and address translation latency, so SRAM main memory wouldn't actually free up much die space, it would just make your handful of misses a bit faster (well, it would free up the scheduler / aggregator, but not the cache itself). The reason why I called this one "most convincing" rather than "convincing" is that even with a 99% hit rate a single miss has such atrocious latency that it would seem to matter.
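
One way to put numbers on the hit-rate argument is an average memory access time (AMAT) sketch. All latencies below are assumed, illustrative values, and "SRAM main memory" is the hypothetical being debated here, not an existing product:

    # Average memory access time under assumed, illustrative latencies (ns).
    def amat(hit_rate, hit_ns, miss_ns):
        # weighted average of cache-hit and cache-miss latency
        return hit_rate * hit_ns + (1.0 - hit_rate) * miss_ns

    CACHE_HIT_NS = 4.0    # assumed on-chip cache hit
    DRAM_MISS_NS = 80.0   # assumed full DRAM access incl. row open/close
    SRAM_MAIN_NS = 30.0   # assumed hypothetical SRAM main memory access

    for hr in (0.95, 0.99):
        dram = amat(hr, CACHE_HIT_NS, DRAM_MISS_NS)
        sram = amat(hr, CACHE_HIT_NS, SRAM_MAIN_NS)
        print(f"hit rate {hr:.0%}: AMAT with DRAM {dram:.2f} ns, with SRAM {sram:.2f} ns")

Under these made-up numbers the average gap at a 99% hit rate is around half a nanosecond (the "caches hide it" case), while at 95% it is a couple of nanoseconds per access (the "misses still matter" case).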


That's a constant factor of what, 6? Less once you consider the capacitor and row refresh circuitry?

And yet I cannot purchase even a 1GB SRAM stick.

I don't see why that equates to "just doesn't scale". Can you elaborate?



