Not shits and giggles, it's access to the array: subarray and word line decoding plus sense amp time. Find me a technology that does not need all these and I'll bow to you (and invest in your startup :D).
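To put rough numbers on where that time goes (all order-of-magnitude guesses on my part, not values from any datasheet), here's the sketch I have in my head:

    # Back-of-envelope decomposition of one array access. Every number is
    # an illustrative guess, not a measurement from a real part.
    row_decode_ns = 2.0   # subarray select + word line decoding
    wordline_ns   = 2.0   # word line rise, cell charge sharing
    sense_amp_ns  = 3.0   # sense amplifiers resolving the bit lines
    column_mux_ns = 2.0   # column select and drive out to the I/O

    total_ns = row_decode_ns + wordline_ns + sense_amp_ns + column_mux_ns
    print(f"array access ~= {total_ns:.1f} ns")
    # Any array-organized memory (SRAM included) pays some version of the
    # decode and sense steps, which is the point.

Swap in whatever numbers you like; the structure of the sum is what matters.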
I'm only half kidding. As a complete industry outsider, I don't find it ridiculous to think that SRAM could do to DRAM what SSDs did to HDDs. Alas, I have no idea what the economics and trends are, and I never did more than dabble in semiconductor engineering/physics, so I will only ever find out after the fact.
Thanks for trying to be helpful, but this is both the most common and the least believable explanation of the ones I keep hearing. A constant factor between 3 and 6 kills SRAM's viability? DRAM is already over-provisioned by a factor of 2-4 even in the middle of the consumer spectrum, just to support the "use case" of someone who can't be bothered to close their tabs. Going back to the hand-wavy analogy, SSDs stormed the scene with a ~50x constant-factor disadvantage:
If the only thing standing between SRAM and DRAM were a constant factor of <6, DRAM would already be history.
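If you want the arithmetic behind that spelled out (the $/GB figures below are placeholders I made up for illustration, not market data):

    # Hypothetical $/GB figures, purely to illustrate the constant-factor
    # argument; none of these are real market prices.
    dram_per_gb = 3.00
    sram_per_gb = dram_per_gb * 6       # pessimistic end of the 3-6x range
    hdd_per_gb  = 0.02
    ssd_early_per_gb = hdd_per_gb * 50  # the ~50x gap SSDs launched into

    gb = 32  # a mid-range consumer machine
    print(f"{gb} GB DRAM: ${dram_per_gb * gb:.0f} vs {gb} GB SRAM: ${sram_per_gb * gb:.0f}")
    print(f"{gb} GB HDD: ${hdd_per_gb * gb:.2f} vs {gb} GB early SSD: ${ssd_early_per_gb * gb:.2f}")
    # A <6x factor moves you up a price tier; a ~50x factor didn't stop SSDs.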
The most convincing explanation I've heard is that caches are so damn good at hiding latency that getting rid of row open/close just doesn't matter. A few minutes of googling suggests they often run at a 95% hit rate on typical workloads and a 99% hit rate on compute workloads. You would still need a cache even with SRAM main memory to hide transit time, permission checking, and address translation latency, so SRAM main memory wouldn't actually free up much die space; it would just make your handful of misses a bit faster (well, it would free up the scheduler/aggregator, but not the cache itself). The reason I called this one "most convincing" rather than "convincing" is that even at a 99% hit rate a single miss has such atrocious latency that it would seem to matter. See the arithmetic below.
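Here's the average-memory-access-time arithmetic behind that last sentence, with latencies that are ballpark assumptions rather than measurements:

    # AMAT = hit_time + miss_rate * miss_penalty. All latencies below are
    # ballpark assumptions for the sake of the argument.
    hit_time_ns  = 4.0    # typical cache service time
    dram_miss_ns = 80.0   # row open + CAS + transit + translation
    sram_miss_ns = 30.0   # same transit/translation, no row open/close

    for hit_rate in (0.95, 0.99):
        miss_rate = 1.0 - hit_rate
        amat_dram = hit_time_ns + miss_rate * dram_miss_ns
        amat_sram = hit_time_ns + miss_rate * sram_miss_ns
        print(f"hit rate {hit_rate:.0%}: AMAT {amat_dram:.1f} ns (DRAM) "
              f"vs {amat_sram:.1f} ns (SRAM main memory)")
    # The averaged numbers look tame at 99%, but each individual miss still
    # pays the full ~80 ns, and dependent misses stack -- that's exactly why
    # the averaging argument doesn't fully convince me.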