The biggest thing that comes to mind is durable storage. I think I could solve a problem at work radically differently, and much more simply, if I had this available in tens of GB.
How? Because it's bit-addressable and persistent. Together those properties make it much simpler to implement durable storage. We don't need the log structure that works around NAND's block-erase issue. We don't need to worry about the flush cost the way we do with HDDs (and this is even faster than NAND). It would be simple to batch-write the data out to slower storage if the XPoint memory fills up.
You can design databases that keep the hot data in memory and merge results with older disk storage, which allows a lot of batching for efficient processing and storage. But since it's a database with transactions, you need durability, and that makes things much more complicated. There are still lots of problems to solve once you cross the limits of a single machine, but the single-machine limit can get a lot larger for a lot of problems.
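To make that concrete, here's a minimal sketch of the kind of durable store this enables: a length-prefixed append log written directly into a byte-addressable region. Everything here is an assumption for illustration: the file name, the region size, and the use of an mmap'd file as a stand-in for XPoint. On real persistent memory the durability point would be a cache-line flush plus fence rather than msync.

```python
import mmap
import os
import struct

PATH = "store.bin"   # hypothetical backing file standing in for persistent memory
SIZE = 1 << 20       # 1 MiB region, sized arbitrarily for the example

fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SIZE)
region = mmap.mmap(fd, SIZE)

def append(offset: int, payload: bytes) -> int:
    """Write one length-prefixed record at `offset`, make it durable,
    and return the offset for the next record."""
    record = struct.pack("<I", len(payload)) + payload
    region[offset:offset + len(record)] = record
    region.flush()   # msync here; on XPoint this would be a clflush/sfence
    return offset + len(record)

off = 0
off = append(off, b"first record")
off = append(off, b"second record")
```

No erase blocks and no FTL in the way: the write itself is the update, and the flush is the commit point.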
They're claiming three orders of magnitude faster than NAND and three orders of magnitude more durable than NAND, which means you'll still need wear leveling to get it to last for several years, but you apparently won't have the complexity of erase blocks being much larger than writable page size.
I do wonder how much it would be slowed down by the kinds of sophisticated error correction SSDs are now relying on.
"Bit-addressable". I don't think this suffers from the same issues that NAND does with successive writes. The other articles I'm seeing after a quick search also suggest a three order of magnitude increase in write endurance.
Yes, if you read what I wrote I mentioned the three orders of magnitude increase in write endurance as compared with NAND. But when paired with the three orders of magnitude increase in performance, that means it takes the same number of hours to burn it out. And a NAND device without wear leveling can be burned out in less than a day of heavy use.
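The arithmetic behind that, with made-up illustrative numbers rather than vendor specs:

```python
# Time to exhaust a cell at full write speed scales as endurance / write rate.
nand_endurance   = 3_000                    # P/E cycles per cell (assumed)
nand_write_rate  = 1.0                      # normalized
xpoint_endurance  = 1000 * nand_endurance   # "three orders of magnitude"
xpoint_write_rate = 1000 * nand_write_rate  # likewise

# The factors cancel, so burn-out takes the same number of hours:
assert xpoint_endurance / xpoint_write_rate == nand_endurance / nand_write_rate
```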
Bit addressability has absolutely nothing to do with endurance. NOR flash is bit addressable but suffers from the same endurance limitations as NAND, because they're fundamentally the same kind of memory cell, just connected differently.
>Yes, if you read what I wrote I mentioned the three orders of magnitude increase in write endurance as compared with NAND. But when paired with the three orders of magnitude increase in performance, that means it takes the same number of hours to burn it out.
Only in some bizarro world where "three orders of magnitude increase in performance" also means "we'll write three orders of magnitude more data into it".
Loads are about use cases, not about how fast you can fill a disk. If my company produces 1 TB of analytics data per day, it won't suddenly produce 1000 TB just because we can write to the disks we buy faster.
Of course, being able to fill it faster also opens up some new, heavier use cases. But for any existing use case, we'd be writing the SAME data volumes we do now, just 1000 times as fast and with 1000 times the endurance.
And even if we wrote 100× the data we do now, we'd still get 10× the lifetime.
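Same ratio, written out (the multipliers are illustrative):

```python
endurance_gain = 1000   # claimed endurance vs NAND
data_gain      = 100    # suppose the workload really does write 100x more
print(endurance_gain / data_gain)   # 10.0 -> ten times the device lifetime
```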
The claim about bit addressability is not completely true. If you can address memory only in pages, you can get quite large write amplification, depending on the access pattern: a single byte written may count as $PAGESIZE of "written data".
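A quick illustration of that effect, assuming a 4 KiB page (the page size and workload are made up):

```python
PAGE = 4096   # assumed page size

def physical_bytes(writes):
    """Bytes of cell wear incurred when each (offset, length) logical
    write must be performed at page granularity."""
    total = 0
    for off, length in writes:
        first_page = off // PAGE
        last_page = (off + length - 1) // PAGE
        total += (last_page - first_page + 1) * PAGE
    return total

# Three 1-byte writes to different pages burn three whole pages of endurance:
writes = [(10, 1), (5000, 1), (8192, 1)]
print(physical_bytes(writes) / sum(n for _, n in writes))   # 4096.0x amplification
```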
A lack of bit addressability means that hammering one bit would burn out a whole word/page, but it doesn't affect how many cycles it takes to reach that burn-out point, unless you've got wear leveling.
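For reference, the simplest wear-leveling schemes are just a rotating logical-to-physical remap, something like this toy version (slot count and rotation policy invented for illustration; real schemes such as start-gap are more careful):

```python
SLOTS = 8    # tiny physical array for the example
shift = 0    # current rotation, advanced periodically by the controller

def remap(logical: int) -> int:
    """Map a logical slot to a physical one under the current rotation."""
    return (logical + shift) % SLOTS

# Hammering logical slot 0 lands on a different physical slot each epoch:
for _ in range(4):
    print(remap(0))   # prints 0, 1, 2, 3
    shift += 1
```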
In practice, if you burn out any one bit, you need to retire a chunk of the array at least as large as a cache line. And it's not likely that you'll actually be able to directly hammer a single bit, because the endurance is still low enough to require ECC.