3D Xpoint memory: Faster-than-flash storage unveiled (bbc.com)
179 points by Sami_Lehtinen on July 28, 2015 | hide | past | favorite | 96 comments



I did research in the field from 2008 to 2013. The claims are always the same: faster than flash, denser than dram, lower power than both.

Here's how to spot BS: first, the density claims are immaterial until they prove yield at a technology node matching today's dram. There are multiple billions of dollars of investment between 180nm and high-yield (and low mask count) 2xnm, no matter how cool the new memory technology is.

Second, speed claims must be explicitly about latency: flash bandwidth is as large as you want it, but read latency is ~100us. Even so, the moment anybody claims that latency is faster than dram, you know they're feeding the hype and lying to you: dram latency does not depend on the memory technology; rather, it depends on the array size. So an 8Gb chip of any memory technology that is fast enough is likely to be just as fast as dram.

Third: power consumption. Dram's active power is as low as it gets; the memory cell in particular stores information with very little energy. The array interconnect and circuitry consume most of the energy, and a different memory technology won't change that.


Actually you should be taking the claims very seriously. This has been an area Micron has been researching for decades. They've had a patent on the memory cell for about 11 years now (US6777705). Phase change memory has for a very long time been one of those just over the horizon technologies, but Micron has perfected it well enough to compete with NAND and partnered with Intel to bring it to market.

This isn't something that appeared out of nowhere. Take a look at this presentation:

https://www.micron.com/~/media/documents/products/presentati...

Page 20 starts the section on "3-D Cross-Point Memory". They even have a 64Mb demonstrator on page 23, fabbed sometime before the 2011 Flash Memory Summit.

Additional info on 64Mb demonstrator: http://investors.micron.com/releasedetail.cfm?releaseid=4672...

Additionally, in early 2014, they stopped selling PCM modules, stating that "Micron's previous two generations of PCM process technologies are not available for new designs or technology evaluation, as the company is focused on developing a follow-on process to achieve lower cost per bit, lower power and higher performance."

Job description details mentioning PCM, chalcogenides, and "cross point technology":

https://www.linkedin.com/jobs2/view/12292797?trk=job_view_si...

And as far as process, as early as 2013 (2013 fall analyst conference handouts), Micron had PCM on a 45nm process and was listing 2xnm as next node. If they've gotten together with Intel and announced this, they have already reached 2xnm and/or beyond or are certain in their capability to do so.

Because of the way flash memory is organized into pages and blocks, latency is very workload-specific. Can you point out where anyone claimed it was better than DRAM latency? The BBC article says "DRAM chips are still faster than 3D Xpoint, but the difference is much smaller than when compared with flash".

And power consumption. This is PCM. Power consumption by the array when not being read/written is zero.


Forward looking article from June with tons of analysis http://seekingalpha.com/article/3253655-intel-and-micron-the...


No, apparently it's not PCM.

>While they did not specifically state it, it looks to be phase change memory (edit at the Q&A Intel stated this is not Phase Change).

Source: http://www.pcper.com/news/Storage/Breaking-Intel-and-Micron-...


The response in question has a lot of ahs, uhs, and stutters:

"Relative to phase change, which has been in the market place before and which micron itself has some experience with in the past again this is a very different architecture in terms of the place it fills in the memory hierarchy because it has these dramatic improvements in speed and volatility and performance"

They didn't say that it wasn't PCM, and they didn't say that the technology was different. They said that the architecture was different than the PCM that Micron produced before. Which would be the cross-point organization and the all-important selector element. They mention that the cell works by a "property change of the material", or a "bulk material property change".

Go look up Micron's recent patents and applications. This is PCM.

I do find it very interesting that they are bending over backwards not to call out the memory technology, even when specifically asked whether it is actually PCM.


The video presentation mentioned "resistive". I'm not a memory nerd. Does that fact from their disclosure help narrow down what this type of memory is best categorized as?

https://intel-micron-webcast.intel.com/webcast


There is an entire family of memory elements whose resistance is modified in order to store data: RRAM/ReRAM, CBRAM, PCM, MRAM, as well as many others.


I suppose "resistive" in this context relates to the method used to read the memory, not so much how this change of resistance is implemented.


There are many ways of suggesting that a technology is ready when it's not. I have not seen anything published with 2x nm and a cell size that is anywhere near 4F2 to 8F2 (on a decently large array)...

Re the latency caveat: I was suggesting BS-spotting rules. You're right that this article did not claim the memory was faster than dram.

RE power consumption, it's true that if you don't use it an NVM will not consume power, yet that is not what people refer to when talking about power consumption. Can you quote a technology paper that shows lower-than-dram, or lower-than-flash for that matter, active power consumption? (The proper unit here is W/GBps.)


If this is PCM have they "perfected" it to the point that it won't wipe itself if it goes over 100 degrees C?


I think IBM found that the last-level cache in their Power7 CPUs was faster with eDRAM than with SRAM, because the reduction in transmission latency enabled by the smaller size made up for the higher cell latency. That's probably the crossover point where cell density wins over reasonable cell latencies. Of course, memories with very-large-latency operations like a flash erase can still dominate at much larger sizes. And for off-die memory, the transport is going to mask most differences anyway.

The various sorts of stacked memory coming into commercial use have the potential to reduce interconnect power and potentially make active RAM power at least slightly relevant. But I think the interesting thing about reducing RAM power consumption is decreasing passive power. Not that I expect anything to come of it for at least another half decade.


Power consumption of DRAM is high; what are you talking about? DRAM requires constant refresh. The whole reason DRAM can be high-density is that it is a 1-T cell with a trench capacitor, but the capacitor needs to be refreshed on a synchronized basis (hence SDRAM); that was the big innovation.

And yes, array size affects latency but so does the underlying process technology (much more so across technologies).


As I did not have the numbers fresh in memory (apparently I needed a refresh ;)) I googled for a dram data sheet and got the first DDR3 PDF (Hynix). Quick and very dirty math suggests that:

-refresh power is on the order of milliwatts per gigabyte

-memory access is on the order of 0.1 W per GB/s of bandwidth

For refresh power, let's face it: it's puny. Regarding the active power, I have yet to see anything (off chip and this size) that is better than that.

P.S. I did say active in my original post :)
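The quick-and-dirty math above can be reproduced in a few lines. All the datasheet-style numbers below (self-refresh current, DIMM power, bus bandwidth) are assumed round figures for illustration, not values from the actual Hynix PDF:

```python
# Back-of-envelope DRAM power math, mirroring the comment above.
# All datasheet-style numbers are illustrative assumptions.

V_DD = 1.5                 # DDR3 supply voltage (V)
I_SELF_REFRESH = 0.004     # self-refresh current per 4Gb die (A), assumed
DIE_CAPACITY_GB = 0.5      # a 4Gb die holds 0.5 GB

# Refresh power, normalised per gigabyte:
refresh_w_per_gb = V_DD * I_SELF_REFRESH / DIE_CAPACITY_GB
print(f"refresh: ~{refresh_w_per_gb * 1000:.0f} mW/GB")   # order of milliwatts

# Active power: assume a DIMM streams 12.8 GB/s (DDR3-1600, 64-bit bus)
# and burns ~1.5 W doing it.
active_power_w = 1.5
bandwidth_gbps = 12.8
active_w_per_gbps = active_power_w / bandwidth_gbps
print(f"active: ~{active_w_per_gbps:.2f} W per GB/s")     # order of 0.1 W/GBps
```

With these assumptions the refresh figure lands in the tens-of-milliwatts-per-GB range and the active figure near 0.1 W/GBps, matching the orders of magnitude quoted above.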


You overlooked refresh time, which is around 7-8 ms. So, in order to keep the DRAM data alive, DRAM should consume a few tenths of a watt per gigabyte per second; that is a considerable amount of power.


A few corrections:

-refresh period is more like 32-64ms

-refreshing is way (way) cheaper than streaming data out, since you refresh an entire word line at once and do not pay the IO cost, which is a significant fraction of the cost of accessing dram.
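A toy per-bit energy comparison makes the point concrete. The picojoule figures below are assumed round numbers, not measured values; the structure of the comparison (row activation amortised over the whole word line, IO paid only when shipping bits off-chip) is what matters:

```python
# Toy comparison: energy per bit for an internal row refresh versus
# streaming the same bits off-chip. Figures are assumed for illustration.

ROW_BITS = 8192            # bits activated per word line, assumed
E_ACTIVATE_PJ = 2000.0     # energy to open + restore one row (pJ), assumed
E_IO_PER_BIT_PJ = 5.0      # off-chip I/O energy per bit (pJ), assumed

# Refresh: one activate/restore covers the whole row, no I/O cost.
refresh_pj_per_bit = E_ACTIVATE_PJ / ROW_BITS

# Streaming: same activate cost, plus I/O for every bit shipped out.
stream_pj_per_bit = E_ACTIVATE_PJ / ROW_BITS + E_IO_PER_BIT_PJ

print(f"refresh: {refresh_pj_per_bit:.2f} pJ/bit")
print(f"stream:  {stream_pj_per_bit:.2f} pJ/bit")
```

Under these assumptions the IO term dominates, so refreshing a bit costs an order of magnitude less than reading it out.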


watt per second makes little sense in this context

watt != energy


It's W/GBps, or if you prefer simplifying J/GB. I like the former better, power and bandwidth are handier in real life computer science.


Ah, it makes sense now.


But Watt * second IS energy.


> dram latency does not depend on the memory technology, rather it depends on the array size

Wait, what? You mean they are making us wait 10ns for row open and 10ns for row close just for shits and giggles? Or did you mean to say "bandwidth"? I'm confused (genuinely, not sarcastically).


Not shits and giggles, it's access to the array: subarray and word line decoding plus sense amp time. Find me a technology that does not need all these and I'll bow to you (and invest in your startup :D).


SRAM?

I'm only half kidding. As a complete industry outsider it doesn't seem ridiculous to think that SRAM could do to DRAM what SSDs did to HDDs. Alas, I have no idea what the economics + trends are and I never did more than dabble in semiconductor engineering/physics so I will only ever find out after the fact.


>>As a complete industry outsider it doesn't seem ridiculous to think that SRAM could do to DRAM what SSDs did to HDD

SRAM needs 6 transistors per bit; DRAM needs 1 transistor + 1 capacitor. SRAM just doesn't scale, and it's very expensive.


Thanks for trying to be helpful, but this is both the most common and least believable explanation among those that I keep hearing. A constant factor between 3 and 6 kills SRAM's viability? DRAM is already over-provisioned by a factor of 2-4 even at the middle of the consumer spectrum just to support the "use case" of someone who can't be bothered to close their tabs. Going back to the hand-wavey analogy, SSDs stormed the scene with a ~50x constant factor disadvantage:

http://www.kitguru.net/components/ssd-drives/anton-shilov/sa...

If the only thing standing between SRAM and DRAM were a constant factor of <6, DRAM would already be history.

The most convincing explanation I've heard is that caches are so damn good at hiding latency that getting rid of row open/close just doesn't matter. A few minutes of googling suggests that they often run at a 95% hit rate on typical workloads and a 99% hit rate on compute workloads. You would still need a cache even with main memory as SRAM to hide transit-time, permission checking, and address translation latency, so SRAM main memory wouldn't actually free up much die space, it would just make your handful of misses a bit faster (well, it would free up the scheduler / aggregator, but not the cache itself). The reason why I called this one "most convincing" rather than "convincing" is that even with a 99% hit rate a single miss has such atrocious latency that it would seem to matter.
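The cache argument above is just the standard average-memory-access-time formula. A quick sketch with assumed round-number latencies (they are illustrative, not measured; the hypothetical "SRAM main memory" case simply drops the row open/close cost) shows how little headroom a high hit rate leaves:

```python
# AMAT sketch: even with hypothetical SRAM main memory, only the misses
# get faster, and high hit rates shrink their contribution.
# All latencies are assumed round numbers in nanoseconds.

def amat(hit_time_ns, hit_rate, miss_penalty_ns):
    """AMAT = hit time + miss rate * miss penalty."""
    return hit_time_ns + (1.0 - hit_rate) * miss_penalty_ns

CACHE_HIT_NS = 4.0      # last-level cache hit, assumed
DRAM_MISS_NS = 80.0     # row open/close + transit + translation, assumed
SRAM_MISS_NS = 40.0     # same overheads minus row open/close, assumed

for hit_rate in (0.95, 0.99):
    dram = amat(CACHE_HIT_NS, hit_rate, DRAM_MISS_NS)
    sram = amat(CACHE_HIT_NS, hit_rate, SRAM_MISS_NS)
    print(f"hit rate {hit_rate:.0%}: DRAM {dram:.1f} ns, SRAM-main {sram:.1f} ns")
```

At a 99% hit rate the gap between the two shrinks to well under a nanosecond of average access time, even though each individual miss is twice as fast.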


That's a constant factor of what, 6? Less once you consider the capacitor and row refresh circuitry?

And yet I cannot purchase even a 1GB SRAM stick.

I don't see why that equates to "just doesn't scale". Can you elaborate?


How often does the BS include "going into production" and "sampling later this year"?


Here:

BS: "going into production this year"

Possibly not BS: "going into production this year at 2x nm technology node with a y Gb part"

Many nvm memory technologies, including mram and PCM, have been "in production" for many years now; with <<1Gb parts on old technology nodes, that is.


The biggest thing that comes to mind is durable storage. I think I could solve a problem at work radically differently, and much more simply, if I had this available in tens of GB.

How? Because it's bit-addressable and persistent. Together these make it much simpler to implement durable storage. We don't need the log structure that works around NAND's block-erase issue. We don't need to worry about the flush cost compared to HDD (and this one is even faster than NAND). It would be simple to batch-write the data to slower storage if the Xpoint memory fills up.

You can design databases that keep the hot data in memory and merge the results with older disk storage, which allows a lot of batching for efficient processing and storage. But given it's a database/transaction, you need durability, and that makes things that much more complicated. There are still lots of problems to solve once you cross the limit of a single machine, but the single-machine limit can get a lot larger for a lot of problems.


They're claiming three orders of magnitude faster than NAND and three orders of magnitude more durable than NAND, which means you'll still need wear leveling to get it to last for several years, but you apparently won't have the complexity of erase blocks being much larger than writable page size.

I do wonder how much it would be slowed down by the kinds of sophisticated error correction SSDs are now relying on.


"Bit-addressable". I don't think this suffers from the same issues that NAND does with successive writes. The other articles I'm seeing after a quick search also suggest a three order of magnitude increase in write endurance.


Yes, if you read what I wrote I mentioned the three orders of magnitude increase in write endurance as compared with NAND. But when paired with the three orders of magnitude increase in performance, that means it takes the same number of hours to burn it out. And a NAND device without wear leveling can be burned out in less than a day of heavy use.
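The arithmetic behind the "same number of hours" claim is easy to check. The capacity, endurance, and bandwidth figures below are made-up illustrative values; the point is only that the two 1000x factors cancel:

```python
# Time-to-wear-out sketch: if write speed and endurance both go up 1000x,
# the hours of continuous full-speed writing needed to exhaust the device
# stay the same. All figures are assumed for illustration.

def hours_to_burn_out(capacity_gb, endurance_cycles, write_gbps):
    """Hours of continuous full-speed writing to exhaust every cell,
    assuming perfect wear leveling (writes spread evenly)."""
    total_writes_gb = capacity_gb * endurance_cycles
    return total_writes_gb / write_gbps / 3600.0

nand   = hours_to_burn_out(capacity_gb=256, endurance_cycles=3_000,
                           write_gbps=0.5)
xpoint = hours_to_burn_out(capacity_gb=256, endurance_cycles=3_000_000,
                           write_gbps=500.0)

print(f"NAND:   ~{nand:.0f} h")
print(f"XPoint: ~{xpoint:.0f} h")   # identical: the 1000x factors cancel
```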

Bit addressability has absolutely nothing to do with endurance. NOR flash is bit addressable but suffers from the same endurance limitations as NAND, because they're fundamentally the same kind of memory cell, just connected differently.


>Yes, if you read what I wrote I mentioned the three orders of magnitude increase in write endurance as compared with NAND. But when paired with the three orders of magnitude increase in performance, that means it takes the same number of hours to burn it out.

Only in some bizarro world where "three orders of magnitude increase in performance" also means "we'll write three orders of magnitude more data into it".

Loads are about use cases, not about how fast you can fill a disk. If my company produces 1TB of analytics info per day, it won't suddenly produce 1000TB just because I can write to the disks we buy faster.

Of course, being able to fill it faster also opens up some new, heavier use cases. But for any existing use case, we'd be writing the SAME data volumes we do now, just 1000 times as fast and with 1000 times the endurance.

And even if we write 100x the data we do now, we still get 10 times the endurance.


The thing about bit addressability is not completely true. If you can address memory only in pages, you can have quite a large write amplification, depending on the access pattern. A single byte written may count as $PAGESIZE "written data".
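The write-amplification effect described above is easy to quantify. The page size is an assumed round number, and the model below takes the worst case (no write coalescing):

```python
# Write-amplification sketch: on a page-addressable device, each byte-sized
# update still costs a full page write, so amplification depends entirely
# on the access pattern. Sizes are assumed for illustration.

PAGE_SIZE = 4096  # bytes per page, assumed

def write_amplification(write_offsets, write_size=1):
    """Bytes physically written / bytes logically written, if every
    logical write rewrites the whole page containing it."""
    pages_touched = len(write_offsets)  # worst case: no coalescing
    return (pages_touched * PAGE_SIZE) / (len(write_offsets) * write_size)

# 1000 single-byte writes scattered across different pages:
scattered = [i * PAGE_SIZE for i in range(1000)]
print(f"scattered 1-byte writes: {write_amplification(scattered):.0f}x")

# On a bit/byte-addressable medium, the same workload writes only 1000 bytes.
```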


A lack of bit addressability means that hammering one bit would burn out a whole word/page, but it doesn't affect how many cycles it takes to reach that burn-out point, unless you've got wear leveling.

In practice, if you burn out any one bit, you need to retire a chunk of the array at least as large as a cache line. And it's not likely that you'll actually be able to directly hammer a single bit, because the endurance is still low enough to require ECC.


So what do we do with it?

We have two models of storage - volatile working storage, and things that simulate disk drives. It's not clear what to do with persistent randomly addressable storage at DRAM speed. Having to go through an OS and a file system to access a few bytes kills the performance advantage of such devices. Making the device look like RAM makes it too easy to mess up. We need something in between, probably with processor support to allow controlled access without going through the OS for each access.

The great thing about RAM being volatile is that you can reboot and clean up your mess. With persistent storage, things can go gradually downhill.


Use the MMU to keep the fast non-volatile RAM out of both application and kernel memory, put a traditional RAM disk file system on the fast non-volatile RAM, and let mmap truly map blocks of files into address spaces, instead of demand-paging it in.

That probably is not the best one can do, but it is simple, and may be fairly easy to get ‘right’ if you put the code handling the file system part into a secondary kernel address space. That keeps the part that can mess up the file system’s metadata small.
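The programming model being described is essentially "map a file, store through it, sync at the durability point". A minimal sketch, using a regular temp file as a stand-in for a file on a hypothetical DAX/pmem-backed filesystem (where the same calls would hit persistent memory directly, with no page cache in the way):

```python
# Sketch of the mmap-based persistence model: map a file, update it in
# place with ordinary stores, and use msync as the durability point.
# A plain temp file stands in for a pmem-backed file here.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "store.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)          # pre-size the file to one page

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        m[0:5] = b"hello"            # store-instruction-level update
        m.flush()                    # msync: the durability point

with open(path, "rb") as f:
    print(f.read(5))                 # the update survives the unmap
```

On a real DAX mount the flush would translate to cache-line flushes rather than page-cache writeback, but the application-visible model is the same.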


DAX, which is already in new kernels, provides something like this.

https://lwn.net/Articles/610174/


Coincidentally I noticed some recent patches on linux-kernel:

"BTT is a library that converts a byte-accessible namespace into a disk with atomic sector update semantics (prevents sector tearing on crash or power loss). ... BLK is a driver for NVDIMMs that provide sliding mmio windows to access persistent memory." https://lwn.net/Articles/649588/


I think the only distinction you need to make is memory where transactions make sense (user data), and where they don't (program state).

I can see a scenario where opening a file for writing works just like mmap[0] with MAP_PRIVATE, i.e. you get blockwise copy-on-write, except everything will persist as if everything on your filesystem was under a VCS like git.

I reckon that, just as volume management, encryption, and snapshotting have steadily been folded into the filesystem, so will the VFS and page cache be.

[0] http://man7.org/linux/man-pages/man2/mmap.2.html


see pmem.io



Always the same thing.

>1000X faster, 1000X cheaper!

Then: "No, you can't buy one right now, but you will be able to do it 'soon'. And no, it won't actually be 1000X faster nor 1000X cheaper because blah blah blah..."


... and yet, now we have 10TB hard drives and cheap reliable SSDs on PCIe. we also have 24-core Xeons and billion-transistor GPUs. let's not forget fiber at home, 40gig ethernet in the datacenter, and reliable 4G on main st.

clearly, something, somewhere is causing progress to happen, despite your inexplicable inability to see it.

the real problem is people keep making software that gobbles up all these gains.


They explicitly mention that they are going to release it next year. Not exactly when next year, but it's a clear enough statement that it will be a public failure if they don't come out with it.

I think that's sufficient to not put it into the vaporware category.


Over the years I have come to realize that unless I can buy something at the nearest BestBuy/Amazon, it is vapor. And also, only then do I find out how much cheaper and more performant it actually is.


It's an official press release from Intel and Micron and the release says that they've begun production, not that they just got it working in a lab.

Performance questions are more valid.


On the infographic the subtext of "1000x faster" is "up to 100s of times faster than NAND"

Then it mentions 10x more performance with a PCIe/NVMe interface.

Still all good things, but 1000x is probably more marketing than reality.

Source: http://www.intelsalestraining.com/infographics/memory/3DXPoi...


My guess is that 10x indicates that they've saturated the bandwidth of the PCIe interface.


I think the reason this is so often true is that with stuff like this there is no eureka moment where you suddenly have something 1000x better than what already exists, as can sometimes happen in other fields like chemistry; instead, a technique is developed and then improved over a period of time. The point being, nobody would wait to be 1000x better than the market: as soon as you beat the incumbents by a much lower factor, you bring your product to market. There is no reason to hold your release because you are only at 10x and next year you will be 50x.


I think you're conflating R&D press releases (which are often vapour) with announcements of actual production, which are few and far between, and almost always pan out.

Second, about those "1000x faster and 1000x cheaper" claims: I've been around a few decades, and we DO have 1000x faster and 1000x cheaper stuff now.

CPUs are 1000x the speed of 1980 CPUs.

1GB of RAM would cost you a house back in 1990.

A 1TB disk would cost you half a skyscraper, plus take 2-3 houses to house, back in the day.


Not only do we have stuff 1000x faster/cheaper than before, but 3D XPoint is backed by Intel and Micron, both of whom have collaborated for decades on developing faster, cheaper, more durable storage.


Where did you read "1000x cheaper"? And they say comparing to SSD and HDD it will not be cheaper but faster and comparing to RAM it will be cheaper but not faster.


I hope someone gives a push to MRAM, which seems like a more interesting option imo.

MRAM has similar performance to SRAM, similar density to DRAM but much lower power consumption than DRAM, and is much faster and suffers no degradation over time in comparison to flash memory. It is this combination of features that some suggest makes it the “universal memory”, able to replace SRAM, DRAM, EEPROM, and flash.

https://en.wikipedia.org/wiki/Magnetoresistive_random-access...


There's also FRAM, which has also been around for a while? I have an MSP430 microcontroller where the primary storage is 64kB of FRAM. It's great; performs like SRAM, but is totally persistent, and doesn't wear out.

The largest devices I've seen are 4Mb (512kB), which isn't a lot by PC standards, but is dead handy for embedded.

browses

Oho, they make one which is pin-compatible with SRAM chips!

http://www.fujitsu.com/global/products/devices/semiconductor...

Price looks like $15 a single part to $12 for ten. Ouch. If we assume that it's about $20 a megabyte in bulk, a gigabyte would cost about $20k. This is, price-wise, equivalent to:

- RAM, in 1996

- Spinning disk, in about 1988

- Flash, in about 1998 (extrapolation, my chart doesn't have any data before 2004).

Ref: http://www.jcmit.com/memoryprice.htm
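The price math above can be checked directly from the part numbers quoted (the $12 ten-unit price and 4Mb capacity come from the comment; everything else is arithmetic):

```python
# Reproducing the rough FRAM price math: a 4Mb (512 kB) part at ~$12
# in tens works out to roughly $24/MB, so a gigabyte built from such
# parts would run on the order of $20-25k.

PART_PRICE_USD = 12.0          # 10-unit price, from the comment
PART_CAPACITY_MB = 0.5         # 4 Mb = 512 kB

usd_per_mb = PART_PRICE_USD / PART_CAPACITY_MB
usd_per_gb = usd_per_mb * 1024

print(f"${usd_per_mb:.0f}/MB, ~${usd_per_gb / 1000:.0f}k/GB")
```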

Today's statistics have been brought to you by Late, and Tired. Enjoy.


I'd like to see a source for the "similar density to DRAM" claim; my understanding is that it's an 8F2 footprint, and scaling it down to small process nodes is still problematic, even with spin-torque transfer.

All that being said, there are people who make the exact same claims about RRAM as you quote for MRAM, which 3d-xpoint appears to be.

Also note that you can buy MRAM parts right now, which are replacements for battery-backed SRAM, and are more radiation resistant than SRAM. Densities are fairly low though.


> my understanding is that it's an 8F2 footprint

Where does that come from? From everything I've read is that its structure is fairly analogous to DRAM, "simply" replacing the capacitor with the magnetic tunnel junction, which has its component layers stacked vertically, thus not really taking up any extra space.

> scaling it down to small process nodes is still problematic, even with spin-torque transfer.

yeah, that's the main gist I'm getting too from following the news.

> people who make the exact same claims about RRAM as you quote for MRAM, which 3d-xpoint appears to be.

At least the xpoint incarnation still seems to be slower than DRAM according to that article, while MRAM is being offered as SRAM/battery-backed DRAM drop-in replacement.


>> my understanding is that it's an 8F2 footprint

> Where does that come from? From everything I've read is that its structure is fairly analogous to DRAM, "simply" replacing the capacitor with the magnetic tunnel junction, which has its component layers stacked vertically, thus not really taking up any extra space.

I did some searching; older references show an 8-12F2 size, for e.g. the Everspin parts. Grandis claims a 6F2 size which is indeed comparable to DRAM.

>> people who make the exact same claims about RRAM as you quote for MRAM, which 3d-xpoint appears to be.

> At least the xpoint incarnation still seems to be slower than DRAM according to that article, while MRAM is being offered as SRAM/battery-backed DRAM drop-in replacement.

Right, the product they are claiming they will manufacture next year is slower than DRAM and less dense than flash. (Frustratingly, I couldn't find a reference for whether they are talking about latency or throughput when they say "slower"; it makes a big difference for which applications will be hurt by the performance mismatch.)

However, there doesn't appear (yet) to be a fundamental reason why resistive ram must always be slower than DRAM, nor a fundamental reason why they couldn't do MLC tricks with it, so you can't say all RRAM will be slower than DRAM and less dense than NAND.


Fair point, perhaps I should have phrased it more as a technology with similar applications. I find it interesting that there are parts available now, albeit made on a 180nm process apparently.



Looks like this news item is made up from Intel's recent press release [1] and quite a bit of hand-waving. Take, for example, the claim that "it would radically increase the number of people that could be supported for the same price". We already have E7-series Xeons that support up to 1.5TB of RAM [2] (is that per socket? there can be up to 8 sockets per server). At that point, surely we would be constrained by processing and I/O, not RAM.

[1] http://newsroom.intel.com/community/intel_newsroom/blog/2015...

[2] http://ark.intel.com/products/84685/Intel-Xeon-Processor-E7-...


A guess: this uses memristors (i.e. RRAM).

Crossbar is a startup in the field; their site gives plenty of details about the tech.

http://www.crossbar-inc.com/


No, this is not memristors (unless you consider PCM to be a memristor) and it has no relation to crossbar-inc.com. This is phase change memory.

https://www.google.com/search?q=site%3Amicron.com+%22cross-p...


I think this is not the memristor thing. I remember memristor-related posts on Slashdot (2008 [1], 2012 [2]), but the ones related to Intel + Micron (2014 [3], 2015 [4]) talk about "3D NAND", which may just be NAND flash using multiple layers with 3D interconnect. However, the speed and density claims make me doubt whether or not this really is the same as the memristor.

[1] http://hardware.slashdot.org/story/08/04/30/211228/memristor... [2] http://hardware.slashdot.org/story/12/07/25/2127229/the-hp-m... [3] http://hardware.slashdot.org/story/14/11/25/2027220/how-inte... [4] http://hardware.slashdot.org/story/15/03/26/2037221/micron-a...


RRAM was there first, and it will prevail.

Let's just forget about this "memristor" PR nonsense. HP did as well. (Their activities seem to be dead.)


Do memristors read or write based on the applied voltage? I noticed that in some of the intel marketing material.


How else would you do read/write?


I'm not familiar with the physics of memristors or storage in general. I was wondering if a single line is used to read and write, increasing the voltage on write to change the state.


Apparently a few different things have been called "memristors", but at least some of them have a resistance that depends upon the integral of current through them into the infinite past. DC current one way to write a 1, the other way to write a 0, and AC current to nondestructively read, I guess. Typically this works through ion migration in the material, and I suspect that resulted in abysmal endurance, putting an end to the promise of memristors.
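The "resistance follows the integral of current" behaviour described above can be captured in a toy model. The linear state update, the scaling constant, and the resistance bounds below are all illustrative assumptions, not a physical device model:

```python
# Toy memristor: resistance tracks the integral of current through the
# device. DC one way writes, DC the other way erases. The constants are
# illustrative, not a physical model.

class ToyMemristor:
    def __init__(self, r_on=100.0, r_off=10_000.0):
        self.r_on, self.r_off = r_on, r_off
        self.state = 0.0                    # 0 = high resistance, 1 = low

    def apply_current(self, amps, seconds):
        """Charge through the device shifts its state (ion migration)."""
        self.state = min(1.0, max(0.0, self.state + amps * seconds * 1e3))

    @property
    def resistance(self):
        # Interpolate between the two resistance extremes.
        return self.r_off + (self.r_on - self.r_off) * self.state

m = ToyMemristor()
print(f"initial: {m.resistance:.0f} ohm")   # high resistance: one state
m.apply_current(amps=0.001, seconds=1.0)    # DC one way: write
print(f"written: {m.resistance:.0f} ohm")   # low resistance: other state
m.apply_current(amps=-0.001, seconds=1.0)   # DC the other way: erase
print(f"erased:  {m.resistance:.0f} ohm")
```

A nondestructive AC read would average to zero net charge, leaving the state unchanged, which is why read and write can share the same wire.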


So this is essentially persistent DRAM?


The article mentions:

>By contrast, 3D XPoint works by changing the properties of the material that makes up its memory cells to either having a high resistance to electricity to represent a one or a low resistance to represent a zero.

Which sounds a lot like memristors: https://en.wikipedia.org/wiki/Memristor to me. If it's memristor memory, the main advantages are that it's cheap, uses hugely less power than flash, takes up less space, and has far lower latencies.

We're probably a long way from replacing DRAM with memristors because the latency is still much higher, but if this stuff scales up well you could do something like put the whole page file on it and get even faster loads than current top-of-the-line SSDs.


The press release seems to be intentionally silent on the actual memory cell technology. Another possible tech the cell might be based upon is some variant of Nano-RAM.

https://en.wikipedia.org/wiki/Nano-RAM


That's certainly what it sounds like, at least in terms of functionality. Slightly slower, non-volatile RAM.


"At that point in time, there was an 8:1 price/GB differential between the 512 GB SSDs and the 500 GB HDDs. On the smaller drives, the ratio was 4:1." [1]

If anything I'm hoping the introduction of a competing new technology will make higher capacity SSDs cheaper.

1: http://www.zetta.net/blog/ssds-replace-hdds/


This article is so thin on details.

"3D XPoint works by changing the properties of the material that makes up its memory cells to either having a high resistance to electricity to represent a one or a low resistance to represent a zero."

How does it do that?


The BBC isn't EE Times and they're never going to be. Also, Intel is going out of their way to avoid providing many details, so in this case it isn't the press who's dumbing it down.


Dear Micron, brace for a mother of all patent lawsuits from HP.


> Dear Micron, brace for a mother of all patent lawsuits from HP.

Ignore this person. They don't know what they are talking about.

This is phase-change memory. It's been on everyone's radar for the last decade. HP is pushing their titanium oxide memristors. That's a completely different technology.


No, apparently it's not PCM.

>While they did not specifically state it, it looks to be phase change memory (edit at the Q&A Intel stated this is not Phase Change).

Source: http://www.pcper.com/news/Storage/Breaking-Intel-and-Micron-...


HP certainly convinced the public that they have a monopoly on NVM. Too bad their technology isn't that successful.


HP itself is convinced they have a monopoly on the idea. Technology doesn't have to be successful if you have an army of lawyers working for you. Who wants to bet HP will sue just to keep Micron from shipping products for a couple of years?


I assume there are some other disadvantages besides just a high price compared to either DRAM or SSD storage. What are they?


It appears to be slower than DRAM, so it's not going to be a sufficient replacement. I also see no indication of how much storage it actually offers. The range between RAM and SSDs isn't that large and I think it's going to be a huge difference whether they can offer 16GB, 32GB or even 64GB at an affordable price.

For a lot of users 16GB might not be worthwhile, and 64GB might actually allow some people to not use an SSD at all, so I think they will have to provide at least 32GB of storage. That seems obvious enough that the fact it's not stated in the article is somewhat concerning.


The article specifically puts it at a price point lower than DRAM but higher than flash. It's a passive array which helps make it cheaper than DRAM, but stores only one bit per cell, while modern NAND stores up to 3 bits per cell.


They're promising the first chip to be 128Gb, which is where MLC flash is right now. But their die size looks a lot larger than MLC.


Interesting. At 128GB it's definitely practical to put your system partition and applications on it. If you rely heavily on cloud services that would be more than sufficient for most people and even if it isn't you could add an SSD for music and videos.

If they release it next year and it turns out well, I can certainly see Apple pulling such a move for the MacBook (Air).


128Gb, not 128GB. You still need quite a few chips to build a usefully large drive.


128Gb is only a small factor below state-of-the-art flash chips, if at all. I think 3d NAND drives it up to 384 Gbit/die. Don't know about the die size though.


Quite a few being... 8.


From the article

"Each megabyte of 3D Xpoint will certainly be significantly cheaper than the equivalent amount of Ram. And the new technology has the added advantage of being non-volatile, meaning it does not "forget" information when the power is switched off. But, unfortunately it is still not quite as fast as Ram, and some - but not all - applications need the extra speed the older tech provides."


If what they claim is true, it has the potential to actually be cheaper than both DRAM and NAND, based on the density. The biggest disadvantage at the moment is that it is so new that we don't know much.

What can be read from the announcement: it's not as rewritable as DRAM, offering only up to 1000x more writes than NAND. It's also slower than DRAM. But it seems better than NAND in all aspects.


It's nowhere near as dense as NAND. It's just a bit denser than DRAM.


>It's nowhere near as dense as NAND.

Err, they mention several times the density of NAND.


They claim 10 times the density of DRAM, which falls quite short of the density of NAND.


Samsung had similar claims about v-nand/3D-nand, which they released on their 850 Pro series


Did they claim it was magnitudes faster and 1000x more durable than NAND? I think not, since it is still the same concept...


This is a completely different technology from NAND.



