Micron Announces 32GB DDR4 NVDIMM-N Modules
by Billy Tallis on November 13, 2017 9:00 AM EST
Micron is announcing today their next generation of NVDIMM-N modules combining DDR4 DRAM with NAND flash memory to support persistent memory usage models. The new 32GB modules double the capacity of Micron's previous NVDIMMs and boost the speed rating to DDR4-2933 CL21, faster than what current server platforms support.
Micron is not new to the Non-Volatile DIMM market: their first DDR3 NVDIMMs predated JEDEC standardization. The new 32GB modules were preceded by 8GB and 16GB DDR4 NVDIMMs. Micron's NVDIMMs are type N, meaning they function as ordinary ECC DRAM DIMMs but include NAND flash to back up data to in the event of a power loss. This is in contrast to the NVDIMM-F type that offers pure flash storage. During normal system operation, Micron's NVDIMMs use only the DRAM. When the system experiences a power failure or signals that one is imminent, the module's onboard FPGA-based controller takes over to manage saving the contents of the DRAM to the module's 64GB of SLC NAND flash. During a power failure, the module can be powered either through a cable to an external AGIGA PowerGEM capacitor module, or by battery backup supplied through the DIMM slot's 12V pins.
Micron says the most common use cases for their NVDIMMs are for high-performance journalling and log storage for databases and filesystems. In these applications, a 2S server will typically be equipped with a total of about 64GB of NVDIMMs, so the new Micron 32GB modules allow these systems to use just a single NVDIMM per CPU, leaving more slots free for traditional RDIMMs. Both operating systems and applications need special support for persistent memory provided by NVDIMMs: the OS to handle restoring saved state after a power failure, and applications to manage what portions of their memory should be allocated from the persistent portion of the overall memory pool. This can be addressed either through applications using block storage APIs to access the NVDIMM's memory, or through direct memory mapping.
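The direct memory mapping route can be illustrated with a short sketch. This assumes a file on a DAX-capable filesystem backed by the NVDIMM; the path below is a hypothetical stand-in (a real deployment would use something like a file under a DAX mount), and the flush call is the generic msync-style step that pushes stores toward the persistence domain:

```python
import mmap
import os

# Hypothetical stand-in path; in practice this would be a file on a
# DAX-mounted filesystem backed by the NVDIMM (e.g. under /mnt/pmem).
PMEM_PATH = "/tmp/pmem_demo.bin"

size = 4096
fd = os.open(PMEM_PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, size)

# Map the region into the address space; the application then treats it
# as ordinary memory rather than going through a block storage API.
buf = mmap.mmap(fd, size)
buf[0:5] = b"entry"   # e.g. appending a journal/log record
buf.flush()           # ensure the stores reach durable media
buf.close()
os.close(fd)
```

On power loss, the NVDIMM controller saves the DRAM contents to flash, so data written this way survives; the OS's job after restart is to restore that saved state so the mapping picks up where it left off.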
Micron is currently sampling the new 32GB NVDIMMs but did not state when they will be available in volume.
Conspicuously absent from Micron's announcement today is any mention of the third kind of memory they make: 3D XPoint non-volatile memory. Micron will eventually be putting 3D XPoint memory onto DIMMs and into SSDs under their QuantX brand, but so far they have been lagging far behind Intel in announcing and shipping specific products. NVDIMMs based on 3D XPoint memory may not match the performance of DRAM modules or these NVDIMM-N modules, but they will offer higher storage density at a much lower cost and without the hassle of external batteries or capacitor banks. Until those are ready, Micron is smart to nurture the NVDIMM ecosystem with their DRAM+flash solutions.
III-V - Monday, November 13, 2017 - link
"There is no point in increasing complexity. More chips - more things to fail."
Okay, so let's just run 8-bit CPUs. More transistors equals more things that can fail, after all! Idiot.
"Pointless cost increase."
Every second of downtime counts in a data center. Idiot.
"What's the point when it is trivial to do accomplish the same with a few lines of code and a general purpose nvme drive?"
Because you've got to spend time doing that. This saves time, and therefore money. If you had half a brain, you'd know this. Idiot.
PeachNCream - Monday, November 13, 2017 - link
Don't feed the troll.
ddriver - Monday, November 13, 2017 - link
Adding "idiot" to every pathetic failure to make a valid point or even a basic adequate analogy only adds value to illustrating what you are :)
ddrіver - Monday, November 13, 2017 - link
Fewer chips means less complexity and fewer things to fail. How is this not obvious? A transistor itself does not fail. The chip it's part of might. You have ~35bn transistors in a 4GB chip. They could easily put 256bn transistors into a 32GB package and reliability would be a lot better. Or at least they could make all chips socketed so you can easily replace the failing ones.
Reflex - Monday, November 13, 2017 - link
Why did you let him back? First article I click on today and on the very first comment and throughout most of the comments it's a bunch of ill-informed drivel from someone who does not understand the product, the target market, and who clearly did not even read the article.
How is this additive?
CajunArson - Monday, November 13, 2017 - link
Oh look, "ddriver" insulting technologies he clearly doesn't understand again from the comfort of his mom's basement.
peevee - Monday, November 13, 2017 - link
The only problem with it is that the capacitors are external. So the DIMM itself is insufficient to maintain the data.
theeldest - Monday, November 13, 2017 - link
Dell PowerEdge 14G has a battery available to provide power to up to 12 (maybe 16?) NVDIMMs. So it's integrated into the system and works exactly as you'd expect.
Hereiam2005 - Monday, November 13, 2017 - link
Thing is, there is this application called an in-memory database, where the entire database is stored in the DIMMs. About 3TB a node.
Let's say there's a power failure. If you have a super SSD with 3000MBps of bandwidth, you have to keep the entire system alive for 1000 seconds, nearly 17 minutes, to back up your entire memory. That's time you don't have.
On the other hand, if you put the SLC cache on the DIMM, 1) you don’t have to keep the entire system up, just the DIMM itself is enough, 2) you only need to backup the data on one single DIMM per SLC cache instead of all of them, and 3) you bypass the entire CPU and motherboard, enabling you to have monster bandwidth between the DIMM and the cache with far less power requirement.
Yeah, these things will eventually fail. But the pros outweigh the cons. Unless you can solve all those problems without the ssd cache, nvdimms are here to stay.
Just because you can’t see the need for these doesn’t mean it is not useful to someone else.
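The backup-time figures being argued in this thread reduce to one division: data size over sustained write bandwidth. A quick back-of-the-envelope check, using the numbers from the comments above:

```python
# Time to dump an in-memory database to flash at a given sustained
# write bandwidth. Figures come from the comment thread: a 3TB node,
# one 3 GB/s SSD vs one 6 GB/s SSD (e.g. an Ultrastar SN260).
def dump_time_seconds(data_bytes, bandwidth_bytes_per_s):
    return data_bytes / bandwidth_bytes_per_s

TB = 10**12
GB = 10**9

t_slow = dump_time_seconds(3 * TB, 3 * GB)  # 1000.0 s, ~16.7 minutes
t_fast = dump_time_seconds(3 * TB, 6 * GB)  # 500.0 s, ~8.3 minutes

print(t_slow, t_fast)
```

Doubling the drive count halves the time again, which is the parallelism point made below; the NVDIMM approach sidesteps the division entirely by keeping the save path on the module.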
ddriver - Monday, November 13, 2017 - link
So in your expert opinion, you are gonna spend $100,000 on RAM but put a single SSD in that system? Yeah, that makes perfect sense, after all you spent your budget on RAM ;)
IMO such applications would actually rely on much faster storage solutions than your "super ssd" - current enterprise SSDs are twice as fast and more. For example the Ultrastar SN260 pushes above 6 GB/s. So that's only 500 seconds. A tad over 8 minutes. And you can put a few of those in parallel too. Two of those will cut time to 4 minutes, four to just 2. You put 150k in a server and put in a power backup solution that cannot even last 4 minutes? You are clearly doing it wrong. I'd put a power generator on such a machine as well. Not just a beefy UPS.
But it doesn't even have to take that long, because in-memory databases can do async flushing of writes with negligible performance impact, and to tremendous returns.
You DON'T wait for power failure and then commit the whole thing to storage. You periodically commit modifications, and when power is lost, you only flush what remains. It won't take more than a few seconds, even with very liberally configured flush cycles. It will usually take less than a second.
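The periodic-commit idea described here can be sketched in a few lines. This is a toy illustration, not any real database's implementation: the store tracks dirty keys, a background timer would call flush() each cycle, and an emergency flush on power loss only has to handle the small delta since the last cycle. All names are made up for the example.

```python
import threading

class AsyncFlushingStore:
    """Toy in-memory store with delta-based periodic commits."""

    def __init__(self):
        self.data = {}           # the in-memory database
        self.dirty = set()       # keys modified since the last flush
        self.committed = {}      # stand-in for persistent storage
        self.lock = threading.Lock()

    def put(self, key, value):
        with self.lock:
            self.data[key] = value
            self.dirty.add(key)

    def flush(self):
        # Commit only what changed since the last cycle; this is what
        # keeps the power-loss flush down to seconds or less.
        with self.lock:
            for key in self.dirty:
                self.committed[key] = self.data[key]
            self.dirty.clear()

store = AsyncFlushingStore()
store.put("a", 1)
store.put("b", 2)
store.flush()        # periodic background commit
store.put("a", 3)    # only "a" is dirty afterwards
store.flush()        # a "power-loss" flush now touches just the delta
```

The design choice being argued: the cost of each flush is proportional to the write rate since the last flush, not to the 3TB total, so the emergency window shrinks to however much data accumulated in one cycle.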
Nobody keeps in-memory databases willy-nilly without flushing data to persistent storage, not only in case of power loss, but also in case of component failure. Components do fail, DRAM modules included. And when that happens, your 3 TB database will be COMPLETELY lost, even with those precious pseudo NV DIMMs. As I already said - pointless.
But hey, don't beat yourself up, at least you tried to make a point, unlike pretty much everyone else. That's a good sign.