SK Hynix has recently added single-die 16 Gb DDR4 memory chips to its product catalog. The benefit of the higher single-die capacity is twofold: the new components will let the company build high-capacity memory modules using fewer chips, and they will enable SK Hynix and its partners to build 256 GB DDR4 memory modules for ultra-high-end servers.

16 Gb DRAM chips per se are not exactly a breakthrough. Memory makers, including SK Hynix, already build high-capacity DRAM components by stacking two or four 8 Gb memory dies vertically using TSVs to get 16 Gb and 32 Gb components, and then use such components to build memory modules with 64 GB and 128 GB densities. Stacking makes the organization of DIMMs very complex: a 64 GB module is a quad-ranked DIMM (featuring two physical and two logical ranks), whereas a 128 GB module is octal-ranked (featuring two physical ranks and four logical ranks). LRDIMMs have relatively high latency in general (because they use additional buffers), and the complexity of the 64 GB/128 GB LRDIMM architecture forces module makers to increase latencies even further (to CL20/CL22 for the DDR4-2400/DDR4-2666 speed bins).

By contrast, SK Hynix has managed to develop single-die 16 Gb DDR4 components. Such ICs enable producers to build client memory modules or subsystems with fewer chips, lowering power consumption, and allow server-class DIMMs with densities of up to 256 GB. For servers, the 16 Gb DDR4 components will allow manufacturers to build dual-ranked 64 GB modules, quad-ranked 128 GB LRDIMMs, and octal-ranked 256 GB LRDIMMs.
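
To make the capacity math behind those rank counts concrete, here is a minimal sketch in Python. The helper name and the assumption of x4 DRAM devices with 16 data packages per physical rank (ECC devices excluded from capacity) are ours for illustration, not something SK Hynix specifies:

```python
def module_capacity_gb(die_gbit, dies_per_package, physical_ranks,
                       packages_per_rank=16):
    """Data capacity of a DIMM in GB, ignoring ECC devices.

    Assumes x4 DRAM devices, so a 64-bit data bus needs 16 packages per
    physical rank; dies stacked inside a package show up as extra
    logical ranks rather than extra packages on the PCB.
    """
    total_gbit = die_gbit * dies_per_package * packages_per_rank * physical_ranks
    return total_gbit // 8  # 8 Gbit per GB

# Today's stacked 8 Gb dies:
print(module_capacity_gb(8, 2, 2))    # 64  -> quad-ranked 64 GB LRDIMM
print(module_capacity_gb(8, 4, 2))    # 128 -> octal-ranked 128 GB LRDIMM

# Single-die and stacked 16 Gb dies:
print(module_capacity_gb(16, 1, 2))   # 64  -> dual-ranked 64 GB module
print(module_capacity_gb(16, 2, 2))   # 128 -> quad-ranked 128 GB LRDIMM
print(module_capacity_gb(16, 4, 2))   # 256 -> octal-ranked 256 GB LRDIMM
```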

Do not expect the 256 GB modules to show up tomorrow, but the importance of ultra-high-density LRDIMMs is hard to overestimate. For example, if the microcode is adjusted to allow it, a single-socket Xeon Scalable platform featuring an -M suffixed processor with 12 memory slots could potentially support 3 TB of six-channel memory. Meanwhile, an AMD EPYC-based system can currently support 2 TB of eight-channel memory per CPU socket, and these modules could help double that. For in-memory applications like huge databases, the more DRAM they can get, the better. Undoubtedly, 128 GB and 256 GB memory modules will come at a price. For example, Crucial sells its 128 GB DDR4 LRDIMM for $3999.99 at retail, so a module with twice the capacity would cost considerably more.
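
The per-socket figures above fall out of simple slot arithmetic. The sketch below (plain Python, assuming two DIMMs per channel on both platforms, which is our assumption rather than a statement from either vendor) shows how:

```python
def max_memory_tb(channels, dimms_per_channel, dimm_gb):
    """Maximum memory per socket in TB (1 TB = 1024 GB)."""
    return channels * dimms_per_channel * dimm_gb / 1024

# Xeon Scalable (-M SKU): 6 channels x 2 DIMMs = 12 slots
print(max_memory_tb(6, 2, 256))   # 3.0 TB with 256 GB LRDIMMs

# EPYC: 8 channels x 2 DIMMs = 16 slots
print(max_memory_tb(8, 2, 128))   # 2.0 TB with today's 128 GB LRDIMMs
print(max_memory_tb(8, 2, 256))   # 4.0 TB if 256 GB LRDIMMs are supported
```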

SK Hynix’s 16 Gb DDR4 chips are organized as 1Gx16 and 2Gx8 and supplied in FBGA96 and FBGA78 packages, respectively. At present, the 16 Gb memory components are rated to operate in DDR4-2133 CL15 and DDR4-2400 CL17 modes at 1.2 V. Sometime in the third quarter, SK Hynix plans to add DDR4-2666 CL19 to the lineup. SK Hynix does not disclose which manufacturing technology it uses to make its 16 Gb chips, but it is logical to expect that the company uses a fabrication process with minimal feature sizes and high yields to make such large dies.

General Specifications of SK Hynix's 16 Gb Chips
Part Number        Transfer Rate  Latency    Org.    Pkg.     VDD    Availability
H5ANAG6NAMR-TFC    2133 MT/s      15-15-15   1Gx16   FBGA96   1.2 V  Now
H5ANAG6NAMR-UHC    2400 MT/s      17-17-17   1Gx16   FBGA96   1.2 V  Now
H5ANAG6NCMR-UHC    2400 MT/s      17-17-17   1Gx16   FBGA96   1.2 V  Q3 2018
H5ANAG6NCMR-VKC    2666 MT/s      19-19-19   1Gx16   FBGA96   1.2 V  Q3 2018
H5ANAG8NAMR-TFC    2133 MT/s      15-15-15   2Gx8    FBGA78   1.2 V  Now
H5ANAG8NAMR-UHC    2400 MT/s      17-17-17   2Gx8    FBGA78   1.2 V  Now
H5ANAG8NCMR-UHC    2400 MT/s      17-17-17   2Gx8    FBGA78   1.2 V  Q3 2018
H5ANAG8NCMR-VKC    2666 MT/s      19-19-19   2Gx8    FBGA78   1.2 V  Q3 2018
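
As a cross-check on the Org. column, both organizations multiply out to the same 16 Gb density. A minimal sketch in Python (treating "1G"/"2G" as binary word counts of 2^30, the usual DRAM convention, which is our reading rather than something stated in the table):

```python
Gi = 2**30  # "1G" in DRAM organization shorthand: 2^30 words

print(1 * Gi * 16 / Gi)  # 16.0 Gb per 1Gx16 chip (2^30 words, 16 bits wide)
print(2 * Gi * 8 / Gi)   # 16.0 Gb per 2Gx8 chip  (2^31 words, 8 bits wide)
print(16 / 8)            # 2.0 GB of data per chip either way
```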

Keep in mind that it will take quite a while for server makers to validate 16 Gb chips and 2Hi/4Hi stacks based on them, so do not expect 256 GB modules to show up in today's servers anytime soon. In the meantime, 16 Gb DDR4 chips will enable makers of SO-DIMMs to build single-sided 16 GB DDR4 SO-DIMMs. They will also allow thin laptops (that do not use modules, but rely on memory soldered to the motherboard) to offer 16 GB of DRAM using eight chips. For any user wondering why most 13-inch notebooks only offer 16 GB of DRAM in their high-end configurations, these chips should enable a healthier ecosystem of higher-capacity small notebooks.
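
The eight-chip figure follows from the data bus width. The sketch below (plain Python; the 64-bit non-ECC SO-DIMM and 128-bit dual-channel soldered configurations are our illustrative assumptions) shows the chip counts and resulting capacities:

```python
def chips_needed(bus_width_bits, device_width_bits):
    """Number of DRAM devices needed to fill a data bus."""
    return bus_width_bits // device_width_bits

def capacity_gb(num_chips, die_gbit=16):
    """Capacity in GB when every device is a single 16 Gb die."""
    return num_chips * die_gbit // 8

# Single-sided 64-bit SO-DIMM built from x8 (2Gx8) devices:
n = chips_needed(64, 8)
print(n, capacity_gb(n))    # 8 chips, 16 GB

# Thin laptop with soldered, dual-channel (128-bit) memory from x16 (1Gx16) devices:
n = chips_needed(128, 16)
print(n, capacity_gb(n))    # 8 chips, 16 GB
```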

Source: SK Hynix

Comments

  • yuhong - Thursday, January 25, 2018

    I believe the listed memory chips are dual die packages, not true 16Gbit chips.
  • KarlKastor - Friday, January 26, 2018

    You're right. The tenth digit is an "M". M = DDP
  • iwod - Friday, January 26, 2018

    Which made half of the article pointless....
    But that means we now have single die, dual die, and stacked die.

    What exactly is stopping us from making larger DRAM, both capacity- and die-area-wise? In the era of in-memory computing, 4 TB isn't that large. DRAM and SSDs haven't seen any price drops for a long time.
  • MrSpadge - Saturday, January 27, 2018

    I thought the same while reading the article. If it's such a great hassle to improve per module capacity, why not simply go for larger dies? A few points came to my mind:

    - economies of scale: you don't want to produce a design only for the high end, but rather reuse a commodity item many times (considering the shift to virtualization & cloud computing, and products like the Titan V, this may not apply to the current high end market. Also TSVs are expensive)

    - the yield of smaller chips is better (but not a big issue for memory, since you can easily build some redundancy into them)

    - better array efficiency, i.e. a chip of 2x the capacity will be less than 2x bigger because some logic doesn't need to be duplicated (this is actually a reason to do it)

    - wasted space at the wafer edge: the larger your dies, the more unusable "fractional chips" you get at the wafer edge. It seems weird they don't simply put smaller chips of the same kind there. But so far the cost for doing that would probably be too high (stepper lithography is a serious high throughput business requiring high precision. Changing the mask on the fly is simply not possible for the current devices)
  • iwod - Monday, January 29, 2018

    Yes, but we have had those capacities since the 2x nm era. And we are now at 1x nm, possibly a 4x density increase, and yet we have little to no capacity increase.

    i.e. despite a better node, better yields, mature tech, and smaller die sizes, our DRAM price per GB hasn't gone down at all.
  • FreckledTrout - Thursday, January 25, 2018

    Those latencies are pretty bad.
  • DanNeely - Friday, January 26, 2018

    Giant-capacity server RAM has always been slower than consumer RAM due to the additional layers of hardware needed to put so many more DRAM dies on the memory bus while keeping signalling stable. What you're overlooking is that they're hundreds or thousands of times faster to read from and write to than an SSD, which is what such huge DIMMs are intended to replace. (In the old days their advantage over HDDs was even larger.)
  • Lolimaster - Friday, January 26, 2018

    This is basically the big market for NVRAM like Optane: get 1-2 TB of memory relatively cheap, with access to previously impossible densities, for a "small" performance hit.
  • dgingeri - Friday, January 26, 2018

    Larger capacity memory is always going to have higher latency.
  • Pinn - Thursday, January 25, 2018

    I built a Titan X (Pascal), quad-channel, 6-core, 128 GB RAM, 1.2 TB PCIe SSD machine a while back. Looks like the RAM kept the most value. Do we stock up on these like ammo?
