Samsung SSD 850 Pro (128GB, 256GB & 1TB) Review: Enter the 3D Era
by Kristian Vättö on July 1, 2014 10:00 AM EST

Why We Need 3D NAND
For years, it has been known that traditional NAND (i.e. 2D NAND) is on its last legs. Many analysts predicted that we would not see NAND scaling below 20nm because reliability would simply be too low to make such a small lithography feasible. However, thanks to some clever engineering on both the hardware and firmware sides, NAND has scaled to 15nm without any significant issues, but now the limit really has been reached. To understand the limits of 2D NAND scaling, let's say hello to our good old friend, Mr. N-channel MOSFET.
Unfortunately the diagram above is a bit too simplified to truly show what we need, so let's look at a real cross-section photo instead:
Let me walk you through the structure first. At the top is the control gate, which is a part of a structure known as a wordline. In a standard NAND design the control gate wraps around the floating gate and the two gates are separated by an insulating oxide-nitride-oxide layer (i.e. ONO), which is sometimes called Inter Poly Dielectric (IPD). Under the floating gate is the tunnel oxide, which is also an insulator, followed by the silicon substrate that acts as the bitline.
The reason the control gate is wrapped around the floating gate is to maximize the capacitance between the two. As you will soon learn, this capacitance is the key factor in NAND because it is what allows the control gate to actually control the floating gate.
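To put a rough number on that idea, here is a minimal sketch of the so-called gate coupling ratio, the fraction of the control-gate voltage that actually couples onto the floating gate. The capacitance values are invented purely for illustration and are not figures from the article.

```python
# Minimal sketch of the gate coupling ratio (GCR): the fraction of the
# control-gate voltage that couples onto the floating gate. Wrapping the
# control gate around the floating gate increases the IPD capacitance
# and therefore the GCR. Capacitance values are invented for illustration.

def gate_coupling_ratio(c_ipd, c_tunnel):
    """GCR = C_ipd / (C_ipd + C_tunnel), in arbitrary but consistent units."""
    return c_ipd / (c_ipd + c_tunnel)

# Wrapped control gate -> larger IPD capacitance -> tighter control.
print(gate_coupling_ratio(c_ipd=60, c_tunnel=40))  # 0.6
# Flat (planar) control gate -> smaller IPD capacitance -> weaker control.
print(gate_coupling_ratio(c_ipd=40, c_tunnel=40))  # 0.5
```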
The purpose of bitlines and wordlines can be rather difficult to understand when looking at a cross-section, so here is what it all looks like from the top. Basically, bitlines and wordlines are just lines going in perpendicular directions and the floating gate and other materials reside between them.
When programming a cell, a high voltage of around 20V is applied to the wordline of that cell. Of course, the problem is that you cannot apply voltage to just one cell because the whole wordline will be activated, so in order to select a specific cell, the bitline of that cell is held at 0V. At the same time, the neighbouring bitlines are charged to about 6V because this increases the capacitance between the bitline and the floating gate, which in turn negates enough of the capacitance between the control and floating gate that the electrons cannot tunnel through the tunnel oxide. This is crucial because if all the bitlines were held at 0V, then all the cells along that wordline would be programmed with the same value.
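To make that selection logic concrete, here is a hedged sketch using the approximate voltages above (~20V on the selected wordline, 0V on the selected bitline, ~6V on the inhibited bitlines). The 15V cut-off is an arbitrary illustrative threshold, not a real device parameter.

```python
# Toy model of programming one cell along a wordline. The whole wordline
# gets the ~20V program pulse; only the cell whose bitline sits at 0V
# sees a large enough field across its tunnel oxide to actually program.
# Bitlines raised to ~6V are inhibited. Not a real device model.

PROGRAM_V = 20.0            # pulse applied to the selected wordline
INHIBIT_V = 6.0             # applied to unselected (inhibited) bitlines
SELECT_V = 0.0              # applied to the selected bitline
TUNNEL_THRESHOLD_V = 15.0   # arbitrary illustrative cut-off for tunnelling

def programmed_cells(selected_bitline, bitline_count):
    """Return, per bitline, whether the cell on the pulsed wordline programs."""
    result = []
    for bl in range(bitline_count):
        bitline_v = SELECT_V if bl == selected_bitline else INHIBIT_V
        # Only a near-full 20V drop across the tunnel oxide lets electrons tunnel.
        result.append((PROGRAM_V - bitline_v) > TUNNEL_THRESHOLD_V)
    return result

print(programmed_cells(selected_bitline=2, bitline_count=5))
# -> [False, False, True, False, False]
```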
To erase a cell, the reverse operation is performed: the wordline is kept at 0V while ~20V is applied to the bitline, which makes the electrons flow in the opposite direction (i.e. from the floating gate back to the bitline/silicon).
The way NAND is programmed and erased is also its Achilles' Heel. Because such high voltages are needed, the insulators around the floating gate (i.e. the ONO and tunnel oxide) wear out as the NAND goes through program and erase cycles. This wear causes the insulators to lose their insulating characteristics, meaning that electrons may be able to escape the floating gate or get trapped in the tunnel oxide during a program or erase. This causes a change in the voltage state of the cell.
Remember, NAND uses the voltage states to define the bit value. If the charge in the floating gate is not what it is supposed to be, the cell may return an invalid value when read. With MLC and TLC this is even worse because the voltage states are much closer to each other, meaning that even a minor change in charge may shift the cell out of its original voltage state, which in turn changes the cell's value. Basically, MLC and TLC have less room for voltage state changes, which is why their endurance is lower: a cell that cannot hold its charge reliably is useless.
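As a hedged illustration of those shrinking margins, the sketch below maps a threshold voltage to the nearest defined state for SLC and TLC. The voltage windows are invented for illustration; real NAND uses calibrated read reference voltages and its distributions differ.

```python
# Toy model of SLC vs TLC read margins. The voltage levels are invented
# for illustration; real NAND uses calibrated read reference voltages.

def read_state(vth, levels):
    """Map a threshold voltage to the index of the nearest defined state."""
    return min(range(len(levels)), key=lambda i: abs(levels[i] - vth))

slc_levels = [0.0, 4.0]                       # 2 states, widely spaced
tlc_levels = [i * 4.0 / 7 for i in range(8)]  # 8 states packed into the same range

vth = tlc_levels[5]   # a cell charged to roughly 2.86V
drift = 0.4           # the same small charge loss in both cases

print(read_state(vth, slc_levels), read_state(vth - drift, slc_levels))  # 1 1
print(read_state(vth, tlc_levels), read_state(vth - drift, tlc_levels))  # 5 4
# The drift is harmless with SLC spacing but flips the TLC cell's value.
```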
Now that we have covered the operation of NAND briefly, let's see what this has to do with scaling. Here is the same cross-section as above but with some dimensions attached.
That is what a cross-section of a single cell looks like. When NAND is scaled, all these dimensions get smaller, which means that individual cells are smaller as well as the distance between each cell. The cross-section above is of IMFT's 25nm NAND (hence the bitline length of 25nm), so it is not exactly current generation but unfortunately I do not have any newer photos. There is no general rule to how much the dimensions shrink because 16nm simply means that one of the lengths is 16nm while others may not shrink that much.
The scaling introduces a variety of issues, but I will start with the cell size. As the cell size is shrunk, the size of the floating gate is also shrunk, which means that the floating gate is able to hold fewer and fewer electrons with every new process node. To put this into perspective, Toshiba's and SanDisk's 15nm NAND stores fewer than 20 electrons per NAND cell. With TLC, that is less than three electrons per voltage state, so there is certainly not much headroom for escaped electrons. In other words, the cell becomes more vulnerable to IPD and tunnel oxide wear because even the loss of a single electron can be critical to the voltage state.
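Spelling the arithmetic out: a TLC cell stores 3 bits, i.e. 8 distinct voltage states, so roughly 20 electrons leave only a couple of electrons between adjacent states. A trivial sketch of that budget follows; the 20-electron figure is the one quoted above, and the rest is plain arithmetic, not a device model.

```python
# Back-of-the-envelope electron budget per voltage state, using the
# ~20 electrons per 15nm cell figure quoted above.

electrons_per_cell = 20

for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3)):
    states = 2 ** bits            # 2, 4 or 8 distinct voltage states
    gaps = states - 1             # separations between adjacent states
    print(f"{name}: {states} states, ~{electrons_per_cell / gaps:.1f} electrons per gap")

# TLC: 8 states, ~2.9 electrons per gap, so losing even one or two
# electrons moves the cell a large fraction of the way to the next state.
```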
The second issue is the proximity of the cells. The key factor in NAND is the capacitance between the control and floating gate, but as the cells move closer to each other through scaling, the neighbouring cells also introduce capacitive coupling. In simpler terms, the neighbouring cells interfere more as the distance between the cells shrinks. The problem is that the interference varies depending on the charge of the neighbouring cell, so there is no easy way to cancel it out. This in turn makes programming harder and more time consuming because a higher voltage is needed to achieve sufficient capacitance between the control and floating gate for the electrons to tunnel through the oxide.
The graph above outlines the historical rate at which cell-to-cell interference has increased with each die shrink. At 90nm, the interference was only around 8-9%, but at 20nm it is a rather significant 40%. The interference figure means that 40% of the capacitive coupling comes from the other cells, making it very hard to control the gate you are trying to program or read. Fortunately, as a result of some clever engineering (i.e. an airgap between the wordlines), the interference is only about 25% at 25nm, which is much more manageable than the 40% the historical rate would have given us.
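As a rough way to read those percentages, the sketch below treats the interference figure as the share of total capacitive coupling that comes from neighbouring cells rather than from the control gate, and crudely scales the program voltage by the remaining useful share. The scaling rule is an illustrative simplification, not how program voltages are actually derived.

```python
# Illustrative only: how cell-to-cell interference eats into the useful
# coupling from the control gate. The interference figures are the ones
# quoted above; the voltage scaling is a crude simplification.

BASE_PROGRAM_V = 20.0  # nominal program voltage with no interference

interference_by_node = {
    "90nm": 0.085,
    "25nm (with airgap)": 0.25,
    "20nm (historical trend)": 0.40,
}

for node, interference in interference_by_node.items():
    useful_share = 1.0 - interference      # coupling left for the control gate
    required_v = BASE_PROGRAM_V / useful_share
    print(f"{node}: {interference:.0%} interference -> ~{required_v:.1f}V to program")
```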
The above can be fairly tough to digest, so let's use a simple analogy that everyone should be able to understand. Imagine that you have a bunch of speakers, each playing a different song. When these speakers are relatively large and far away from each other, it is easy to properly hear the song that the speaker closest to you is playing. Now, what happens if you bring the other speakers closer to the speaker you are listening to? The other speakers will interfere and it becomes harder to tell your song apart from the others. If you turn down the volume or switch to smaller speakers with lower output volume, it becomes even harder to distinguish your song from the songs that the other speakers are playing. If you repeat this enough times, there will be a point when you hear your song just as unclearly as the other songs.
The effect is essentially the same with NAND scaling. When the cells, or speakers in the analogy, move closer to each other, the amount of interference increases, making it harder to sense the cell or listen to the speaker. At the same time the size of the cell (or speaker) shrinks, which makes it even harder to focus on any one cell (or speaker).
That is NAND scaling and its issues in a nutshell. We have seen innovations such as airgaps between the wordlines to reduce cell-to-cell interference and a high-K metal gate in place of the traditional ONO IPD to increase control gate to floating gate capacitance, but the limit has now been reached. However, like other semiconductors, NAND must follow Moore's Law in order to become more cost efficient. If you can no longer scale in the X and Y dimensions, what do you do? You hit the reset button and introduce the Z dimension.
160 Comments
Squuiid - Saturday, March 14, 2015 - link
Plus, the MX100 reliability is horrible. Just google MX100 BSOD, disappearing drive. I have 2x MX100 512GB SSDs and I recommend you don't buy one, no matter how cheap they are.
nightauthor - Tuesday, July 1, 2014 - link
For business purposes, I would rather pay twice as much and get a 10 year warranty vs the 3 year supplied by Crucial. Though, for my daily, I would probably go with the Crucial.

TheWrongChristian - Wednesday, July 2, 2014 - link
No current SATA drives push low queue depth random IOs to the point of saturating SATA II, let alone SATA III. At high queue depths, perhaps. But then, that is not a typical workload for most users, desktop or server.
Plus, it's a new drive, prices will come down.
jwcalla - Monday, June 30, 2014 - link
Unless they're doing 5% OP the capacities are kinda... off.

melgross - Monday, June 30, 2014 - link
I think there's a slight misunderstanding of manufacturing cost. While the die size may be the same, or even smaller than a competing technology, the 32 level chip does cost more to make per area. There are more masks, more layers, more etching and washing cycles, and more chance of defects. Right now, I do see why the cost is higher. I can only assume that as this technology progresses, that cost will drop per area. But it will always remain higher than an SLC, MLC or TLC chip.
So there is a balance here.
Kristian Vättö - Tuesday, July 1, 2014 - link
You are correct. I did mention yield and equipment cost in the final paragraph but I figured I won't go into detail about masks and etching since those would have required an in-depth explanation of how NAND is manufactured :)

R0H1T - Tuesday, July 1, 2014 - link
It would be great if Anand or you do a writeup on 3D NAND & deal with the specific pros & cons of it as compared to traditional 2D NAND & if possible include something related to manufacturing processes of these & how they're different OR more/less expensive, certainly as in case of V-NAND?

MrSpadge - Tuesday, July 1, 2014 - link
You wouldn't need too much detail - just saying that the number of process steps increases by probably around an order of magnitude should make this pretty clear.

frenchy_2001 - Tuesday, July 1, 2014 - link
It is probably more than that, as Samsung is currently manufacturing 32 layers of cells. Each layer requires multiple operations (deposit, etching, washing...). Their biggest advantage comes from regressing to 40nm: at that technology, each operation is *MUCH* cheaper than the equivalent one at 1X pitch (15~19nm). So, total cost is an unknown, but should be very competitive, after recovering the initial R&D investment.
Spatty - Tuesday, July 1, 2014 - link
And not to mention 3D NAND is still basically bleeding edge. It's still at the stage a new DDR generation is in when it arrives: much higher costs than the current gen.