I just ordered the OCZ Synapse to use as a caching drive. The reviews all rave about speed increases. I am putting it in my main system. Would be nice if you could tell us where they fit in.
Also, anybody got recommendations on an SSD to get an older system to feel faster for surfing and the like?
It's hard for me to say anything about caching SSDs because we haven't reviewed any other than the Intel 311 Series (yet). IIRC the Synapse comes with its own caching software, which is different from Intel's SRT.
Fortunately, we have some more staff working on SSD stuff now. As you may have noticed, so far Anand has done all the SSD reviews. To reduce Anand's workload, I'll be doing some of the SSD reviews in the future, which should allow us to review more SSDs. In fact, I have a Plextor M3 SSD on its way here :-)
As for the SSD for an older system, is it a desktop or a laptop? I think the best option would be an SSD+HDD combo because that is cheap and still lets you have a decent amount of storage. You can try to find older SATA 3Gb/s SSDs (e.g. Intel X25-M G2 or Samsung 470 Series; they are very reliable). You can even hunt for a used drive: some people are already switching to faster SATA 6Gb/s SSDs, so you may find a bargain.
There is something else the article did not address: the lifetime of an SSD is a combination of its cell reliability and how often it gets rewritten.
Take a look at the second page. At 3x nm it may look like SLC can last 20 times longer than MLC. However, from a device point of view, a 120GB SLC drive can well last 40 or more times longer than a 120GB MLC drive as you write and delete files over and over.
For example, in order to rewrite the entire 120GB of information, each SLC cell only gets erased and written once, while an MLC cell will most likely be erased and written more than once (say, to go from 11 to 10 and then to 00, SLC needs one erase-write on each of two cells, whereas MLC needs up to three erase-writes on a single cell). Or imagine a hypothetical super-MLC cell with enough voltage levels for 120GB of storage in a single cell: every time anything changed, that one cell would get rewritten.
This just gets worse with the TLC design: as you reduce the number of cells needed to realize a given amount of storage, you reduce the error margin and increase the cycles each cell sees. The old saying still applies: "there is no replacement for displacement". There is no free lunch.
Thanks for the reply. I'll look into your suggestions, though given what I understand to be the limited life of SSDs I think I'll go new. Thanks again.
Please keep an eye out for caching solutions such as NVELO's Dataplex caching software. I am looking for one that works with XP. It doesn't make much sense to have to upgrade the OS if all you're looking for is a cheap upgrade to a 5-year-old PC. $69 for a caching drive is one thing; $220 for a drive and an OS is quite another...
Agree. I mentioned the NVELO caching software on these boards weeks (maybe months) ago after storagereview looked at them. It would be really good if you got some of their drives or even just software in to review. It is exciting that there is a competitor to Intel's caching in this space. Who knows? They may even be faster, cheaper and better....
I'd also like to see some analysis on NVELO "solutions." I was looking pretty seriously at buying Corsair's version of this with their Accelerator series drives as an "impulse purchase," but the lack of availability of their 60GB package and a good read of NVELO's software "licensing" put a quick halt on that.
NVELO looks like a killer app assuming it works. Sadly I expect that it does live up to its claims, but their DRM is pretty harsh. It's locked to your hardware, so if you, say, change your video card, you must "re-activate."
I know that for most people that's no worse than Windows itself, but I change hardware a lot, and I'd like to be able to move the SSD cache to other machines in my house without worrying that I'll get a DRM lockout.
57 months ago (4.75 years) you could get a 16GB Super Talent for 600 dollars; 41 months ago (3.42 years) you could get an 80GB Intel (1st gen) SSD for 600 dollars.
Small deviations make a big difference when you are calculating exponential growth (and decrease)
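To make that concrete, here is a rough Python sketch using the two price points quoted above; the prices and dates are the commenter's approximate figures, and the one-month shifts are only there to show how sensitive the implied yearly rate is:

```python
# Rough illustration: implied annual decline in $/GB from two price points.
# Figures are the approximate ones quoted above, not authoritative data.

def dollars_per_gb(price_usd, capacity_gb):
    return price_usd / capacity_gb

def annual_decline_factor(old_cost, new_cost, months_apart):
    # Factor by which $/GB shrinks per year, assuming a constant exponential trend.
    return (old_cost / new_cost) ** (12.0 / months_apart)

old = dollars_per_gb(600, 16)   # 16GB Super Talent, ~57 months ago
new = dollars_per_gb(600, 80)   # 80GB Intel (1st gen), ~41 months ago

for gap in (15, 16, 17):        # the real gap is ~16 months; shift it by one either way
    print(gap, "months apart ->", round(annual_decline_factor(old, new, gap), 2), "x per year")
```

Shifting the gap by a single month moves the implied yearly decline from about 3.1x to about 3.6x, which is the point about small deviations mattering.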
While the transition from SLC to MLC and now TLC sounds good, the reality is that SSD makers have yet to resolve all the reliability and compatibility issues with consumer-grade MLC SSDs.
Last time I checked, OCZ was on firmware version 15 and people are still experiencing issues. The issues are with all SSD suppliers, including Intel, Samsung, Corsair, etc., not just OCZ.
If data security is important, it would be wise to heed Anand's advice to WAIT 6-12 months and see if the SSD makers resolve the BUGS.
Actually, no one has any lock on SSD reliability. Intel, Samsung and Crucial have ALL had issues that required firmware updates to fix BUGS. We don't know how many more BUGS exist in their or other brands of consumer-grade SSDs.
Not all HDDs have issues. Yes, some do, especially the low-quality high-capacity SATA drives. That, however, is not a good reason to buy a defective SSD.
SSD makers are just cashing in on gullible consumers. If people will pay top dollar for defective goods, that's what unscrupulous companies will ship. If consumers refuse to accept CRAP products, then the makers will fix the products or go broke.
As someone who tried to use a SanDisk-controlled SSD recently, it's not as obnoxiously simple as you make it sound. It's one thing to know a drive will fail; it's another to experience BSODs every 20 minutes.
Making proper backups is the solution to drive failure, but a PC that crashes with regularity is utterly useless. I don't hate SSDs, I just want more assurance that they can be as stable as they are fast.
So I've had the Intel 80GB X25-M G2 since launch with zero issues, no reason to upgrade firmware, no BSODs or problems. I recently bought one of their 310 80GB SSDs for an HTPC--again, 5 months later, no issues, no problems, no firmware updates.
I've had a friend who's had two Vertex 2's in RAID 0 since launch with zero issues.
I also have a friend who has had a Vertex 2 drive die 4 times on him in under 2 months (this is more recent).
As of late, it seems that a lot of manufacturers are having issues, but I believe most of them are caused by the latest SandForce controllers.
This is why you see drives that use the vendor's own controller, or one other than a recent SF controller, not having issues.
I feel bad, I really do, for those people who have been screwed over recently by the SSDs that have been failing--but generally, doing the research beforehand benefits you in the long run.
The reason Crucial, Intel, and Samsung SSDs are not having issues is that Crucial uses a Marvell controller, Intel uses its own controller, and Samsung uses its own controller as well. This may not be true for all their drives, but most of their drives (the reliable ones) use those controllers.
Just do your research beforehand and don't be an SSD hater, because when you shell out the cash and don't buy the cheapest thing on the market, an SSD really is the biggest upgrade you can give your computer from the last 3-5 years. I haven't upgraded the mobo/CPU in any of my 3 computers in years, but you bet I bought SSDs.
My OCZ Vertex 3 has served without a glitch since the 2.13 firmware was released. Before that, the occasional system freeze was a major pain. OTOH, I don't feel like updating to the 2.15 firmware; I'd rather be happy with what's working now :-)
There have been a few products already mixing an SSD with an HDD, so that oft-used data can be quickly read and written while rarely used bulk data that gets streamed (rather than randomly accessed), e.g. video, is relegated to the HDD. Why not do the same with two grades of NAND? A few GB of SLC (or MLC) for OS files and frequently accessed and rewritten program files, and several hundred GB of TLC (or QLC, etc.) for less frequently written data that it is still desirable to access quickly (e.g. game textures & FMVs). Faster than an HDD hybrid, cheaper than an all-SLC/MLC design, and just as fast in the vast majority of consumer use cases (exceptions include non-linear video editing and large-array data processing).
That's definitely a very interesting idea; I hadn't actually thought about it. Maybe we will see something like that in the future. It should be feasible, since we already have products like the Momentus XT.
That's what I was planning to write, so I agree, but it should be taken further: a worn-out TLC cell should not be retired by the controller/firmware. Instead it should be "degraded" to MLC, and once that degrades, it should be used as SLC.
While that's an interesting concept, you would have to over-provision even more to account for the capacity lost as cells become unable to store as many bits. It would be easier to define at creation that some of the NAND is MLC and some is TLC.
With really large SSDs, I think the life of TLC will be pretty good simply because we'll have so much storage to work with.
Imagine if you had a 1TB SSD rated for a low 750 cycles. You could still potentially get around 750TB of writes (minus amplification) onto it.
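That back-of-the-envelope endurance math looks something like this in Python; the cycle count, write amplification and daily write volume are illustrative assumptions, not specs for any particular drive:

```python
# Rough SSD endurance estimate: total host writes before the NAND wears out.
# All inputs are illustrative assumptions, not vendor specifications.

def total_host_writes_tb(capacity_gb, pe_cycles, write_amplification):
    return capacity_gb * pe_cycles / write_amplification / 1000.0  # GB -> TB

def years_of_life(capacity_gb, pe_cycles, write_amplification, gb_written_per_day):
    total_gb = capacity_gb * pe_cycles / write_amplification
    return total_gb / gb_written_per_day / 365.0

cap = 1000      # 1TB drive
cycles = 750    # low TLC-style P/E rating
wa = 1.5        # assumed write amplification

print(total_host_writes_tb(cap, cycles, wa), "TB of host writes")         # 500.0
print(round(years_of_life(cap, cycles, wa, 20), 1), "years at 20GB/day")  # ~68.5
```

Even with a fairly pessimistic write amplification, the sheer capacity means the drive outlives any plausible consumer workload.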
It seems like this would play hell with wear leveling though. Even though you're trying to segregate (largely) static data into TLC NAND, you're still going to periodically write to TLC NAND and as such need to do wear leveling to keep from burning out a smaller number of the TLC NAND cells too soon. It seems like the need to wear level would largely negate the segregation of data.
Since SLC, MLC & TLC are physically the same, why not make the firmware dynamic? E.g. a new (empty) drive starts storing information as SLC; when more storage is needed, it saves data as MLC or TLC. To ensure good performance and a long drive life, it should store frequently modified and temporary files as SLC, while things like movies and music (files where speed is not important and which aren't modified a lot) should be stored as TLC. Other thought: when a TLC cell is too worn, change it to MLC and later to SLC!
I know this would require a new, very complex (and probably buggy) firmware. But are there any concepts out there for something like this?
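Purely as a thought experiment (no shipping firmware is known to work this way), the policy described above might be sketched like this, where "heat" stands for how often data is rewritten and "wear" for a block's erase count; all thresholds are made up:

```python
# Conceptual sketch of a dynamic bits-per-cell policy -- NOT how any real
# SSD firmware is known to work. All limits below are invented for illustration.

def choose_bits_per_cell(heat, wear, free_space_fraction,
                         mlc_limit=3000, tlc_limit=750):
    # Heavily worn blocks fall back to fewer bits per cell ("degrade" TLC -> MLC -> SLC).
    if wear > tlc_limit:
        return 2 if wear <= mlc_limit else 1

    # Plenty of free space: favour SLC for speed and endurance.
    if free_space_fraction > 0.5:
        return 1

    # Hot, frequently rewritten data stays in low-bit modes; cold data goes to TLC.
    if heat > 0.7:
        return 1
    if heat > 0.2:
        return 2
    return 3

print(choose_bits_per_cell(heat=0.9, wear=100, free_space_fraction=0.3))   # 1 (SLC-like)
print(choose_bits_per_cell(heat=0.05, wear=100, free_space_fraction=0.3))  # 3 (TLC)
```

The hard part in practice would be data migration and wear accounting when a block changes mode, which the sketch ignores entirely.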
As far as I understood, that is about removing the DRAM and using some kind of pseudo-SLC cache instead. Not exactly what I was thinking about, but good to know anyway. Thanks for the link.
Interesting idea, though I'm not sure if it's possible. While SLC, MLC and TLC are physically the same (i.e. they consist of the same transistors), I'm not sure what kind of process needs to be used to turn a NAND array into MLC or TLC instead of SLC. I would guess that it's more than what a simple SSD controller can do.
I can try to dig up more info on NAND manufacturing and hopefully it will shed some light on this. Either way, it does sound very complicated, and the possibility of data loss is huge if the NAND type is changed during use (you can't really go from TLC to SLC without having a huge cache).
There is MLC-1, which is MLC that stores only one bit per cell, like SLC. It's almost as good as SLC but, I assume, much cheaper -- MLC is much cheaper than SLC even if you're discarding half the capacity. I believe Fusion-io uses this in some applications.
There is HET-MLC (more commonly known as eMLC), which is MLC NAND aimed at enterprises. It stores two bits per cell like normal MLC, but its P/E cycle rating is much higher (IIRC something like 50,000). Unfortunately I don't know how it really differs from regular MLC, but it's a lot more expensive.
SLC, MLC and TLC simply refer to the amount of bits per cell, there is no 1-bit-per-cell MLC as MLC alone means multi-level-cell, and one isn't multi ;-)
I wasn't aware that this was going on until I read the UCSD paper "The Bleak Future of NAND Flash Memory." Somehow, you can use MLC to store just one bit, and it gets similar, but not identical, performance to SLC.
This was the study that was said to cast doubt on the ability to scale NAND effectively past 2024.
They tested this particular MLC-1 setup. Even if you discard half the capacity of MLC, it's still cheaper than SLC bit for bit.
HET-MLC and eMLC really are just highly binned MLC NAND. Toshiba gives its eMLC 10k P/E cycles. But enterprise drives only have to retain data for 3 months at MWI=0, so some of this extra endurance comes from that.
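A quick sanity check of the "MLC-1 is still cheaper per bit than SLC" claim, using assumed relative prices (MLC normalized to 1.0 per GB, SLC at the roughly 8x premium mentioned elsewhere in these comments, not figures from the paper):

```python
# Illustrative cost-per-usable-GB comparison. Prices are assumptions:
# MLC normalised to 1.0 per GB; the 8x SLC premium is a figure quoted in another comment.

mlc_price_per_gb = 1.0
slc_price_per_gb = 8.0

# Running MLC in 1-bit-per-cell mode ("MLC-1") throws away half the capacity,
# so the effective cost per usable GB doubles.
mlc1_price_per_usable_gb = mlc_price_per_gb * 2

print("SLC  :", slc_price_per_gb)          # 8.0
print("MLC-1:", mlc1_price_per_usable_gb)  # 2.0 -- still far cheaper per usable GB
```

As long as SLC carries more than a 2x price premium over MLC, sacrificing half of an MLC die still wins on cost.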
Ooh, interesting, thanks for the link! I was sure that you had mixed up eMLC and MLC because MLC-1 doesn't make much sense, at least not without further explanation.
Does the study say how much cheaper MLC-1 is when compared with SLC (I don't have time to read it thoroughly now, but will do tomorrow)? Like I said in the article, MLC is the highest volume product and gets the new process nodes first, so that might be the reason why MLC-1 is a bit cheaper than SLC. Shouldn't be much, though.
Memristors will create a giant performance boost. Their low latency and high density will allow for the replacement of HDDs and RAM. And this could be a second generation product out in three years.
If the worst memory latency were ~20ns instead of ~50µs (SSD) or ~20ms (HDD), cache misses would stop being a problem. Old, simple CPU architectures could be reproduced (with I/O upgrades), bundled with memristor storage, and compete with current designs.
In 10 years we could see CPUs replaced with multi-chip modules containing large memristor banks, ALUs with a far larger variety of operations (including GPGPU operations), and the system's I/O. No cache. No registers. No stalls.
Those are dreams for now. Anyway, SanDisk/Toshiba already sell a lot of TLC in certain products. They even had 4 bits per cell, but that's not being produced anymore. As for the future, they have two things: BiCS and 3D ReRAM. We'll see soon enough if either of those makes it to market.
While cheaper is sure tempting, I'm not making the move until I stop seeing so many users give one-star ratings on Newegg because their nice new SSD bricked itself anywhere between one day and three months in.
The same goes for many other components - HDDs, GPUs, mobos...
And often enough there is a crappy PSU or RAM involved that doesn't work as it should. I don't really trust those "user reviews" on sites like Newegg; too many people writing there have no idea what they are talking about.
Of course firmware issues exist, not denying that, but that's no reason to pass on the best possible upgrade for your PC (in most cases).
I'm having trouble understanding why the density gain from TLC is only linear and not quadratic. It seems like the web is crawling with a bunch of articles today saying the SLC -> MLC -> TLC density gain is 16 -> 32 -> 48. It should be 16 -> 32 -> 64. Am I right? Or is there something I'm not getting? Is it part of the ECC like Gray code?
Huh?
SLC = 1 cell, 1 bit
MLC = 1 cell, 2 bits
TLC = 1 cell, 3 bits
You seem to think that TLC is 1 cell, 4 bits, which it is not. Not sure why you would think that, though.
It's simple multiples, not powers. SLC stores one bit per cell, MLC is two, and TLC is three. MLC is thus twice the capacity of SLC, but TLC is only three times the capacity, not four. The power-of-two increase comes in the number of states to check: SLC checks two (0/1), MLC checks four (00, 01, 10, 11), and TLC checks eight (000, 001, ..., 110, 111). If someone were to try and do QLC they would need to check sixteen states, endurance would really plummet, and performance would be worse as well.
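The same point in a few lines of Python: the number of states grows as a power of two, while the capacity of a fixed number of cells grows only linearly with bits per cell (16 billion cells assumed here to match the 16/32/48Gb figures, using decimal Gb for simplicity):

```python
# Capacity grows linearly with bits per cell; voltage states grow exponentially.
cells = 16_000_000_000  # 16 billion cells, matching the 16Gb SLC die discussed here

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    states = 2 ** bits
    capacity_gbit = cells * bits / 1_000_000_000
    print(f"{name}: {bits} bit(s)/cell, {states} states to distinguish, {capacity_gbit:.0f} Gb")
# SLC: 16 Gb, MLC: 32 Gb, TLC: 48 Gb, QLC: 64 Gb
```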
1 bit - 2 states - 16Gb
2 bits - 4 states (double the previous) - 32Gb
3 bits - 8 states (double the previous) - should be 64Gb, not 48Gb.
I'm still confused how the author got 48Gb.
The number of bits per cell is not equal to the number of voltage states. I'm not very knowledgeable in how NAND is produced, but I think the increase in voltage states by x2 per bit may have to do with the need for differentiation to write/erase each bit.
"Rather than shrinking the die to improve density/capacity, TLC (like MLC) increases the number of bits per cell. In our SSD Anthology article, Anand described how SLC and MLC flash work, and TLC works the same way but takes things a step further. Normally, you apply a voltage to a cell and keep increasing it until you reach a point where the result is far enough from the "off" state that you now consider the cell as being "on". This is how SLC works, storing one bit per cell. For MLC, you store two bits per cell, which means instead of two voltage states (0 and 1) you have four states (00, 01, 10, 11). TLC takes that a step further and stores three bits per cell, or eight voltage states (000, 001, 010, 011, 100, 101, 110, and 111). We will take a deeper look into voltage states and how they work in the next page."
Or this one?
"SLC only has two program states, "0" and "1". Hence either a high or low voltage is required. When the amount of bits goes up, you need more voltage stages. With MLC, there are four states, and eight states with TLC. The problem is that the silicon oxide layer is only about 10nm thick and it's not immortal; it wears out every time it's used in the tunneling process. When the silicon oxide layer wears out, the atomic bonds break and during the tunneling process, some electrons may get trapped inside the silicon oxide. This builds up negative charge in the silicon oxide, which negates some of the the control gate voltage."
Nowhere in the article does it state that the number of bits per transistor is equal to the number of voltage states.
That is the total number of voltage states per cell, i.e.:
1 bpc = 2 voltage states per cell (2^1)
2 bpc = 4 voltage states per cell (2^2)
3 bpc = 8 voltage states per cell (2^3)
The voltage states are what allow each bit to read as 0 or 1. TLC has 8 voltage states to allow intermediary changes in the values of the three bits in each cell: 000, 001, 010, 011, 100, 101, 110 and 111.
The information is stored per CELL, and the same cell is used for SLC, MLC and TLC. The difference between them is the number of bits per cell, that is all. It is not that 1 MLC = 2 SLC or 1 TLC = 3 SLC. It is: if a cell has
2 electron states = SLC
4 electron states = MLC
8 electron states = TLC
You're confusing bits with data. Let's look at this problem in decimal.
If you have one symbol between 0 and 9, you can represent any number 0-9.
If you have two symbols, 0-9, you can represent any number between 0-99
If you have 3, you can represent 0-999
BUT you still only have three symbols.
Back to binary: an SLC cell stores 1 bit, MLC 2 bits, and TLC 3 bits. So if you have 3 SLC cells, you have 3 bits and 8 possible states, exactly the same as one TLC cell. I'll expand this to make the point clear:
# Cells | cells x bits/cell | total bits | # states
6 SLC = 6 x 1 bit = 6 bits = 2^6 states
3 MLC = 3 x 2 bits = 6 bits = 2^6 states
2 TLC = 2 x 3 bits = 6 bits = 2^6 states
All three configurations can store the same data. So to answer your question, the logical blocks which SLC, MLC and TLC appear to be based on have sixteen cells per block. Hence:
16 SLC = 16 x 1 bit = 16 bits = 2^16 states
16 MLC = 16 x 2 bits = 32 bits = 2^32 states
16 TLC = 16 x 3 bits = 48 bits = 2^48 states
I know this was long and tedious, but I'm not going to recheck this thread, and I wanted to make sure I gave enough information that most people reading this should be able to understand the difference between bits and data.
SLC doesn't have two bits, it has one. It's not 2 raised to the blah, it's just blah. Same issue applies to your MLC & TLC examples.
SLC can _represent_ two values, 'on' or 'off'. MLC can represent 4 values ('on' or 'off' | 'on' or 'off'). And, likewise, TLC represents 8 values ('on' or 'off' | 'on' or 'off' | 'on' or 'off'). As you might notice, each grouping of 'on' and 'off' is a single bit.
I forgot to add that if you had 8 bits per cell, you would have 256 voltage states (0 or 1 for each bit, plus the different variations of 8 on or off bits), though I will not list all possible combinations, as it would take too much time/room.
I completely agree with you. The whole premise of the article is based on the incorrect graph, while in places the article has the correct information. There is a difference between place holders and values: SLC - 1 bit place holder, 2 bits stored; MLC - 2 bit place holder, 4 bits stored; and TLC - 3 bit place holder, 8 bits stored.
First, thanks for the article! However it has reignited a question I've had for some time.
And here comes the difference. Since SLC has more spare voltage between the states, it can tolerate a higher voltage change until the erase will be so slow that the block needs to be retired.
How is this regulated exactly... does the manufacturer still set a mandatory limit on the number of writes, or is a modern SSD capable of detecting this delay and automatically correcting for it, up until the point where it detects that the block has exceeded the time limits (and hence write endurance) allowed? To phrase it another way: are arbitrary write-count limits used, or is a modern SSD self-aware enough to determine on its own when a flash block needs to be retired, regardless of the write counts?
Each chip is slightly different, so there is no set maximum number of writes. One can last 3,000 P/E cycles while another can last 3,200.
I'm not 100% sure, but I think the controller is the one that decides when a certain block is too slow. I.e. it's capable of detecting the delay and, when it reaches a certain point, it decides to retire the block to avoid further performance decrease. Hence it may be controller-specific, and some will retire blocks sooner than others, although at least Intel says there is a certain delay limit after which the block is retired (but that may just be a recommendation).
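For illustration only, a latency-based retirement rule might look roughly like this; the threshold and the hard P/E backstop are invented numbers, and real controllers almost certainly do something more sophisticated:

```python
# Guess at latency-based block retirement -- illustrative only, not real firmware logic.

ERASE_LATENCY_LIMIT_MS = 10.0   # assumed threshold after which a block is retired
HARD_PE_LIMIT = 5000            # assumed backstop, regardless of measured latency

def should_retire(block_erase_count, measured_erase_latency_ms):
    if measured_erase_latency_ms > ERASE_LATENCY_LIMIT_MS:
        return True                            # block has become too slow to erase
    return block_erase_count >= HARD_PE_LIMIT  # safety net at the hard limit

print(should_retire(2800, 3.2))    # False -- still fast enough
print(should_retire(2950, 12.5))   # True  -- erase latency exceeded the threshold
```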
When you mention every chip is different, that's a very excellent point and one of several reasons for the question.
The other reason behind my question was simply SSD lifespan... Anand has (several times) mentioned that even after the NAND "wears out" the data should remain readable for at least one year after that date.
Yet all the SSD failures that a huge number of others (including myself) have experienced have always been an SSD suddenly failing outright and not even being detected in the BIOS. I've yet to come across anyone claiming their drive became read-only, or anything else other than an outright failure or a firmware-related bug. Basically it seems like SSDs don't wear out, they just completely die outright for some reason. Going by your answer to my question, I'm going to safely assume NAND longevity isn't the factor in these episodes, but any input you may have on this would be quite welcome!
It's true that NAND remains readable when it wears out. For MLC, the period is about one year (eMLC is only 3 months, though).
I can't say for sure what is the reason behind these early failures but I would claim that it's often controller related. In general, drives equipped with SandForce controllers experience more early failures than other drives (see the link below).
All the drives with a 5%+ return rate are SandForce based, more specifically SF-1222 based. Newegg yields similar data: SF-2281 based SSDs have quite a few one-star ratings, usually around 20%. Switch to Crucial or Intel (or any other non-SF drive) and we are looking at less than 10% one-star ratings, which usually imply a dead drive.
Of course, even non-SF drives experience early failures, but the rate is much smaller and more in line with consumer electronics in general. In any case, it's not the NAND that is causing the failures :-)
I understand the necessity of reducing cost, but a sharp drop in durability coupled with a rapidly diminishing return on $savings/capacity due to the necessary greater redundancy seems a high price to pay for a linear increase in capacity.
This is one of those articles that has the excellent writing and technical thoroughness characteristic of something written by Anand himself. To top it off, it doesn't use an inefficient image format for the photos with large areas of flat color, like the first image.
I think the article got confusing by adding that you can use less flash, 10.67 billion cells, along with 3 bits per cell, giving 32Gb. Do the math: 10.67 billion cells * 3 bits per cell = 32Gb.
The reason is that no final product has a capacity of 48Gb. Capacities go in powers of two: 2Gb, 4Gb, 8Gb, 16Gb, 32Gb, 64Gb and so on. 48Gb isn't a power of two (and no X*3 is). Hence you have to make the die smaller so that the cell count times three is a power of two, which is how you end up with 10.67 billion cells for a 32Gb TLC die.
In theory, you could make a 48Gb TLC die and it would work just fine. It's simply considered an odd number in the NAND industry and hence not used.
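Both ways of looking at the die boil down to the same arithmetic; a tiny sketch using the figures above (decimal units for simplicity):

```python
# Two ways to look at the same die, using the figures discussed above.

# 1) Fix the cell count (16 billion) and vary bits per cell:
cells = 16e9
print([cells * bpc / 1e9 for bpc in (1, 2, 3)])                    # [16.0, 32.0, 48.0] Gb

# 2) Fix the capacity at a power of two (32Gb) and solve for the cells needed:
target_bits = 32e9
print([round(target_bits / bpc / 1e9, 2) for bpc in (1, 2, 3)])    # [32.0, 16.0, 10.67] billion cells
```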
Kristian says this is awkward because TLC capacities will not scale from MLC capacities at a power of 2, like MLC did from SLC. I am not convinced that's an issue, as scaling capacity by a power of 2 has never been a requirement in the hard drive industry.
Indeed, 80/90 GB SSDs - located between power-of-2-inspired 64 GB and 128 GB capacities - have been quite popular. For that matter, 64GB/128GB SSDs are often marketed as 60GB/120GB SSDs, partially due to provisioning...
It is awkward to describe 48Gb as 10.67Gb*3, where Gb represents physical transistors rather than bits; Gb is a unit for digital information in this context, not the physical representation of such.
This is exacerbated as the cells are physically identical - an array could store 48Gb using TLC, but only 10.67Gb with SLC. I find hechacker1's explanation more intuitive. 16Gb SLC = (16*2) 32Gb MLC = (16*3) 48Gb TLC...
The takeaway point here is that you get 50% more capacity per die (and thus more GB per wafer) with TLC over MLC, and this shows up directly in the cost ($0.60/GB vs $0.90/GB), but it comes with greatly reduced write cycles.
Remember that I'm not the one who came up with this idea ;-) This info is straight from Micron, and they indeed say that the TLC die is chopped down to 10.67 billion transistors so that it becomes a 32Gb die. Maybe OEMs are afraid of adopting "odd number" capacities. In SSDs it wouldn't be such a big deal, but TLC is more commonly used in devices like USB flash drives and low-end smartphones. In fact, some OEMs may even use MLC and TLC in the same model (I don't have any examples, but I wouldn't be surprised).
As for why some drives have an odd capacity, it has to do with the controller design and over-provisioning. Intel's SATA 3Gb/s controller has 10 channels while most controllers have 8; that's why Intel drives have weird capacities. Populate all 10 channels with 64Gb (8GB) dies and you get 80GB, whereas for other drives populating all the channels works out to only 64GB. As for SandForce drives, they have no on-board cache (DRAM), so some of the NAND (~7%) is reserved for that purpose. That's why a 128GB SF drive is marketed as 120GB.
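The capacity arithmetic above can be checked in a couple of lines; the die size, channel counts and the ~7% reserve are the figures mentioned in this comment:

```python
# Marketed capacity from NAND configuration -- numbers are the ones quoted above.

def usable_gb(num_dies, die_gb, reserved_fraction=0.0):
    raw = num_dies * die_gb
    return raw * (1 - reserved_fraction)

print(usable_gb(10, 8))               # Intel 10-channel: 80 GB
print(usable_gb(8, 8))                # typical 8-channel: 64 GB
print(round(usable_gb(16, 8, 0.07)))  # SandForce, ~7% reserved: ~119 GB, sold as 120 GB
```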
I agree that 10.67 is an awkward number, but then again, this is stuff that the average consumer doesn't really need to know. For them, the final product will look the same, thanks to the power-of-two capacity. The gain from TLC is the same no matter whether the die is smaller or the same size as MLC: TLC provides more GB per die, which means cheaper $/GB.
The information is straight from Micron, it's just an awkward way to explain the concept. If you want to keep the industry standard capacities in your explanation, perhaps show the math as capacity/(1, 2, 3) = transistors rather than transistors * (1, 2, 3) = capacity? If capacity is fixed, solving for number of transistors required seems more intuitive.
Corsair, OCZ and Kingston all make 90 GB Sandforce 2281 SSDs. I don't know how many channels / what NAND die they use. Searching that information brought up this website first every time! Upon further consideration, I blame aNAND... :-)
90GB SSDs have 96GB of NAND in them (remember that SandForce drives have ~7% over-provisioning). Most 2.5" drives have sockets for 16 NAND devices so that's simply twelve 8GB packages.
I read the comments thread looking for this answer, so thank you. I still don't see the logic behind it, as others have pointed out that storage capacities haven't been power-of-2 for decades. It could conceivably be firmware related, but given that overprovisioning makes (e.g.) 60 and 120 GB fairly common that seems unlikely.
Anyway, just some questions to keep in mind as you're in contact with the manufacturers. Thanks again for the great article, as the coverage here continues to be second to none.
It has been claimed that algorithms to minimize write amplification will follow Moore's Law.
That's not really possible due to information theory: you can only compress information to reduce writes by so much (entropy). The improvement will look more like an exponential decay than exponential growth (Moore's Law).
I estimate somewhere around $80 billion has been invested in the NAND flash market, cumulatively. Despite this enormous capital investment, I am surprised prices are still so high. You'd think that with this kind of economy of scale, it wouldn't cost so much to produce 1TB of flash. I wonder how much energy it takes to produce 1TB of flash...
There's so much unused space in 2.5" SSDs, let alone 3.5" drives for desktops. People wouldn't need to worry about TLC endurance, if the NAND was put into sockets and could easily be replaced. Or upgraded later on for higher capacities. And by the time you'd be doing this NAND prices will have fallen again. There'd need to be a standard for this, though...
As late as 2010, SLC typically had a 10-year retention time when new, falling to about 1 year as cells were reprogrammed and the device reached its indicated end of life. (The number of erase cycles was also higher than now, but had been decreasing for a few years prior as well.)
I don't know the retention time for new SLC cells today, but MLC either shows no spec or the retention spec for NEW cells is about 18 months.
For the various reasons mentioned in the article and earlier comments, the effect of MLC is that speed is reduced, data retention time is reduced, and the fraction of time spent on long error corrections has increased dramatically.
MLC is not suitable for long-term backups (and spinning drives were never good for more than 5 years EXPECTED powered-off life).
MLC just gets you 2 times as much storage for the same price about 18 months earlier.
In the meantime, due to supply issues (fab capacity being used for MLC instead of SLC), SLC typically costs 8 times as much per GB as MLC, rather than less than 2 times as much. This amounts to about a 3-year delay in SLC reaching a given price level.
(MLC also typically comes with implementation side effects [interleaved data layout, in particular] that mean data in unchanged pages, as seen from outside the SSD, gets rewritten because data was changed at the interleaved logical location, not because the SSD software decided that the data was getting "weak" and needed to be refreshed.)
I'm not sure who the target audience of TLC is. Is there really a group of people out there that is willing to sacrifice reliability and data integrity for price or capacity? I certainly wouldn't.
It's bad enough that modern hard drives in the 2TB range have longevity problems. I don't want my SSD to be in the same boat, especially since that SSD tends to be the boot drive on most PC's.
I'm assuming TLC is a subclass of MLC, and not actually distinct as it's laid out in this article. Before TLC came along, all MLC belonged to (what I'll call) the DLC subclass, yeah?
SLC = Single level cell
MLC = Multi level cell
  a. DLC = Dual level cell
  b. TLC = Triple level cell
I just used the names that manufacturers use. If you look at e.g. Micron's part catalog (linked below), they use SLC, MLC and TLC. I agree that the naming is misleading because MLC should refer to any NAND with multiple bits per cell. TLC is sometimes called 3-bit-per-cell MLC or just MLC-3, but the TLC name is gaining more momentum all the time.
Shouldn't the TLC be 64Gb? It holds twice as much information as MLC, as MLC holds twice as much as SLC. Each increment in bits doubles the information stored, as stated in the article: SLC 2 bits stored, MLC 4 bits stored and TLC 8 bits (1 BYTE) stored.
You are dealing with base-2 values. Each additional bit doubles the amount of data that is stored. You even have the correct values in the beginning of the article: SLC stores 2 bits of information, 0 and 1; MLC stores 4 bits of information, 00, 01, 10, 11; and TLC stores 8 bits (1 BYTE) of information, 000, 001, 010, 011, 100, 101, 110, 111. Yet further down in the article you are stating that TLC stores only a third more than that of SLC. You are confusing the bit place holder with the actual information that is being stored. TLC has an additional bit place holder compared to MLC, which has an additional bit place holder compared to SLC. Each bit place holder increases the storage capability by a power of two (2).
SLC stores 1 bit per cell/transistor, and the value can be either 0 or 1. It cannot be 0 and 1 at the same time.
MLC stores 2 bits per cell. This means the value can be 00, 01, 10, or 11. However, it can only be programmed to one value at a time; one MLC cell cannot store e.g. 00 and 01 simultaneously. One 0 or 1 is one bit of data, i.e. 00 is two bits of data. I don't know how you are coming up with four bits; maybe you are mixing it up with the voltage states (each value needs its own voltage state, so when you program a cell to e.g. 00, it will be read back as 00).
TLC just increases the bits per cell to three, which means the possible values are 000, 001, 010, 100, 011, 110, 101, and 111. Again, eight voltage states and three bits per cell.
Each additional bit per cell increases the number of voltage states by a power of 2 (in math terms: 2^n, where n is the number of bits per cell). The number of bits per cell is just n; it's not a power of two. MLC is 2*1=2, and 2 is 100% bigger than 1. TLC is 3*1=3, and 3 is 200% bigger than 1 but only 50% more than 2.
Ok let me make it simple because I still think you are confusing yourself.
SLC possible values are 0 or 1, which is equal to 2 values, which is 2^1
MLC possible values are 0, 1, 10 or 11 which is equal to 4 values which is 2^2
TLC possible values are 0, 1, 10, 11, 100, 101, 110 or 111 which is equal to 8 values which is 2^3
Therefore each TLC which stores 8 values (3bits) which is twice that of a MLC which stores 4 values (2bits) which is twice that of a SLC which stores 2 values (1bit)
He's not confusing himself, you're confused about binary numbers and bits.
"Therefore each TLC which stores 8 values (3bits) which is twice that of a MLC which stores 4 values (2bits) which is twice that of a SLC which stores 2 values (1bit)"
Don't confuse the amount of bits of storage, with the maximum value it can hold.
Since you seem to be getting confused with binary numbers, let's work with decimal numbers for a bit.
Let's say an 'SLC' can represent the values 0-9. An MLC can represent the values 0-9, 0-9, or 00-99 (that's two sets of 0-9 next to each other!). A TLC can represent the values 0-9, 0-9, 0-9, or 000-999. It should be patently obvious that a TLC doesn't have 100 times the capacity of an SLC cell! A /single one/ can hold a VALUE 100 times larger, but 3 SLCs next to each other could hold the same value.
A linear growth of bits results in an /exponential/ growth of the value those bits, when combined, can represent. It doesn't matter if all those bits are from a single cell, or X number of cells. How you get bits doesn't matter.
The link gives some insight on TLC block sizes and why it doesn't follow the actual size of a TLC cell. Basically, some pages are not used in TLC block configurations. Strangely, the number of pages in a TLC block is more than double that of an MLC block!
I leave it up to you to clarify the article as it is somewhat confusing and needs some explanation of the differences between the cell, page and block sizes for TLC.
Actually, TLC block size does (or at least should) follow the bits-per-cell idea. 25nm IMFT MLC NAND brought us 8KB pages and 256 pages per block. According to your link, TLC has 384 pages per block (i.e. 3*128 which means 128 pages per bit). MLC is now using that same 128 pages per bit idea (before it was 64 pages per bit).
It's possible that TLC moved to a bigger block size before MLC and SLC because that lowers the cost and ultimately TLC is all about cost. There is need for less peripheral circuits between the blocks, which makes the die smaller and hence reduces production costs.
I don't know what this has to do with your original point about the article being wrong, though. Of course, I'm happy to answer any questions regarding TLC, or at least give it a try (I haven't studied NAND technology in a university so e.g. that math stuff in your link is over my head).
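For reference, the block-geometry figures cited above work out like this (page size and page counts as quoted; this is just arithmetic, not new data):

```python
# Block geometry comparison using the figures cited above (25nm-class NAND).
page_kb = 8

for name, bits_per_cell, pages_per_block in [("MLC", 2, 256), ("TLC", 3, 384)]:
    block_mb = pages_per_block * page_kb / 1024
    pages_per_bit = pages_per_block // bits_per_cell
    print(f"{name}: {pages_per_block} pages/block = {block_mb:.0f} MB block, "
          f"{pages_per_bit} pages per bit")
# MLC: 256 pages/block = 2 MB block, 128 pages per bit
# TLC: 384 pages/block = 3 MB block, 128 pages per bit
```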
I haven't seen a 500 GB hard drive for anywhere near $50 in about 6 months now... where are you getting these drives? Right now the cheapest 500 GB drive on newegg.com is $84.99 and it's a bare Hitachi.
BPB - Thursday, February 23, 2012 - link
I just ordered the OCZ Synapse to use as a caching drive. The reviews all rave about speed increases. I am putting it in my main system. Would be nice if you could tell us where they fit in.Also, anybody got recommendations on an SSD to get an older system to feel faster for surfing and the like?
Kristian Vättö - Thursday, February 23, 2012 - link
It's hard for me to say anything about caching SSDs because we haven't reviewed any other than Intel 311 Series (yet). IIRC Synapse comes with its own caching software which is different from Intel's SRT.Fortunately, we have some more staff working on SSD stuff now. As you may have noticed, so far Anand has done all the SSD reviews. To reduce Anand's workload, I'll be doing some of the SSD reviews in the future, which should allow us to review more SSDs. In fact, I have Plextor M3 SSD on its way here :-)
As for the SSD for an older system, is it a desktop or laptop? I think the best option would be a SSD+HD combo because that is cheap and still lets you have a decent amount of storage. You can try to find older SATA 3Gb/s SSDs (e.g. Intel X25-M G2 or Samsung 470 Series, they are very reliable). You can even hunt for a used drive, some people are already switching for faster SATA 6Gb/s SSDs so you may find a bargain.
ckryan - Thursday, February 23, 2012 - link
Synapse comes with NVELO's dataplex caching software, and there should be more consumer target caching solutions out soon.macuser2134 - Friday, February 24, 2012 - link
An upcoming Plextor M3 review - this is exciting! It will certainly be interesting to find out how a Plextor drive compares to other manufacturers.As a side note the "Pro" version of the Plextor M3 just started selling on Newegg only 2 days ago. Models PX-128M3P, PX-256M3P etc.
seanleeforever - Monday, February 27, 2012 - link
there is something else the articles did not address. the life time of a certain SSD device is a combination of its cell reliability and how often it get re-written to.take a look at the second page, it may look like at 3x nm, the SLC can last 20 times more than MLC. However, from a device point of view, a 120 GB SLC can well last 40 or more time than a 120 GB MLC because as you write and delete file over and over.
for example, in order to re-write the entire 120 GB of information, each cell of the SLC only get erase-write once while a 120GB MLC will most likely been erased-write twice (say to change 11 to 10 to and to 00 , a SLC will need to erase and write once on each of two cells where MLC will need to erase and write at least 3 times on a single cell), or try to imagine a super MLC cell that has all the voltage level needed for 120GB storage in one cell, then every-time something is changed that cell get re-written.
this just get a lot worse in TLC design, as you reduce the number of cells to realize more storage space, you are reducing the error margin as well as increasing the cycles. the old saying still applies "there is no displacement for replacement". there is no free lunch.
BPB - Friday, February 24, 2012 - link
Thanks for the reply. I'll look into your suggestions, though given what I understand to be the limited life of SSDs I think I'll go new. Thanks again.Shadowmaster625 - Friday, February 24, 2012 - link
Please keep an eye out for caching solutions such as NVELO's dataplex caching software. I am looking for one that works with XP. It doesnt make much sense to have to upgrade the OS if all you're looking for is a cheap upgrade to a 5 year old pc. $69 for a caching drive is one thing. $220 for a drive and an OS is quite another...JNo - Friday, February 24, 2012 - link
Agree. I mentioned the NVELO caching software on these boards weeks (maybe months) ago after storagereview looked at them. It would be really good if you got some of their drives or even just software in to review. It is exciting that there is a competitor to Intel's caching in this space. Who knows? They may even be faster, cheaper and better....xrror - Monday, February 27, 2012 - link
I'd also like to see some analysis on NVELO "solutions." I was looking pretty seriously at buying Corsair's version of this with their Accelerator series drives as an "impulse purchase" but lack of availability of their 60Gb package and a good read of NVELO's software "licensing" put a quick halt on that.NVELO looks like a killer app assuming it works. Sadly I expect that it does live up to it's claims, but their DRM is pretty harsh. It's locked to your hardware, so if you say change your video card you must "re-activate."
I know that for most people that's no worse than windows itself, but I change hardware a lot, and/or I'd like to be able to move the SSD cache to other machines in my house w/o worrying that I'll get DRM lockout.
Roland00Address - Thursday, February 23, 2012 - link
57 months ago (4.75 years) you could get a 16gb supertalent for 600 dollars41 months ago (3.41 years) you could get a 80gb intel (1st gen) ssd for 600 dollars.
Small deviations make a big difference when you are calculating exponential growth (and decrease)
Beenthere - Thursday, February 23, 2012 - link
While the transition from SLC to MLC and now TLC sounds good, the reality is SSD makers have yet to resolved all reliability or compatibility issues with MLC consumer grade SSDs.Last time I checked OCZ was on firmware version (15) and people are still experiencing issues. The issues are with all SSD suppliers including Intel, Smasung, Corsair, etc. not just OCZ.
If data security is important it would be wise to heed Anand's advice to WAIT 6-12 months to see if the SSD makers resolve the BUGS.
extide - Thursday, February 23, 2012 - link
Go with an Intel, Samsung, or Crucial drive. They are reliable and fast.Beenthere - Thursday, February 23, 2012 - link
Actually no one has any lock on SSD reliability. Intel, Samsung and Crucial have ALL had issues that required firmware updates to fix BUGS. We don't know how many more BUGS exist in their or other brands of consumer grade SSDs.Not all HDD drives have issues. Yes some do especially the low quality high-capacity SATA drives. That however is not a good reason to buy a defective SSD.
SSD makers are just cashing in on gullible consumers. If people will pay top dollar for defective goods, that's what unscrupulous companies will ship. If consumers refuse to accept CRAP products, then the makers will fix the products or go broke.
ckryan - Thursday, February 23, 2012 - link
Yes, because everyone knows HDDs are infallible, never die, and are very fast...Oh wait, none of that is true.
MonkeyPaw - Thursday, February 23, 2012 - link
As someone who tried to use a Sandisk controlled SSD recently, it's not as obnoxiously simple as you make it sound. It's one thing to know a drive will fail, it's another to experience BSODs every 20 minutes.Making proper backups is the solution to drive failure, but a PC that crashes with regularity is utterly useless. I don't hate SSDs, I just want more assurance that they can be as stable as they are fast.
martyrant - Thursday, February 23, 2012 - link
So I've had the Intel 80GB X-25M G2s since launch with zero issues, no reason to upgrade firmware, no BSODs or issues. I recently bought one of their 310 80GB SSDs for an HTPC--again, 5 months later, no issues, no problems, no firmware updates.I've had a friend who's had two Vertex 2's in RAID 0 since launch with zero issues.
I also have a friend who has had a Vertex 2 drive die 4 times on him in under 2 months (this is more recent).
As of late, it seems that a lot of manufacturers are having issues but most I believe are the latest SandForce controllers which are causing the issues.
This is why you see people who use their own controllers, or one other than a recent SF controller, not having issues.
I feel bad, I really do, for those people who have been screwed over recently by the SSDs that have been failing--but I mean generally doing the research before hand benefits you down the road in the long run.
The reason Crucial, Intel, and Samsung SSDs are not having issues is because Crucial uses a Marvell controller, Intel uses its own controller, and Samsung uses it's own controller as well. This may not be true for all their drives, but most of their drives (the reliable ones) are of those controller types.
Just do your research before hand and don't be an SSD hater because they really are, when you shell out the cash to not get the cheapest thing on the market, the biggest upgrade you can do to your computer in the last 3-5 years. I haven't upgraded my mobo/cpu in either of my 3 computers in years but you bet I bought SSDs.
Holly - Saturday, February 25, 2012 - link
My OCZ Vertex 3 serves without glitch since 2.13 firmware was released. Before that occasional system freezing was major pain. Otoh I don't feel like updating to 2.15 firmware, rather being happy with what's working now :-)jwcalla - Thursday, February 23, 2012 - link
Yeah but even HDDs have major reliability problems... especially the high-capacity consumer drives.psuedonymous - Thursday, February 23, 2012 - link
There have been a few products already mixing an SSD with a HDD to allow oft-used data to be quickly read and written while rarely used bulk data that get's streamed (rather than random access) e.g. video is relegated to the HDD. Why not do the same with two grades of NAND? A few GB of SLC (or MLC) for OS files and frequently accessed and rewritten program files, and several hundred GB of TCL (or QLC, etc) for less frequently written data that it is still desirable to access quickly (e.g. game textures & FMVs). Faster than a HDD hybrid, cheaper than an all-SLC/MLC design, and just as fast in the vast majority of consumer use cases (exceptions including non-linear video editing, large-array data processing).kensiko - Thursday, February 23, 2012 - link
Yes that's what I thought reading this article.We just have to make the majority of writes on MLC and put the static data on TLC. Pretty simple and probably feasible in a 2.5in casing.
Kristian Vättö - Thursday, February 23, 2012 - link
That's definitely a very interesting idea, I haven't actually thought about it. Maybe we will see something like that in the future. It should be feasible since we have products like Momentus XT.marraco - Thursday, February 23, 2012 - link
That's what I was planning to write, so I agree, but it should be taken further: A weared out TLC cell should not be taken away by the controller/firmware. Instead it should be "degraded" to a MLC, and once it degrades, it should be used a s SLC.hechacker1 - Friday, February 24, 2012 - link
While that's an interesting concept, you would have to over provision it even more to account for storage loss as it is unable to store as many bits. It would be easier to define at creation that some of the NAND would be MLC, and some TLC.With really large SSD's, I think the life of TLC will be pretty good simply because we'll have so much storage to work with.
Imagine if you had a 1TB SSD, with a low 750 cycles. You could still potentially get around 750TB (minus amplification) of writes onto it.
xrror - Monday, February 27, 2012 - link
Hah, I had this same thought also.What would be great fun (but a nightmare for OEMs to validate) is if you as a user could arbitrarily choose what "modes" to run the flash in.
It'd also be ironic that as an SSD "wore out" it would drastically lose capacity but yet become faster while doing it ;p
ViRGE - Friday, February 24, 2012 - link
It seems like this would play hell with wear leveling though. Even though you're trying to segregate (largely) static data into TLC NAND, you're still going to periodically write to TLC NAND and as such need to do wear leveling to keep from burning out a smaller number of the TLC NAND cells too soon. It seems like the need to wear level would largely negate the segregation of data.Mr. GlotzTV - Thursday, February 23, 2012 - link
Since SLC, MLC & TLC are physically the same why not make the firmware dynamic?e.g.
A new (empty) starts storing information as SLC, when more storage is needed saves it as MLC or TLC. To ensure good performance and a long life of the drive it should store frequently modified & temporary files as SLC, other things like movies and music (files where speed is not important and aren't modified a lot) should be stored as TLC.
other thoughts:
when a TLC cell is too worn change it to MLC and later to SLC!
I know this would require a new very complex (and probably buggy) firmware. But are there any concepts or something?
jjj - Thursday, February 23, 2012 - link
http://www.anandtech.com/show/4284/sandisktoshiba-...Mr. GlotzTV - Thursday, February 23, 2012 - link
as far as I understood it is about removing DRAM and using some kind of pseudo SLC cache instead. Not exactly what I was thinking about but good to know anyway.THX for the link.
Kristian Vättö - Thursday, February 23, 2012 - link
Interesting idea, though I'm not sure if it's possible. While SLC, MLC and TLC are physically the same (i.e. they consist of the same transistors), I'm not sure what kind of process needs to be used to turn a NAND array into MLC or TLC instead of SLC. I would guess that it's more than what a simple SSD controller can do.I can try to dig up more info on NAND manufacturing and hopefully it will shed some light to this. Either way, it does sound very complicated and the possibility of data loss is huge if the NAND type is changed during use (you can't really go from TLC to SLC without having a huge cache).
ckryan - Thursday, February 23, 2012 - link
There is MLC-1, which is MLC which stores only 1 bit like SLC. It's almost as good as SLC, but I assume is much cheaper -- MLC is much cheaper than SLC (even if you're discarding half the capacity). I believe FusionIO uses this in some applications.Kristian Vättö - Friday, February 24, 2012 - link
There is HET-MLC (or usually known as eMLC) which is MLC NAND aimed for enterprises. It stores two bits per cell like normal MLC but its P/E cycle rate is much higher (IIRC something like 50,000). Unfortunately I don't know how it really differs from regular MLC but it's a lot more expensive than regular MLC.SLC, MLC and TLC simply refer to the amount of bits per cell, there is no 1-bit-per-cell MLC as MLC alone means multi-level-cell, and one isn't multi ;-)
ckryan - Friday, February 24, 2012 - link
I wasn't aware that this was going on until I read the UCSD paper "the Bleak Future of NAND Flash Memory. Somehow, you can use MLC to store just one bit, and it gets similar, but not identical, performance to SLC.http://cseweb.ucsd.edu/users/swanson/papers/FAST20...
This was the study that was said to cast doubt on the ability to scale NAND effectively past 2024.
They tested this particular MLC-1 setup. Even if you discard half the capacity of MLC, it's still cheaper than SLC bit for bit.
HET-MLC and eMLC really are just highly binned MLC NAND. Toshiba gives it's eMLC 10kPE cycles. But enterprise drives only have to retain data for 3 months at MWI=0, so some of this extra endurance comes from that.
Kristian Vättö - Friday, February 24, 2012 - link
Ooh, interesting, thanks for the link! I was sure that you had mixed up eMLC and MLC because MLC-1 doesn't make much sense, at least not without further explanation.Does the study say how much cheaper MLC-1 is when compared with SLC (I don't have time to read it thoroughly now, but will do tomorrow)? Like I said in the article, MLC is the highest volume product and gets the new process nodes first, so that might be the reason why MLC-1 is a bit cheaper than SLC. Shouldn't be much, though.
This Guy - Friday, February 24, 2012 - link
There is a better tech in the wings anyway. Memristors:http://www.electronicsweekly.com/Articles/06/10/20...
Memristors will create a giant performance boost. Their low latency and high density will allow for the replacement of HDDs and RAM. And this could be a second generation product out in three years.
If the worst memory latency was ~20ns instead of ~50µs (SSD) or ~20ms(HDD), cache misses would stop being a problem. Old, simple CPU architectures could be reproduced (with I/O upgrades) and bundled with memristor storage and compete with current designs.
In 10 years we could see CPUs replaced with multi-chip modules containing large memristor banks, ALU's with a far larger variety of operations (including GPGPU operations) and the system's I/O. No cache. No registers. No stalls.
jjj - Saturday, February 25, 2012 - link
Those are dreams for now.Anyway Sandisk/Toshiba sell a lot of TLC already, in certain products.They even had 4 bits per cell but that's not being produced anymore.As for the future they got 2 things, BiCS and 3D ReRAM. We'll see soon enough if any of those make it to market .
rpmurray - Thursday, February 23, 2012 - link
While cheaper is sure tempting I'm not making the move until I stop seeing so many users giving one-star ratings on Newegg when their nice new SSD bricks itself anywhere between one day and three months.jdjbuffalo - Thursday, February 23, 2012 - link
As opposed to all the people who get new 2TB hard drives that fail in the first day?pc_void - Thursday, February 23, 2012 - link
So true. It is said that if it lasts for 3 months then it will probably last for years - talking about hard disk drives or anything for that matter.In my opinion, people brick ssd drives because they are not dummy proof.
pc_void - Thursday, February 23, 2012 - link
Except for the exceptions.Folterknecht - Thursday, February 23, 2012 - link
The same goes for many other components - hdds, gpu, mobo ...And often enough there is either a crappy PSU or RAM involved doesnt work as it should. I dont really trust those "user reviews" on sites like Newegg, to many people writing there that have no idea what they are talking about.
Of course firmware issues exist, not denying that, but thats no reason pass the best possible upgrade for your pc (in most cases).
aguilpa1 - Thursday, February 23, 2012 - link
I feel all warm and technical inside.Taft12 - Thursday, February 23, 2012 - link
"However, there have been quite a few widespread firmware issues, such as SF-2281 BSOD and Intel 320 Series 8MB bugs"No list of SSD firmware cockups is complete without mentioning the Kingston V200 abysmal write performance:
http://forum.notebookreview.com/solid-state-drives...
The fact that they're handing out V+ left and right to those requesting RMAs suggests to me the problem will never get fixed.
dorion - Thursday, February 23, 2012 - link
I'm having trouble understanding why the density gain from TLC is only linear and not quadratic. It seems like the web is crawling with a bunch of articles today saying the SLC -> MLC -> TLC density gain is 16 -> 32 -> 48. It should be 16 -> 32 -> 64. Am I right? Or is there something I'm not getting? Is it part of the ECC like Gray code?Death666Angel - Thursday, February 23, 2012 - link
Huh?SLC = 1 Cell, 1 bit
MLC = 1 Cell, 2 bits
TLC = 1 Cell, 3 bits
You seem to think that TLC is 1 Cell, 4 bits, which it is not. Not sure why you would think that, though.
JarredWalton - Thursday, February 23, 2012 - link
It's simple multiples, not powers. SLC stores one bit per cell, MLC is two, and TLC is three. MLC is thus twice the capacity of SLC, but TLC is only three times the capacity, not four. The power of two increase comes in the number of states to check: SLC checks two (0/1), MLC checks four (00, 01, 10, 11), and TLC checks eight (000, 001..., 110, 111). If someone were to try and do QLC, they would need to check sixteen states, endurance would really plummet, and performance would be worse as well.
dorion - Thursday, February 23, 2012 - link
I can't believe I overlooked the difference between bits you can store and how high you can count with a certain number of bits.
ionis - Friday, February 24, 2012 - link
That only proves it should go 16->32->64.
1 bit - 2 states - 16 Gb
2 bits - 4 states (double the previous) - 32Gb
3 bits - 8 states (double the previous) - should be 64Gb not 48Gb. I'm still confused how the author got 48Gb.
JMC2000 - Friday, February 24, 2012 - link
The number of bits per cell is not equal to the number of voltage states. I'm not very knowledgeable in how NAND is produced, but I think the increase in voltage states by x2 per bit may have to do with the need for differentiation to write/erase each bit.
ionis - Friday, February 24, 2012 - link
The article explicitly states that the number of bits per cell is equal to the number of voltage states.
JMC2000 - Friday, February 24, 2012 - link
Do you mean this paragraph?
"Rather than shrinking the die to improve density/capacity, TLC (like MLC) increases the number of bits per cell. In our SSD Anthology article, Anand described how SLC and MLC flash work, and TLC works the same way but takes things a step further. Normally, you apply a voltage to a cell and keep increasing it until you reach a point where the result is far enough from the "off" state that you now consider the cell as being "on". This is how SLC works, storing one bit per cell. For MLC, you store two bits per cell, which means instead of two voltage states (0 and 1) you have four states (00, 01, 10, 11). TLC takes that a step further and stores three bits per cell, or eight voltage states (000, 001, 010, 011, 100, 101, 110, and 111). We will take a deeper look into voltage states and how they work in the next page."
Or this one?
"SLC only has two program states, "0" and "1". Hence either a high or low voltage is required. When the amount of bits goes up, you need more voltage stages. With MLC, there are four states, and eight states with TLC. The problem is that the silicon oxide layer is only about 10nm thick and it's not immortal; it wears out every time it's used in the tunneling process. When the silicon oxide layer wears out, the atomic bonds break and during the tunneling process, some electrons may get trapped inside the silicon oxide. This builds up negative charge in the silicon oxide, which negates some of the the control gate voltage."
Nowhere in the article does it state that the number of bits per transistor is equal to the number of voltage states.
ionis - Friday, February 24, 2012 - link
Both paragraphs you quoted state that TLC has 8 states.
"TLC takes that a step further and stores three bits per cell, or eight voltage states (000, 001, 010, 011, 100, 101, 110, and 111)."
"With MLC, there are four states, and eight states with TLC. "
JMC2000 - Friday, February 24, 2012 - link
That is the total number of voltage states per cell, i.e.:
1 bpc = 2 voltage states per cell (2^1)
2 bpc = 4 voltage states per cell (2^2)
3 bpc = 8 voltage states per cell (2^3)
The voltage states are what allow each bit to read as 0 or 1. TLC has 8 voltage states to allow intermediary changes in the values of the three bits in each cell: 000, 001, 010, 011, 100, 101, 110 and 111.
ionis - Friday, February 24, 2012 - link
Yes. That's the argument. So it should go 16->32->64. Where is the 48 coming from?
P.S. we're running out of space in this thread!
Andunestel - Friday, February 24, 2012 - link
The commenter above explained. It's not double each time. The number of combinations, or voltage states, increases exponentially with the number of binary digits represented.
SLC 1 bit (0,1) = 2 states
MLC 2 bits (00,01,10,11) = 4 states
TLC 3 bits ( 000,001,010,011,110,111) = 6 states
Notice that MLC is 100% > SLC, but TLC is only 50% > MLC?
In other words:
SLC = 16GiB
MLC = SLC x 2 = 32 GiB
TLC = SLC x 3 = 48 GiB
- or -
TLC = MLC x 2.5 = 48GiB
Taracta - Sunday, February 26, 2012 - link
For TLC you have left out 100 and 101, so you would have
TLC 3 bits (000, 001, 010, 011, 100, 101, 110, 111) = 8 STATES!
The information is stored per CELL and the same cell is used for SLC, MLC and TLC. The difference between them is the number of bits per cell, that is all. It is not 1 MLC = 2 SLC or 1 TLC = 3 SLC; it is whether a cell has:
2 electron states = SLC
4 electron states = MLC
8 electron states = TLC
In binary these states are represented by:
2 states = 1bit
4 states = 2bits
8 states = 3bits
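To make the arithmetic in this sub-thread concrete, here is a minimal Python sketch of the relationship being argued over; the 16-billion-cell die mirrors the 16Gb SLC example used above, and everything else follows from it:

# Bits per cell vs. voltage states vs. per-die capacity for a die with
# 16 billion cells (the SLC example used in this thread).
CELLS_PER_DIE_G = 16  # billions of cells

for name, bits_per_cell in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    states = 2 ** bits_per_cell                      # voltage levels to distinguish
    capacity_gbit = CELLS_PER_DIE_G * bits_per_cell  # stored data, in Gb
    print(f"{name}: {bits_per_cell} bit(s)/cell, {states} states, {capacity_gbit} Gb")

# Prints 2/4/8 states but 16/32/48 Gb: the states grow as a power of two,
# while the capacity only grows linearly with the number of bits per cell.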
This Guy - Friday, February 24, 2012 - link
You're confusing bits with data. Let's look at this problem in decimals.
If you have one symbol between 0 and 9, you can represent any number 0-9.
If you have two symbols, 0-9, you can represent any number between 0-99
If you have 3, you can represent 0-999
BUT you still only have three symbols.
Back to binary, a SLC stores 1bit, MLC 2bits and a TLC 3bits. So if you have 3 SLCs, you have 3 bits and 8 possible states. Exactly the same as one TLC. I'll expand this to make this point clear:
# Cells | Cells x bits/cell | Total bits | # States
6 SLC = 6x1bit = 6bits = 2^6 states
3 MLC = 3x2bit = 6bits = 2^6 states
2 TLC = 2x3bit = 6bits = 2^6 states
All three configurations can store the same data. So to answer your question, the logical blocks which SLC, MLC and TLC appear to be based on have sixteen cells per block. Hence:
16 SLC = 16 x 1bit = 16 bits
= 2^16 states
16 MLC = 16 x 2bit = 32 bits
= 2^32 states
16 TLC = 16 x 3bit = 48 bits
= 2^48 states
I know this was long and tedious, but I'm not going to recheck this thread and I wanted to make sure I gave enough information that most people reading this should be able to understand the difference between bits and data.
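This Guy's table can also be checked mechanically. A short Python sketch (the 48-bit total is purely illustrative, not a figure from the article):

# The same amount of data needs fewer cells as bits per cell go up,
# but the number of bits (and therefore representable states) stays the same.
TOTAL_BITS = 48

for name, bits_per_cell in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    cells = TOTAL_BITS // bits_per_cell
    print(f"{TOTAL_BITS} bits as {name}: {cells} cells, 2**{TOTAL_BITS} possible values")

# 48 SLC cells, 24 MLC cells or 16 TLC cells all hold 48 bits of data.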
Taracta - Sunday, February 26, 2012 - link
You do notice that in your decimal example it is increasing by powers of 10, so why in your binary example is it not increasing by powers of 2?
16 SLC = 16 x 2^1 bit = 32bits
16 MLC = 16 x 2^2 bit = 64bits
16 TLC = 16 x 2^3 bit = 128bits
No, your examples are incorrect so you just further confused the issue.
KitsuneKnight - Sunday, February 26, 2012 - link
SLC doesn't have two bits, it has one. It's not 2 raised to the blah, it's just blah. Same issue applies to your MLC & TLC examples.
SLC can _represent_ two values, 'on' or 'off'. MLC can represent 4 values ('on' or 'off' | 'on' or 'off'). And, likewise, TLC represents 8 values ('on' or 'off' | 'on' or 'off' | 'on' or 'off'). As you might notice, each grouping of 'on' and 'off' is a single bit.
His examples are completely correct.
JMC2000 - Friday, February 24, 2012 - link
(What I would give for an EDIT function)
I forgot to add that if you had 8 bits per cell, you would have 256 voltage states (0 or 1 for each bit, plus the different variations of 8 on or off bits), though I will not list all possible combinations, as it would take too much time/room.
Taracta - Sunday, February 26, 2012 - link
I completely agree with you. The whole premise of the article is based on the incorrect graph while having, in places, the correct information in the article. There is a difference between place holders and values. SLC - 1 bit place holder, 2 bits stored; MLC - 2 bit place holder, 4 bits stored; and TLC - 3 bit place holder, 8 bits stored.
Kougar - Thursday, February 23, 2012 - link
First, thanks for the article! However, it has reignited a question I've had for some time.
How is this regulated exactly... does the manufacturer still set a mandatory limit to the number of writes, or is a modern SSD capable of detecting this delay and automatically correcting for it up until the point that it is able to detect the block has exceeded the time limits (and hence write endurance) allowed? In another manner of phrasing it, are arbitrary write count limits utilized, or is a modern SSD self-aware enough to determine on its own when a flash block needs to be retired, regardless of the write counts?
Kristian Vättö - Friday, February 24, 2012 - link
Each chip is slightly different so there is no set maximum of writes. One can last 3000 P/E cycles while another can last 3200.
I'm not 100% sure, but I think the controller is the one that decides when a certain block is too slow. I.e. it's capable of detecting the delay and when it reaches a certain point, it decides to retire the block to avoid further performance decrease. Hence it may be controller specific and some will retire blocks sooner than others, although at least Intel is saying that there is a certain delay and after that the block is retired (but it may just be a recommendation).
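Purely to illustrate the behaviour Kristian describes, here is a hypothetical Python sketch of latency-based block retirement; the threshold, class and method names are invented for illustration and do not reflect any real controller firmware:

# Hypothetical wear check a controller might run after each block erase.
MAX_ERASE_TIME_MS = 10.0  # assumed vendor-recommended ceiling, not a real spec

class Block:
    def __init__(self, block_id):
        self.block_id = block_id
        self.erase_count = 0
        self.retired = False

    def record_erase(self, erase_time_ms):
        self.erase_count += 1
        # Retire on observed slowness rather than on a fixed P/E count,
        # since individual dies wear out at different rates.
        if erase_time_ms > MAX_ERASE_TIME_MS:
            self.retired = True

blk = Block(42)
blk.record_erase(erase_time_ms=3.2)   # healthy
blk.record_erase(erase_time_ms=12.5)  # too slow, so the block is retired
print(blk.erase_count, blk.retired)   # 2 True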
Kougar - Friday, February 24, 2012 - link
Thank you for your reply, Kristian!
When you mention every chip is different, that's an excellent point and one of several reasons for the question.
The other reason behind my question was simply SSD lifespan... Anand has (several times) mentioned that even after the NAND "wears out" the data should remain readable for at least one year after that date.
Yet all the SSD failures that I and a huge number of others have experienced have been from an SSD suddenly failing outright and not even being detected in the BIOS. I've yet to come across anyone claiming their drive became read-only, or anything other than an outright failure or a firmware-related bug.
Basically, it seems like SSDs don't wear out, they just completely die outright for some reason. Going by your answer to my question, I'm going to safely assume NAND longevity isn't the factor in these episodes, but any input you may have on this would be quite welcome!
Kristian Vättö - Friday, February 24, 2012 - link
It's true that NAND remains readable when it wears out. For MLC, the period is about one year (eMLC is only 3 months, though).
I can't say for sure what the reason behind these early failures is, but I would claim that it's often controller related. In general, drives equipped with SandForce controllers experience more early failures than other drives (see the link below).
http://www.behardware.com/articles/843-7/component...
All the drives with a +5% return rate are SandForce based, more specifically SF-1222 based. Newegg yields similar data. SF-2281 based SSDs have quite a few one-star ratings, usually around 20%. Switch to Crucial or Intel (or any other non-SF drive) and we are looking at less than 10% one-star ratings, which usually imply a dead drive.
Of course, even non-SF drives experience early failures, but the rate is much smaller and more typical of consumer electronics in general. In any case, it's not the NAND that is causing the failures :-)
Sivar - Thursday, February 23, 2012 - link
I understand the necessity of reducing cost, but a sharp drop in durability, coupled with a rapidly diminishing return on $savings/capacity due to the necessary greater redundancy, seems a high price to pay for a linear increase in capacity.
This is one of those articles that has the excellent writing and technical thoroughness characteristic of something written by Anand himself. To top it off, it doesn't use an inefficient image format for the photos with large areas of flat color, like the first image.
themossie - Friday, February 24, 2012 - link
Second that. Unusual clarity for any technical explanation. Thank you for the article, Kristian!
hechacker1 - Friday, February 24, 2012 - link
I think the article got confusing by adding that you can use less flash at 10.67Gb, along with 3 bits per cell, giving 32Gb. Do the math: 10.67Gb * 3 bits per cell = 32Gb.
It's easier to just keep in mind:
16Gb NAND * 1 bit per cell = 16Gb capacity
16Gb NAND * 2 bit per cell = 32Gb capacity
16Gb NAND * 3 bit per cell = 48Gb capacity
Kristian Vättö - Friday, February 24, 2012 - link
The reason is that no final product has a capacity of 48Gb. Capacities go in powers of 2: 2Gb, 4Gb, 8Gb, 16Gb, 32Gb, 64Gb and so on. 48Gb isn't a power of two (and no X*3 is). Hence you have to make the die smaller so that the X*3 is a power of two, like 10.67Gb is.
In theory, you could make a 48Gb TLC die and it would work just fine. It's simply considered an odd number in the NAND industry and hence not used.
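The 10.67 figure is just that power-of-two constraint worked backwards; a quick check in Python:

# Working backwards from a marketable power-of-two die capacity.
TARGET_CAPACITY_GBIT = 32  # desired die capacity in Gb

for name, bits_per_cell in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    cells_g = TARGET_CAPACITY_GBIT / bits_per_cell  # billions of cells needed
    print(f"{name}: {cells_g:.2f} billion cells for a {TARGET_CAPACITY_GBIT}Gb die")

# SLC 32.00, MLC 16.00, TLC 10.67: the odd 10.67 billion cells come from
# dividing 32Gb by three bits per cell, not from anything physical.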
themossie - Friday, February 24, 2012 - link
Kristian says this is awkward because TLC capacities will not scale from MLC capacities at a power of 2, like MLC did from SLC. I am not convinced that's an issue, as scaling capacity by a power of 2 has never been a requirement in the hard drive industry.
Indeed, 80/90 GB SSDs - located between power-of-2-inspired 64 GB and 128 GB capacities - have been quite popular. For that matter, 64GB/128GB SSDs are often marketed as 60GB/120GB SSDs, partially due to provisioning...
It is awkward to describe 48Gb as 10.67Gb*3, where Gb represents physical transistors rather than bits; Gb is a unit for digital information in this context, not the physical representation of such.
This is exacerbated as the cells are physically identical - the same array could store 32Gb using TLC, but only 10.67Gb with SLC. I find hechacker1's explanation more intuitive. 16Gb SLC = (16*2) 32Gb MLC = (16*3) 48Gb TLC...
The takeaway point here is that you get 50% more dies per wafer for a given capacity with TLC over MLC, and this shows up directly in the cost ($0.60/GB vs $0.90/GB) but results in greatly reduced write cycles.
Kristian Vättö - Friday, February 24, 2012 - link
Remember that I'm not the one who came up with this idea ;-)
This info is straight from Micron and they indeed say that the TLC die is chopped down to 10.67 billion transistors so that it becomes a 32Gb die. Maybe OEMs are afraid of adopting "odd number" capacities. In SSDs it wouldn't be such a big deal, but TLC is more commonly used in devices like USB flash drives and low-end smartphones. In fact, some OEMs may even use MLC and TLC in the same model (I don't have any examples but I wouldn't be surprised).
As for why some drives have an odd capacity, it has to do with the controller design and over-provisioning. Intel's SATA 3Gb/s controller has 10 channels while most controllers have 8. That's why Intel drives have weird capacities: populate all 10 channels with 64Gb (8GB) dies and you get 80GB. For other drives, populating all the channels works out to only 64GB. As for SandForce drives, they have no on-board cache (DRAM) so some of the NAND (~7%) is reserved for that. That's why a 128GB SF drive is marketed as 120GB.
I agree that 10.67 is an awkward number, but then again, this is stuff that an average consumer doesn't really need to know. For them, the final product will look the same, thanks to the power-of-two capacity. The gain of TLC is the same whether the die is smaller than or the same size as MLC: TLC provides more GB per die, which means cheaper $/GB.
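A small Python sketch of the capacity arithmetic in Kristian's reply; the channel counts, die sizes and ~7% figure are the examples from his comment, while the dies-per-channel values are assumptions for illustration only:

def usable_gb(channels, dies_per_channel, die_gb, overprovision=0.0):
    # Raw NAND capacity minus whatever the controller keeps back.
    raw = channels * dies_per_channel * die_gb
    return raw * (1 - overprovision)

print(usable_gb(10, 1, 8))              # Intel 10-channel, one 8GB die each: 80.0 GB
print(usable_gb(8, 1, 8))               # typical 8-channel drive: 64.0 GB
print(round(usable_gb(8, 2, 8, 0.07)))  # SandForce with ~7% set aside: ~119, sold as 120GB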
themossie - Friday, February 24, 2012 - link
The information is straight from Micron, it's just an awkward way to explain the concept. If you want to keep the industry standard capacities in your explanation, perhaps show the math as capacity/(1, 2, 3) = transistors rather than transistors * (1, 2, 3) = capacity? If capacity is fixed, solving for the number of transistors required seems more intuitive.
Corsair, OCZ and Kingston all make 90 GB Sandforce 2281 SSDs. I don't know how many channels / what NAND die they use. Searching that information brought up this website first every time! Upon further consideration, I blame aNAND... :-)
Kristian Vättö - Saturday, February 25, 2012 - link
90GB SSDs have 96GB of NAND in them (remember that SandForce drives have ~7% over-provisioning). Most 2.5" drives have sockets for 16 NAND devices, so that's simply twelve 8GB packages.
Confusador - Friday, February 24, 2012 - link
I read the comments thread looking for this answer, so thank you. I still don't see the logic behind it, as others have pointed out that storage capacities haven't been power-of-2 for decades. It could conceivably be firmware related, but given that overprovisioning makes (e.g.) 60 and 120 GB fairly common, that seems unlikely.
Anyway, just some questions to keep in mind as you're in contact with the manufacturers. Thanks again for the great article, as the coverage here continues to be second to none.
AnnihilatorX - Friday, February 24, 2012 - link
That's not really possible due to information theory. You can only compress information to reduce writes by so much (entropy theory). The improvement will be more like an exponential decay rather than an exponential growth (Moore's law).
Shadowmaster625 - Friday, February 24, 2012 - link
I estimate somewhere around $80 billion has been invested in the NAND flash market, cumulatively. Despite this enormous capital investment, I am surprised prices are still so high. You'd think with this type of mass economy of scale, it wouldn't cost so much to produce 1TB of flash. I wonder how much energy it takes to produce 1TB of flash...
MrSpadge - Friday, February 24, 2012 - link
There's so much unused space in 2.5" SSDs, let alone 3.5" drives for desktops. People wouldn't need to worry about TLC endurance if the NAND was put into sockets and could easily be replaced, or upgraded later on for higher capacities. And by the time you'd be doing this, NAND prices will have fallen again. There'd need to be a standard for this, though...
MrS
mark53916 - Friday, February 24, 2012 - link
As late as 2010, SLC typically had a 10-year retention time when new, dropping to about 1 year as cells got reprogrammed and the device reached its indicated end of life. (The number of erase cycles was also higher than now, but had been decreasing for a few years prior as well.)
I don't know the retention time of new SLC cells now, but MLC either shows no spec or the retention time spec for NEW cells is about 18 months.
For the various reasons mentioned in the article and earlier comments, the effect of MLC is that speed has been reduced, data retention time is reduced, and the fraction of long error-correction times has increased dramatically.
MLC is not suitable for long-term backups (and spinning drives were never good for more than 5 years of EXPECTED powered-off life).
MLC just gets you 2 times as much storage for the same price 18 months earlier. In the meantime, due to supply issues (capacity being used for MLC instead of SLC), SLC typically costs 8 times as much per GB compared to MLC, rather than less than 2 times as much. This amounts to about a 3-year delay in SLC reaching a given price level.
(MLC also typically comes with implementation side effects [interleaved data layout, in particular] that mean data in unchanged pages, as seen outside of the SSD, is rewritten because data was changed at the interleaved logical location, not because the SSD software decided that the data was getting "weak" and needed to be refreshed.)
Hulk - Friday, February 24, 2012 - link
Timely, informative, well written, and just the right amount of technical detail.
Really nice job.
valnar - Friday, February 24, 2012 - link
I'm not sure who the target audience of TLC is. Is there really a group of people out there that is willing to sacrifice reliability and data integrity for price or capacity? I certainly wouldn't.
It's bad enough that modern hard drives in the 2TB range have longevity problems. I don't want my SSD to be in the same boat, especially since that SSD tends to be the boot drive on most PCs.
foolsgambit11 - Friday, February 24, 2012 - link
I'm assuming TLC is a subclass of MLC, and not actually distinct as it's laid out in this article. Before TLC came along, all MLC belonged to (what I'll call) the DLC subclass, yeah?
SLC = Single level cell
MLC = Multi level cell
a. DLC = Dual level cell
b. TLC = Triple level cell
Kristian Vättö - Saturday, February 25, 2012 - link
I just used the names that manufacturers use. If you look at e.g. Micron's part catalog (linked below), they use SLC, MLC and TLC. I agree that the naming is misleading because MLC should refer to any NAND with multiple bits per cell. TLC is sometimes called 3-bit-per-cell MLC or just MLC-3, but the TLC name is gaining more momentum all the time.
http://www.micron.com/products/nand-flash/mass-sto...
foolsgambit11 - Sunday, February 26, 2012 - link
Thanks.
Taracta - Sunday, February 26, 2012 - link
Shouldn't the TLC be 64Gb? It holds twice as much information as MLC, as MLC holds twice as much as SLC. Each increment in bits doubles the information stored, as stated in the article: SLC 2 bits stored, MLC 4 bits stored and TLC 8 bits (1 BYTE) stored.
Taracta - Sunday, February 26, 2012 - link
You are dealing with base-2 values. Each additional bit doubles the amount of data that is stored. You even have the correct values in the beginning of the article: SLC stores 2 bits of information, 0 and 1; MLC stores 4 bits of information, 00, 01, 10, 11; and TLC stores 8 bits (1 BYTE) of information, 000, 001, 010, 011, 100, 101, 110, 111. Yet further down in the article you are stating that TLC stores only a third more than that of SLC. You are confusing the bit place holder with the actual information that is being stored. TLC has an additional bit place holder compared to MLC, which has an additional bit place holder compared to SLC. Each bit place holder increases the storage capability by a power of two (2).
Kristian Vättö - Sunday, February 26, 2012 - link
SLC stores 1 bit per cell/transistor and the value can be either 0 or 1. It cannot be 0 and 1 at the same time.
MLC stores 2 bits per cell. This means the value can be either 00, 01, 10, or 11. However, it can only be programmed to have one value. One MLC cell cannot store e.g. 00 and 01 at the same time. One 0 or 1 is one bit of data, i.e. 00 is two bits of data. I don't know how you are coming up with four bits, maybe you are mixing it up with the voltage states (each value needs its own voltage state, so when you program a cell to e.g. 00, it will be read as 00).
TLC just increases the bits per cell to three, which means the possible values are 000, 001, 010, 100, 011, 110, 101, and 111. Again, eight voltage states and three bits per cell.
Each additional bit per cell increases the voltage states by a power of 2 (in math terms: 2^n, where n is the number of bits per cell). The number of bits per cell is just n, it's not a power of two. MLC is 2*1=2, and 2 is 100% bigger than 1. TLC is 3*1=3, and 3 is 200% bigger than 1 but only 50% more than 2.
Taracta - Sunday, February 26, 2012 - link
Ok, let me make it simple because I still think you are confusing yourself.
SLC possible values are 0 or 1, which is equal to 2 values, which is 2^1
MLC possible values are 0, 1, 10 or 11 which is equal to 4 values which is 2^2
TLC possible values are 0, 1, 10, 11, 100, 101, 110 or 111 which is equal to 8 values which is 2^3
Therefore each TLC which stores 8 values (3bits) which is twice that of a MLC which stores 4 values (2bits) which is twice that of a SLC which stores 2 values (1bit)
Is this right?
KitsuneKnight - Sunday, February 26, 2012 - link
He's not confusing himself, you're confused about binary numbers and bits.
"Therefore each TLC which stores 8 values (3bits) which is twice that of a MLC which stores 4 values (2bits) which is twice that of a SLC which stores 2 values (1bit)"
Don't confuse the amount of bits of storage, with the maximum value it can hold.
Since you seem to be getting confused with binary numbers, let's work with decimal numbers for a bit.
Let's say an 'SLC' can represent the values 0-9. An MLC can represent the values 0-9, 0-9 or 00-99 (that's two sets of 0-9 next to each other!). A TLC can represent the values 0-9, 0-9, 0-9 or 000-999. It should be patently obvious that a TLC doesn't have 100 times the capacity of an SLC cell! A /single one/ can hold a VALUE 100 times as large, but 3 SLCs next to each other could hold the same value.
A linear growth of bits results in an /exponential/ growth of the value those bits, when combined, can represent. It doesn't matter if all those bits are from a single cell, or X number of cells. How you get bits doesn't matter.
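KitsuneKnight's decimal analogy can be put in a few lines of Python as well; this is purely an illustration, not anything from the article:

# Symbols (digits) grow linearly, the largest representable value grows
# exponentially -- the same distinction as bits vs. voltage states above.
for symbols in (1, 2, 3):
    max_value = 10 ** symbols - 1
    print(f"{symbols} decimal symbol(s): values 0-{max_value}")

# Likewise, 1, 2 or 3 bits per cell store 1, 2 or 3 bits of data,
# while the number of distinct states is 2, 4 or 8.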
Taracta - Monday, February 27, 2012 - link
Kristian,
Did some research to see where you were coming from with the data you presented.
http://cseweb.ucsd.edu/users/swanson/papers/ICNC20...
gives some insight on TLC block sizes and why they don't follow the actual size of a TLC cell. Basically, some pages are not used in TLC block configurations. Strangely, the number of pages in a TLC block is more than double that of an MLC block!
I leave it up to you to clarify the article as it is somewhat confusing and needs some explanation of the differences between the cell, page and block sizes for TLC.
Kristian Vättö - Monday, February 27, 2012 - link
Actually, TLC block size does (or at least should) follow the bits-per-cell idea. 25nm IMFT MLC NAND brought us 8KB pages and 256 pages per block. According to your link, TLC has 384 pages per block (i.e. 3*128, which means 128 pages per bit). MLC is now using that same 128 pages per bit idea (before it was 64 pages per bit).
It's possible that TLC moved to a bigger block size before MLC and SLC because that lowers the cost, and ultimately TLC is all about cost. There is need for fewer peripheral circuits between the blocks, which makes the die smaller and hence reduces production costs.
http://www.micron.com/~/media/Documents/Products/T...
http://www.anandtech.com/show/2928
I don't know what this has to do with your original point about the article being wrong, though. Of course, I'm happy to answer any questions regarding TLC, or at least give it a try (I haven't studied NAND technology in a university so e.g. that math stuff in your link is over my head).
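For reference, the block-size arithmetic from this exchange in Python, with the 8KB page size and 128-pages-per-bit figure taken from Kristian's reply (SLC is left out because the thread doesn't give its pages-per-bit number):

PAGE_KB = 8          # 25nm IMFT page size mentioned above
PAGES_PER_BIT = 128  # current-generation figure from the reply above

for name, bits_per_cell in [("MLC", 2), ("TLC", 3)]:
    pages_per_block = PAGES_PER_BIT * bits_per_cell
    block_mb = pages_per_block * PAGE_KB / 1024
    print(f"{name}: {pages_per_block} pages per block, {block_mb:.0f} MB erase block")

# MLC: 256 pages per block (2 MB); TLC: 384 pages per block (3 MB).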
mdshann - Monday, March 5, 2012 - link
I haven't seen a 500 GB hard drive for anywhere near $50 in about 6 months now... where are you getting these drives? Right now the cheapest 500 GB drive on newegg.com is $84.99 and it's a bare Hitachi.