Last year, Intel introduced its first truly new SSD controller since 2008. Oh, how times have changed since then. Intel's original SSD controller design was dual purpose, designed for both consumer and enterprise workloads. Launching first as the brains behind a mainstream Intel SSD, that original controller did a wonderful job of kicking off the SSD revolution that followed. Growing pains and a couple of false starts kept a true successor to Ephraim (Intel's first controller) from ever really surfacing over the next few years.

Last year, Ephraim got a true successor and it came in the form of a very high-end enterprise drive: the Intel SSD DC S3700. Equipped with tons of 25nm HET-MLC NAND, the S3700 officially broke the enterprise addiction to SLC for high endurance drives while raising the bar in all aspects of performance. In addition to the usual considerations however, Intel had a new focus with the S3700: performance consistency.

Due to the nature of NAND flash, there's a lot of background management/cleanup that has to happen in order to ensure endurance as well as high performance. It's these background management tasks that can dramatically impact performance. I love the cleaning-your-room analogy because it accurately describes the tradeoff SSD controller makers have to deal with. Clean early and often and you'll do well. Put off cleaning until later and you'll enjoy tons of free time early on, but you'll quickly run into a problem. It's an oversimplification, but the latter is what most SSD controllers have done historically, and the former is what Intel always believed in. With the S3700, Intel took it to the next level and built the most consistently performing SSD I'd ever tested.
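
To make the tradeoff concrete, here's a toy sketch (my own simplified model, not anything based on Intel's actual firmware) comparing an eager policy that cleans a little on every write against a lazy one that defers all cleanup until it runs out of free blocks. The lazy policy looks cheaper at first, then pays for it all at once.

```python
# Toy model of two SSD cleanup policies (illustrative only; not modeled on
# any real controller's firmware).

def simulate(total_blocks=100, writes=500, eager=True):
    free = total_blocks   # blocks that can be written immediately
    dirty = 0             # blocks waiting to be cleaned (erased)
    costs = []            # per-write cost in arbitrary time units

    for _ in range(writes):
        cost = 1                      # the write itself
        if eager and dirty:
            dirty -= 1                # clean a little on every write...
            free += 1
            cost += 1                 # ...for a small, predictable overhead
        if free == 0:                 # lazy policy: forced bulk cleanup
            cost += dirty             # one huge latency spike
            free += dirty
            dirty = 0
        free -= 1
        dirty += 1
        costs.append(cost)
    return costs

eager = simulate(eager=True)
lazy = simulate(eager=False)
print("eager policy: worst per-write cost =", max(eager))
print("lazy policy:  worst per-write cost =", max(lazy))
```

Run it and the eager policy's worst-case cost stays small and flat, while the lazy policy's worst case balloons whenever the forced bulk cleanup kicks in, which is exactly the kind of latency spike that shows up as inconsistency.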

Performance consistency matters for a couple of reasons. The most obvious is the impact on user experience. Predictable latencies are what you want; otherwise your applications can encounter odd hiccups. In client drives, those hiccups appear as unexpected pauses during application usage. In the enterprise, the manifestation is similar, except the user encounters the issue somewhere over the internet rather than locally. The other issue with inconsistent performance really crops up in massive RAID arrays. With many drives in a RAID array, overall performance is determined by the slowest-performing drive. Inconsistent performance, particularly with large downward swings, can result in a substantial decrease in the performance of large RAID arrays. The motivation to build a consistently performing SSD is high, but so is the level of difficulty in building such a drive.
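
A quick, hypothetical illustration of the RAID point (the latency numbers below are made up purely to show the shape of the effect): if every striped request has to wait for all members, a drive that is usually faster but occasionally stalls drags the whole array down as the member count grows.

```python
# Hypothetical illustration: a striped array is only as fast as its slowest
# member on any given request. All latency figures below are made up.
import random

random.seed(0)

def drive_latency_ms(consistent):
    # Consistent drive: always ~1 ms.
    # Inconsistent drive: usually 0.5 ms, but stalls for 50 ms 2% of the time.
    if consistent:
        return 1.0
    return 50.0 if random.random() < 0.02 else 0.5

def array_latency_ms(num_drives, consistent):
    # A striped request completes only when the slowest member completes.
    return max(drive_latency_ms(consistent) for _ in range(num_drives))

for n in (1, 8, 24):
    inconsistent = sum(array_latency_ms(n, False) for _ in range(10000)) / 10000
    consistent = sum(array_latency_ms(n, True) for _ in range(10000)) / 10000
    print(f"{n:2d} drives: inconsistent avg {inconsistent:6.2f} ms, "
          f"consistent avg {consistent:.2f} ms")
```

With the assumed 2% chance of a 50ms stall per drive, a 24-drive array hits a stall on well over a third of requests, so the "usually faster" drive ends up far slower on average than the boring, consistent one.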

Intel had the luxury of being able to start over with the S3700's controller architecture. It moved to a flat indirection table (the mapping between LBAs and NAND pages), which incurred a DRAM size penalty but ultimately made it possible to deliver much better performance consistency. The S3700 did amazingly well in our enterprise tests and produced the most consistent IO curves I'd ever seen. The only downside? Despite being much better priced than the Intel X25-E and SSD 710, the S3700 is still a very expensive drive. The move to a better architecture helped reduce the amount of spare area needed for good performance, which in turn reduced cost, but the S3700 still used Intel's most expensive, highest-endurance MLC NAND available (25nm HET-MLC). With the largest versions capable of enduring nearly 15 petabytes of writes, the S3700 was really made for extremely write-intensive workloads. The drive performs very well across the board, but if you don't have an extremely write-intensive workload, you'd be paying for much more than you need.
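
To put the DRAM penalty in perspective, here's a rough back-of-envelope (my own illustrative assumptions about mapping granularity and entry size, not Intel's published design details): a flat table keeps one entry per logical page, so the map for a large drive quickly runs into hundreds of megabytes, but lookups and updates become trivial array accesses with no tree to walk.

```python
# Back-of-envelope for a flat LBA -> NAND page indirection table.
# Page size and entry size are assumptions for illustration, not Intel's
# published figures.

capacity_bytes = 800 * 10**9   # 800GB-class drive
page_size      = 4096          # 4KB mapping granularity (assumed)
entry_size     = 4             # 4-byte physical page address (assumed)

entries    = capacity_bytes // page_size
table_size = entries * entry_size
print(f"entries:    {entries:,}")
print(f"table size: {table_size / 2**20:,.0f} MiB of DRAM")

# The table itself is just an array: logical page number -> physical NAND page.
# Lookups and updates are O(1) with no tree traversal, which helps keep
# latency predictable.
indirection = [0] * 1_000_000  # small stand-in for the full-size table

def lookup(lba_page):
    return indirection[lba_page]

def update(lba_page, nand_page):
    indirection[lba_page] = nand_page

update(12345, 987654)
print("LBA page 12345 maps to NAND page", lookup(12345))
```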

We always knew that Intel would build a standard MLC version of the S3700, and today we have that drive: the Intel SSD DC S3500.

54 Comments

  • Minion4Hire - Tuesday, June 11, 2013

    I believe that's just the writes they guarantee the drive for. There's write amplification and maintenance to consider there as well.
  • ShieTar - Wednesday, June 12, 2013

    Well, they have to keep the S3700 useful enough to sell both. So they tailor the specs a bit in order to push customers into buying the "right" drive.
  • ShieTar - Wednesday, June 12, 2013

    Then again, if this is guaranteed for the whole range, it's an impressive number for the small 80GB drive.
  • pesos - Tuesday, June 11, 2013

    How about performance over time in virtualization scenarios? Wondering how well these SSDs hold up when they have nothing on them but virtual hard disks...
  • dealcorn - Tuesday, June 11, 2013

    In Part 2, could you kindly note whether the drive supports DEVSLP? Depending on usage pattern, ruling the drive out for mobile use based on idle power requirements alone may be inappropriate.
  • sunbear - Tuesday, June 11, 2013

    Looking at the consistency comparison against the Seagate 600 Pro, it looks like the Intel S3500 is more consistent, but unfortunately it's consistently slower in every metric. I'd rather have a Seagate 600 Pro with inconsistent performance if the minimum performance of that drive is better than the maximum performance of the S3500.
  • beginner99 - Wednesday, June 12, 2013

    I had the same thought. Agreed.
  • hrrmph - Friday, June 14, 2013

    As an individual drive, maybe.

    For RAID, the slowest drive in the array will probably control the overall I/O rate. In that case, I don't see an advantage for Seagate over Intel.

    As I see it, the S3500 is a prosumer, high-end workstation drive for RAID arrays, and a mid-range enterprise-class drive. The S3700 is clearly a full-on, high-end enterprise-class drive.

    We'll have to wait for Part 2 of the article and hope that Anand gives us some comparisons to the consumer 520 series to see if there is any reason to buy an S3500 instead of a 520.

    Intel is being suspiciously quiet about the upcoming 530 series SSDs. I expect that we'll be looking at another low power consumption, high performance, relatively affordable SSD using a non-Intel controller. But, it would be nice if we could have all of that with an Intel controller instead.

    -
  • rs2 - Wednesday, June 12, 2013

    What's the deal with the first slide from Intel shown in the conclusion? Specifically, how is a 12x800GB (9.6 TB) deployment comparable to a 500x300GB (150 TB) deployment?

    The only way you can get 500 VMs on such a deployment is if you allocate only ~20 GB per VM. That's anemic. And if that's the allocation size, then the 500x300GB deployment can support over 7500 VMs.

    So...yeah, not seeing how a valid comparison is being made. Intel should be quoting figures based upon ~192 SSDs, because that's how many it takes to reach the same storage capacity as the solution it's being compared to.
  • flyingpants1 - Wednesday, June 12, 2013

    I noticed the same thing.
