Performance Consistency

Our performance consistency test explores the extent to which a drive can reliably sustain performance during a long-duration random write test. Specifications for consumer drives typically list peak performance numbers only attainable in ideal conditions. Worst-case performance can be drastically different: over the course of a long test, drives can run out of spare area and have to start performing garbage collection, and sometimes they even reach power or thermal limits.

In addition to an overall decline in performance, a long test can show patterns in how performance varies on shorter timescales. Some drives will exhibit very little variance in performance from second to second, while others will show massive drops in performance during each garbage collection cycle but otherwise maintain good performance, and still others show consistently wide variance. If a drive periodically slows to hard drive levels of performance, it may feel slow to use even if its overall average performance is very high.

To maximally stress the drive's controller and force it to perform garbage collection and wear leveling, this test conducts 4KB random writes with a queue depth of 32. The drive is filled before the start of the test, and the test duration is one hour. Any spare area will be exhausted early in the test, and by the end of the hour even the largest drives with the most overprovisioning will have reached a steady state. We use the last 400 seconds of the test to score the drive both on its steady-state average write IOPS and on its consistency (average IOPS divided by the standard deviation).
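The scoring described above is straightforward to reproduce. The sketch below is a minimal illustration (our own, not AnandTech's actual test harness), assuming you already have per-second IOPS samples from a sustained 4KB QD32 random write run, for example from fio's per-second logging:

```python
import statistics

def steady_state_scores(iops_per_second, window=400):
    """Score a drive from per-second IOPS samples of a sustained
    random-write test, using only the last `window` seconds so the
    numbers reflect steady state rather than the initial burst."""
    steady = iops_per_second[-window:]
    avg = statistics.mean(steady)
    # Consistency score: average divided by standard deviation.
    # High but wildly varying IOPS scores worse than slightly
    # lower but tightly regulated performance.
    consistency = avg / statistics.stdev(steady)
    return avg, consistency

# Example: a 5,000 IOPS average with a stdev of 500 scores 10.0;
# the same average with a stdev of 2,000 scores only 2.5.
```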

Steady-State 4KB Random Write Performance

The X400's steady-state write performance is unimpressive but typical for planar TLC drives.

Steady-State 4KB Random Write Consistency

The consistency score of the X400 is good but not great, and it is again close to the OCZ Trion 150.

IOPS over time (graphs: Default and 25% Over-Provisioning)

The X400 does not display an initial burst of extreme performance and instead wobbles around 20k IOPS before dropping down to steady state. The X400 does not exhibit severe stalling at any stage of the test, whereas the previously reviewed Trion 150 handles the transition to steady state poorly.

Steady-State IOPS over time (graphs: Default and 25% Over-Provisioning)

In steady state, the X400's performance is not tightly regulated, but it has no outliers of significantly below-average performance. Extra overprovisioning does little to improve the worst-case performance but greatly increases the average and best-case write performance.

Comments

  • Billy Tallis - Friday, May 6, 2016 - link

    I changed TiB to TB. In reality the sizes are only nominal. The exact capacity of the X400 is 1,024,209,543,168 bytes while 1TiB would be 1,099,511,627,776 bytes and 1000GB drives like the 850 EVO are 1,000,204,886,016 bytes.
  • HollyDOL - Friday, May 6, 2016 - link

    yay, that's some black magic with spare areas / crc possibly...
    X vs. Xi prefixes are treacherous... while with kilo the difference is only 2.4%, with tera it's already 9.95%... more than enough to hide the OS and the majority of installed software :-) (the numbers are worked out in the sketch after the comments)
  • bug77 - Tuesday, May 10, 2016 - link

    Then you should put that into the article, unless you're intentionally trying to be misleading ;)
  • SaolDan - Friday, May 6, 2016 - link

    Neat!!
  • hechacker1 - Friday, May 6, 2016 - link

    I'm tempted to buy two 512GB drives and RAID 0 them. Does anybody know if that would improve performance consistency compared to a single 1TB drive? I don't really care about raw bandwidth, but 4K IOPS for VMs. I'm having trouble finding benchmarks showing what RAID 0 does to latency outliers.
  • CaedenV - Friday, May 6, 2016 - link

    As someone who has been running 2 SSDs in RAID0 for the last few years I would recommend against it. That is not to say that I have had any real issues with it, just that it is not really worth doing.
    1) Once you have a RAID array, boot takes much longer: you have to POST, initialize the RAID, and then POST again before booting. This undoes any speed benefit in start times that SSDs bring you.
    2) It adds points of failure. Having 2 drives means that things are (more or less) twice as likely to fail. SSDs are not as scary as they used to be, but it is still added risk for no real-world benefit.
    3) Very little real-world benefit. While in benchmarks you will see a bit of improvement, real-world workloads are very bursty. And the big deal with RAID on mechanical drives is the ability to queue up read requests in rapid succession to dramatically reduce seek time (or at least hide it). With SSDs there is practically no seek time to begin with, so that advantage is not needed. For read/write performance you will also see a minor increase, but typically the bottleneck will be at the CPU, GPU, or now even the bus itself.

    Sure, if you are a huge power-user that is editing multiple concurrent 4K video streams then maybe you will need that extra little bit of performance... but for most people you would never notice the difference.

    The reason I did it 4 years ago was simply a cost and space issue. I started with a 240GB SSD that cost ~$250, which was a good deal. Then when the price dropped to $200 I picked up another and put it in RAID 0 because I needed the extra space and could not afford a larger drive. Now with the price of a single 1TB drive so low, and with RAID having just as many potential issues as potential upsides, I would just stick with a single drive and be done with it.
  • Impulses - Friday, May 6, 2016 - link

    I did it with 2x 128GB 840s at one point and again last year for the same reasons, cost and space... 1TB EVO x2 (using a 256GB SM951 as OS drive). If I were to add more SSD space now I'd probably just end up with a 2TB EVO.

    Probably won't need to RAID up drives to form a single large-enough volume again in the future, this X400 is already $225 on Amazon (basically hours after the article went up with the $244 price).

    I don't even need the dual 1TB in RAID in an absolute sense, but it's more convenient to have a single large volume shared by games/recent photos than to balance two volumes.

    I don't think the downsides are a big deal, but I wouldn't do it for performance reasons either, and I backup often.
  • phoenix_rizzen - Friday, May 6, 2016 - link

    Get 4 of them and stick them into a RAID10. :)
  • Lolimaster - Friday, May 6, 2016 - link

    There's no point in doing RAID 0 with SSDs. You won't decrease latency/access times or improve random 4K reads (they will be worse most of the time).

    Sequential gains are meaningless (if they matter to you, then you should stick to a PCIe/M.2 NVMe drive).
  • Pinn - Friday, May 6, 2016 - link

    I have the Samsung NVMe M.2 512GB in the only M.2 slot and am aching to get more storage. Should I just fill one of my PCIe slots (X99)?
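As a side note on the TB-vs-TiB discussion above: the gap HollyDOL describes is easy to check. Below is a quick Python sketch (an editorial illustration, not from the thread) that reproduces the 2.4%/9.95% figures and the byte counts Billy Tallis quotes:

```python
# The gap between SI (10^3) and binary (2^10) prefixes grows with scale.
for name, power in [("kilo", 1), ("mega", 2), ("giga", 3), ("tera", 4)]:
    gap = (1024 ** power / 1000 ** power - 1) * 100
    print(f"{name}: {gap:.2f}%")
# kilo: 2.40%  mega: 4.86%  giga: 7.37%  tera: 9.95%

# The capacities quoted in the comments above:
print(f"1 TiB             = {2 ** 40:,} bytes")  # 1,099,511,627,776
print(f"X400 1TB (actual) = {1_024_209_543_168:,} bytes")
print(f"850 EVO 1TB       = {1_000_204_886_016:,} bytes")
```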
