Enterprise Storage Bench - Microsoft SQL UpdateDailyStats

Our next two tests are taken from our own internal infrastructure. We do a lot of statistics tracking at AnandTech - we record traffic data for all articles as well as aggregate traffic for the entire site (including forums) on a daily basis, and we keep a running total of traffic for the month. Our first benchmark is a trace of the MS SQL process that does all of the daily and monthly stats processing for the site. We run this process once a day as it puts a fairly high load on our DB server. Then again, we don't have a beefy SSD array in there yet :)

The UpdateDailyStats procedure is mostly reads (a 3:1 ratio of GB read to GB written) with 431K read operations and 179K write operations. Average queue depth is 4.2, and only 34% of all IOs are issued at a queue depth of 1. The transfer size breakdown is as follows (a sketch of how a trace reduces to these numbers appears after the table):

AnandTech Enterprise Storage Bench MS SQL UpdateDailyStats IO Breakdown

IO Size    % of Total
8KB        21%
64KB       35%
128KB      35%
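
For those curious about the methodology, here's a minimal Python sketch of how a trace like this could be reduced to the statistics above. This isn't our actual tooling; the CSV trace format and column names (op, size_bytes, queue_depth) are hypothetical.

    # A minimal sketch of reducing an IO trace to the stats above -- not
    # the tooling used for this article. Assumes a hypothetical CSV trace
    # with columns: op ("read"/"write"), size_bytes, queue_depth.
    import csv
    from collections import Counter

    def summarize_trace(path):
        ops = Counter()      # read/write operation counts
        nbytes = Counter()   # read/write byte totals
        sizes = Counter()    # transfer-size histogram
        qd_sum = qd1 = total = 0

        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                op = row["op"]
                size = int(row["size_bytes"])
                qd = int(row["queue_depth"])
                ops[op] += 1
                nbytes[op] += size
                sizes[size] += 1
                qd_sum += qd
                qd1 += 1 if qd == 1 else 0
                total += 1

        print(f"{ops['read'] / 1000:.0f}K reads, {ops['write'] / 1000:.0f}K writes")
        print(f"GB read:write ratio: {nbytes['read'] / nbytes['write']:.1f}:1")
        print(f"average queue depth: {qd_sum / total:.1f}")
        print(f"IOs issued at QD1: {100 * qd1 / total:.0f}%")
        for size, count in sizes.most_common(3):
            print(f"{size // 1024}KB: {100 * count / total:.0f}% of IOs")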

[Chart: Microsoft SQL UpdateDailyStats - Average Data Rate]

Our SQL tests are much more dependent on sequential throughput, so we see some impressive gains from moving to a 6Gbps SATA interface. Among the 3Gbps results the Intel SSD 520 is now the top performer, followed once again by the X25-E. To be honest, most of these drives perform about the same as they bump into the limits of 3Gbps SATA.
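
The 3Gbps ceiling is easy to derive: SATA's 8b/10b encoding means ten line bits carry one payload byte, so the rough arithmetic looks like this.

    # Back-of-the-envelope ceiling for a 3Gbps SATA link. 8b/10b encoding
    # means 10 line bits per payload byte; framing and command overhead in
    # practice pull real transfers down to roughly 260-285 MB/s.
    line_rate_bps = 3_000_000_000
    payload_mb_per_sec = line_rate_bps / 10 / 1e6
    print(payload_mb_per_sec)  # 300.0 MB/s theoretical payload maximum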

[Chart: Microsoft SQL UpdateDailyStats - Disk Busy Time]

[Chart: Microsoft SQL UpdateDailyStats - Average Service Time]

Once again we see a huge reduction in service time from the Intel SSD 520 running on a 6Gbps interface. Even on a 3Gbps interface the 520 takes the lead, while the bulk of the 3Gbps drives cluster together around 14.4ms. Note the tangible difference in performance between the 300GB and 160GB Intel SSD 320. The gap isn't purely because of additional NAND parallelism; the 300GB drive ends up with more effective spare area since the workload size doesn't scale up with drive capacity. What you're looking at here is the impact of spare area on performance.
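
To illustrate the spare area point with assumed numbers: with a fixed-size workload, the larger drive keeps proportionally more NAND free to use as spare area. The 120GB workload footprint below is hypothetical, not a measured figure from our trace.

    # Illustrative arithmetic only -- the 120GB workload footprint is an
    # assumption for the example, not a measured figure.
    def effective_spare(drive_gb, workload_gb):
        """Fraction of the drive left over as effective spare area."""
        return (drive_gb - workload_gb) / drive_gb

    for capacity_gb in (160, 300):
        print(f"{capacity_gb}GB drive: {effective_spare(capacity_gb, 120):.0%} spare")
    # 160GB drive: 25% spare
    # 300GB drive: 60% spare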

Comments

  • ssj4Gogeta - Thursday, February 9, 2012 - link

    I think what you're forgetting here is that the 90% or 100% figures are _including_ the extra work that an SSD has to do for writing on already used blocks. That doesn't mean the data is incompressible; it means it's quite compressible.
    For example, if the SF drive compresses the data to 0.3x its original size, then including all the extra work that has to be done, the final value comes out to be 0.9x. The other drives would directly write the data and have an amplification of 3x.
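
The arithmetic in that comment generalizes neatly: effective write amplification is the compression ratio multiplied by the amplification the controller would otherwise incur. A quick check of the commenter's hypothetical figures:

    # Worked version of the comment's hypothetical figures (0.3x
    # compression, 3x baseline write amplification) -- illustrative only.
    intrinsic_wa = 3.0        # amplification without compression
    compression_ratio = 0.3   # data shrinks to 0.3x its original size
    print(compression_ratio * intrinsic_wa)  # 0.9 -- the comment's 0.9x
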
  • jwilliams4200 - Thursday, February 9, 2012 - link

    No, not at all. The other SSDs have a WA of about 1.1 when writing the same data.
  • Anand Lal Shimpi - Thursday, February 9, 2012 - link

    Haha yes I do :) These SSDs were all deployed in actual systems, replacing other SSDs or hard drives. At the end of the study we looked at write amplification. The shortest use case was around 2 months I believe and the longest was 8 months of use.

    This wasn't simulated, these were actual primary use systems that we monitored over months.

    Take care,
    Anand
  • Ryan Smith - Thursday, February 9, 2012 - link

    Indeed. I was the "winner" with the highest write amplification due to the fact that I had large compressed archives regularly residing on my Vertex 2, and even then as Anand notes the write amplification was below 1.0.
  • jwilliams4200 - Thursday, February 9, 2012 - link

    And still you dodge my question.

    If the Sandforce controller can achieve decent compression, why did it not do better than the Intel 320 in the endurance test in this article?

    I think the answer is that your "8 month study" is invalid.
  • Anand Lal Shimpi - Thursday, February 9, 2012 - link

    SandForce can achieve decent compression, but not across all workloads. Our study was limited to client workloads as these were all primary use desktops/notebooks. The benchmarks here were derived from enterprise workloads and some tasks on our own servers.

    It's all workload dependent, but to say that SandForce is incapable of low write amplification in any environment is incorrect.

    Take care,
    Anand
  • jwilliams4200 - Friday, February 10, 2012 - link

    If we look at the three "workloads" discussed in this thread:

    (1) anandtech "enterprise workload"

    (2) xtremesystems.org client-workload obtained by using data actually found on user drives and writing it (mostly sequential) to a Sandforce 2281 SSD

    (3) anandtech "8 month" client study

    we find that two out of three show that Sandforce cannot achieve decent compression on realistic data.

    I think you should repeat your "client workload" tests and be more careful with tracking exactly what is being written. I suspect there was a flaw in your study. Either benchmarks were run that you were not aware of, or else it could be something like frequent hibernation where a lot of empty RAM is being dumped to SSD. I can believe Sandforce can achieve a decent compression ratio on unused RAM! :)
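
The closing point about empty RAM is easy to demonstrate: zero-filled pages compress to almost nothing, while random data doesn't compress at all. Here zlib stands in for SandForce's internal compression, which is undocumented; only the contrast between inputs matters.

    # zlib is a stand-in for SandForce's undocumented compression.
    import os
    import zlib

    zero_page = bytes(1 << 20)         # 1MB of zeros, like unused RAM
    random_page = os.urandom(1 << 20)  # 1MB of incompressible data

    print(len(zlib.compress(zero_page)) / len(zero_page))      # ~0.001
    print(len(zlib.compress(random_page)) / len(random_page))  # ~1.0003
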
  • RGrizzzz - Wednesday, February 8, 2012 - link

    What the heck is your site doing where you're writing that much data? Does that include the Anandtech forums, or just Anandtech.com?
  • extide - Wednesday, February 8, 2012 - link

    Probably logs requests and browser info and whatnot.
  • Stuka87 - Wednesday, February 8, 2012 - link

    That most likely includes the CMS and a large amount of the content, the Ad system, our users' accounts for commenting here, all the Bench data, etc.

    The forums would use their own vBulletin database, but most likely run on the same servers.
