Intel SSD DC P3700 Review: The PCIe SSD Transition Begins with NVMe
by Anand Lal Shimpi on June 3, 2014 2:00 AM EST
CPU Utilization
With the move to NVMe we not only get lower-latency IOs, we should also see lower CPU utilization thanks to the lower-overhead protocol. To quantify the effect I used Task Manager to monitor CPU utilization across all four cores of a Core i7-4770K system (with HT disabled). Note that these values don't just reflect the impact of the storage device; they also include the CPU time required to generate the 4KB random read (QD128) workload. I created four QD32 threads so that all cores are taxed and we're not limited by a single CPU core.
To really put these values in perspective, though, we need to take performance into account as well. The chart below divides total IOPS during this test by total CPU usage, giving us IOPS per % CPU usage:
Here all of the PCIe solutions do pretty well. The SATA-based S3700 is put to shame, but even the Intel SSD 910 does well here.
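The metric behind this chart is a simple division. As a minimal sketch (the numbers below are placeholders for illustration, not figures from the review):

```python
# Efficiency metric from the chart above: total IOPS divided by total CPU
# usage, giving IOPS delivered per percentage point of CPU consumed.
# The example numbers are hypothetical, not measured results.

def iops_per_cpu(total_iops: float, cpu_pct: float) -> float:
    """Return IOPS per % of total CPU utilization."""
    if cpu_pct <= 0:
        raise ValueError("CPU usage must be positive")
    return total_iops / cpu_pct

# Hypothetical drive: 400,000 IOPS while the system sits at 80% total CPU
print(iops_per_cpu(400_000, 80))  # 5000.0 IOPS per % CPU
```

A higher value means the drive (and its driver stack) delivers more I/O for each unit of CPU time burned, which is what the lower-overhead NVMe protocol is supposed to buy us.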
For the next charts I'm removing Iometer from the CPU usage calculation and instead looking at the CPU usage from the rest of the software stack:
Here the 910 looks very good: it's obviously a much older (and slower) drive, but it's remarkably CPU efficient. Micron's P420m doesn't look quite as good, and the SATA-based S3700 is far less efficient in terms of IOPS/CPU.
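Removing the workload generator from the denominator is just a subtraction before the division. A sketch, again with placeholder numbers rather than the review's measurements:

```python
# Isolate the storage stack's CPU cost by subtracting the CPU share consumed
# by the workload generator (Iometer) from the total, then dividing IOPS by
# the remainder. All numbers here are hypothetical placeholders.

def stack_iops_per_cpu(total_iops: float, total_cpu_pct: float,
                       iometer_cpu_pct: float) -> float:
    """Return IOPS per % CPU attributable to the rest of the software stack."""
    stack_cpu = total_cpu_pct - iometer_cpu_pct
    if stack_cpu <= 0:
        raise ValueError("stack CPU share must be positive")
    return total_iops / stack_cpu

# Hypothetical: 400,000 IOPS at 80% total CPU, of which Iometer used 30%
print(stack_iops_per_cpu(400_000, 80, 30))  # 8000.0 IOPS per % CPU
```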
85 Comments
457R4LDR34DKN07 - Tuesday, June 3, 2014 - link
No, they are 4x PCIe 2.5" SFF-8639 drives. Here is a good article describing the differences between SATA Express and 2.5" SFF-8639 drives: http://www.anandtech.com/show/6294/breaking-the-sa...
Qasar - Tuesday, June 3, 2014 - link
ok.. BUT.. that's not what i asked.... will this type of drive, ie the NVMe type.. be on some other type of connection besides PCIe 4x ?? as i said :depending on ones usage... finding a PCIe slot to put a drive like this in.. may not be possible, specially in SLI/Crossfire... add the possibility of a sound card or raid card..
cause one can quickly run out of PCIe slots, or have slots covered/blocked by other PCIe cards ... right now, for example. i have an Asus P6T and due to my 7970.. the 2nd PCIe 16 slot.. is unusable and the 3rd slot.. has a raid card in it.. on a newer board.. it may be different.. but still SLI/Crossfire.. can quickly cover up slots ... or block them ... hence.. will NVMe type drives also be on sata express ??
457R4LDR34DKN07 - Wednesday, June 4, 2014 - link
right and what I told you is that 2.5" SFF-8639 is also offered. You can probably plug it into a SATA Express connector but you will only realize 2x PCIe 3.0 speeds, i.e. 10Gb/s.
xdrol - Tuesday, June 3, 2014 - link
It takes 5x 200 GB drives to match the performance of a 1.6 TB drive? That does not sound THAT good... Make it 8x and it's even.
Lonyo - Tuesday, June 3, 2014 - link
Now make a motherboard with 8x PCIe slots to put those drives in.
hpvd - Tuesday, June 3, 2014 - link
sorry, only 7 :-(
http://www.supermicro.nl/products/motherboard/Xeon...
:-)
hpvd - Tuesday, June 3, 2014 - link
some technical data for the lower capacity models can be found here: http://www.intel.com/content/www/us/en/solid-state...
maybe this is interesting to be added to the article...
huge pile of sticks - Tuesday, June 3, 2014 - link
but can it run Crysis?
Homeles - Tuesday, June 3, 2014 - link
It can run 1,000 instances of Crysis. A kilocrysis, if you will.
Shadowmaster625 - Tuesday, June 3, 2014 - link
How is 200 µs considered low latency? What a joke. If Intel had any ambitions besides playing second fiddle to Apple and ARM, they would put the SSD controller on the CPU and create a DIMM-type interface for the NAND. Then they would have read latencies in the 1 to 10 µs range, and even less latency as they improve their caching techniques. It's true that you wouldn't be able to address more than a couple TB of NAND through such an interface, but it would be so blazing fast that it could be shadowed using SATA SSDs with very little perceived performance loss over the entire address space. Think big cache for NAND, call it L5 or whatnot. It would do for storage what L2 did for CPUs.