Data storage requirements have grown exponentially over the last several years. Both cloud and local storage needs continue to be served by hard drives wherever workloads are largely sequential or not performance-sensitive. While the advances in storage capacity have primarily served the interests of datacenters (enabling more storage per rack), the products have trickled down to consumers in the form of drives for NAS (network-attached storage) units and drives pre-installed in external / DAS (direct-attached storage) enclosures. Seagate is the only one of the three hard drive vendors to target the desktop storage market with its highest-capacity drives. We looked at the 10TB BarraCuda Pro drive last year, and the 12TB follow-up was launched last month.


The Seagate BarraCuda Pro 12TB is a 7200 RPM SATA III (6 Gbps) hard drive with a 256MB multi-segmented DRAM cache. It features eight PMR platters at a 923 Gb/in² areal density in a sealed, helium-filled enclosure. According to Seagate, it typically draws around 7.8W, making it one of the most power-efficient high-capacity 3.5" hard drives on the market. It targets creative professionals with high-performance desktops, home servers, and/or direct-attached storage units. It is rated for 24x7 usage (unlike traditional desktop-class hard drives) and carries a workload rating of 300TB/year, backed by a 5-year warranty. It also comes with a bundled data-recovery service (available for 2 years from the date of purchase). The various aspects of the drive are summarized in the table below.

Seagate BarraCuda Pro 12TB Specifications
Model Number ST12000DM0007
Interface SATA 6 Gbps
Sector Size / AF 4096
Rotational Speed 7200 RPM
Cache 256 MB (Multi-segmented)
Rated Load / Unload Cycles 300 K
Non-Recoverable Read Errors / Bits Read < 1 in 10^15
MTBF 1M hours
Rated Workload ~ 300 TB/yr
Operating Temperature Range 0 to 60 °C
Physical Parameters 14.7 x 10.19 x 2.61 cm; 705 g
Warranty 5 years
Street Price (in USD, as-on-date) $500

Note that the weight has increased compared to the 10TB drive introduced last year. While the 10TB version had seven platters, the 12TB one bumps it up to eight.

A high-level overview of the various supported SATA features is provided by HD Tune Pro, and shows support for common features such as NCQ (Native Command Queuing).

The main focus of our evaluation is the performance of the HDD as an internal disk drive in a PC. The other suggested use-case for the BarraCuda Pro is in direct-attached storage devices. The evaluation in these two modes was done with the help of our direct-attached storage testbed.

The internal drive scenario was tested by connecting the drive to one of the SATA ports off the PCH, while the Akitio Thunder3 Duo Pro was used to evaluate performance as a DAS. The Thunder3 Duo Pro was connected to one of our testbed's Thunderbolt 3 Type-C ports. The controller itself connects to the Z170 PCH via a PCIe 3.0 x4 link.

AnandTech DAS Testbed Configuration
Motherboard GIGABYTE Z170X-UD5 TH ATX
CPU Intel Core i5-6600K
Memory G.Skill Ripjaws 4 F4-2133C15-8GRR
32 GB ( 4x 8GB)
DDR4-2133 @ 15-15-15-35
OS Drive Samsung SM951 MZVPV256 NVMe 256 GB
SATA Devices Corsair Neutron XT SSD 480 GB
Intel SSD 730 Series 480 GB
Add-on Card None
Chassis Cooler Master HAF XB EVO
PSU Cooler Master V750 750 W
OS Windows 10 Pro x64
Thanks to Cooler Master, GIGABYTE, G.Skill and Intel for the build components

The full details of the reasoning behind choosing the above build components can be found here.


Performance - Internal Storage Mode

  • rtho782 - Wednesday, November 15, 2017 - link

    I'd rather lose a 12TB Plex library than a 100kB bitcoin wallet with 10 bitcoins in it.

    The size of the data isn't really relevant.
  • BurntMyBacon - Wednesday, November 15, 2017 - link

    @Glock24: "Who wants to lose 12TB of data? Yeah, not me."

    You will only lose as much data as you have stored on the drive. If you only have 3TB of data, then it doesn't matter whether it's a 12TB drive or a 6TB drive (assuming the same failure rate). If you do have 12TB of data, then you'll need several smaller drives to hold it (2x6TB, 3x4TB, etc.). That presents a trade-off for data protection. While a single catastrophic (total) drive failure won't take all your data with it, you've massively increased the probability of a catastrophic drive failure taking place. Then there's the fact that not all your data is of equal value. If Murphy has anything to say about it, it will be your most valuable data that gets lost. So all going with smaller drives really does is reduce the severity of a data loss (due to total drive failure) at the expense of increasing the certainty of data loss (and that data possibly being your most valuable data).

    So, as kingpotnoodle said, have a backup plan in place. Redundancy via RAID1 (or any RAID level other than 0) is good practice for data protection. Also, if you are so inclined, you can use a file system that has built-in redundancy features (e.g. ZFS) and store two or more copies of files on different parts of the drive. This reduces the amount of data that can be stored on the drive, but significantly increases resilience against failures that aren't total drive failures. It also makes data recovery more likely in the case of a total drive failure.

    In short, a 12TB drive can be both less likely to lose data and have no more data to lose than a 4TB drive if you set it up that way (ZFS or similar with triple redundancy at the file system level). Of course, this comes at the expense of cash, just like any other data redundancy solution (e.g. RAID1), so choose your methods wisely.
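The trade-off described in the comment above can be sketched numerically. Assuming drives fail independently with the same annualized failure rate (the 2% AFR below is an illustrative assumption, not a measured figure), spreading the same data across more drives raises the chance that at least one of them fails:

```python
# Hypothetical annualized failure rate (AFR); real-world values vary by model.
AFR = 0.02  # assume a 2% chance that a given drive fails within a year

def p_any_failure(n_drives, afr=AFR):
    """Probability that at least one of n independent drives fails."""
    return 1 - (1 - afr) ** n_drives

# One 12TB drive vs. three 4TB drives holding the same 12TB of data
print(f"1x12TB: {p_any_failure(1):.4f}")  # 0.0200
print(f"3x4TB:  {p_any_failure(3):.4f}")  # 0.0588
```

The severity side of the trade-off (how much data each failure takes with it) moves the other way, which is the comment's point.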
  • Arbie - Wednesday, November 15, 2017 - link

    I suppose it can be sussed out of the performance data, but... can you please say if this drive is shingle technology or not? With any Seagate drive that's one of my first questions, and they seem to have stopped identifying it in the literature.
  • ganeshts - Wednesday, November 15, 2017 - link

    I already clarified in the introductory text with an edit, and also in a comment below - these are NOT shingled drives, but PMR platters in a sealed helium-filled enclosure.
  • Fallen Kell - Wednesday, November 15, 2017 - link

    Exactly. Now, one thing that isn't being mentioned, and that is very important as these bigger and bigger hard drives come into use in RAID systems, is the time to rebuild and the single read failure rates. A 12TB drive in full use on a RAID 5 system will take over 18 hours just to read the other disks inside the RAID group; factor in 14 hours to write the parity data to the new disk and a 10% overhead for calculating the parity, and you are looking at around 36 hours to rebuild from a failed disk, assuming no other activity is happening on the RAID set. If during those 36 hours a single read failure occurs (on a RAID 5), you have just lost all your data.

    This is why, as has been stated, things like RAID 6 have been developed. But we are now pushing the boundaries of what RAID 6 can protect against, and really need to be using RAID 5+1 or similar, which costs double the number of hard drives to implement.
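The rebuild arithmetic in the comment above can be put into a rough formula. The 180 MB/s sustained throughput and 10% parity overhead below are assumptions for illustration; actual figures depend on the controller and concurrent workload:

```python
import math

TB = 1e12  # bytes per terabyte (decimal, as drive vendors count)

def rebuild_hours(capacity_tb, mb_per_s=180, overhead=1.10):
    """Hours to stream one full drive at a sustained rate, plus a fudge
    factor for parity computation (both figures are assumptions)."""
    seconds = (capacity_tb * TB) / (mb_per_s * 1e6)
    return seconds * overhead / 3600

def p_ure_during_rebuild(data_read_tb, ber=1e-15):
    """Probability of at least one unrecoverable read error (URE) while
    reading data_read_tb terabytes at the rated 1-in-10^15 bit error rate."""
    bits = data_read_tb * TB * 8
    # 1 - (1 - ber)^bits, computed stably for tiny ber and huge bit counts
    return -math.expm1(bits * math.log1p(-ber))

# 4x12TB RAID 5: a rebuild must read the three surviving drives (36TB)
print(f"{rebuild_hours(12):.1f} hours to rewrite the replacement drive")
print(f"{p_ure_during_rebuild(36):.2f} probability of a URE mid-rebuild")
```

Under these assumptions the URE probability per rebuild comes out at roughly one in four, which is why RAID 6 or per-file redundancy gets recommended at these capacities; how catastrophic a single URE actually is depends on the interpretation discussed further down the thread.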
  • wumpus - Thursday, November 16, 2017 - link

    These issues have mostly been proven to be overkill, but I doubt I'd trust even Seagates in RAID 6 (and then some). [Having two arrays of RAID 5 means your software is a kludge. That's just fundamentally stupid, and you should really be looking into some sort of Reed-Solomon based system with many ECC drives. But unfortunately "known good RAID 5" beats "insufficiently tested Reed-Solomon encoding", so I understand how it gets used. Doesn't make it any less of a kludge.]

    Also remember that a bit error rate of 1 in 10^15 doesn't mean "expect 1 bit error every 10^15 bits read" but really "expect an aligned 4KB block of garbage every 8*4096*10^15 bits", so the calculations are a bit different. The internals of hard drives mean either the whole sector is good or it is entirely garbage; you don't get individual bit errors.

    And if "you just lost all your data" really happens, you have a pretty strange dataset that can't take a single aligned 4k group of garbage (most filesystems store multiple copies of critical data, so that wouldn't be an issue).

    Even if you did, you would just break out the tapes and reload (which unfortunately is much, much longer than 36 hours). When *arrays* of 12TB make sense, you are definitely in tape backup land. Hopefully you have a filesystem/backup system that can tag the error to the file (presumably to the RAID sector size) and simply reload the failed RAID sector from tape (because otherwise you will be down for weeks).
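The sector-level reading of the error-rate spec in the comment above can be expressed the same way. This follows the commenter's interpretation (the rated bit error rate counts the bits inside whole lost 4KiB sectors); the datasheet itself does not spell out which model applies:

```python
SECTOR_BITS = 4096 * 8  # 32,768 bits in one aligned 4KiB physical sector

def expected_error_events(tb_read, ber=1e-15):
    """Expected number of whole-sector error events after reading tb_read
    terabytes, if each error event trashes one aligned 4KiB sector."""
    bits_read = tb_read * 1e12 * 8
    return bits_read * ber / SECTOR_BITS

# Reading a 12TB drive end to end under this model:
print(expected_error_events(12))  # on the order of 3e-6 events
```

Under this interpretation, error events during a full-drive read are vanishingly rare, which is the comment's basis for calling the RAID 5 doomsday math overkill.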
  • GreenReaper - Sunday, September 2, 2018 - link

    I think your sums are a little off - it doesn't have to be a serial operation. A good RAID solution will rebuild by reading and writing at the same time. However, I/O contention on reads *can* kill a rebuild, and this can easily turn an operation which "should" take a day into a week-long saga.
  • bigboxes - Friday, November 17, 2017 - link

    One more time... RAID is not backup. Doesn't matter if you have the drive mirrored. If a file gets corrupted/deleted on one drive, then you have the same issue on the mirrored drive.
  • Pinn - Thursday, November 16, 2017 - link

    Store less porn, Glock24.
  • Samus - Thursday, November 16, 2017 - link

    I'd put my family in a Ford Pinto before I put 12TB of data on a Seagate.
