ASRock X79 Extreme11 Review: PCIe 3.0 x16/x16/x16/x16 and LSI 8-Way SAS/SATA
by Ian Cutress on September 3, 2012 10:15 AM EST
The end of summer marks the start of the X79 refresh line. By best estimates, we are coming up to the half-way point in Sandy Bridge-E's life as the top-of-the-line processor range before Ivy Bridge-E comes to market, and both chips are expected to run on X79 and its successor chipset. In the meantime, manufacturers are coming up with ways to reinvigorate their X79 line-ups. Enter ASRock, and the ASRock X79 Extreme11. This motherboard comes with two PLX PEX 8747 chips, making up to 72 PCIe lanes available. These are split into 64 for the PCIe slots (x16/x16/x16/x16 capable), with eight directed to an LSI SAS 2308 PCIe 3.0 chip, which allows RAID 0, 1 and 10 on eight SATA ports for a peak throughput of up to 4.0 GBps from eight drives in RAID-0.
ASRock X79 Extreme11 Overview
In the ASRock X79 range, we have the Extreme3, the Extreme4, the Extreme4-M, the Extreme6, the Extreme7, the Extreme9, the Game Blaster equipped Extreme6/GB, two Fatal1ty motherboards and now the Extreme11. The ASRock X79 Extreme11 is designed to stretch both the X79 platform and your wallet – this motherboard will set you back a good $600 MSRP. For the hard cash, the motherboard has two main selling points.
Firstly, we have a pair of PLX PEX 8747 chips on board, each of which translates 16 of the CPU PCIe lanes into 32 lanes for the PCIe slots (for more information on how this works, please read our PLX PEX 8747 discussion). This gives the motherboard, as a whole, 72 PCIe 3.0 lanes to play with. The 64 lanes that come from the PLX chips go directly to the PCIe slots, providing a peak x16/x16/x16/x16 mode with four GPU devices. However, the focus of this board is not GPUs for gaming, but workstations with GPU-accelerated features. With all the PCIe slots populated, we get x16/x8/x8/x8/x8/x8/x8. If you believe the leaks/news online about an upcoming single-slot GTX 670, or want to purchase several single-slot FirePro cards, the ASRock will give you all that bandwidth, as long as the user can handle the heat.
The other eight lanes are taken directly from the CPU and routed to an LSI SAS 2308 PCIe 3.0 chip. This chip supplies the board with eight SAS2 ports, which are also SATA 6 Gbps compatible. Through the chip we have access to RAID 0, 1 and 10 (but no RAID 5, unfortunately), and thanks to the PCIe bandwidth we are not limited by the chipset as on typical server chipsets such as C602, C604 or C606. In our testing this resulted in a peak speed of 4.0 GBps with eight SSDs in RAID-0, though the presumed real-world layout would be RAID-1 or RAID-10. It should be noted that there is no cache associated with the LSI chip, and thus we only see speed increases above 64KB transfers.
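As a back-of-the-envelope check on that 4.0 GBps figure, RAID-0 sequential throughput simply aggregates across member drives until the upstream link saturates. This is a sketch, not a model of the LSI firmware; the ~500 MB/s per-SSD figure and the ~7 GB/s usable PCIe 3.0 x8 uplink are assumed round numbers typical of the era:

```python
# Rough model of RAID-0 sequential throughput behind the LSI SAS 2308.
# Assumptions: ~500 MB/s per SSD (a typical 2012 SATA 6 Gbps drive),
# ~7000 MB/s of usable bandwidth on the PCIe 3.0 x8 uplink.

PER_DRIVE_MBPS = 500   # assumed per-SSD sequential throughput, MB/s
UPLINK_MBPS = 7000     # assumed usable PCIe 3.0 x8 bandwidth, MB/s

def raid0_throughput(drives: int) -> int:
    """Aggregate throughput scales with drive count until the uplink caps it."""
    return min(drives * PER_DRIVE_MBPS, UPLINK_MBPS)

for n in (1, 2, 4, 8):
    print(f"{n} drives -> {raid0_throughput(n) / 1000:.1f} GB/s")
```

Eight drives land at roughly 4.0 GB/s, matching the peak seen in testing, and well short of what the x8 uplink itself could carry.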
The focus of the X79 Extreme11 is not the gamer – for gaming, ASRock have their Fatal1ty branding. Instead, ASRock are aiming at the workstation market: those who require PCIe functionality and a SAS-enabled motherboard without having to go out and purchase a separate PCIe card. As a result, the ASRock X79 Extreme11 is also compatible with the 16xx, 26xx and some of the 46xx Xeons (link) from the launch BIOS, like the server-compatible chipsets, and supports un-buffered ECC memory with those Xeons.
Other features on board include dual Broadcom gigabit Ethernet ports which can be teamed, a Creative Sound Core3D audio solution, power delivery through dual-stack MOSFETs, and dual 8-pin CPU power connectors. As the X79 chipset does not have USB 3.0 as standard, ASRock have included Texas Instruments USB 3.0 controllers for a total of eight ports (four on the back panel, four via two onboard headers), with a two-port front USB panel included in the box. As expected with a board of this price, we also get a full complement of DIMM slots.
Performance on the X79 Extreme11 can be taken in different directions. It performs like most other X79 boards we have tested with the i7-3960X, although it does not have MultiCore Enhancement like the ASUS ROG motherboards. The throughput on the LSI chip scales well with drives, but the more drives you have, the larger the transfer size has to be for scaling. Also, in our ATTO testing the chipset SATA ports tended to have better read speeds at lower transfer sizes. Power consumption is also an issue – two PLX chips and an LSI chip add a level of power draw that depends on how the PCIe slots are populated. But for a motherboard aimed at workstation builds, power draw is not a primary concern.
When deciding whether to recommend a motherboard such as the ASRock X79 Extreme11, we come across a dilemma. For workstation builders, money can potentially be no object. In terms of CPU performance, we would get the same here as we would with the X79 Extreme4-M. The premium the user is paying for this product comes when PCIe bandwidth is the priority and the user also wants a product that can handle SAS drives.
If you remember back to the ECS X79R-AX I reviewed in January, that board had SAS ports but they were not certified for SAS function. Gigabyte has released the X79S-UP5, which is based on the C606 chipset and so also comes with SAS ports, but without any extra PCIe bandwidth. It is also worth noting that on the C606 platforms the ports are limited to SATA 3 Gbps and a peak of ~1.0 GBps (under Intel specifications). On that front, the ASRock X79 Extreme11 is preferred.
Estimates for the PLX chips put the extra BOM cost at $40-$60 apiece, and the LSI chip cost is an unknown factor – perhaps $100+. But as of now this is the only consumer-level board with the LSI 2308 controller equipped, so the premium must be paid. Purely as a workstation board, I would happily recommend the ASRock X79 Extreme11. As a board for regular users or gamers, however, the cost could be a little prohibitive compared to what else is available in the market.
The X79 Extreme11, due to the extras on board, comes in at the loosely defined E-ATX form factor. In the case of the ASRock, this means an extra inch on the right hand side of the board. This also pushes the board away from the case mounting holes to make room for more ports on the edge of the board, as seen with the SATA ports below. The main feature that sticks out at first glance, however, is that all the PCIe slots are full length, next to a rather large chipset heatsink (which also hides the two PLX chips and the LSI chip).
The socket area provides space for air coolers when only four RAM slots are populated. Given that this is aimed at the workstation crowd, it is more than likely that all the memory slots are occupied, thus an all-in-one liquid cooler is a good suggestion here. The board in total has six fan headers, four of which are reachable from the CPU socket – a 3-pin PWR header to the top left of the socket, two CPU headers (one 4-pin, one 3-pin) above the socket, and a 3-pin chassis header below the 24-pin ATX power connector and USB 3.0 headers. The other two are 4-pin and 3-pin chassis headers found at the bottom of the board.
The heatsink above the socket covers the power delivery for the motherboard, which ASRock state is a 24-phase (6x2x2 multiplexing) system. It is connected via a heatpipe to the chipset heatsink at the bottom of the board, which is designed large and flat, with lots of fins and an additional fan to aid cooling. Our estimate is that this bottom heatsink has to deal with 35W+ of dissipation (chipset + PLX + PLX + LSI), so the fan is a welcome addition to disperse the heat. However, in our testing with dual GPUs and above, this chipset fan ran at a very high speed, which was definitely audible at idle as a high-pitched buzzing on the test bed. This could be mildly irritating, or a non-issue once inside a case.
Along the right hand side of the board, we find the 24-pin ATX power connector, two USB 3.0 headers (powered by a Texas Instruments controller), a 3-pin fan header, and the SATA ports. ASRock have decided to use all the SATA ports from the chipset, and we also get eight from the LSI chip. So from top to bottom we have two SATA 6 Gbps in grey (chipset), four SATA 3 Gbps in black (chipset) and then we get the eight SAS2/SATA 6 Gbps ports in grey. All the ports on board are capable of RAID 0, 1 and 10, with the chipset ports also supporting RAID 5 via chipset specifications. Due to the routing we get some interesting results from this LSI SAS chip – all of which will be discussed in the review.
Along the bottom of the board we get our HD Audio header, a ClearCMOS header, a molex power connector (for PCIe), the front panel header, a 3-pin fan header, three USB 2.0 headers, a 4-pin fan header, a two-digit debug LED and power/reset buttons. The molex power connector is obviously the most interesting here, as ASRock typically put a molex connector above the PCIe slots to provide extra power. On this board they have used both, as seven PCIe slots potentially drawing 525W (seven x 75W) cannot all be fed through the ATX power connector. Personally, if ASRock had a choice between a top- or bottom-mounted molex connector, I would prefer the bottom, such that cable management is easier.
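The worst-case slot budget is simple arithmetic: the PCIe electromechanical specification allows a full-length slot to supply up to 75 W from the connector, so seven populated slots could in principle demand:

```python
# Worst-case PCIe slot power budget for the Extreme11's seven slots.
# 75 W is the per-slot connector limit from the PCIe CEM specification.
SLOT_POWER_W = 75
SLOTS = 7

total = SLOTS * SLOT_POWER_W
print(f"{SLOTS} slots x {SLOT_POWER_W} W = {total} W")
```

In practice GPUs pull most of their power through their own 6/8-pin connectors, but the slots alone can legitimately ask for more than the 24-pin ATX connector is comfortable delivering, hence the two auxiliary molex connectors.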
The PCIe slots are simple to understand – each slot is designated either a primary slot (1,3,5,7) or a secondary slot (2,4,6). The primary slots are all x16 routed, via the PLX chips. The secondary slots are all x8 routed, and when occupied drop the speed of the primary slot underneath to x8 as well. Here is a crudely drawn chipset diagram to explain:
The forty lanes from the CPU are split into two lots of 16 for the PLX chips, with eight lanes going to the LSI chip. The PLX chips each take their 16 upstream lanes and produce 32 lanes downstream. These 32 lanes are directed into two sets of 16, each aimed at a primary PCIe slot. The secondary slots that are linked to primary slots siphon off eight lanes (technically this is a point-to-point technology) when they are in use. This gives x16/-/x16/-/x16/-/x16 or x16/x8/x8/x8/x8/x8/x8.
Although Sandy Bridge-E processors are not validated for PCIe 3.0 by specification, they do have the internals to run PCIe 3.0. The PLX chips are also PCIe 3.0, and thus PCIe 3.0 can be enabled for all the slots in the BIOS if required. In our testing, PCIe 3.0 did almost nothing for gaming – we saw minor increases of 0.5%-4% in Dirt 3 compared to PCIe 2.0, and anywhere from a 0.3% loss to a 1.9% gain in Metro 2033 (both games at 2560x1440 with all the eye candy enabled, using 7970s).
Also on board we get a Creative Core3D SoundBlaster chip for audio, which uses its own electromagnetic shielding to help improve audio signal transmission. Other manufacturers also specify whether their motherboards use a separate PCB layer for audio – no such mention is provided by ASRock here.
On the rear IO panel, ASRock have equipped the X79 Extreme11 with a PS/2 keyboard port, eight USB 2.0 ports (black), a ClearCMOS button, dual gigabit Ethernet via Broadcom controllers, an IEEE 1394 port, four USB 3.0 ports (TI controller), two eSATA ports, an optical S/PDIF output and audio jacks. ASRock have potentially missed a trick here – the two USB 2.0 ports to the right of the ClearCMOS button could easily have been a four-port USB 2.0 stack, thereby removing the need for one USB 2.0 header at the bottom of the board. Hopefully the additional routing would not have been too difficult.
|ASRock X79 Extreme11|
|Price||Link to Newegg|
|CPU||Intel Second Generation Core i7 Sandy Bridge-E|
|Memory||Eight DDR3 DIMM slots supporting up to 64 GB, Quad Channel DDR3 1066-2400 MHz, ECC memory with Xeon processors|
|Onboard LAN||Dual Broadcom BCM57781|
|Onboard Audio||Creative Sound Core3D|
|Expansion Slots||1x PCIe 3.0 x16, 3x PCIe 3.0 x16 (x8 when slots above are populated), 3x PCIe 3.0 x8|
|SATA/RAID||2x SATA 6 Gbps (Intel), 4x SATA 3 Gbps (Intel), 8x SAS2/SATA 6 Gbps (LSI SAS 2308)|
|USB||14x USB 2.0 (8 rear panel, 6 onboard), 8x USB 3.0 (4 rear panel, 4 onboard)|
|Onboard||2x SATA 6 Gbps ports, 4x SATA 3 Gbps ports, 8x SAS2/SATA 6 Gbps ports, 2x USB 3.0 headers (TI), 3x USB 2.0 headers, 6x fan headers, 1x IEEE 1394 header, two-digit debug LED, power/reset buttons|
|Power Connectors||1x 24-pin ATX, 2x 8-pin CPU, 2x 4-pin molex|
|Fan Headers||2x CPU (one 4-pin, one 3-pin), 1x PWR (4-pin), 3x CHA (one 4-pin, two 3-pin)|
|Rear IO||1x PS/2 keyboard port, 1x optical SPDIF output, 8x USB 2.0 ports, 4x USB 3.0 ports (TI controller), 2x Broadcom GbE, 1x ClearCMOS switch|
|Warranty Period||3 Years|
With a motherboard costing $600 MSRP, we should try to assign where that money is going. The two PLX chips and the LSI chip combined could account for as much as $300. Aside from these extras, the dual NIC that can be teamed (combined with XFast LAN) is a good addition for a workstation board, and the upgraded audio could be geared towards more audio-based applications, where GPU acceleration may also help with encoding. As always, we like the inclusion of the power/reset buttons and the two-digit debug LED.
Azethoth - Monday, September 3, 2012 - link"a SAS". "an" is for words starting with vowels like "an error", "a" is for words starting with consonants like "a Serial Attached SCSI" or "a Storage Area Network" or "a SAS"*. It rolls off the tongue better when you don't have adjacent vowels.
*Your particular English implementation may have different rules, these were the ones I grew up with. I find them simple and easy to apply.
lukarak - Tuesday, September 4, 2012 - linkThat's not entirely true.
It would be an 'a' if you read it as 'a sas'. But with SAS, we usually pronounce it as S A S, and then it goes with 'an'.
ahar - Tuesday, September 4, 2012 - linkWho's "we"? It doesn't include me. Why use three syllables when one will do?
Do you also talk about R A M, or R A I D arrays, or an L A N?
Death666Angel - Tuesday, September 4, 2012 - linkLike lukarak said, that is not true. The English language uses "an", when the word following it starts with a vowel sound. That doesn't necessarily mean it has a vowel as the first character (see hour).
As for abbreviations, there is no rule for it. Some people pronounce them like a single word, others don't. I use LAN, RAM, RAID as a word, but pronounce SAS as S.A.S. and SATA as S.ATA for example and SNES as S.NES. You can't appease both groups. So I think the writer of the article should go with whatever he feels most comfortable with, so that he avoids flipping between things unconsciously.
Death666Angel - Monday, September 3, 2012 - link"If you believe the leaks/news online about an upcoming single slot GTX670, or want to purchase several single slot FirePro cards, then the ASRock will give you all that bandwidth as long as the user handles the heat."
I'd probably slap some water coolers on there. Insane setup :D.
tynopik - Monday, September 3, 2012 - linkIs it even confirmed that this Ivy Bridge-E is coming out?
ypsylon - Tuesday, September 4, 2012 - linkBut little is delivered.
1. Primitive RAID option. Without even small cache it is as useful as Intel Storage Matrix RAID. Of course for R 1/10 parity calculations are not required so lack of XOR chip isn't an issue, but believe me even 128 MB of cache would improve performance greatly.
2. They bolted 8 SATA/SAS ports to the board instead of using the standard server-oriented SFF-8087 connector. You get one cable running 4 drives, not 4 separate cables for each separate drive. Very clumsy solution. And very, very cheap. Exactly what I expect of ASR.
3. If someone wants RAID buy a proper hardware controller, even for simple setups of R1/10 - plenty of choice on the market. When you change the board in the future you just unplug controller from old board and plug it into new one. No configuration is needed, all arrays remain the same. Idea of running RAID off the motherboard is truly hilarious, especially if somebody change boards every year or two.
4. Fan on south bridge (or the only bridge as north bridge is in the CPU now? ;) ). Have mercy!
5. They pretend it is WS oriented board yet they equip it with lame Broadcom NICs. Completely clueless, that kind of inept reasoning is really typical of ASRock.
6.And finally why persist with ATX. At least E-ATX would be better choice. Spacing some elements wouldn't hurt. Especially with 7 full PCI-Ex slots. Impossible to replace RAM when top slot is occupied, and with really big VGAs it really is tight squeeze between CPU, RAM and VGA. Why not drop top slot to allow air to circulate. Without proper cooling in the case there will be a pocket of hot air which will never move.
To sum up. Bloody expensive, dumb implementation of certain things, and cheaply made. Like 99% of ASRock products. Cheap Chinese fake dressed like Rolls-Royce. In short:stay away.
dgingeri - Tuesday, September 4, 2012 - link1. Many server manufacturers equip their small business servers with a low end chip like that because of cost. Small businesses, like those who would build their own workstation class machines, have to deal with a limited budget. This works for this market space.
2. I don't see any sign of a SFF-8087 port or cable. I see only SATA ports. Honestly, I would have preferred a SFF-8087 port/cable, as my Dell H200 in my Poweredge T110 II uses. It would take up less real estate on the board and be more manageable. I know this from experience.
3. Yeah, the Dell H200 (or its replacement H310) has plenty of ports (8) and runs <$200, yet any hardware RAID controller with a cache would run $400 for 4 ports or about $600 for 8. (I have a 3ware 9750 in my main machine that ran me $600.) Depending on your target market, cost could matter. They get what they can with the budget they have.
4. I'd have to agree with you on the fan, but there's also the little matter of keeping clearance for the video cards to populate the slots. Take off the decorative plate and make the heatsink bigger, and they could probably do without the fan. Unfortunately, there are lots of stupid people out there who buy things on looks rather than capability.
5. Broadcom NICs are vastly superior to the Realtek or Atheros NICs we usually see on DIY boards. I would be happier to see Intel NICs, but Broadcom is still the second best on the market. I have 2 dual port Broadcom NICs in my Dell T110 II machine (which I use as a VMWare ESXI box to train up for certification and my home server.)They work quite well, as long as you don't use link aggregation.
6. Many people wouldn't be able to afford a case that would handle E-ATX, especially the target market for this board.
For the target market, DIY part time IT guy for a small business trying to make a decent CAD station or graphics workstation, it would work fairly well. I'm just not sure about the reliability factor, which would cost a small business big time. I'd say stay away just on that factor. Do with a little less speed and more reliability if you want to stay in business. Dell makes some nice IB workstations that would be perfectly reliable, but wouldn't be as speedy as a SB-E machine.
08solsticegxp - Sunday, June 9, 2013 - linkYou have to realize, this board is not a server board. If it was designed for that, I'm sure they would have two sockets. Also, it is much cheaper to add the LSI chip to the board than have it as an add-on card. If it was an add-on card... where do you expect it to go when using 4 Video cards?
I think the board is designed very well for what it was intended for. You may want to consider looking at design as it relates to the intended purpose... Not, some other purpose.
I will agree to say I would have liked to see a Raid 5 option on the RAID controller. However, looking at the price of an LSI (who are noted for being a high quality RAID controller) it is pretty pricey when you start getting to the controllers that have RAID 5 as an option.