For a number of months I have been wondering when 10GBase-T would be getting some prime time in the consumer market. Aside from add-in cards, there was no onboard solution, until ASRock announced the X99 WS-E/10G. We were lucky enough to get one in for review.

10GBase-T is something of an odd standard. Based on the familiar RJ-45 connector, it pushes regular wired networking forward in terms of performance and capability. The controllers required for it are expensive, as the situations that normally demand this bandwidth tend to use different standards that afford other benefits, such as lower power, lower heat generation and more efficient signaling. Put bluntly, 10GBase-T is hot, power hungry and expensive, but ultimately the easiest to integrate into a home, small office or prosumer environment. Users looking into 10GBase-T calculate cost in hundreds of dollars per port rather than pennies: the cheapest unmanaged switches cost around $800, and a standard two-port X540-T2 PCIe 2.0 x8 card can cost $400-800 depending on your location, meaning a minimum of $2000 for a three-system setup (three cards plus the switch).

The benefits of 10GBase-T outside the data center sound somewhat limited. It doesn't increase your internet performance, as that is determined by the line outside the building. For a home network, its best use is computer-to-computer data transfer. A prosumer environment might have a server or workstation farm for large dataset analysis where gigabit Ethernet just isn't enough, while the most likely home scenario is streaming lossless 4K content to several devices at once. For most users this sounds almost mythical, but for a select few it is a reality, or at least something near it; some users already team individual gigabit ports for similar connectivity.

Moving the 10GBase-T controller and ports onto the motherboard ultimately frees up PCIe slots for other devices and makes integration easier, although you lose the ability to transfer the card to another machine if needed. The X540-BT2 used in the X99 WS-E/10G takes eight PCIe 3.0 lanes from a 40-lane CPU, but can also work with four lanes via the 28-lane i7-5820K if required. Putting the controller on the motherboard also helps with pricing, providing an integrated system and hopefully shaving $100 or so from the final cost. That being said, as the controller ends up in the high-end model, it is aimed at users for whom hardware cost is a minimal part of their prosumer activities, where an overclocked i7-5960X system with 4+ PCIe devices is par for the course.

ASRock X99 WS-E/10G Overview

In an ideal testing scenario, we would test motherboards the same way we test medicine: with a double-blind randomized trial. There would be no markings to give away who made the device, and no indication of its identity during testing either. With CPUs this is relatively easy if someone else sets up the system; with motherboards it is almost impossible, because the ecosystem of motherboard design directly shapes expectations and use models. Part of the benefit of a system is how it is presented, as well as the ease of use of its software, to the point where manufacturers will spend time and resources developing extra tools. Providing the tools is easy enough, but developing them into an experience is the important aspect. So when ASRock presents a motherboard with 10GBase-T, the key points here are '10GBase-T functionality' coming from 'ASRock'.

Due to the cost of the 10GBase-T controller, the Intel X540-BT2, ASRock understandably went high-end for its first implementation. This means a full PCIe 3.0 x16/x16/x16/x16 layout thanks to two PLX 8747 chips that act as FIFO buffers/muxes to increase the lane count. For those new to PLX 8747 chips, we went in-depth on their function when they were first released, which you can read here. These PLX chips are also quite expensive, adding at least $40 each to the cost of the board for the consumer, but they allow ASRock to implement top inter-GPU bandwidth. From the 40 PCIe lanes of an LGA2011-3 CPU, eight go to the X540-BT2 and 16 go to each PLX chip, which in turn outputs 32. For users wanting to go all-out with single-slot PCIe co-processors, the X99 WS-E/10G allows an x16/x8/x8/x8/x8/x8/x8 arrangement.

If the WS in the name was not a giveaway, the cost of these extra controllers shows that ASRock is aiming at the single-socket workstation market. As a result, the motherboard has shorter screws to allow 1U implementation and full Xeon support with ECC/RDIMM up to 128GB. The power delivery package is ASRock's 12-phase solution, with the Super Alloy branding indicating XXL heatsinks and server-grade components. The two PLX chips are cooled by a large heatsink with a small fan, although the fan can be disabled if the user's cooling is sufficient. A couple more nods to the WS market are the two Intel I210 network interfaces alongside the dual 10GBase-T, affording a potential teaming rate of 22 Gbps all-in. There is also a USB Type-A port sticking out of the motherboard for license dongles, as well as a SATA DOM port. TPM, COM and dual BIOS chips are also supported.

On the consumer side of the equation, the chipset IO is split between an M.2 x4 slot, the two Intel I210 NICs mentioned above and a SATA Express implementation. The M.2 slot shares PCIe lanes with a Marvell 9172 SATA controller, meaning that using the Marvell SATA ports puts the M.2 into x2 mode. The board has 12 SATA ports in total: six RAID-capable from the PCH, four non-RAID from the PCH and two from the Marvell controller. Alongside this are eight USB 3.0 ports, four from two onboard headers and four on the rear panel via an ASMedia ASM1074 hub. An eSATA port is on the rear panel as well, sharing bandwidth with one of the non-RAID SATA ports. Finally, the audio solution is ASRock's upgraded ALC1150 package under the Purity Sound 2 branding.

Performance-wise, ASRock uses an aggressive form of MultiCore Turbo to score highly in our CPU tests. Due to the 10G controller, power consumption is higher than on other motherboards we have tested, and it also impacts DPC latency. USB 2.0 speed was a little slow and the audio THD+N result was poor, but POST times were in the usual ballpark for X99. The software and BIOS from ASRock follow on from our previous ASRock X99 WS review.

The 10GBase-T element of the equation was interesting: PC-to-PC transfers from RAMDisk to RAMDisk over a single stream peaked at 2.5 Gbps. To get the most from the protocol, the data transfer requires several streams (more than one transfer in flight to allow for interleaving): at least four for 6 Gbps+ or eight for 8 Gbps+. One bottleneck in the transfer is the CPU, which showed 50% load on an eight-thread VM during a five-stream transfer, perhaps indicating that an overclocked CPU (or something like the i7-4790K with a higher per-thread speed) might be preferable.
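
As a rough illustration of what 'several streams' means in practice, the sketch below opens a number of parallel TCP connections and pushes data down each one concurrently. This is not the tool used in our testing, and the address, port and transfer sizes are placeholder assumptions.

```python
# Minimal multi-stream sender sketch (hypothetical host/port, not our test tool).
# A single TCP stream rarely fills a 10GBase-T link, so several connections
# send concurrently, as described above.
import socket
import threading

TARGET = ("192.168.1.100", 5000)  # hypothetical receiver that discards data
STREAMS = 4                       # at least four streams were needed for 6 Gbps+
CHUNK = 1 << 20                   # send in 1 MiB chunks
CHUNKS_PER_STREAM = 1024          # 1 GiB per stream

def sender():
    buf = b"\x00" * CHUNK
    with socket.create_connection(TARGET) as s:
        for _ in range(CHUNKS_PER_STREAM):
            s.sendall(buf)

threads = [threading.Thread(target=sender) for _ in range(STREAMS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```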

Whenever a motherboard company asks what a user looks for in a motherboard, I always mention that users with a particular need will only consider motherboards that offer that functionality. From there, they choose the right socket, then filter by price, brand, looks and reviews (one would hope in that vague order). The key point here is that the X99 WS-E/10G caters to the specific crowd that needs a 10GBase-T motherboard. If you do not need it, the motherboard is overly expensive.

Visual Inspection

Motherboards with lots of additions tend to be bigger than usual, and the WS-E/10G sits in the E-ATX form factor. This makes room for the X540-BT2 controller and the two PLX 8747 switches, with more PCB area for routing. As the 10G controller is rated at 14W at full tilt, it comes covered with a large heatsink connected via a heatpipe to the heatsink over the power delivery. The smaller heatsink covering the chipset and the two PLX chips is not connected to the others; it does, however, have a small fan (which can be disconnected) to improve cooling potential.

As this motherboard is oriented towards the workstation market, we get features such as COM and TPM headers, with a total of five fan headers around the motherboard. The two CPU fan headers, one 4-pin and one 3-pin, are at the top right of the board; a 3-pin CHA header sits just above the SATA ports and another just below. The final header, a 4-pin, is on the bottom panel. The 'white thing that looks like a fan header' at the bottom of the board is actually used for SATA DOM power. Note that HDD Saver does not feature on this motherboard.

The DRAM slots have latches on one side only, due to the close proximity of the first PCIe slot, which means users should ensure that their DRAM is fully pushed in at both ends. Next to the DRAM is one of the PCIe power connectors, a horrible-looking 4-pin molex connector right in the middle of the board. I asked ASRock about these connectors (because I continually request they be replaced), and the response was that they would prefer a single connector at the bottom, but some users complain that their cases will not allow another connector angled downwards in that location, so they put one here as well. Users should also note that only one needs to be connected, and only when 3+ PCIe devices are in use, to help boost power. I quizzed them on using SATA power connectors instead, or a 6-pin PCIe connector, but the response was not enthusiastic.

Next to this power connector is a USB 2.0 Type-A port on the motherboard itself, something we normally see on server/workstation motherboards for USB license keys or other not-to-be-removed devices.

On the right-hand side of the motherboard is the TPM header, followed by the 24-pin ATX power connector and two USB 3.0 headers, both of which come from the PCH. There are twelve SATA ports in this segment, with the first two powered by the Marvell controller. The other ten come from the PCH: the first six are RAID-capable, the next four are not, and this final four also provides a SATA Express port from the chipset. For more connectivity there is a black SATA DOM port at the bottom of the board and a PCIe 2.0 x4 M.2 slot from the chipset supporting 2230 to 22110 sized devices. If a device is plugged into the final four SATA ports, the M.2 slot drops to x2 bandwidth. This suggests that ASRock can partition some of the bandwidth from the second non-RAID AHCI controller in the chipset for M.2 use, and that the second AHCI controller is in part based on PCIe. This further supports my prediction that the chipset is turning into a mass of PCIe lanes that the motherboard manufacturer can carve up as required.

At the bottom of the motherboard are the power/reset buttons alongside the two-digit debug display. The two BIOS chips are also here with a BIOS select switch, two SATA-SGPIO headers, two USB 2.0 headers, a COM header, a Thunderbolt header, two of the fan headers and that ugly molex power connector. As usual, the front panel audio and control headers are here too, as well as two headers designated FRONT_LAN, presumably to allow system builders to route the signals from the network ports to LEDs on the front of the case.

The audio subsystem uses an upgraded Realtek ALC1150 package, meaning an EMI shield, PCB separation and enhanced filter caps. The PCIe layout is relatively easy to follow:

The CPU's 40 PCIe lanes are split x16/x16/x8. The final x8 goes to the 10GBase-T controller, while each x16 feeds one PLX switch. This gives the effect of muxing 16 lanes into 32 (with an extra buffer), allowing each PLX switch to feed two x16 slots for a total of four PCIe 3.0 x16 slots (hence x16/x16/x16/x16 support). Three of these x16 links are quick-switched into x8/x8 pairs, which is where the three additional x8 slots come from.

This means:

Four PCIe devices or fewer: x16/-/x16/-/x16/-/x16
Five to seven PCIe devices: x8/x8/x8/x8/x8/x8/x16

So for anyone who wants to strap on some serious PCIe storage, RAID cards or single-slot PCIe co-processors, every device gets at least PCIe 3.0 x8 bandwidth.
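
For readers who prefer to see the arrangement spelled out, here is a minimal sketch of the slot-width logic described above; the function and slot ordering are our own illustration, not ASRock's firmware logic.

```python
def slot_widths(devices):
    """Hypothetical model of the slot arrangement described above."""
    if devices <= 4:
        # Four PLX-fed x16 slots, with the alternating x8 slots unused.
        return ["x16", "-", "x16", "-", "x16", "-", "x16"]
    # Three of the x16 links quick-switch into x8/x8 pairs; one stays x16.
    return ["x8", "x8", "x8", "x8", "x8", "x8", "x16"]

print(slot_widths(4))  # ['x16', '-', 'x16', '-', 'x16', '-', 'x16']
print(slot_widths(7))  # ['x8', 'x8', 'x8', 'x8', 'x8', 'x8', 'x16']
```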

For users on the i7-5820K, things are a little different, but not by much. With only 28 PCIe lanes, the output is split x16/x8/x4, with the x4 going to the X540. This leaves an x16 and an x8 for the PLX switches, but in both cases each PLX chip still outputs 32 PCIe lanes, giving the same x16/x16/x16/x16 or x8/x8/x8/x8/x8/x8/x16 arrangement. With only four lanes, the two 10GBase-T ports are still rated to work at PCIe 3.0 x4 (given the controller's original requirement of PCIe 2.0 x8), but full bandwidth might not be possible according to Intel's FAQ on the X540 range – see point 2.27 here.
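
As a back-of-the-envelope check on why PCIe 3.0 x4 roughly matches the controller's original PCIe 2.0 x8 requirement, the raw link rates work out almost identically once line encoding is taken into account (our own calculation, ignoring PCIe protocol overhead):

```python
# Per-direction raw link bandwidth, accounting for line encoding only.
pcie2_lane = 5.0 * 8 / 10     # PCIe 2.0: 5 GT/s, 8b/10b    -> 4.0 Gb/s per lane
pcie3_lane = 8.0 * 128 / 130  # PCIe 3.0: 8 GT/s, 128b/130b -> ~7.88 Gb/s per lane

print(f"PCIe 2.0 x8: {8 * pcie2_lane:.1f} Gb/s")  # 32.0 Gb/s
print(f"PCIe 3.0 x4: {4 * pcie3_lane:.1f} Gb/s")  # 31.5 Gb/s
# Both sit above the ~20 Gb/s needed by two fully loaded 10GBase-T ports.
```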

The rear panel does away with PS/2 ports and gives four USB 2.0 ports alongside four USB 3.0 ports, the latter coming from the ASMedia hub. The two network ports on the left are from the Intel I210 controllers, whereas the two on the right are the 10GBase-T ports from the Intel X540-BT2 controller. A Clear_CMOS button, an eSATA port and the audio jacks round off the set.

Board Features

ASRock X99 WS-E/10G
Price US (Newegg)
Size E-ATX
CPU Interface LGA2011-3
Chipset Intel X99
Memory Slots Eight DDR4 DIMM slots
Supporting up to 64 GB UDIMM, 128 GB RDIMM
Up to Quad Channel, 1066-3200 MHz
Video Outputs None
Network Connectivity 2 x Intel I210 (1Gbit)
2 x 10GBase-T (Intel X540-BT2)
Onboard Audio Realtek ALC1150
Expansion Slots 4 x PCIe 3.0 x16
3 x PCIe 3.0 x8
Onboard Storage 6 x SATA 6 Gbps, RAID 0/1/5/10
4 x S_SATA 6 Gbps, no RAID
2 x SATA 6 Gbps, Marvell 9172
1 x SATA Express
1 x M.2 PCIe 2.0 x4 / x2
USB 3.0 4 x USB 3.0 on Rear Panel (ASMedia ASM1042 Hub)
2 x USB 3.0 Headers onboard (PCH)
Onboard 12 x SATA 6 Gbps
1 x SATA DOM
1 x M.2 x4
2 x USB 2.0 Headers
2 x USB 3.0 Headers
5 x Fan Headers
1 x USB 2.0 Type-A
TPM Header
COM Header
Thunderbolt Header
2 x FRONT_LAN Headers
2 x SATA_SGPIO Headers
Power/Reset Switches
Two Digit Debug
BIOS Switch
SATA DOM Power
Front Panel Header
Front Audio Header
Power Connectors 1 x 24-pin ATX
1 x 8-pin CPU
2 x Molex (PCIe/VGA power)
Fan Headers 2 x CPU (4-pin, 3-pin)
3 x CHA (4-pin, 2 x 3-pin)
IO Panel 2 x USB 2.0
2 x USB 3.0 (ASMedia Hub)
2 x Intel I210 Gbit Network
2 x Intel X540-BT2 10GBase-T Network
eSATA
Clear_CMOS Button
Audio Jacks
Warranty Period 3 Years
Product Page Link
Comments

  • koekkoe - Wednesday, December 17, 2014

    One usage scenario: iSCSI storage (especially when also used for booting) benefits greatly from 10G, because on 1G you're limited to 125MB/s, and big 16/24-disk arrays like EqualLogic can easily saturate even 10G bandwidth.
  • petar_b - Thursday, December 18, 2014

    The Xtreme11 used an LSI SAS controller, and it was an awesome feature; I would happily pay for a decent controller instead of the slow Marvell SATA ports – each time we add one more SATA disk, overall disk transfer speed drops significantly. Thanks to the LSI we can have 8 SATA SSDs on SAS and they all perform at 400MB/s even when used simultaneously. The Marvell dropped as low as 50MB/s with 8 SSDs in simultaneous use. What a letdown.
  • akula2 - Thursday, December 18, 2014

    I wouldn't pick that board either -- not everything should be integrated, from the hardware scalability and fallback points of view. I'd prefer to build from a board such as the Asus X99-E WS without filling it up completely and eventually choking it!
  • atomt - Saturday, December 20, 2014

    "It doesn't increase your internet performance"

    I beg to differ. 10Gbps internet is available for residential connections in my area. :-D
  • AngelosC - Wednesday, January 7, 2015

    Several things bother me about this review:
    1) Did I miss it, or is there really no mention of how the VMs were accessing the X540? Was it running SR-IOV? Or VMXNET3? What network drivers were loaded in the VMs?
    2) 10GE is the major selling point of the mobo, but it was only tested using "LAN Speed Test", with results summarized into a simple chart? I suggest also testing with netperf or iperf, showing results from other OSes like CentOS, and the performance difference between UDP and TCP/IP streams. If you just create packets and send, then receive packets and discard (as iperf3 does), you probably wouldn't have run into the problem of having to place a file on a RAM disk, among other issues. And if you ran iperf on Linux, you could have run on bare metal, taking the VM overhead out of the equation.
    3) For the sake of correctness, would you please clarify whether it was an X540-AT2 or X540-BT2?

    To be frank, this review is below the standard I'd expect from AnandTech.
