PaulRod - Monday, August 19, 2013 - link
ASUS? Lame...
etamin - Monday, August 19, 2013 - link
So what IGP is expected to push those three 4K displays? (Or can PCIe GPUs be run behind them?)
aruisdante - Monday, August 19, 2013 - link
There are some software hacks to enable it (Lucid's Virtu MVP, for example). Apple does it in all of their Macs. So it's definitely possible; it's just a matter of the drivers properly supporting it.
solipsism - Tuesday, August 20, 2013 - link
The only Mac that supports 4K displays is the new Mac Pro, which isn't out yet. I'm certain the next MacBook Pros will have 2x TB2 ports, but that would only mean 1x 4K display if the Mac Pro can support 3x 4K with 6x TB2 ports.
The new Mac Pro will also have one HDMI 1.4 port, but Apple hasn't said a fourth 4K display can be driven from it. I'd say that has to do with GPU power, not merely drivers or having ports that can support the bandwidth.
Kevin G - Tuesday, August 20, 2013 - link
The new Mac Pro has enough GPU IO to drive twelve 4K displays (six off each GPU). The catch is that there are only three TB2 controllers, and each controller appears to support only one 4K display.
The good news is that the HDMI 1.4 port will likely be able to drive a 4K display at 30 Hz, independent of the TB2 connectors.
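A rough sketch of why HDMI 1.4 tops out at 4K at 30 Hz (payload-only arithmetic of my own; real video timings include blanking, so the ceiling is even tighter):

```python
# Why HDMI 1.4 tops out at 4K @ 30 Hz (payload-only arithmetic;
# real timings include blanking, making the true ceiling tighter).
hdmi14_max_gbps = 10.2 * 8 / 10   # 10.2 Gbit/s TMDS, 8b/10b -> ~8.16 usable

for hz in (30, 60):
    need = 3840 * 2160 * 24 * hz / 1e9
    print(f"4K@{hz}: ~{need:.1f} Gbit/s, fits: {need <= hdmi14_max_gbps}")
# 4K@30 (~6.0 Gbit/s) fits; 4K@60 (~11.9 Gbit/s) does not.
```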
repoman27 - Tuesday, August 20, 2013 - link
Pretty sure the limiting factor is the number of available pixel pipelines, which for AMD cards (regardless of how many you have in a CrossFire-style setup) is 6, and the fact that all existing 4K displays treat the panel as 2 or more separate regions. Thus 3 is the limit for 4K displays at the moment.
The HDMI port on the Mac Pro is still subject to the 6 pixel pipeline issue, but could be used for a third 4K display at 30 Hz on any Haswell Mac with a discrete GPU, Falcon Ridge Thunderbolt controller and HDMI port.
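A quick back-of-the-envelope sketch of the limit described here (my own illustration; the pipe and tile counts come from this thread, not from AMD documentation):

```python
# Rough sketch of the display-count limit described above.
# Assumptions (from this thread, not AMD docs): each GPU exposes 6
# display pipes, and an early MST-based 4K display consumes 2 of them
# because the panel is driven as two half-width tiles.
DISPLAY_PIPES_PER_GPU = 6
PIPES_PER_MST_4K_DISPLAY = 2  # two 1920x2160 tiles stitched together

max_4k_displays = DISPLAY_PIPES_PER_GPU // PIPES_PER_MST_4K_DISPLAY
print(max_4k_displays)  # -> 3, matching the "3 is the limit" claim
```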
repoman27 - Tuesday, August 20, 2013 - link
"Pixel pipeline" isn't really the correct term for what I was trying to refer to. I think maybe "display pipe" or "display output" would have been more appropriate.Kevin G - Tuesday, August 20, 2013 - link
Each DisplayPort output on AMD's FirePro cards has full DP 1.2 bandwidth. Though you are onto another limitation of AMD cards: they do support a maximum of 6 displays per card, but that limit is not resolution dependent. The fact that multi-stream transport (MST) is used with early 4K displays would indeed make each display appear as two, and thus make it easier to bump up against that limitation.
The HDMI port is likely coming off of the second GPU card. Remember that the new Mac Pro packs two GPUs, and thus it would be trivial to support more than 6 displays.
solipsism - Tuesday, August 20, 2013 - link
I doubt Apple would say it can support only 3x 4K/60 displays if it could feasibly support twice that amount. I guess we'll have to wait for people to push its limits, but I have to wonder why Apple would cut the number in half instead of marketing it.
shompa - Tuesday, August 20, 2013 - link
AMD doesn't support CrossFire on FirePro GPUs, and the Mac Pro uses one GPU for graphics. The other GPU is dedicated to compute. And that will revolutionise computing.
Finally we will have an OS and programs optimised for compute, including Final Cut Pro X and Aperture.
Too bad that the Mac Pro will cost $5K+, plus 3x $3,499 4K Cinema displays.
Kevin G - Tuesday, August 20, 2013 - link
The FirePro cards do support CrossFire on the PC side. The lack of support on the OS X side has always been a driver issue.
As for the one card being dedicated to graphics, I'd be surprised if that were absolutely true. The chassis has ports for 7 directly connected monitors, and each individual card has a limit of 6 displays.
solipsism - Tuesday, August 20, 2013 - link
1) Will the Xeon not include an IGP?
2) I think it's silly to assume that just because Apple offers 6x TB + 1x HDMI, they intend for all ports to be used for displays. They surely wouldn't include an extra AMD GPU just to get that 7th display simply because they have 7 ports that can all be used for a display, just as they didn't add 5 additional display-capable ports to bring that total to 12.
repoman27 - Tuesday, August 20, 2013 - link
The Xeon E5-26xx v2's that will likely be used in the new Mac Pro lack integrated graphics.
Although FirePro cards do support CrossFire Pro, I doubt that Apple will implement an actual CrossFire setup. I'm guessing it will be more along the lines of what AMD did for their dual-GPU FirePro S10000 server cards. I would also wager that despite having two physical cards and 7 digital display output ports, there will still be a 6 display limit. I'm sure it won't take too long for someone to try connecting displays to all 7 ports though.
Kevin G - Tuesday, August 20, 2013 - link
1) As pointed out by others, the Xeon chips in the new Mac Pro do not have integrated graphics.
2) While the second card is seen by many as the compute card, adding an HDMI connector for 2D display purposes really wouldn't have a meaningful performance impact on GPGPU workloads. In fact, it would be odd if OpenCL could only use one of the cards at a time.
Also, depending on how the Thunderbolt ports are linked to each GPU, it may be possible to connect 12 displays to the new Mac Pro using MST hubs.
shompa - Tuesday, August 20, 2013 - link
*hint* The Mac Pro doesn't have CrossFire. FirePro GPUs don't support it.
That's why the Mac Pro only supports 3x 4K screens.
aruisdante - Monday, August 19, 2013 - link
It makes me sad that they came out with this so soon after I bought the Deluxe/Dual (exact same motherboard, but Thunderbolt 1).
Assimilator87 - Tuesday, August 20, 2013 - link
Wow ASUS, way to screw over everyone who bought the Dual.
Sivar - Tuesday, August 20, 2013 - link
Yes, because tech companies releasing new products is "screwing over" those who bought their previous products.
Sometimes it's like I'm reading comments from a YouTube video on here.
p1esk - Tuesday, August 20, 2013 - link
What does "dual" or "quad" refers to?critical_ - Tuesday, August 20, 2013 - link
I really wanted the Z87-Deluxe/Dual, but the lack of dual Intel NICs killed it for me. I went with the Z87-WS. Asus needs to realize that they need the feature set of the Z87-WS with Thunderbolt 2 outputs, and then also a solution to output video not just from the Intel IGP but from nVidia/AMD add-in cards. Whether that is with Lucid software or through another method remains to be seen.
Also, AnandTech did a story on the Asus ThunderboltEX card. Guess what? It never materialized. Rumors say Intel won't validate the outputting of video via nVidia/AMD GPUs. I'd love it if we could get clarification on this issue.
Ultimately, it seems like 3x 4K displays is nothing more than a specification line for these meaningless press releases that start with "WORLD'S FIRST..." instead of being usable at realistic refresh rates on modern hardware. At the moment, it seems like Thunderbolt outside of Apple is just another FireWire for storage arrays instead of a medium for displays. I'd love to be wrong, so perhaps Intel/Asus/Gigabyte can clarify the point of these motherboards in real-world, getting-work-done scenarios.
/soapbox /rant
r3loaded - Tuesday, August 20, 2013 - link
That's great and all but apart from some incredibly expensive enterprise-grade storage systems and laptop docks, what would I use Thunderbolt for? Not trolling, I seriously want to know what stuff is available that I can actually use.
Kevin G - Tuesday, August 20, 2013 - link
One thing I've seen that doesn't fit into storage, laptop docking or displays has been video capture boxes. They're outside of consumer prices, but they're out there.
The other devices I've seen are Ethernet NICs and FireWire adapters. Both of these could arguably be put into the laptop docking category.
repoman27 - Tuesday, August 20, 2013 - link
https://thunderbolttechnology.net/products gives a pretty decent overview of what's available. As for what you can actually use, that's sort of dependent on your particular situation.
Zalansho - Tuesday, August 20, 2013 - link
I for one am very interested in the NFC functionality, though after a little digging on the Asus site it doesn't seem to support some of the neater things like Android Beam. Any chance of a review of this or the Dual, or the NFC unit by itself?
Kevin G - Tuesday, August 20, 2013 - link
I'm just irked that Intel hasn't gotten channel bonding to work right until now. A four-channel link at 10 Gbit/channel has enough bandwidth to drive an 8K display @ 30 Hz. That would truly be a significant jump for professionals, who were generally limited to 2560x1600 from 2004 to 2012. Even now 4K is barely in the marketplace, and the new IO standards are being only slightly modified to support it (i.e. no 120 Hz 2D, or 3D @ 60 Hz per eye). Demand is there in the marketplace for higher resolution displays, and Intel could have provided *the* de facto standard to get there if they hadn't botched TB bonding initially.
DarkXale - Tuesday, August 20, 2013 - link
Careful there with channels versus lanes. Thunderbolt has 4 lanes, each of which is capable of transferring in only one direction - and when paired, producing 2 channels. (10 up + 10 down + 10 up + 10 down = 20 up + 20 down)
The lanes would need to become half-duplex capable in order to drive 8K @ 30 Hz.
For another example: a PCIe x16 link has 32 signal pairs, each of which transfers in only a single direction.
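A rough worked check of the claim being debated (my own arithmetic, assuming uncompressed 24-bit color and ignoring blanking and protocol overhead, which would only raise the requirement):

```python
# Does 8K @ 30 Hz fit in one direction of a Thunderbolt link?
# Assumptions: 7680x4320 pixels, 24 bits per pixel, no blanking or
# protocol overhead (real requirements would be higher still).
width, height, bpp, refresh_hz = 7680, 4320, 24, 30

required_gbps = width * height * bpp * refresh_hz / 1e9
print(f"8K@30 needs ~{required_gbps:.1f} Gbit/s")  # ~23.9 Gbit/s

# TB1/TB2 provide 2x 10 Gbit/s per direction; the other two lanes
# point the wrong way, exactly as described above.
one_way_gbps = 2 * 10
print(required_gbps <= one_way_gbps)  # False: 23.9 > 20
```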
repoman27 - Tuesday, August 20, 2013 - link
Is there any evidence that Intel ever intended to implement channel bonding on the original Light Peak / Thunderbolt silicon? There is precious little to indicate that they even intended to support dual-channel links, let alone channel bonding.
The high-end display market generally moves far slower than those responsible for penning the interconnect or content delivery standards. DisplayPort 1.2 has been around for almost 4 years, and DP 1.2 capable GPUs for nearly three, yet the first DP 1.2 HBR2 and MST capable displays only hit the market less than a year ago. Furthermore, I don't think there are any native eDP 1.2+ panels with support for HBR2 out there yet.
20 Gbit/s for Thunderbolt 2 vs. 17.28 Gbit/s for DP 1.2 is not a significant difference, and Thunderbolt is just a meta-protocol used to transport DP and PCIe packets anyway.
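For context, a sketch of where the 17.28 Gbit/s figure comes from and what a single 4K@60 stream actually needs (my own arithmetic; real links carry additional blanking and packet overhead):

```python
# Where 17.28 Gbit/s comes from: DP 1.2 HBR2 is 4 lanes x 5.4 Gbit/s raw,
# minus the 20% cost of 8b/10b encoding.
dp12_effective_gbps = 4 * 5.4 * 8 / 10
print(dp12_effective_gbps)  # 17.28

# Uncompressed 4K @ 60 Hz at 24 bpp, ignoring blanking:
payload_gbps = 3840 * 2160 * 24 * 60 / 1e9
print(f"~{payload_gbps:.2f} Gbit/s")  # ~11.94

# So a single 4K@60 stream fits in DP 1.2 or a bonded 20 Gbit/s TB2
# channel, but not in one 10 Gbit/s Thunderbolt 1 channel.
```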
Kevin G - Tuesday, August 20, 2013 - link
The bonding functionality was to be similar to how multiple PCIe lanes work together to form a wider channel. Thus all 40 Gbit/s of bandwidth going over a single copper cable was supposed to be usable by a single device. The catch is that DP didn't play well in this mode.
repoman27 - Tuesday, August 20, 2013 - link
I understand the theory, but as far as I can tell there is zero evidence to suggest that channel bonding was ever on the table prior to Falcon Ridge. When did anyone from Intel ever allude to this? Why would Intel possibly implement DP 1.2 in a Thunderbolt controller before doing so in their own IGPs? Do you have any links to back up the notion that they even attempted channel bonding with the early silicon? (I'm genuinely curious, btw, not just trying to be argumentative.)
Kevin G - Wednesday, August 21, 2013 - link
The context of bonding has often been mentioned in reference to TB networking (note that Intel's optical-based interconnects for rack-based servers seem eerily familiar). Most of this was when Thunderbolt was known as Light Peak. Intel's presentation on the matter:
And there is a bit of research done at MS with regards to Thunderbolt/Light Peak too (PDF):
http://research.microsoft.com/pubs/144715/4208a109...
Of note from the above MS paper:
"We built a prototype network interface card using Light
Peak technology (shown in Figure 1) The prototype card is a
Gen2 x4 PCI-Express add-in card and contains one host
interface, an integrated crossbar switch and transceiver pair
with four 10 Gbps optical ports with modified USB cable
connectors. The integrated non-blocking crossbar switch is
capable of delivering an aggregate bandwidth of 80 Gbps (40
Gbps receive and 40 Gbps transmit) through the optical ports
and 10 Gbps to/from the host system. Traffic from one
optical port to another optical port can be transmitted directly
without any interaction with the host CPU. Each transceiver
module supports two interfaces and provides electrical to
optical conversion. "
There are a couple of oddball things in that paragraph. First is the obviously asymmetrical bandwidth of the IO card: 4x lanes at PCIe 2.0 speeds is 20 Gbit/s in each direction, while the networking side was capable of 40 Gbit/s aggregate. As discussed in the paper, this was for the multipathing and failover that most enterprise networking has to support.
Kevin G - Wednesday, August 21, 2013 - link
Oops, hit submit a bit too early. The link to the Intel PDF:
http://www.stanford.edu/class/ee380/Abstracts/1010...
This describes at a higher level the usage of Thunderbolt/LightPeak as a network topology.
Here is another research paper that briefly mentions a 40 Gbit aggregate bandwidth for LightPeak:
http://www.ll.mit.edu/HPEC/agendas/proc11/Day2/Pos...
The unique thing here is that it shows two different implementations: a two-channel implementation using standard fiber found in data centers, and then a four-channel implementation using a single cable but two Light Peak controllers.
repoman27 - Wednesday, August 21, 2013 - link
While I appreciate the links, they actually tend to reinforce my point. The front end of a 4C Light Peak / Thunderbolt controller does provide 80 Gbit/s aggregate bandwidth, but that is not the same as saying that there is a provision for link aggregation. That 80 Gbit/s comes in the form of 4x 10 Gbit/s full-duplex channels, which prior to Apple's implementation were always treated as 4 separate links.
The description in the MS paper isn't odd at all. It describes the on-die crossbar switch as essentially being a 5-port 10 Gbit/s switch, with 4 ports being for the optical interfaces and one as an uplink to the host. That 10 Gbit/s uplink port is connected to a protocol adapter which has a PCIe 2.0 x4 (16 Gbit/s) connection to the host. And just as with a 5-port Ethernet switch, that in no way guarantees that the switch is capable of link aggregation. In fact, pointing out that the connection to the PCIe protocol adapter was limited to 10 Gbit/s underscores that channel bonding was not happening from the outset.
All of the networking examples use individual 10 Gbit/s links between nodes, and link aggregation is never attempted.
The 40 Gbit/s depiction in the Avago paper actually shows one controller but two optical modules. The modules they designed only supported 2 channels, so a 4C controller required two of them. The odd thing about that diagram, as crude as it is, is that it appears to show one port for 4 channels, which is something we have not seen yet.
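As an aside, the two PCIe 2.0 x4 figures quoted in this exchange - 20 Gbit/s and 16 Gbit/s - are both defensible; a quick sketch of the difference:

```python
# PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, so the raw
# and usable rates for an x4 link differ by exactly 20%.
lanes, gt_per_s = 4, 5.0

raw_gbps = lanes * gt_per_s          # 20.0 - the "20 Gbit in each direction" figure
effective_gbps = raw_gbps * 8 / 10   # 16.0 - the "(16 Gbit/s)" figure
print(raw_gbps, effective_gbps)
```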
Kevin G - Monday, August 26, 2013 - link
The Thunderbolt/Light Peak networking schema in those papers included failover and multipathing. The implication is that there would be a means to use multiple channels to send data between two hosts, similar to how Ethernet does bonding.
With failover between active-active connections, there is a desire to regularly operate below 50% of the available bandwidth between connections. This ensures that when failover does happen, the single remaining link can handle the throughput without a degradation of service. In the context of these papers, the proposed 40 Gbit/s adapter with a 10 Gbit/s PCIe host interface wouldn't necessarily be at a disadvantage in this scenario.
The 4-channel device did mention the connector: an early Light Peak connector that was an adaptation of USB. Take a look at it here:
http://assets.sbnation.com/assets/749240/PC-Pro-UK...
The USB-style connector also prompted some talk of being able to use the copper connection for native USB, or for a slower, copper-based version of Light Peak. With the USB standards body forbidding Light Peak from piggybacking on their connector, this idea never went beyond talk.
RussianSensation - Tuesday, August 20, 2013 - link
$350 USD for a non-ROG board that can only do 3-way CF as PCIe 3.0 x8/x4/x4?
....Crickets.... This is a $220 board max, with a $130 mark-up for Thunderbolt. You can just as easily get the $180 MSI GD65-Gaming, buy a $20 wireless USB adapter, and skip the overpriced Thunderbolt.
DarkXale - Tuesday, August 20, 2013 - link
Or get the Asus equivalent without the Thunderbolt ports. See the "Pro". Also, who would ever want to subject themselves to the horror that is USB Wi-Fi?
RussianSensation - Tuesday, August 20, 2013 - link
Not sure what you are talking about. My USB WiFi delivers 150 Mbps. My ISP cannot even come close to those speeds at reasonable cost.
http://imageshack.com/scaled/large/547/jdym.jpg
However, if you insist, I can propose plenty of other superior options, like the Gigabyte UD4H for $115 at Micro Center, or even Asus' more impressive VI HERO board for $215 + a $30 Wi-Fi N adapter:
http://www.newegg.com/Product/Product.aspx?Item=N8...
Either way you slice it, this board is $130-150 overpriced, easily. If someone is going to throw in dual Thunderbolt ports, it had better be on a high-end ROG board.
The MSI Z87 MPower, which passed 24 hours of Prime95 at full CPU overclock, has Wi-Fi with dual antennas + Bluetooth, free RAMDisk software, 16 power phases, digital power, etc., for only $215. So where is the $350 Asus is charging coming from?
http://www.microcenter.com/product/414817/Z87_MPow...
DarkXale - Tuesday, August 20, 2013 - link
150 Mbps - and at what range? Hell, is it even dual-band capable? I'm not sure why you even mention your Internet connectivity, when the point of reliable Wi-Fi is internal communication - in particular, with a networked NAS. When you throw in a few extra devices which have to share transmit time, 150 Mbps becomes quite slow. (1/3rd of the airtime becomes 50 Mbps, for example.)
And yes, in case you didn't know - TB controllers go for about 50 USD each. It is -not- a cheap technology, which is also the reason for its scarcity. And given that TB has no 'gaming' purpose at this time, it does not make sense to put it on their 'gaming' motherboards.
It -would- make more sense to find it on the E-class motherboards, though, yes.
glugglug - Tuesday, August 20, 2013 - link
WiFi is a joke for a NAS. Yes, I'm including 802.11ac in that.
kwrzesien - Tuesday, August 20, 2013 - link
It can do PCIe 3.0 x16, or dual x8, or x8/x4/x4; that is as flexible as Haswell allows without adding some kind of bridge chip. I'm pretty sure that PCIe 3.0 x4 (and PCIe 2.0 x8) don't bottleneck CF based on real-world tests. Certainly PCIe 3.0 x8 doesn't, and who has three 7970s?
In fact, until GPUs and LCDs have a connection option that supports 4K@120Hz - or at least 4K@60Hz - I'm not sure we even need this level of performance. Maybe for extreme multi-monitor gaming in 3D you need three GPUs, but not for single-monitor gaming or multi-monitor productivity.
RussianSensation - Tuesday, August 20, 2013 - link
If I am spending $350 on a board, I want the option of putting 3x 7970s in it. I already have 2 7970s. Also, you are not taking into account that people don't just throw out this board in 2 years. What about getting Maxwell/Volta GPUs? Put it this way: if you only care to buy 2 GPUs, then at Micro Center the Gigabyte Z87 UD4H is just $115 USD!!! It has 16 power phases and everything you need, minus the overpriced WiFi and Thunderbolt. But if you still insist on WiFi, there is the MSI Z87 MPower for $215, or Asus' own Pro board with WiFi. Then there is the ROG VI HERO, which blows this Deluxe board out of the water in terms of quality overclocking components:
http://www.techpowerup.com/reviews/ASUS/MAXIMUS_VI...
If you only want to run 2 GPUs, there are so many boards out there for hundreds less and some for less $ with higher quality components.
repoman27 - Tuesday, August 20, 2013 - link
@Jarred, "Thunderbolt supports up to 10GB/s bandwidth (bi-directional) for each port..." You want a little "b" there, and actually, it's 10 Gbit/s, full-duplex, per channel, up to 20 Gbit/s, full-duplex, per port. This is true even for OG Thunderbolt; Thunderbolt 2 merely allows channel-bonding for devices requiring greater than 10 Gbit/s of either display or data bandwidth.JarredWalton - Tuesday, August 20, 2013 - link
Fixed. Basically, I messed up when I said bi-directional on the four channels; it's uni-directional I believe, but TB2 allows bonding with bi-directional. Or put another way, TB1 was two 10Gb/s up and two 10Gb/s down; TB2 is two 20Gb/s and that can be either up or down. I think it can switch on the fly as well? Not entirely sure about that, and maybe I still need to clarify. Hahaha. As for the little b, that was one typo out of four Gigabit references; probably just holding down Shift still from typing the G and didn't notice.
repoman27 - Tuesday, August 20, 2013 - link
Both Thunderbolt and Thunderbolt 2 host controllers provide 4 simplex 10 Gbit/s lanes per port, configured as 2 full-duplex 10 Gbit/s channels. Thunderbolt 2 allows bonding of the two channels to create a single 20 Gbit/s full-duplex link. The direction of the individual lanes is fixed; the active cables have two transmitters and two receivers at each end.
StrangerGuy - Tuesday, August 20, 2013 - link
"but given the Z87-Deluxe/Dual runs $350 we’d expect the new board to come in above that price point."So I heard there is huge mass market demand for >$350 mobos.
boeush - Tuesday, August 20, 2013 - link
Stupid question: why insist on including 8 x USB 2.0 ports, *in addition to* 8 x USB 3.0?
Who in blazes, I'd like to know, ever used or wanted to use more than 8 USB devices simultaneously on the same computer?
Aside from the above, I just don't get the persistence of USB 2.0. USB 3.0 has been out and around long enough already; why does USB 2.0 still keep showing up in new and supposedly premium products?? Someone please enlighten me...
DanNeely - Tuesday, August 20, 2013 - link
Because Intel only offers 4 or 6 USB3 ports (to get six you either need to drop to 4x SATA 6Gb/s or 6x PCIe 2.0 lanes on the southbridge), and each additional pair of 3.0 ports beyond that requires an additional controller and a PCIe lane to connect it. The boards that offer a dozen-ish 3.0 ports are probably also spending more on a PLX chip to multiplex all the IO onto the southbridge's limited supply of PCIe lanes. (See the sketch below.)
The single picture isn't entirely clear, but it appears to be 4 and 4 on the back, and presumably 2 headers of each type for front-panel connections. At least for the next few years, 2 headers of each type is IMO the way to go, since some people will be using older cases with only 2.0 ports and others new cases with 3.0 built in. Some of the former will add a 3rd-party bay device for front-panel ports; some of both groups will have a card reader, which as a bottom-denominator device will probably stay 2.0 for a few years (if anything, this argues for a 3rd 2.0 header in addition to the pair of 3.0 ones).
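The trade-off described here reads like a small constraint table; a toy sketch using only the numbers given in the comment above (not Intel's actual Flex I/O documentation):

```python
# Toy model of the 8-series PCH port-sharing trade-off described above.
# These splits come from the comment itself, not from an Intel datasheet.
pch_configs = [
    {"usb3": 4, "sata_6gbps": 6, "pcie2_lanes": 8},  # baseline
    {"usb3": 6, "sata_6gbps": 4, "pcie2_lanes": 8},  # give up 2 SATA for 2 USB3
    {"usb3": 6, "sata_6gbps": 6, "pcie2_lanes": 6},  # or give up 2 PCIe lanes
]
for cfg in pch_configs:
    print(cfg)

# Anything beyond 6 USB3 ports means a discrete controller,
# which itself consumes one of the remaining PCIe lanes.
```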
DanNeely - Tuesday, August 20, 2013 - link
PS: The reason why only some of the USB ports on the chipset are the 3.0 variety is that the chip is a low-margin part whose size is primarily defined by the number of IO pins that need to be squeezed onto it, and USB3 controllers need significantly more die area than USB2 controllers. Since Broadwell is planned as a BGA (mobile) only part, this most likely means that we won't see an increase until Skylake launches in 2015 (since that chipset will presumably have gotten a process shrink as well). Intel could launch a desktop chipset refresh with more 3.0 ports next year even if we don't get a new CPU, but with AMD foundering there's no real pressure for them to do so.
boeush - Tuesday, August 20, 2013 - link
Thanks for answering, but I still don't completely get it.
USB 3.0 is supposed to be completely (and transparently) backward-compatible with 2.0 (or am I missing something?) -- so if a motherboard provided only and exclusively 3.0 ports/headers, any legacy 2.0 ports/cables on the case or legacy devices should be able to plug into those 3.0 extension points without any issue or degradation of performance.
An 'intense' USB usage scenario might involve 1 keyboard and 1 mouse, plus maybe 1 microphone, 1 camera, 1 printer, 1 gaming controller, 1 card reader. I'm counting 7 ports (and not including WiFi, since this premium mobo already provides WiFi), leaving 1 port free. That's before considering that USB can chain, so for instance a lot of monitors these days integrate USB hubs and provide extra ports (and come with a USB extension cable), meaning you get 2-4 additional ports "for free" with each of your monitors (and most of the devices I listed, don't need the full bandwidth of USB 3.0 so would work perfectly fine over 2.0 links through hubs.)
I'm just saying: *in practice*, already having 8 USB 3.0 ports should make any additional 2.0 (or 3.0) ports utterly redundant and unnecessary. So why continue bundling in the expense and taking up the space? Still not getting it...
jwcalla - Wednesday, August 21, 2013 - link
I can't answer all your questions, but the on-board USB 3.0 headers are not compatible with the USB 2.0 connectors. It's a completely different connector style. So cases and devices with USB 2.0 connectors require USB 2.0 motherboard headers. Of course the user-facing USB ports are backwards-compatible, as you mention.
It could be that servicing USB 2.0 motherboard headers requires a USB 2.0 controller, and maybe that is the reasoning for including it. In such a case it would make sense to throw any leftover USB 2.0 ports onto the back panel.
PS for Jarred: The source link to the ASUS press release points to the wrong URL.
DanNeely - Wednesday, August 21, 2013 - link
Motherboard headers aren't the same; for whatever reason, instead of doubling the pin density for 3.0 they went with a header 2x as large, and no one's making adapters for the header cables that I'm aware of. You really can't drop below 3 headers on the board itself without causing problems for some people building higher-end systems (4 case ports and a card reader); and you still need a full set on the back for people who only have 2 front ports, or who just want most of their wiring to be neatly out of sight. You can drop the number down on low-end boards to cut manufacturing costs, but higher-end boards are equal parts feature checklist and combining multiple people's edge cases into a single package to keep the size of your product line within reason.
The total number of ports grew as USB2 replaced various legacy ports and board vendors wanted to fill the space with USB ports. Besides which, USB2 controllers were tiny, so it barely cost Intel anything to add an additional pair every other year or so during the last decade. At this point I'm not really expecting the total to go up again; with the possible exception of the PS/2 keyboard port there's not really any legacy IO left to replace, and with a 10 Gbit/s USB standard in the works I expect Intel/AMD will be busy using process-shrink southbridge transistors to update more ports to the faster standard instead of bumping the totals.
Counting charger/device cables, I've currently got a total of 10 plugged in; not all have something attached at all times, but there are 3 different device-end plugs (B, mini, and micro) and I've got stuff that goes with each size. I actually do use a hub for some, but that's equal parts cable routing (my tower is farther away than normal), not wanting to spend money replacing all the tiny cables that came bundled with gadgets, and tradition (the hub's been in place since at least my AMD 939 system, when my board didn't have enough total ports built in).
colonelclaw - Wednesday, August 21, 2013 - link
It's a bit of a shame that Thunderbolt has so far been slow to be adopted; breaking out PCIe to a cable makes a lot of sense for many situations. I guess it all comes down to cost? Here's hoping TB2 makes the difference.
DanNeely - Wednesday, August 21, 2013 - link
Only putting a controller in their next southbridge might make a difference - if you define making a difference as adding more TB ports to computers, not getting more TB devices on the market. The other thing Intel should do if they want to promote TB is to up the number of PCIe lanes available; the situation has been squeezed since Intel dropped the legacy PCI controller's 4(?) connections but didn't add PCIe lanes to compensate. While OEMs can add a PLX chip to work around the problem, that adds to costs and has a slight hit on performance; with PCIe SSDs potentially becoming a major enthusiast item in a few years, walling off a lane or four for TB ports very few people will use isn't an appealing option.
Hrel - Wednesday, August 21, 2013 - link
$350? Lol.
Thunderbolt is a great idea, but no one cares until motherboards with it are under $150.
Now, I'm an enthusiast, so I was willing to go to around $200. But there's not a chance in hell I'd convince the corporation I work at to pay that much extra for something with such limited support.
If they're serious about Thunderbolt they have to give it away. Google knows this; you don't see them charging Micro$oft money for Android. They know the way you penetrate a market is by giving your product away for free; the money comes later.
Such a simple idea that eludes so many.
solipsism - Thursday, August 22, 2013 - link
1) Do you realize you just said the key to running a for-profit business is to not charge a profit?
2) Google's business is advertising. Search, email, Android, etc. are free because we're the product. We're the Eloi. Their customers are the ones paying for advertising, or for info on the herd's habits.
3) Also remember we're talking about SW vs. HW, where your argument continues to fall apart.
thomasxstewart - Thursday, August 22, 2013 - link
Thunderbolt is good, yet in preliminary tests all three-slot PCIe 3.0 gaming-card machines showed image flutter. There are 4-slot PCIe 3.0 mainboards, yet only 48 PCIe 3.0 lanes, so two slots run at x8. A claim for another x2 of PCIe 3.0 might be the stutter. Basically, with 64 PCIe 3.0 lanes it might be good to go with the Thunderbolt advantage.
drashek
ReneGQ - Thursday, March 13, 2014 - link
I recently purchased an Asus motherboard and the problems started from day 1. The driver updates never work, and the same goes for AI Suite III (there are a lot of updates for this model on the Asus webpage). After 2 months I still can't install BitDefender because of a clock watchdog error.
Asus technical support is the worst; mails come and go with no solution.
I will not recommend this brand to anyone. The brand has very good marketing, but the product and the service are very disappointing.