AMD Joins CXL Consortium: Playing in All The Interconnects
by Anton Shilov on July 19, 2019 5:00 PM EST
Posted in: PCIe 5.0
AMD's CTO, Mark Papermaster, has stated in a blog post that AMD has joined the Compute Express Link (CXL) Consortium. The industry group is led by nine industry giants, including Intel, Alibaba, Google, and Microsoft, and has over 35 members in total. The CXL 1.0 technology uses the PCIe 5.0 physical infrastructure to enable a coherent low-latency interconnect protocol that allows CPU and non-CPU resources to be shared efficiently and without complex memory management. The announcement indicates that AMD now supports all of the current and upcoming non-proprietary high-speed interconnect protocols, including CCIX, Gen-Z, and OpenCAPI.
PCIe has enabled a tremendous increase in bandwidth, from 2.5 GT/s per lane in 2003 to 32 GT/s per lane in 2019, and is set to remain a ubiquitous physical interface on upcoming SoCs. Over the past few years it became clear that enabling an efficient coherent interconnect between CPUs and other devices required specific low-latency protocols, so a variety of proprietary and open-standard technologies built upon the PCIe PHY were developed, including CXL, CCIX, Gen-Z, Infinity Fabric, NVLink, CAPI, and others. In 2016, IBM (with a group of supporters) went as far as developing the OpenCAPI interface, which relies on a new physical layer and a new protocol (but that is a completely different story).
Each of the protocols that rely on PCIe has its peculiarities and numerous supporters. The CXL 1.0 specification, introduced earlier this year, was primarily designed to enable heterogeneous processing (by using accelerators) and heterogeneous memory systems (think memory expansion devices). The low-latency CXL runs on the PCIe 5.0 PHY stack at 32 GT/s and natively supports x16, x8, and x4 link widths. Meanwhile, in degraded mode it also supports 16.0 GT/s and 8.0 GT/s data rates, as well as x2 and x1 links. In the case of a PCIe 5.0 x16 slot, CXL 1.0 devices will enjoy 64 GB/s of bandwidth in each direction. It is also noteworthy that CXL 1.0 comprises three protocols: the mandatory CXL.io, plus CXL.cache for cache coherency and CXL.memory for memory coherency, which are needed to manage latencies effectively.
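As a rough sanity check on the 64 GB/s figure (which quotes the raw line rate before encoding overhead), the usable per-direction bandwidth of a PCIe 5.0 x16 link can be computed from the per-lane transfer rate and the 128b/130b encoding used since PCIe 3.0. The function below is an illustrative sketch of that arithmetic, not anything defined by the CXL specification:

```python
def pcie_bandwidth_gbps(rate_gt_s: float, lanes: int,
                        encoding_payload: int = 128,
                        encoding_total: int = 130) -> float:
    """Usable one-direction bandwidth in GB/s for a PCIe link.

    PCIe 3.0 and later use 128b/130b encoding, so roughly 1.5%
    of the raw bit rate is consumed by framing overhead.
    """
    usable_gbit_per_s = rate_gt_s * lanes * encoding_payload / encoding_total
    return usable_gbit_per_s / 8  # bits -> bytes

# PCIe 5.0 x16: 32 GT/s per lane across 16 lanes
print(round(pcie_bandwidth_gbps(32, 16), 1))  # ~63.0 GB/s per direction
```

The raw rate is 32 GT/s x 16 lanes / 8 = 64 GB/s; encoding overhead trims that to roughly 63 GB/s of usable payload bandwidth per direction.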
In the coming years, computers in general, and machines used for AI and ML processing in particular, will require a diverse combination of accelerators featuring scalar, vector, matrix, and spatial architectures. For efficient operation, some of these accelerators will need low-latency cache coherency and memory semantics between themselves and processors, but since there is no ubiquitous protocol that supports the appropriate functionality, a fight is brewing between standards that do not complement each other.
The biggest advantage of CXL is that it is not only supported by over 30 companies already, but that its founding members include such heavyweights as Alibaba, DellEMC, Facebook, Google, HPE, Huawei, Intel, and Microsoft. All of these companies build their own hardware architectures, and their support for CXL means that they plan to use the technology. Since AMD clearly does not want to be left behind by the industry, it is natural for the company to join the CXL party.
Since CXL relies on the PCIe 5.0 physical infrastructure, companies can use the same physical interconnects but must develop the required transmission logic. At this point AMD is not committing to enabling CXL on future products; rather, it is throwing its hat into the ring to help shape how the protocol develops, should it appear in a future AMD product.
- Compute Express Link (CXL): From Nine Members to Thirty Three
- CXL Specification 1.0 Released: New Industry High-Speed Interconnect From Intel
- Gen-Z Interconnect Core Specification 1.0 Published
- Hot Chips: Intel EMIB and 14nm Stratix 10 FPGA Live Blog (8:45am PT, 3:45pm UTC)
Sources: AMD, CXL Consortium, PLDA
eldakka - Friday, July 26, 2019 - link
"but I have yet to see an open-ended PCIe slot on any motherboard (desktop or server; Supermicro, Tyan, Gigabyte, Asus, Asrock, or MSI). Maybe they exist, maybe they're part of the PCIe spec, but they aren't commonplace by any definition of the word."
I just went to Asrock's motherboards page (first one that crossed my mind), and on the landing page I can see 4 motherboards alone that have open-ended (x1 by the looks of things) connectors:
eldakka - Friday, July 26, 2019 - link
Umm, ever heard of open ended slots? Obviously not.
"Some slots use open-ended sockets to permit physically longer cards and negotiate the best available electrical and logical connection.
The number of lanes actually connected to a slot may also be fewer than the number supported by the physical slot size. An example is a ×16 slot that runs at ×4, which will accept any ×1, ×2, ×4, ×8 or ×16 card, but provides only four lanes. Its specification may read as "×16 (×4 mode)", while "×size @ ×speed" notation ("×16 @ ×4") is also common. The advantage is that such slots can accommodate a larger range of PCI Express cards without requiring motherboard hardware to support the full transfer rate."
Santoval - Saturday, July 20, 2019 - link
"...but far as I understand 4.0 or 5.0 does not run on 3.0".
They actually do. They also run on PCIe 2.0 and even the original PCIe 1.0(a). Up to PCIe 5.0, backward and forward compatibility is relatively easy, because fundamentally all versions employ the same signaling scheme: NRZ (Non-Return-to-Zero).
From PCIe 6.0 onward, NRZ is canned and PAM4 (Pulse Amplitude Modulation with 4 signal levels) will be adopted instead. So, while PCIe 6.0 will still be backward/forward compatible with all previous versions, that will be trickier to achieve, and extra work for an NRZ mode of operation will certainly be required.
A similar situation, but at a higher level, occurred when PCIe 3.0 switched to 128b/130b encoding. To remain compatible with PCIe 2.0 and PCIe 1.0(a) slots and devices, PCIe 3.0 devices and slots also needed an 8b/10b encoding mode. PCIe up to at least 6.0 will still employ 128b/130b encoding, so at least that will not have to change as well (with its ~1.5% overhead, no one even thinks of changing it again; it makes no sense).
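The overhead figures behind that comparison are simple arithmetic: 8b/10b encoding spends 2 of every 10 line bits on framing, while 128b/130b spends 2 of every 130. A quick illustrative sketch:

```python
def encoding_overhead(payload_bits: int, total_bits: int) -> float:
    """Fraction of the raw line rate lost to encoding framing."""
    return 1 - payload_bits / total_bits

# PCIe 1.x/2.x use 8b/10b; PCIe 3.0 and later use 128b/130b.
print(f"8b/10b:    {encoding_overhead(8, 10):.1%}")     # 20.0%
print(f"128b/130b: {encoding_overhead(128, 130):.1%}")  # 1.5%
```

Moving from 20% to ~1.5% overhead is how PCIe 3.0 nearly doubled usable bandwidth while only raising the line rate from 5 GT/s to 8 GT/s.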
azfacea - Sunday, July 21, 2019 - link
"Intel doing 5.0"
you just showed up to say this, didn't you. at least try creating some alt accounts, it's too obvious dude.
also isn't it Sunday in Israel now? no respect for the Sabbath?? PepeThink
alufan - Monday, July 22, 2019 - link
since when does Intel have PCIe 5? They are working on it, as is AMD; the difference is that AMD went with 4.0 for now, as it can currently be used with all the Ryzen 3000 CPUs. I imagine you will see 5.0 on the next-gen chips at 5nm coming in 2021, and Intel will probably have something similar with I don't know what chip around then as well. PCIe is backwards compatible irrespective of generation; it just defaults to the slot or path speed.
What may not be easy to swallow for all of us is the increase in board costs. Look at the uplift in costs for the current 4.0 chipset due to the big gains in power and transmission environment requirements; I can't see 5.0 being any better in this respect.
Qasar - Tuesday, July 23, 2019 - link
alufan, they don't; HStewart is saying Intel will skip PCIe 4 and go straight to 5. There "could" be other reasons why the X570 boards cost so much more than the X470; maybe the boards themselves cost more to make. I hear the MSI Godlike board is 12 layers.
eastcoast_pete - Friday, July 19, 2019 - link
Agree that this is good news! I also have this question: am I correct that full-speed PCIe 5 at 32 GT/s is essentially the speed of current dual-channel memory-to-CPU buses (with DDR4 RAM)? Wow.
nevcairiel - Friday, July 19, 2019 - link
Dual-channel memory read speed is in the area of 40 GB/s or so for "average" memory sticks.
PCIe 5 with 16 lanes would be ~63 GB/s in one direction (64 minus overhead).
I wonder if mainstream will eventually move to more memory channels, but probably not.
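For comparison, peak theoretical DRAM bandwidth follows from the transfer rate, the 64-bit channel width, and the channel count. A quick sketch of that estimate, using DDR4-3200 as an illustrative speed grade (real-world read speeds land below this peak, which is why the comment cites ~40 GB/s):

```python
def ddr_bandwidth_gbs(mt_per_s: int, channels: int = 2,
                      bus_width_bits: int = 64) -> float:
    """Peak theoretical DRAM bandwidth in GB/s.

    Each channel moves bus_width_bits per transfer at
    mt_per_s million transfers per second.
    """
    return mt_per_s * channels * bus_width_bits / 8 / 1000

# Dual-channel DDR4-3200: 2 channels x 64 bits x 3200 MT/s
print(ddr_bandwidth_gbs(3200))  # 51.2 GB/s peak
```

So a dual-channel DDR4-3200 setup tops out at 51.2 GB/s in theory, comfortably below the ~63 GB/s each way of a PCIe 5.0 x16 link.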
willis936 - Friday, July 19, 2019 - link
Main memory can't afford the latency.
mode_13h - Saturday, July 20, 2019 - link
The latency of what - more channels?