The AMD Ryzen Threadripper 3960X and 3970X Review: 24 and 32 Cores on 7nm
by Dr. Ian Cutress, Andrei Frumusanu & Gavin Bonshor on November 25, 2019 9:05 AM EST

Zen 2 Platform for HEDT - Improvements over Last-Gen
Section by Andrei Frumusanu
The platform architecture of the new Threadripper 3000 series is fundamentally different, and a massive departure from what we’ve seen on the first and second generation Threadripper products. Previously, AMD made use of the same monolithic die across its Zen and Zen+ Ryzen, Threadripper, and EPYC products. The approach was an economically smart one for AMD, as the company only had to design a single silicon die to serve all three product lines; however, it came with some fundamental technical disadvantages in power efficiency, and forced some performance compromises.
The biggest disadvantage of the Threadripper 2000 series was the platform’s weakness in regards to its memory architecture, an issue that was particularly prevalent in the 32-core Threadripper 2990WX. As explained in our review of the TR2 products last year, the main problem with that SKU was that in order to achieve a 32-core product, AMD had to make use of four “Zeppelin” dies. Unlike the server-oriented SP3 socket, however, Threadripper products come on the TR4 platform. While the two sockets are physically identical, they’re electrically incompatible with each other. In practice, the biggest difference between the two platforms is that Threadripper products support 4-channel memory setups, while the EPYC variants support the full 8-channel memory configuration possible.
The main conundrum for a product such as the 2990WX, which had to make use of four dies each integrating two memory controller channels, was how to split up the memory controller setup between the dies, and which four active controllers to end up using. Instead of enabling one memory controller per die, AMD chose to have two dies with both of their memory controllers active, while the other two dies wouldn’t have any memory controllers enabled at all. The issue here is that the CPU cores located on the controller-less dies could only access memory by hopping through the Infinity Fabric to the adjacent dies which did have memory controllers, incurring quite a large memory latency and bandwidth penalty. This penalty was large enough that in situations where applications weren’t properly NUMA-aware and scaled across all cores, the 2990WX sometimes ended up lagging behind the 16-core 2950X in performance.
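For readers less familiar with the term, “NUMA-aware” software explicitly allocates memory on the node that will actually touch it. Below is a minimal illustrative sketch using Linux’s libnuma (not part of our test suite), contrasting a node-local allocation with a placement-oblivious one; on a 2990WX-style topology the latter can leave a thread on a controller-less die paying the Infinity Fabric hop on every access.

```c
#define _GNU_SOURCE
#include <numa.h>     /* libnuma; link with -lnuma */
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }

    /* Find out which NUMA node this thread is currently running on. */
    int cpu  = sched_getcpu();
    int node = numa_node_of_cpu(cpu);
    printf("running on cpu %d, NUMA node %d\n", cpu, node);

    size_t len = 64UL * 1024 * 1024;

    /* NUMA-aware: back the buffer with pages on our local node, so the
     * accesses below stay on the local memory controllers. */
    char *local = numa_alloc_onnode(len, node);

    /* NUMA-oblivious: let the kernel place the pages per its default
     * policy; on a die without local memory controllers, every access
     * may end up hopping across the fabric to a remote die. */
    char *anywhere = numa_alloc(len);

    if (!local || !anywhere) return 1;
    memset(local, 1, len);
    memset(anywhere, 1, len);

    numa_free(local, len);
    numa_free(anywhere, len);
    return 0;
}
```

Schedulers and allocators generally try to keep memory local when they can, but once an application scales its threads across every die, unaware placement inevitably lands some hot data behind the remote hop, which is exactly the scenario where the 2990WX fell behind the 2950X.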
Chiplet Architecture To The Rescue
Of course, AMD was aware of this drawback, and wasn’t planning to stay with this compromise forever. The new Ryzen 3000 series earlier this summer introduced the chiplet architecture for the first time, with some quite astounding success. The main difference here is that AMD is decoupling the actual CPU cores and clusters from the rest of the traditional SoC. The CPU chiplet contains nothing more than the CPU cores themselves, the CPU clusters’ L3 caches, and the I/O interface which communicates with the rest of the “traditional” system, which is now located on a separate silicon die.
AMD Ryzen 3000 Consumer IOD - Credit Fritzchens Fritz
For the Ryzen 3000 products, this I/O die is seemingly quite familiar in design to what we saw in the first- and second-generation Zen architecture products. Here we find the various I/O IP blocks taking care of connectivity such as USB, Ethernet, and SATA, alongside critical components such as the PCIe controllers and of course the memory controllers. In general, what’s found on the Ryzen 3000 IOD isn’t all too different in functionality from what we previously saw on the monolithic Zen dies of past years, except of course for the CPU cores themselves.
AMD EPYC2 / Threadripper 3000 sIOD - Credit Fritzchens Fritz
As we move on to the new Threadripper 3000 products (and the new EPYC 2 processors), however, we see AMD’s main chiplet design advantage. Although the new Threadripper and EPYC products use the very same 7nm CPU chiplet dies (CCDs), they use a different IO die, which AMD seems to call the sIOD (server IO die?).
What’s interesting about the sIOD is that it’s not so much a “monolithic” design as it is four consumer IO dies put together on one chip. In the above die shots (credit to Fritzchens Fritz), we can see that AMD is employing an identical physical design for large parts of the chip’s IP blocks, with the main “central” block cluster going as far as being essentially identical. Of course, the layout of the various surrounding blocks is quite different. AMD here is essentially reusing design resources across its product ranges.
While the chip isn’t completely mirrored (there are still distinct, unique IP blocks on each quarter of the die), it is in fact correct to say that it’s divided into quarters. These “quadrants” are physically and logically separate from each other, which matters most in regards to the memory layout. Logically, the layout is actually quite similar to what we saw on the previous generation Threadripper and EPYC chips in terms of memory controller and CPU cluster distribution. Each quadrant still has its own two local memory controller channels, and the CPU CCXs connected to that quadrant have the best latency and bandwidth to memory. CPUs accessing the memory controllers of a different quadrant still have to do so via a hop over the Infinity Fabric; the biggest difference this generation, however, is that instead of this hop crossing between different dies on the MCM package, it all remains on the same silicon die.
For Rome, AMD explained that the latency penalty for accessing remote memory controllers versus the local quadrant’s is roughly +6-8ns and +8-10ns for the adjacent quadrants (because of the rectangular die, quadrants adjacent on the long side see a larger latency than quadrants adjacent on the short side), and roughly +20-25ns for the diagonally opposing quadrant. While for EPYC, AMD provides options to change the NUMA configuration of the system to optimize for either latency (each quadrant is its own NUMA domain) or bandwidth (one big UMA domain), the Threadripper systems simply appear as one UMA domain, with the memory controllers of the quadrants being interleaved in the virtual memory space.
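To make “interleaved in the virtual memory space” a bit more concrete, here’s a toy model of such a mapping. The 256-byte block size and the plain modulo rotation are purely illustrative assumptions on my part; AMD hasn’t publicly detailed the sIOD’s exact interleave granularity or hashing function.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy model of UMA interleaving across the sIOD's quadrants. The
 * 256-byte block size and simple modulo rotation are assumptions for
 * illustration, NOT AMD's documented scheme; the point is merely that
 * consecutive blocks of the address space rotate across the quadrants'
 * memory controllers, so no quadrant owns a contiguous region. */
#define INTERLEAVE_SHIFT 8   /* 2^8 = 256-byte blocks (assumed) */
#define NUM_QUADRANTS    4

static unsigned quadrant_for(uint64_t addr) {
    return (unsigned)((addr >> INTERLEAVE_SHIFT) % NUM_QUADRANTS);
}

int main(void) {
    /* Walk eight consecutive blocks: they map to quadrants 0,1,2,3,0,1,2,3. */
    for (uint64_t addr = 0; addr < 8 * 256; addr += 256)
        printf("block at 0x%04llx -> quadrant %u\n",
               (unsigned long long)addr, quadrant_for(addr));
    return 0;
}
```

The upshot is that every core’s traffic is spread across all populated controllers, so the latency a core observes is a blend of the local, adjacent, and (if populated) diagonal quadrant figures.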
The interesting question here, of course, is how this UMA domain is set up for the Threadripper 3960X and 3970X. The SKUs come with 4 chiplets each, with the 3960X employing 3 cores per CCX, totaling 24 cores, and the 3970X employing 4 cores per CCX, totaling 32 cores. However, what we don’t know is how these chiplets are divided and populated across the sIOD’s quadrants. In theory, one could have one chiplet and one memory controller per quadrant, or one could have just two fully populated quadrants with the other two quadrants disabled. Given that we have numbers on a fully populated EPYC 7742 to compare against, and that the diagonally opposing quadrant latency penalty is quite big, we should be able to infer the implementation from the latency results.
Looking at the latency results, there are a few comparisons to make. In regards to the L1, L2 and L3 performance, I refer to our original Zen 2 analysis in our Ryzen 3000 review article. The numbers here don’t change, which is natural as we’re talking about the very same CPU chiplet across the different product lines.
Going out of the CCD, the DRAM latency is the most interesting difference, and one we need to have a closer look at. Comparing the new Threadripper 3970X to the 2950X, we see a latency degradation of 16.4ns, with the structural DRAM latency rising from 62.2ns to 78.6ns. For this comparison we’re using the very same DRAM sticks with identical timings across the Ryzen and both Threadripper platforms, so any differences here are solely due to the architectural differences of the platforms.
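As a quick methodology aside: structural DRAM latency figures like these are typically derived from a dependent pointer-chase, where each load’s address comes from the result of the previous load, so neither the prefetchers nor the out-of-order engine can overlap the accesses. The following is a minimal sketch of the technique, not the actual harness behind our charts; a production tool would also pin the thread and use huge pages to keep TLB misses out of the measurement.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Illustrative pointer-chase latency sketch. Each load depends on the
 * previous one, so the time per step approaches round-trip memory
 * latency once the working set far exceeds the L3 cache. */
#define N     (256UL * 1024 * 1024 / sizeof(size_t)) /* 256 MB working set */
#define STEPS (50UL * 1000 * 1000)

int main(void) {
    size_t *chain = malloc(N * sizeof(size_t));
    if (!chain) return 1;

    /* Sattolo's algorithm: shuffle the indices into one big random
     * cycle, which defeats stride-based hardware prefetchers. */
    for (size_t i = 0; i < N; i++) chain[i] = i;
    srand(42);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;  /* j < i keeps it a single cycle */
        size_t tmp = chain[i]; chain[i] = chain[j]; chain[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    volatile size_t idx = 0;
    for (unsigned long s = 0; s < STEPS; s++)
        idx = chain[idx];               /* each load waits on the last */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (double)(t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per dependent load\n", ns / (double)STEPS);
    free(chain);
    return 0;
}
```

On a UMA-interleaved chip like the 3970X, a run of this kind averages over whichever quadrants’ controllers the chain happens to touch, which is precisely what the structural figures above capture.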
This degradation is actually to be expected. The third generation Threadripper regresses in two aspects compared to its predecessor: First of all, the chiplet architecture incurs a latency penalty, as the CPU cores are now separated from the memory controllers onto a different silicon die. Secondly, in the first and second generation Threadripper products, each CPU die had access to its own memory controllers by default, and it wasn’t possible to use a UMA setup. The third-gen Threadripper comes with a UMA setup by default, and the fact that the IOD interleaves memory accesses across the quadrants’ memory controllers adds yet another latency penalty.
Looking at the differences between the EPYC 7742 running in NPS4 mode and the new 3970X, we see that the new TR3000 platform has a definitive latency advantage of almost 25ns, albeit we’re no longer running apples-to-apples here in regards to the DRAM.
Finally, the most interesting comparison uses the very same DRAM and timings between a Ryzen 3000 processor and the new 3970X. Using a 3700X we had at hand, the latency penalty for the new TR chip is “only” 9.2ns, rising from 69.4ns to 78.6ns. I might sound a bit optimistic here, but based on the Rome numbers from earlier this summer I had expected quite a bit worse from the new Threadripper 3000 series, so I consider this result to be actually quite good. While we don’t have definitive confirmation, it does look like the new 24 and 32-core Threadripper 3000 SKUs are using only two adjacent quadrants of the sIOD: interleaving across all four quadrants would fold the ~20-25ns diagonal-quadrant penalty into the average, which should have produced a noticeably larger gap than the 9.2ns we measured.
Of course, the structural latency degradations here don’t necessarily translate into performance degradations. As we saw on the Ryzen 3000 products, AMD’s newly doubled L3 cache as well as its improved prefetchers have managed to more than compensate for the worse structural latency, actually increasing the effective memory performance of the new Zen 2 chips.
245 Comments
Silma - Monday, November 25, 2019 - link
Not. Congrats to AMD on great processors, but the consumer market for $1.4k-$2k processors is super tiny.
Despite the current generation of AMD processors for desktops being arguably superior to those of Intel, in the financials, Intel still destroys AMD and it is indeed a bloodbath.
In my opinion, AMD would hurt Intel much more, and in the end earn more money, if it priced its offerings lower, for consumer processors as well as datacenter processors.
sgeocla - Monday, November 25, 2019 - link
Not that small.
>> Workstations are a growing market segment and have been for quite some time. They run 24-7, are extremely reliable, and have features and specifications you can’t find in a PC. Therefore, workstations can command high price points because of the high expectations users have for them. Our research shows the market size for workstations is approximately 5.3 million units, about 2% of the total PC market, and brings in over $10 billion a year, almost 2.5% of the PC market total, which indicates the average selling price (ASP) is higher than the ASP of a PC.
https://gfxspeak.com/2019/05/13/stands-for-worksta...
melgross - Monday, November 25, 2019 - link
Very small. The more cores, the smaller the market. What are so many cores good for? Video editing, huge databases, and financial transactions, the last of which these chips are not likely to be used for anyway. For most everyone else, 8 cores is still the sweet spot.
ShowsOn - Monday, November 25, 2019 - link
The review literally discusses this point:
"...Intel has reported that the workstation market has a potential $10B a year addressable market, so it is still worth pursuing. While I have no direct quotes or data, I remember being told for several generations that Intel’s best-selling HEDT processors were always the highest core count, highest performance parts that money could buy. These users wanted off-the-shelf hardware, and were willing to pay for it – they just weren’t willing to pay for enterprise features... Now that we can get better performance at $1999 with 32 cores, assuming AMD can keep stock of the hardware, it stands to reason that this market will pick up interest again."
twtech - Monday, November 25, 2019 - link
They need to partner with a workstation vendor such as Dell, HP, etc., or pick/create a company to partner with. Big businesses like to have a reliable single vendor they can deal with for all their server and workstation hardware, including support.

eek2121 - Monday, November 25, 2019 - link
They really need Dell.xrror - Tuesday, November 26, 2019 - link
Sadly, Dell always seems anti-AMD, or I guess more accurately, they absolutely will not do anything that could jeopardize receiving Intel’s contra-revenue.

eek2121 - Monday, November 25, 2019 - link
Pretty much anybody that does graphics, video, etc. has a need for these CPUs. A large portion of professional YouTubers use Blender or similar applications (which scale perfectly with cores) to render things like 3D animations and the like. On the contrary, the market for these types of CPUs is larger than the gaming market. AMD’s biggest obstacle here is getting prebuilt OEM systems built with sufficient cooling; not many folks in that audience are going to build their own PC.

melgross - Monday, November 25, 2019 - link
Nope. Graphics apps don’t use all these cores. I run that stuff. Neither do apps like Photoshop. Even when they sometimes use most cores, the usage per core is down around 20%, in spurts. Fewer cores simply see higher per-core usage. As I said, video rendering is about the only thing that most users will find using a lot of cores. Even multitasking doesn’t use 16 or more cores efficiently.
It’s also interesting that years ago, the argument was about too much power. 150-watt chips were considered to be on the high side, and not in a good way. Now these chips are moving to 300 watts, and nobody is saying anything.
Jimbo Jones - Tuesday, November 26, 2019 - link
Video rendering
3D rendering
3D animation where physics calculation is need (cloth, particles, etc)
Particle simulations for 3D animation / work / science
Game creation / compiling / baking
Programming (compiling)
VFX -- after effects, etc
Gaming while rendering out any of the above at the same time
Doing more than one thing at a time (Intel users close all their apps to game, lol)
Gaming while streaming
Youtube content creation (requires video rendering and encoding)
Digital audio workstations
To name a couple ...
I actually read someone on another comment feed defending Intel by saying "CPU's aren't even important these days anyway!" -- the desperation of fanbois to grasp at straws to defend the indefensible is hilarious ... right Mel?