The Fiji GPU: Go Big or Go Home

Now that we’ve had a chance to take a look at the architecture backing Fiji, let’s talk about the Fiji GPU itself.

Fiji’s inclusion of High Bandwidth Memory (HBM) technology complicates the picture somewhat when talking about GPUs. Whereas past GPUs were defined by the GPU die itself and the organic substrate package it sits on, the inclusion of HBM requires a third layer, the silicon interposer. The interposer sits between the package and the GPU, serving as the layer that connects the on-package HBM memory stacks to the GPU. Essentially a very large chip without any expensive logic on it, the silicon interposer allows for finer, denser signal routing than organic packaging is capable of, making the ultra-wide 4096-bit HBM bus viable for the first time.

We’ll get to HBM in detail in a bit, but it’s important to call out the impact of HBM and the interposer early, since they have a distinct impact on how Fiji was designed and what its capabilities are.

As for Fiji itself, it is unlike any GPU AMD has built before, and not only due to the use of HBM. More than anything else, it’s simply huge: 596mm², to be precise. As we mentioned in our introduction, AMD has traditionally shied away from big chips, even after the “small die” era ended, and for good reason. Big chips are expensive to develop, expensive to produce, take longer to bring to market, and yield worse than small chips (as was especially the case early in the life of the 40nm process). Altogether they’re riskier than smaller chips, and while there are times when they are necessary, AMD had never reached that point until now.

The end result is that for the first time since the unified shader era began, AMD has gone toe-to-toe with NVIDIA on die size. Fiji’s 596mm² die is just 5mm² (<1%) smaller than NVIDIA’s GM200, and more notably, it hits TSMC’s 28nm reticle limit. TSMC can’t build chips any bigger than this; Fiji is as big a chip as AMD can order.

AMD Big GPUs
GPU                Die Size   Native FP64 Rate
Fiji (GCN 1.2)     596mm²     1/16
Hawaii (GCN 1.1)   438mm²     1/2
Tahiti (GCN 1.0)   352mm²     1/4
Cayman (VLIW4)     389mm²     1/4
Cypress (VLIW5)    334mm²     1/5
RV790 (VLIW5)      282mm²     N/A

Looking at Fiji relative to AMD’s other big GPUs, it becomes very clear very quickly just how significant this change is for AMD. When Hawaii was released in 2013 at 438mm², it was AMD’s biggest GPU to date. And yet Fiji dwarfs it, coming in 158mm² (36%) larger. The fact that Fiji arrives in the latter half of the 28nm process’s lifetime means that such a large GPU is not nearly as risky now as it would have been in 2011/2012 (NVIDIA surely took some licks internally on GK110), but still, nothing else we can show you today sells the significance of Fiji to AMD quite like its die size.

And the fun doesn’t stop there. Along with producing the biggest die they could, AMD has also more or less gone the direction of NVIDIA and Maxwell in the case of Fiji, building what is unambiguously the most gaming/FP32-centric GPU the company could build. With GCN supporting power-of-two FP64 rates between 1/2 and 1/16, AMD has gone for the bare minimum in FP64 performance that their architecture allows, leading to a 1/16 FP64 rate on Fiji. This is a significant departure from Hawaii, which natively supported a 1/2 rate and offered a handicapped 1/8 rate on consumer parts. Fiji will not be an FP64 powerhouse – its 4GB of VRAM is already perhaps too large a handicap for the HPC market – so instead we get AMD’s best FP32 GPU going against NVIDIA’s best FP32 GPU.
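
To put that rate difference in perspective, here’s a quick back-of-the-envelope sketch in Python. This is our own illustration rather than anything from AMD; the shader counts and clockspeeds are simply the publicly listed Fury X (4096 stream processors @ 1050MHz) and 290X (2816 @ 1000MHz) figures:

```python
# Peak throughput for a GCN part: shaders * clock * 2 ops/clock (FMA),
# with FP64 running at some power-of-two fraction of the FP32 rate.
def peak_tflops(shaders, clock_ghz, fp64_rate=1.0):
    return shaders * clock_ghz * 2 * fp64_rate / 1000

print(peak_tflops(4096, 1.05))        # Fiji FP32:   ~8.6 TFLOPS
print(peak_tflops(4096, 1.05, 1/16))  # Fiji FP64:   ~0.54 TFLOPS
print(peak_tflops(2816, 1.00))        # Hawaii FP32: ~5.6 TFLOPS
print(peak_tflops(2816, 1.00, 1/8))   # Hawaii FP64 at the consumer 1/8 rate: ~0.70 TFLOPS
```

Note that even at its handicapped consumer 1/8 rate, Hawaii’s peak FP64 throughput still exceeds Fiji’s; that is how deliberately FP32-centric Fiji is.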

AMD’s final ace up their sleeve on die size is HBM. Along with HBM’s bandwidth and power benefits, HBM is also much simpler to implement, requiring less GPU die space for PHYs than GDDR5 does. This is in part because HBM stacks have their own logic layer, distributing some of the logic onto each stack, and in part because the signaling logic that remains doesn’t have to be nearly as complex, since the frequencies are so much lower. 4096 bits of HBM PHYs still take up a fair bit of space – though AMD won’t tell us how much – but it’s notably less than the space AMD was losing to Hawaii’s GDDR5 memory controllers.
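
The wide-and-slow trade-off that makes those simpler PHYs possible is easy to quantify. A small sketch, using the published Fury X (4096-bit HBM @ 1Gbps/pin) and 290X (512-bit GDDR5 @ 5Gbps/pin) memory configurations:

```python
# Peak memory bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8 bits-per-byte
def bandwidth_gb_s(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

print(bandwidth_gb_s(4096, 1.0))  # Fiji HBM:     512.0 GB/s
print(bandwidth_gb_s(512, 5.0))   # Hawaii GDDR5: 320.0 GB/s
```

An 8x-wider bus running at one-fifth the per-pin data rate nets Fiji 60% more bandwidth, and that low signaling frequency is exactly why the remaining PHY logic can be so much simpler.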

The end result is that not only has AMD built their biggest GPU ever, but they have done virtually everything they can to maximize the amount of die space they can allocate to FP32 and rendering resources. Simply put, AMD has never reached so high, nor aimed for parity with NVIDIA in this manner.

Ultimately this puts Fiji’s transistor count at 8.9 billion, even more than the 8 billion transistors found in NVIDIA’s GM200 and, as expected, significantly more than Hawaii’s 6.2 billion. Interestingly enough, on a relative basis this is almost exactly the same increase we saw with Hawaii: Fiji packs in 43.5% more transistors than Hawaii, and Hawaii packed in 43.9% more transistors than Tahiti. So going by transistors alone, Fiji is very much to Hawaii what Hawaii was to Tahiti.
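
Those ratios are simple to verify. Tahiti’s count isn’t quoted above; 4.31 billion is its published figure:

```python
# Transistor counts in billions for AMD's last three big GPUs
tahiti, hawaii, fiji = 4.31, 6.2, 8.9

print(f"Hawaii over Tahiti: {hawaii / tahiti - 1:.1%}")  # ~43.9%
print(f"Fiji over Hawaii:   {fiji / hawaii - 1:.1%}")    # ~43.5%
```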

Finally, as large as the Fiji GPU is, the silicon interposer it sits on is even larger: it measures 1011mm², nearly twice the size of Fiji. Since Fiji and its HBM stacks need to fit on top of it, the interposer must be very large to do its job, and in the process it pushes limits of its own. The interposer die is believed to exceed the reticle limit of the 65nm process AMD is using to have it built, and as a result the interposer is carefully constructed so that only the areas that need connectivity receive metal layers. This allows AMD to put down such a large interposer without actually needing a fab capable of an equally large reticle limit.

What’s interesting from a design perspective is that the interposer and everything on it is essentially the heart and soul of the GPU. There is plenty of power regulation circuitry on the organic package, and even more on the board itself, but all of Fiji’s logic and memory is located within the interposer’s 1011mm² floorplan. By mobile standards it’s very nearly an SoC in its own right; it needs little more than external power and I/O to operate.

Comments

  • D. Lister - Friday, July 3, 2015 - link

    Ryan, to us, the readers, AT is just one of several sources of information, and to us, the result of your review sample is just one of the results of many other review samples. As a journalist, one would expect you to have done at least some investigation regarding the "overclockers' dream" claim, posted your numbers and left the conclusion making to those whose own money is actually going to be spent on this product - us, the customers.

    I totally understand if you couldn't because of ill health, but, with all due respect, saying that you couldn't review a review sample because there weren't enough review samples to find some scientifically accurate mean performance number, at least to me appears as a reason with less than stellar validity.
  • silverblue - Friday, July 3, 2015 - link

    I can understand some of the criticisms posted here, but let's remember that this is a free site. Additionally, I doubt there were many Fury X samples sent out. KitGuru certainly didn't get one (*titter*). Finally, we've already established that Fury X has practically sold out everywhere, so AT would have needed to purchase a Fury X AFTER release and BEFORE they went out of stock in order to satisfy the questions about sample quality and pump whine.
  • nagi603 - Thursday, July 2, 2015 - link

    "if you absolutely must have the lowest load noise possible from a reference card, the R9 Fury X should easily impress you."
    Or, you know, mod the hell out of your card. I have a 290X in a very quiet room, and can't hear it, thanks to the Accelero Xtreme IV I bolted onto it. It does look monstrously big, but still, not even the Fury X can touch that lack of system noise.
  • looncraz - Thursday, July 2, 2015 - link

    The 5870 was the fastest GPU when it was released, and the 290X was the fastest GPU when it was released. This article makes it sound like AMD has been unable to keep up at all, but they've been trading blows. nVidia simply has had the means to counter effectively.

    The 290X beat nVidia's $1,000 Titan. nVidia had to quickly respond with the 780 Ti, which undercut their top dog. Likewise, nVidia had to release the 980 Ti at a seriously low price in order to compete with the, then unreleased, Fury X, and had to give that GPU 95% of the performance of their $1,000 Titan X.

    nVidia is barely keeping ahead of AMD in performance, but was well ahead in efficiency. AMD just about brought that to parity with THEIR HBM tech, which nVidia will also be using.

    Oh, anyone know the last time nVidia actually innovated with their GPUs? GSync doesn't count, that is an ages-old idea they simply had enough clout to see implemented, and PhysX doesn't count, since they simply purchased the company who created it.
  • tviceman - Thursday, July 2, 2015 - link

    The 5870 was the fastest for 7 months, but it wasn't because it beat Nvidia's competition against it. Nvidia's competition against it was many months late, and when it finally came out was clearly faster. The 7970 was the fastest for 10 weeks, then was either slower or traded blows with the GTX 680. The 290X traded blows with Titan but was not clearly faster, and was then eclipsed by the 780 Ti 5 days later.

    All in all, since GTX 480 came out in March of 2010, Nvidia has solidly held the single GPU performance crown. Sometimes by a small margin (GTX 680 launch vs. HD 7970), sometimes by a massive margin (GTX Titan vs. 7970Ghz), but besides a 10 week stint, Nvidia has been in the lead for over the past 5 years.
  • kn00tcn - Thursday, July 2, 2015 - link

    check reviews with newer drivers, 7970 has increased more than 680, sometimes similar with 290x vs 780/780ti depending on game (it's a mess to dig up info, some of it is coming from kepler complaints)

    speaking of drivers, 390x using a different set than 290x in reviews, that sure makes launch reviews pointless...
  • chizow - Thursday, July 2, 2015 - link

    I see AMD fanboys/proponents say this often, so I'll ask you.

    Is performance at the time you purchase and in the near future more important to you? Or are you buying for unrealized potential that may only be unlocked when you are ready to upgrade those cards again?

    But I guess that is a fundamental difference and one of the main reasons I prefer Nvidia. I'd much rather buy something knowing I'm going to get Day 1 drivers, timely updates, feature support as advertised when I buy, over the constant promise and long delays between significant updates and feature gaps.
  • silverblue - Friday, July 3, 2015 - link

    Good point, however NVIDIA has made large gains in drivers in the past, so there is definitely performance left on the table for them as well. I think the issue here is that NVIDIA has seemed - to the casual observer - to be less interested in delivering performance improvements for anything prior to Maxwell, perhaps as a method of pushing people to buy their new products. Of course, this wouldn't cause you any issues considering you're already on Maxwell 2.0, but what about the guy who bought a 680 which hasn't aged so well? Not everybody can afford a new card every generation, let alone two top end cards.
  • chizow - Sunday, July 5, 2015 - link

    Again, it fundamentally speaks to Nvidia designing hardware and using their transistor budget to meet the demands of games that will be relevant during the course of that card's useful life.

    Meanwhile, AMD may focus on archs that provide greater longevity, but really, who cares if it was always running a deficit for most of its useful life just to catch up and take the lead when you're running settings in new games that are borderline unplayable to begin with?

    Some examples for GCN vs. Kepler would be AMD's focus on compute, where they always had a lead over Nvidia in games like Dirt that started using Global Illumination, while Kepler focused on geometry and tessellation, which allowed it to beat AMD in most relevant games of the DX9 to DX11 transition era.

    Now, Nvidia presses its advantage as Maxwell's compute has caught up with and exceeded GCN's, while maintaining their advantage in geometry and tessellation, so in these games we see GCN and Kepler both fall behind. That's just called progress. The guy who thinks his 680 should still keep pace with a new-gen architecture meant to take advantage of features in new-gen games probably just needs to look back at history to understand: new-gen archs are always going to run new-gen games better than older archs.
  • chizow - Thursday, July 2, 2015 - link

    +1, exactly, except for a few momentary anomalies, Nvidia has held the single GPU performance crown and won every generation since G80. AMD did their best with the small die strategy for as long as they could, but they quickly learned they'd never get there against Nvidia's monster 500+mm² chips, so they went big die as well. Fiji was a good effort, but as we can see, it fell short and may be the last grand effort we see from AMD.
