The Fiji GPU: Go Big or Go Home

Now that we’ve had a chance to take a look at the architecture backing Fiji, let’s talk about the Fiji GPU itself.

Fiji’s inclusion of High Bandwidth Memory (HBM) technology complicates the picture somewhat when talking about GPUs. Whereas past GPUs were defined by the GPU die itself and then the organic substrate package it sits on, the inclusion of HBM requires a third layer, the silicon interposer. The job of the interposer is to sit between the package and the GPU, serving as the layer that connects the on-package HBM memory stacks with the GPU. Essentially a very large chip without any expensive logic on it, the silicon interposer allows for finer, denser signal routing than organic packaging is capable of, making the ultra-wide 4096-bit HBM bus viable for the first time.

We’ll get to HBM in detail in a bit, but it’s important to call out the impact of HBM and the interposer early, since they have a distinct impact on how Fiji was designed and what its capabilities are.

As for Fiji itself, it is unlike any GPU AMD has built before, and not only due to the use of HBM. More than anything else, it’s simply huge: 596mm2, to be precise. As we mentioned in our introduction, AMD has traditionally shied away from big chips, even after the “small die” era ended, and for good reason. Big chips are expensive to develop, expensive to produce, take longer to bring to market, and yield worse than small chips (this being especially the case early in a process node’s life). Altogether they’re riskier than smaller chips, and while there are times when they are necessary, AMD had never reached that point until now.

The end result is that for the first time since the unified shader era began, AMD has gone toe-to-toe with NVIDIA on die size. Fiji’s 596mm2 die size is just 5mm2 (<1%) smaller than NVIDIA’s GM200, and more notably, it hits TSMC’s 28nm reticle limit. TSMC can’t build chips any bigger than this; Fiji is as big a chip as AMD can order.

AMD Big GPUs
GPU                 Die Size   Native FP64 Rate
Fiji (GCN 1.2)      596mm2     1/16
Hawaii (GCN 1.1)    438mm2     1/2
Tahiti (GCN 1.0)    352mm2     1/4
Cayman (VLIW4)      389mm2     1/4
Cypress (VLIW5)     334mm2     1/5
RV790 (VLIW5)       282mm2     N/A

Looking at Fiji relative to AMD’s other big GPUs, it becomes very clear very quickly just how significant this change is for AMD. When Hawaii was released in 2013 at 438mm2, it was already AMD’s biggest GPU ever up to that point. And yet Fiji dwarfs it, coming in 158mm2 (36%) larger. The fact that Fiji arrives in the latter half of the 28nm process’s lifetime means that such a large GPU is not nearly as risky now as it would have been in 2011/2012 (NVIDIA surely took some licks internally on GK110), but still, nothing else we can show you today sells the significance of Fiji to AMD as much as the die size does.

And the fun doesn’t stop there. Along with producing the biggest die they could, AMD has also more or less gone the direction of NVIDIA and Maxwell with Fiji, building what is unambiguously the most gaming/FP32-centric GPU the company could build. With GCN supporting power-of-two FP64 rates between 1/2 and 1/16, AMD has gone for the bare minimum in FP64 performance that their architecture allows, leading to a 1/16 FP64 rate on Fiji. This is a significant departure from Hawaii, which implemented native support for a 1/2 rate, and on consumer parts offered a handicapped 1/8 rate. Fiji will not be an FP64 powerhouse – its 4GB of VRAM is already perhaps too large a handicap for the HPC market – so instead we get AMD’s best FP32 GPU going against NVIDIA’s best FP32 GPU.

AMD’s final ace up their sleeve on die size is HBM. Along with HBM’s bandwidth and power benefits, HBM is also much simpler to implement, requiring less GPU space for PHYs than GDDR5 does. This is in part due to the fact that HBM stacks have their own logic layer, distributing some of the logic onto each stack, and furthermore a benefit of the fact that the signaling logic that remains doesn’t have to be nearly as complex since the frequencies are so much lower. The PHYs for a 4096-bit HBM bus still take up a fair bit of space – though AMD won’t tell us how much – but it’s notably less than the amount of space AMD was losing to Hawaii’s GDDR5 memory controllers.

The end result is that not only has AMD built their biggest GPU ever, but they have done virtually everything they can to maximize the amount of die space they get to allocate to FP32 and rendering resources. Simply put, AMD has never reached so high and aimed for parity with NVIDIA in this manner.

Ultimately this puts Fiji’s transistor count at 8.9 billion transistors, even more than the 8 billion transistors found in NVIDIA’s GM200, and, as expected, significantly more than Hawaii’s 6.2 billion. Interestingly enough, on a relative basis this is almost exactly the same increase we saw with Hawaii; Fiji packs in 43.5% more transistors than Hawaii, and Hawaii packed in 43.9% more transistors than Tahiti. So going by transistors alone, Fiji is very much to Hawaii what Hawaii was to Tahiti.
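The scaling comparison above is easy to verify with a few lines of Python. Note that the article itself does not state Tahiti’s transistor count; the 4.31 billion figure below is AMD’s published number for that chip, added here only to complete the check:

```python
# Sanity check of the relative transistor increases quoted above.
# Counts are in billions; Tahiti's 4.31B is AMD's published figure,
# not stated in the article text itself.
transistors = {"Tahiti": 4.31, "Hawaii": 6.2, "Fiji": 8.9}

fiji_vs_hawaii = transistors["Fiji"] / transistors["Hawaii"] - 1
hawaii_vs_tahiti = transistors["Hawaii"] / transistors["Tahiti"] - 1

print(f"Fiji vs. Hawaii:   +{fiji_vs_hawaii:.1%}")    # ~ +43.5%
print(f"Hawaii vs. Tahiti: +{hawaii_vs_tahiti:.1%}")  # ~ +43.9%
```

Both generational jumps land within half a percentage point of each other, which is the basis for the "Fiji is to Hawaii what Hawaii was to Tahiti" observation.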

Finally, as large as the Fiji GPU is, the silicon interposer it sits on is even larger. The interposer measures 1011mm2, nearly twice the size of Fiji. Since Fiji and its HBM stacks need to fit on top of it, the interposer must be very large to do its job, and in the process it pushes its own limits. The actual interposer die is believed to exceed the reticle limit of the 65nm process AMD is using to have it built, and as a result the interposer is carefully constructed so that only the areas that need connectivity receive metal layers. This allows AMD to put down such a large interposer without actually needing a fab capable of reaching such a large reticle limit.

What’s interesting from a design perspective is that the interposer and everything on it is essentially the heart and soul of the GPU. There is plenty of power regulation circuitry on the organic package and even more on the board itself, but within the 1011mm2 floorplan of the interposer, all of Fiji’s logic and memory is located. By mobile standards it’s very nearly an SoC in and of itself; it needs little more than external power and I/O to operate.

Comments

  • looncraz - Thursday, July 2, 2015 - link

    1. eDRAM takes up more space and more energy and is slower than HBM.
    2. HBM will make sense for GPUs/APUs, but not for use as system RAM.
    3. Yes, almost a guarantee. But how long that will take is anybody's guess.
    4. They don't "pay them" so to speak; they just impose contractual restrictions during the game development phase that prevent AMD from getting early enough and frequent enough snapshots of the game to optimize their drivers in anticipation of its release. This makes nVidia look better in those games. The next hurdle is that GameWorks is intentionally designed to play to nVidia's strengths over AMD and even nVidia's own older generation cards. Crysis 2's tessellation is the most blatant example.
  • Dribble - Monday, July 6, 2015 - link

    "Crysis 2's tessellation is the most blatant example"

    No it wasn't. What you see in the 2D wireframe mode is completely different to what the 3D mode has to draw as it doesn't do the same culling. The whole thing was just another meaningless conspiracy theory.
  • mr_tawan - Friday, July 3, 2015 - link

    > do you think nvidia pays companies to optimize for their gpus and put less focus on amd gpus? especially in 'the way it is meant to be played' sponsored games

    I don't think many game developers check the device ID and lower the game's performance when it's not the sponsor's card. However, I think that through the developer relationship program (or something like that), a game carrying those logos tends to perform better with the respective GPU vendor, as the game was developed with that vendor in mind, and with support from the vendor.

    The game would be tested against the other vendors as well, but perhaps not as much as with the sponsor.
  • mindbomb - Thursday, July 2, 2015 - link

    Hi, I'd like to point out why the Mantle performance was low. It was due to memory overcommitment (a consequence of the 4GB of VRAM on the Fury X) lowering performance, not a driver bug (it's a low-level API anyway; there isn't much for the driver to do). BF4's Mantle renderer needs a surprisingly large amount of VRAM for optimal performance. This is also why the 2GB Tonga struggles at 1080p.
  • YukaKun - Thursday, July 2, 2015 - link

    Good to know you're better now, Ryan.

    I really enjoyed the great length and depth of your review.

    Cheers!
  • Socius - Thursday, July 2, 2015 - link

    Curious as to why both toms hardware and anandtech were kind to the fury x. Is anandtech changing review style based on editorial requests from toms now? Because an extremely important point is stock overclock performance. And while the article mentions the tiny 4% performance boost the fury x gets from overclocking, it doesn't show what the 980ti can do. Or even more importantly...that the standard gtx 980 can overclock 30-40% and come out ahead of the fury x, leaving the fury x a rather expensive piece of limited hardware concept at best. Also important to mention that the video encoding test was pointless as Nvidia has moved away from CUDA accelerated video encoding in favour of NVENC hardware encoding. In fact a few drivers ago they had fully disabled CUDA accelerated encoding to promote the switchover.
  • YukaKun - Thursday, July 2, 2015 - link

    Then you must have selective reading, because they do mention it. In particular, they say if they just got a 7% OC, then the card will perform basically the same, and they it did.

    No need to do an OC to the 980ti in that scenario.

    Plus, they also mention the Fury X is still locked for OC. Give MSI and Sapphire (maybe AMD as well) until they deliver on their promise of the Fury having better control.

    Cheers!
  • YukaKun - Thursday, July 2, 2015 - link

    * and then it did *

    Edit function when? :(

    Cheers!
  • Socius - Thursday, July 2, 2015 - link

    Again going back to the problem of "missing test data" in this review. Under an 11% GPU clock OC (the highest possible), which resulted in a net 5% FPS gain, the card hit nearly 400W (just the card, not total system) and 65C after a long session. Which means any more heat than this, and throttling comes into play even harder. This package is thermally restricted. That's why AMD went with an all-in-one cooler in the first place...because it wanted to clock it as high as possible for consumers, but knew the design was a heat monster.

    Outside of full custom loops, you won't be able to get much more out of this design even with fully unlocked voltage, your issue is still heat first. This is why it's important to show the standard GTX 980 OC'd compared to the Fury X OC'd. Because that completely changes the value proposition for the Fury X. But both Tom's and Anandtech have been afraid to be harsh on AMD in their reviews.
  • chizow - Thursday, July 2, 2015 - link

    Great point, it further backs the point myself and others have made that AMD already "overclocked" Fury X in an all-out attempt to beat the 980 Ti and came close to hitting the chip's thermal limits, necessitating the water cooler. We've seen in the past, especially with Hawaii, that lower operating temps = lower leakage = lower power draw under load, so it's a very real possibility they could not have hit these frequencies stably with the full chip without WC.

    When you look at what we know about the Air-Cooled Fury, this is even more likely, as AMD's slides listed it as a 275W part, but it is cut down and/or clocked lower.
