Integrated Gaming Performance

As stated on the first page, here we take both APUs from DDR4-2133 to DDR4-3466 and run our testing suite at each stage. For our gaming tests, we are only concerned with real-world resolutions and settings for these games. It would be fairly easy to adjust the settings in each game to a CPU-limited scenario; however, the results from such a test are mostly pointless and non-transferable to the real world in our view. Scaling takes many forms, based on GPU, resolution, detail levels, and settings, so we want to make sure the results correlate to what users will see day-to-day.

Civilization 6

First up in our APU gaming tests is Civilization 6. Originally penned by Sid Meier and his team, the Civ series of turn-based strategy games is a cult classic, and many an excuse for an all-nighter spent trying to get Gandhi to declare war on you due to an integer underflow. Truth be told, I never actually played the first version, but every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy, is a game that is easy to pick up but hard to master.

[Chart: (1080p) Civilization 6 on iGPU, Average Frames Per Second]
[Chart: (1080p) Civilization 6 on iGPU, 99th Percentile]

Civilization 6 certainly appreciates faster memory on integrated graphics, showing a +28% gain in average frame rates for the 2400G, or a +13% gain when compared to the APU's rated memory frequency (DDR4-2666).
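As a quick sketch of how these scaling percentages are derived (the frame rates below are illustrative placeholders, not the article's measured data):

```python
# Percentage gain of one average frame rate over a baseline.
def pct_gain(fast_fps: float, base_fps: float) -> float:
    return (fast_fps / base_fps - 1.0) * 100.0

# Hypothetical 2400G averages at three memory speeds (made-up numbers).
fps_2133, fps_2666, fps_3466 = 50.0, 56.6, 64.0

print(f"DDR4-2133 -> DDR4-3466: {pct_gain(fps_3466, fps_2133):+.0f}%")  # +28%
print(f"DDR4-2666 -> DDR4-3466: {pct_gain(fps_3466, fps_2666):+.0f}%")  # +13%
```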

Ashes of the Singularity (DX12)

Seen as the holy child of DX12, Ashes of the Singularity (AoTS, or just Ashes) has been the first title to actively go and explore as many of the DX12 features as it possibly can. Oxide Games, the developer behind the Nitrous engine which powers the game, and publisher Stardock have ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

[Chart: (1080p) AoTS on iGPU, Average Frames Per Second]
[Chart: (1080p) AoTS on iGPU, 99th Percentile]

In Ashes, both APUs saw a 26-30% gain in frame rates moving from the slowest to the fastest memory, which is also reflected in the percentile numbers.

Rise Of The Tomb Raider (DX12)

One of the newest games in the gaming benchmark suite is Rise of the Tomb Raider (RoTR), developed by Crystal Dynamics, and the sequel to the popular Tomb Raider, which was loved for its automated benchmark mode. But don’t let that fool you: the benchmark mode in RoTR is very different this time around. Visually, the previous Tomb Raider pushed realism to the limits with features such as TressFX, and the new RoTR goes one stage further when it comes to graphics fidelity. This leads to an interesting set of hardware requirements: some sections of the game are typically GPU limited, whereas others with a lot of long-range physics can be CPU limited, depending on how the driver can translate the DirectX 12 workload.

[Chart: (1080p) RoTR on iGPU, Average Frames Per Second]
[Chart: (1080p) RoTR on iGPU, 99th Percentile]

Both APUs saw big gains in RoTR, and it is interesting to note that the 2400G pulled further ahead of the 2200G: at DDR4-2133 the difference between the two APUs was 12%, but with the fast memory that difference grew to +20%.

Shadow of Mordor

The next title in our testing is a battle of system performance with the open world action-adventure title, Middle Earth: Shadow of Mordor (SoM for short). Produced by Monolith and using the LithTech Jupiter EX engine and numerous detail add-ons, SoM goes for detail and complexity. The main story itself was written by the same writer as Red Dead Redemption, and it received Zero Punctuation’s Game of The Year in 2014.

[Chart: (1080p) Shadow of Mordor on iGPU, Average Frames Per Second]
[Chart: (1080p) Shadow of Mordor on iGPU, 99th Percentile]

Shadow of Mordor also saw average frame rates rise by 26-32%, while the percentiles tell a different story. The Ryzen 5 2400G seemed to top out at DDR4-2866, while the Ryzen 3 2200G was able to keep scaling and eventually beat the other APU. This is despite the fact that the 2200G has less graphical horsepower than the 2400G.

F1 2017

Released in the year the title suggests, F1 2017 is the ninth installment of the franchise to be published and developed by Codemasters. The game is based on the 2017 Formula One season and is licensed by the sport's official governing body, the Fédération Internationale de l'Automobile (FIA). F1 2017 features all twenty racing circuits and all twenty drivers across ten teams, and allows F1 fans to immerse themselves in the world of Formula One with a rather comprehensive world championship season mode.

[Chart: (1080p) F1 2017 on iGPU, Average Frames Per Second]
[Chart: (1080p) F1 2017 on iGPU, 99th Percentile]

Codemasters' game engines usually respond well when memory frequency comes into play, and although F1 2017 generally gained, faster memory didn't affect performance as much as expected. While average frame rates showed a gradual rise through the memory straps, the Ryzen 5 2400G's 99th percentile results were inconsistent and all over the place.

Total War: WARHAMMER 2

Not only is the Total War franchise one of the most popular real-time tactical strategy titles of all time, with Sega delving into settings such as the Roman Empire, the Napoleonic era, and even Attila the Hun, but more recently it has dived into the world of Games Workshop via the WARHAMMER series. Developer Creative Assembly built this latest RTS battle title on the much-talked-about DirectX 12 API, just like the original Total War: WARHAMMER, so that this title can benefit from all the associated features that come with it. The game itself is very CPU intensive and is capable of pushing any top-end system to its limits.

[Chart: (1080p) Total War: WARHAMMER 2 on iGPU, Average Frames Per Second]
[Chart: (1080p) Total War: WARHAMMER 2 on iGPU, 99th Percentile]


74 Comments


  • GreenReaper - Saturday, June 30, 2018 - link

    Not sure CISC vs. RISC is right here - SIMD, sure, since that operates on large blocks of memory and so should be more suitable for GDDR's larger bus size.
  • peevee - Tuesday, July 3, 2018 - link

    Type of memory does not determine bus size.
    128-bit GDDR5 is exactly as wide as 2-channel DDR4 in all the cheap CPUs.
    But it is a little bit smarter - for example, it contains hardware clear operation - no need to write a whole lot of zeros...
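The width equivalence in peevee's comment is straightforward arithmetic: each DDR4 channel is 64 bits wide, so two channels match a 128-bit GDDR5 interface, and peak bandwidth then follows from bus width and transfer rate. A rough sketch (the GDDR5 data rate below is an illustrative figure, not one from the comment):

```python
def bandwidth_gbps(bus_bits: int, mt_per_s: int) -> float:
    """Peak bandwidth in GB/s: (bytes per transfer) x (transfers per second)."""
    return bus_bits / 8 * mt_per_s / 1000

# Two 64-bit DDR4 channels give the same 128-bit bus as a 128-bit GDDR5 card.
assert 2 * 64 == 128

print(bandwidth_gbps(128, 3200))  # dual-channel DDR4-3200: 51.2 GB/s
print(bandwidth_gbps(128, 7000))  # 128-bit GDDR5 at 7000 MT/s (illustrative): 112.0 GB/s
```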
  • close - Saturday, June 30, 2018 - link

    DDR3 has been in use since 2007. Adoption rate aside, the cycle reached a peak with DDR3's 7 year reign and it might come back down if DDR5 comes soon.

    DDR1 was announced in 2000, DDR2 in 2003, DDR3 in 2007, DDR4 in 2014. DDR5 is rumored for next year.
  • peevee - Tuesday, July 3, 2018 - link

    " DDR is optimized for cisc operations while GDDR is optimized for risc operations"

    What a load of BS... Learn, people, before writing.
  • niva - Tuesday, July 3, 2018 - link

    I always thought it was that GDDR was faster memory that can't be mass produced in quantities to satisfy the DRAM market, not that there was something fundamentally different about the memory. I also questioned that RISC vs. CISC statement but simple google searching reveals this: https://www.quora.com/What-is-the-difference-betwe...

    So perhaps that wasn't way off base.
  • Dragonstongue - Tuesday, July 3, 2018 - link

    G for GDDR means GRAPHICS, DDR and GDDR "same thing"in theory "however"
    GDDR is not the same as DDR. Overall, GDDR is built for much higher bandwidth, thanks to a wider memory bus.
    GDDR has lower power and heat dispersal requirements compared to DDR, allowing for higher performance modules, with simpler cooling systems.
    DDR1, DDR2, and DDR3 have a 64 bit bus (or 128 bit in dual channel). GDDR3, comparatively, commonly uses between a 256 bit bus and 512 bit bus, or interface (across 4-8 channels).
    GDDR3 has a 4 bit prefetch and GDDR5 has an 8 bit prefetch, making GDDR5 twice as fast as GDDR3 in apples to apples comparisons.
    GDDR can request and receive data on the same clock cycle, where DDR cannot.
    DDR1 chips sends 8 data bits for every cycle of the clock, GDDR1 sends 16 data bits.

    things get extra "confusing" when GDDR5 came out because whatever the "rating is" for example GDDR5 900 "clock" you take this number and quadruple it which is the "effective speed" so this 900 becomes 3600 as it has a wider bus available to it A and B GDDR can send and receive data at the same time on the same clock cycle (normal DDR cannot, from what I have read)

    also GDDR is a chunk more expensive than "normal" DDR RAM, though it does have multiple benefits.

    I suppose one can look at it as "DDR SDRAM is optimised to handle data packets from various programs in small bits with very low latency e.g browsers, programs, anti-virus scans, messengers.

    GDDR, on the other hand, can send and receive massive chunks of data on the same clock cycle.

    (source) http://www.dignited.com/27670/gddr-vs-ddr-sdram-gd...
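The "quadruple the clock" rule in the comment above is easy to sketch, treating GDDR5 as quad-pumped (four data transfers per clock):

```python
# GDDR5 transfers data four times per clock cycle, so the effective
# rate quadruples the base clock (a "900" rating becomes 3600 MT/s).
def gddr5_effective_mt_s(base_clock_mhz: int) -> int:
    return base_clock_mhz * 4

print(gddr5_effective_mt_s(900))  # 3600, as in the comment's example
```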
  • bananaforscale - Saturday, June 30, 2018 - link

    You are complaining about DDR4 because APUs struggle with it? You're barking up the wrong tree. The issue is in using shared memory.
  • peevee - Tuesday, July 3, 2018 - link

    Or they could have supported 4 channels, given that they support 4 DIMMs anyway. Would be useful for CPU operations too, given that they run 8 threads in parallel...
  • Dragonstongue - Tuesday, July 3, 2018 - link

    AMD's memory controller for desktop purposes is NOT built nor designed for quad channel usage; the cost is "not worth it". There is no way you can keep costs down for a "simple" APU aimed at those looking for a computer on a budget while also providing access to quad channel memory. And very, very few things the average everyday consumer does with their computer need, or can effectively use, more than what AMD CPUs provide over dual channel with their HT (HyperTransport) or IF (Infinity Fabric for Ryzen).

    More is not always better; most of the time it becomes chasing unicorns vs actually "needing it", you know, for those who have a massive wallet and buy it just to say they have it AH HAHA
  • Dragonstongue - Tuesday, July 3, 2018 - link

    we have not had DDR4 "that long" compared to say DDR3 or DDR2 which were and have been out far far longer
    DDR (2000) DDR2 (2003) DDR3 (2007) DDR4 (2014)....if you are "bad at maths" ^..^
    18+ years........15+ years.....11+ years.....4+ years
    .
    DDR5 should be towards end of 2018 though JEDEC is saying 2020 for end consumer (me and you) purchase.

    It is not the raw "speed" holding things back FYI, latencies, cycle speed, bandwidth available, power required to keep them running, all the subtimings ALL matter in their own fashion (depending on the task they are being used for) I remember many DDR2 sticks that you could heavily overclock and they ran crazy fast but also got crazy hot and died early deaths (suicide runs)

    I do not ever hear of this happening with DDR3 or DDR4 (lower volts and the chip makers such as Intel do their damndest to monitor/control the memory controller speeds and volts to avoid killing things, back in the day these same safeguards were not in place)

    (max JEDEC certified specs best I can tell)
    DDR 400, DDR2 1066, DDR3 2133, DDR4 3200
    best "jump" on a percentage basis seems to have been from DDR to DDR2 (166.5%) DDR2 to DDR3 (100.1%) DDR3 to 4 (50%)

    current specs (not finalized as of yet) for DDR5 are "up to" approximately double what the fastest current modules of DDR4 are rated for: 4266-6400 (vs 3200), i.e. a 33.3% "gain" at the low end, or 100% at the "best"

    I hardly call either of those "double" but I am a simple man ^.^

    SOOOOOO the major jump absolutely was ddr to ddr2 when comparing official "spec" of the fastest rated memory, obviously there is even faster that one can manage when you overclock or whatever, but this is not a guarantee either, ratings and specs are ratings and specs.

    Now as far as "when are we going to get faster memory" that depends, can your cpu or motherboard "handle it" IMHO, nope, not at this point anyways, very few can "handle" say DDR4 4700 (G.Skill) and generally speaking the extreme "fastest" also suffer from far looser timings and subtimings and a marked increase in power required to "make it happen"

    RAM is not a "simple" thing to crank up the speeds with everything getting a "nitrous boosT" like you could with a car engine ^.^
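    The generational jumps quoted in this comment can be checked in a few lines, using the peak JEDEC data rates the comment lists:

```python
# Peak JEDEC-certified data rates (MT/s) per DDR generation, per the comment.
rates = {"DDR": 400, "DDR2": 1066, "DDR3": 2133, "DDR4": 3200}

# Percentage jump from each generation to the next.
gens = list(rates.items())
for (prev_name, prev_rate), (cur_name, cur_rate) in zip(gens, gens[1:]):
    jump = (cur_rate / prev_rate - 1) * 100
    print(f"{prev_name} -> {cur_name}: +{jump:.1f}%")
# Reproduces the comment's +166.5%, +100.1%, and +50.0% figures.
```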
