Anand is covering AMD’s latest Kabini/Temash architecture in a separate article, but here we get to tackle the more practical question: how does Kabini perform compared to existing hardware? Armed (sorry, bad pun) with a prototype laptop sporting AMD’s latest APU, we put it through an extensive suite of benchmarks to see what’s changed since Brazos, how Kabini stacks up against Intel’s current ULV offerings, and where it falls relative to ARM-based SoCs and Clover Trail. But first, let’s talk about what’s launching today.

AMD has a three-pronged assault going out today: at the bottom (in terms of performance) is their 2013 AMD Elite Mobility Platform, formerly codenamed Temash. The main subject of this review is the newly christened 2013 AMD Mainstream APU Platform, aka Kabini. And at the higher end of the spectrum we’re also getting the Richland update to Trinity, which AMD is calling their 2013 Elite Performance APU Platform. We’ll cover all of these with Pipeline pieces, but here’s the overview of the Kabini parts:

In total there are five new Kabini APUs launching: one 25W part, three 15W parts, and one 9W offering. The hardware is architecturally the same across the line: the A-Series parts come with four Jaguar CPU cores and support DDR3L-1600, while the E-Series parts are dual-core, with DDR3L-1333 on two of the models and DDR3L-1600 on the highest-performance option. The GPUs in all cases are fully enabled 128-core GCN parts, but clock speeds range from 300MHz on the 9W part up to 600MHz on the 25W part, with the 15W parts filling in at 400-500MHz.

AMD provided plenty of material to discuss, though as usual much of it is marketing that we don’t need to get into. For those of you who want to see the AMD slides, here’s the full Kabini presentation gallery; if you’re really interested, I’ve also put the full 2013 Mobility Platforms deck into our galleries.

AMD’s Kabini Laptop Prototype
Comments

  • HisDivineOrder - Thursday, May 23, 2013 - link

    Given AMD's traditional design wins and how those systems end up, I suspect this is not going to matter much. I have more hope of Bay Trail providing a solid deal for once than I do of this.

    It's a shame because this really should be AMD's niche to dominate, but I doubt any OEM'll give them a serious try.
  • Desperad@ - Thursday, May 23, 2013 - link

    On competitive positioning, is it even near the IB Pentium?
  • brainee - Saturday, May 25, 2013 - link

    I think so, yes. Doing the math, and according to TechSpot among others, the IB Pentium 2117U (17W TDP) should be around 33% faster in legacy Intel-optimised CPU benchmarks. I would think ULV Pentiums are more expensive for OEMs, though complete notebooks are a different story. Not to mention Kabini should cost AMD a fraction of what even a crippled 2C Ivy Bridge (aka Celeron/Pentium) costs to make. Kabini wins in games and OpenCL, and in AVX-enabled applications it should eat the Pentium alive, since the latter doesn't support the AVX extensions (that should at least be mentioned; see the sketch below). I'd prefer AVX extensions to Cinebench, but this site seems to suggest I'm in a minority...
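
    A minimal sketch of the runtime dispatch an AVX-enabled application typically performs (assuming GCC/Clang on x86; the function name and the trivial workload are made up for illustration): on a chip without AVX, like the 2117U, the slower fallback path is taken.

        #include <stdio.h>

        /* Illustrative fallback kernel; a real app would pair it with an
           AVX-optimised version of the same loop. */
        static void scale_fallback(float *y, float a, int n)
        {
            for (int i = 0; i < n; i++)
                y[i] *= a;              /* plain scalar/SSE-class code */
        }

        int main(void)
        {
            float y[4] = {1.0f, 2.0f, 3.0f, 4.0f};

            __builtin_cpu_init();       /* harmless; only required pre-main */
            if (__builtin_cpu_supports("avx")) {
                /* Taken on Kabini/Jaguar, which supports AVX. */
                puts("dispatching the AVX-optimised kernel");
            } else {
                /* Taken on the Pentium 2117U, where AVX is absent. */
                puts("falling back to the scalar/SSE kernel");
            }
            scale_fallback(y, 2.0f, 4);
            printf("y[0] = %g\n", y[0]);
            return 0;
        }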
  • yhselp - Saturday, May 25, 2013 - link

    Comparing a 3W SoC (Z2760) to a 15W SoC (A4-5000) and calling the former laughable... that's not really fair.

    Sure, Kabini is definitely faster than the old Atom architecture, and, yes, I understand this is not a definitive comparison; nevertheless, it seems misleading.

    What would happen if we compared a 3W Kabini to a 15W Haswell? Laughable wouldn't even begin to describe the performance difference.
  • silverblue - Saturday, May 25, 2013 - link

    But... an A4-5000 doesn't use anywhere near 15W, as far as I've heard. Still, let's consider the evidence: the Z2760 is a 32-bit, dual-core, Hyper-Threaded CPU at 1.8GHz with a low-power graphics unit and 1MB of L2. The A4-5000 is a 64-bit, quad-core CPU at 1.5GHz with a far stronger graphics unit and 2MB of dynamic L2. Temash would be a different proposition, I expect, as the A4-1200 is only clocked at 1GHz.
  • yhselp - Saturday, May 25, 2013 - link

    Yes, absolutely, I agree - it's just that the direct comparisons and conclusions made are a bit stark.

    There's always another side to an argument; in your case, I could argue that comparing the brand-new Jaguar to a terribly old Atom architecture isn't the way to go. Consider the following evidence: Silvermont is 64-bit, quad-core, has 2MB of L2 cache, is out-of-order, runs at 2GHz+ on 22nm, is far more energy efficient, and supports 1st-gen Core instructions and Turbo Boost; it would decimate Jaguar.

    In the article, I also noticed that the 2020M is referred to as a 1.8GHz 35W part, when it's actually 2.4GHz. Were the benchmarks done on an underclocked 2020M, or was that simply a typo?

    That's the kind of stuff I'm talking about, not AMD vs. Intel.
  • jcompagner - Sunday, May 26, 2013 - link

    So this is the core that will be in the next two big consoles?
    Am I the only one who thinks these are quite weak, even if you have 8 of them?

    That does mean that if one of those two consoles is the lead platform for development, games will be forced to be really well multithreaded; see the sketch at the end of this comment. (So I guess the next PC games will also be using multiple cores way more.)

    Why did they go for the Jaguar core, which is really targeted at very low-end or mobile stuff?

    Why didn't they just go for an 8-core Richland system with a very good GPU that's, say, a 100W part?

    What's the guess for the TDP of the Xbox One or PS4? A console can easily take 100W, that doesn't matter, so why choose a core that is designed for mobile?
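
    As a toy illustration of the multithreading point above (plain POSIX threads, nothing from any console SDK; the workload is a made-up placeholder), spreading one frame's work evenly across eight cores looks roughly like this:

        #include <pthread.h>
        #include <stdio.h>

        #define NUM_CORES 8         /* eight Jaguar cores, as in the new consoles */
        #define NUM_ITEMS 80000     /* stand-in for one frame's worth of work */

        static float work[NUM_ITEMS];

        /* Each worker processes its own contiguous slice of the frame. */
        static void *worker(void *arg)
        {
            long id = (long)arg;
            long chunk = NUM_ITEMS / NUM_CORES;
            for (long i = id * chunk; i < (id + 1) * chunk; i++)
                work[i] = work[i] * 0.5f + 1.0f;   /* placeholder for real work */
            return NULL;
        }

        int main(void)
        {
            pthread_t threads[NUM_CORES];

            for (long id = 0; id < NUM_CORES; id++)
                pthread_create(&threads[id], NULL, worker, (void *)id);
            for (int id = 0; id < NUM_CORES; id++)
                pthread_join(threads[id], NULL);

            printf("frame done, work[0] = %g\n", work[0]);
            return 0;
        }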
  • yhselp - Sunday, May 26, 2013 - link

    Yes, the Jaguar core is 'weak', but what does 'weak' mean? It's such a vague term. For one usage scenario Jaguar might be unacceptable, for another it might be overkill. Remember, Sony/MS are not building a contemporary PC. Jaguar might seem slow to us, and in a gaming desktop it would be, but that's not the point. Think of consoles, in this case the PS4 and the Xbox One, as non-PC devices such as tablets. Would you say the latest Samsung/Apple running on a Cortex-A15 is slow? No, you would say it's super fast. Well, Jaguar is even faster. Yes, a console has to deal with different workloads than a tablet, but that's why it has very different hardware.

    Why did Sony/MS choose Jaguar? Jaguar is easier to integrate, more power efficient, and most importantly cheaper than Richland. It's a far simpler architecture than Richland, and probably easier to work with over a console's life. Also, it's very important to note that Sony/MS wanted an integrated solution; they weren't going to build a system with a dedicated video card like a gaming PC.

    Cost, cost, cost: everything is about cost. A console cannot be expensive (the way a gaming PC is); it has to sell very well in order to establish an install base to sell games to. Sony/MS will probably sell their 8th-gen consoles at a loss initially, and AMD's Jaguar/GCN was their best (and maybe only) choice. What else could they have done at the same price, or even at all? Silvermont isn't ready yet, and NVIDIA probably wouldn't be willing to integrate a GPU of theirs the way AMD did; both of those options would also be more expensive than Jaguar/GCN. Not to mention, MS had a ton of trouble with NVIDIA over the original Xbox, and they're probably not willing to go down that road again.

    It's not really an 8-core solution; it's two quad-core modules, and communication between the two might be problematic, so games on the new PS/Xbox would probably run on four Jaguar cores at 1.6GHz. However, don't forget that neither of the two consoles has a ton of raw graphics power under the hood: the Xbox One GPU is roughly equivalent to an HD 7770 (but with better memory bandwidth), and the PS4's to an HD 7850. Games would be developed specifically for this kind of hardware (unlike PC games) and would most probably be GPU-limited, so the Jaguar cores really should be sufficient.

    I hope this answers your questions.
  • Kevin G - Monday, May 27, 2013 - link

    A Piledriver module is much larger than a Jaguar core, so on die size grounds alone going with Jaguar made sense if core counts are the same. Steamroller cores are due out in 2014 and are expected to bring higher IPC and a slight clock speed increase compared to Piledriver.

    Power consumption is also an issue. The bulk of the power consumption of the Xbox One and PS4 SoCs will come from their GPUs. Adding a high-power CPU core like Piledriver would have ballooned power consumption to close to 200W, which makes cooling impractical and expensive. Jaguar still adds power, but it's far more manageable in comparison.

    In addition, Steamroller is tied to processes from GlobalFoundries (though IBM could likely manufacture it if need be). TSMC is the preferred foundry for bulk processes due to cost and a slight edge in density, and Jaguar was prepared for manufacturing at TSMC from the start. AMD could have stuck with GF, but it would have had to port the GCN functional units to that process; such an effort is currently underway for Kaveri, which is looking to be a 2014 part. So for any kind of 2013 launch, going that route was not an option.
  • aikyucenter - Sunday, June 30, 2013 - link

    Great OpenCL performance... love it... just get it launched faster and decrease the TDP too = PERFECT :D
