Apple Announces 5nm A14 SoC - Meagre Upgrades, Or Just Less Power Hungry?
by Andrei Frumusanu on September 15, 2020 4:30 PM EST
Amongst the new iPad and Watch devices released today, Apple made news in releasing the new A14 SoC. Apple’s newest generation silicon design is noteworthy in that it is the industry’s first commercial chip to be manufactured on a 5nm process node, marking it the first of a new generation of designs that are expected to significantly push the envelope in the semiconductor space.
Apple’s event disclosures this year were a bit confusing, as the company was comparing the new A14’s metrics against the A12, given that’s what the previous-generation iPad Air had been using until now – we’ll need to add proper context to the figures to work out what they actually mean.
On the CPU side of things, Apple is using a new generation of large performance cores as well as new small power-efficient cores, but the CPU remains in a 2+4 configuration. Apple claims a 40% performance boost for the CPUs, although the company doesn’t specify exactly what this metric refers to – is it single-threaded performance? Multi-threaded performance? Is it for the large or the small cores?
What we do know is that it’s in reference to the A12 chipset, and the A13 had already claimed a 20% boost over that generation. Simple arithmetic thus dictates that the A14 would be roughly 16.7% faster than the A13 – assuming Apple’s performance metric measurements are consistent between generations.
On the GPU side, we see a similar calculation, as Apple claims a 30% performance boost compared to the A12 generation thanks to the new 4-core GPU in the A14. Normalising this against the A13, this would mean only an 8.3% performance boost, which is actually quite meagre.
In other areas, Apple is boasting more significant performance jumps such as the new 16-core neural engine which now sports up to 11TOPs inferencing throughput, which is over double the 5TOPs of the A12 and 83% more than the estimated 6TOPs of the A13 neural engine.
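The generational normalizations above can be sanity-checked with some quick arithmetic – a back-of-the-envelope sketch, assuming Apple’s A12-relative claims are measured consistently across generations (and recall the 6TOPs A13 figure is an estimate, not an Apple-published number):

```python
# Convert Apple's A12-relative performance claims into A13-relative ones.
# Assumption: the metrics are directly comparable across generations.

def vs_a13(gain_vs_a12: float, a13_gain_vs_a12: float) -> float:
    """Turn a claimed gain over the A12 into an implied gain over the A13."""
    return (1 + gain_vs_a12) / (1 + a13_gain_vs_a12) - 1

cpu = vs_a13(0.40, 0.20)   # A14 CPU: +40% vs A12; A13 claimed +20% vs A12
gpu = vs_a13(0.30, 0.20)   # A14 GPU: +30% vs A12; A13 claimed +20% vs A12

print(f"A14 CPU vs A13: +{cpu:.1%}")   # ≈ +16.7%
print(f"A14 GPU vs A13: +{gpu:.1%}")   # ≈ +8.3%

# Neural engine throughput in TOPS (the A13 figure is an estimate)
a12_tops, a13_tops, a14_tops = 5, 6, 11
print(f"A14 NPU vs A12: {a14_tops / a12_tops:.1f}x")       # 2.2x
print(f"A14 NPU vs A13: +{a14_tops / a13_tops - 1:.0%}")   # +83%
```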
Apple does advertise a new image signal processor amongst the SoC’s new features, but otherwise the performance metrics (aside from the neural engine) seem rather conservative given that the new chip boasts 11.8 billion transistors, a 38% generational increase over the A13’s 8.5bn figure.
One theory I have is that Apple might have finally pulled back on the excessive peak power draw at the maximum performance states of the CPUs and GPUs; peak performance thus wouldn’t have seen as large a jump this generation, in favour of more sustainable thermal figures.
Apple’s A12 and A13 chips were large performance upgrades on both the CPU and GPU side; however, one criticism I had made of the company’s designs is that they both increased power draw beyond what was usually sustainable in a mobile thermal envelope. This meant that while the designs posted amazing peak performance figures, the chips were unable to sustain them for more than 2-3 minutes. That said, even the throttled performance levels remained ahead of the competition, leaving Apple in a leadership position in terms of efficiency.
What speaks against such a theory is that Apple made no mention at all of concrete power or power-efficiency improvements this generation, which is rather unusual given that the company has traditionally always remarked on this aspect of new A-series designs.
We’ll just have to wait and see whether this is indicative of the actual products not having improved in this regard, or whether it’s just an omission and side-effect of the event’s new, more streamlined presentation style.
Whatever the performance and efficiency figures are, what Apple can boast about is having the industry’s first ever 5nm silicon design. The new TSMC-fabricated A14 thus represents the cutting-edge of semiconductor technology today, and Apple made sure to mention this during the presentation.
- The Apple iPhone 11, 11 Pro & 11 Pro Max Review: Performance, Battery, & Camera Elevated
- The Samsung Galaxy S20+, S20 Ultra Exynos & Snapdragon Review: Megalomania Devices
- TSMC Expects 5nm to be 11% of 2020 Wafer Production (sub 16nm)
- ‘Better Yield on 5nm than 7nm’: TSMC Update on Defect Rates for N5
tipoo - Tuesday, September 15, 2020
Apple until now has had a steady train of impressive single core gains, they're not among the ones that just throw more cores at the problem. Look at this itself, only two big cores still. That's why this update in particular sticks out, with people maybe wondering if that gravy train is going to slow down.
Luminar - Wednesday, September 16, 2020
How does Siri use ML?
How does scribble use ML?
Google photos categorizes photos as well.
close - Wednesday, September 16, 2020
Doesn't Google Photos upload everything to Google and let the server do all the work? Same for voice recognition and everything, Google does *nothing* locally because it's in their financial best interest to slurp as much data as possible.
Apple does it locally because if they do it remotely they have no chance in hell to compete with Google and Amazon (another company that literally hires people to listen to the Alexa recordings in order to properly label data for their ML). So Apple came up with a different strategy of doing as much as possible locally in order to sell *privacy*, since they can't sell Google and Amazon levels of performance in this particular regard.
FattyA - Wednesday, September 16, 2020
Google does do voice recognition locally starting on the Pixel 4 (I don't know if that is true for the budget phones). They use local voice to text on the recorder app. The assistant also works without a network connection; obviously if you ask for something that it can't do locally it will need a network connection, but things like setting alarms, launching apps, or other basic phone controls are done locally. They can also do song detection, like Shazam, without a network connection. I think the song detection was able to fit into 500MB, which was something they mentioned when they launched the Pixel 4 last year. They made a point of talking about local processing so that everything would continue to work even if you have a poor network connection.
close - Monday, September 21, 2020
@FattyA, when you say "starting on the Pixel 4" do you mean "any Android phone launched after Pixel 4" or literally "on the Pixel 4", which is probably one of the worst selling Android phones so pretty irrelevant in the grand scheme of things? Is it Android which is prioritizing or defaulting to local processing in general, or *just* the Pixel 4 doing *just* the voice recognition locally while everything else still gets sent to the great Google blackhole in the cloud?
ceomrman - Friday, September 18, 2020
Yes, Google uploads everything. They do that to study the data and to make money off it. There's no reason Apple couldn't do it that way, too. Apple could lease 100% of AWS's capacity and still have $25 billion annual profit left over. In realistic terms, the cost of offloading ML would amount to a rounding error for Apple. They've just decided it's more lucrative to develop faster SOCs and do the ML locally. That's probably down to a combination of Apple being good at designing chips and being able to charge a premium for more privacy and other features that benefit from local ML. It's basically just a different philosophy. Google is an advertising company. They want to profit from selling ads, hence their data obsession. Apple is a hardware company. They want to profit by selling shiny devices.
close - Monday, September 21, 2020
@ceomrman, Apple could play the same game but they'd still lose against Google or Amazon. Google (or Amazon) has far, far more access to "free" data than Apple does. Google has the upper hand here between being on so many more phones and home assistants all over the world (this aspect is important) and mixing in data they get from all of their other sources. Apple's problem isn't the lack of computing power but the lack of a high quality and extensive data set. So Apple could at best be a distant second or third. Or they could just not play a game they'd lose and instead turn it on its head, brand themselves privacy advocates, and compete for the market Google simply can't.
Daeros - Tuesday, September 22, 2020
Don't forget that Apple is a lifestyle brand. They actually make money selling the devices, unlike Google. Apple is incentivized to maintain a high-quality user experience on their devices, meaning it makes sense to move (or keep) things like voice, handwriting, and face recognition on the device, rather than subject to the whims of connectivity. I know that on my phone, the gboard voice recognition goes south fast if your WiFi/LTE connection is spotty.
Meteor2 - Wednesday, October 7, 2020
A lot of misrepresentation of reality above.
Google led the world in applying ML to consumer products. It couldn't be done locally -- the tech did not exist. It was done in the datacenter, using x86 CPUs and GPUs, with the addition of Tensor Processing Units from 2015.
Apple was following the same path until they made a decision (also in 2015) to go for local-only processing, in order to create a USP of "your data doesn't leave your phone" for marketing.
Of course your data is as available to the rest of the world whether it's on your phone or in a datacenter; if a device is connected, it's connected. And of course iOS backs up everything to Apple's DCs anyway. As Apple says, it's only the processing that is done locally. Your data is shared with Apple just as much as an Android user's is shared with Google.
nico_mach - Thursday, September 17, 2020
Are you serious? You can google machine learning, and that answer will have been provided by machine learning!
These chips have a specific configuration that is efficient at machine learning -- meaning, according to Wikipedia's description of Google's TPU chip:
"Compared to a graphics processing unit, it is designed for a high volume of low precision computation (e.g. as little as 8-bit precision) with more input/output operations per joule, and lacks hardware for rasterisation/texture mapping."
It's a customized version of a GPU. Do you ever question whether games actually use GPUs? No, right? Why would you? These are huge companies and this is a major hardware feature they spent millions to develop. Skepticism is healthy, but keep it in perspective, please.