NVIDIA’s GeForce 800M Lineup for Laptops and Battery Boost
by Jarred Walton on March 12, 2014 12:00 PM EST

Introducing NVIDIA’s GeForce 800M Lineup for Laptops
Last month NVIDIA launched the first of many Maxwell parts to come with the desktop GTX 750 and GTX 750 Ti. The new architecture isn’t radically different from the previous generation’s Kepler in terms of features, but NVIDIA did bring a renewed focus on efficiency. The result was roughly a doubling of performance per Watt, with the GTX 750 Ti being nearly twice as fast as the GTX 650 at only slightly higher power draw (and some of that extra power most likely comes from the increased load on the rest of the system thanks to the higher frame rates). That focus on efficiency is nice and all on the desktop, but in my opinion where it’s really going to pay dividends is with the mobile SKUs.
Today’s launch of the 800M series will give us the first taste of what’s to come, but unfortunately there are two minor issues. One is that we don’t have any 800M hardware in hand for testing (yet – we should get a notebook in the near future); the second is that, as is typically the case, 800M will be a mix of both Kepler and Maxwell parts. The Kepler parts aren’t a straight recycling of existing SKUs, however, as NVIDIA has a new feature coming out with all of the GTX 800M parts: Battery Boost. But before we get into the details of Battery Boost, let’s cover the various parts. Both "regular" (NVIDIA has dropped the "GT" branding of their mainstream parts) and GTX 800M chips are being announced today, though we of course still need these to show up in shipping laptops; we’ll start at the high end and work our way down.
NVIDIA GeForce GTX 800M Specifications

| Product | GTX 880M | GTX 870M | GTX 860M | GTX 860M | GTX 850M |
|---|---|---|---|---|---|
| Process | 28nm | 28nm | 28nm | 28nm | 28nm |
| Architecture | Kepler | Kepler | Kepler | Maxwell | Maxwell |
| Cores | 1536 | 1344 | 1152 | 640 | 640 |
| GPU Clock | 954MHz + Boost | 941MHz + Boost | 797MHz + Boost | 1029MHz + Boost | 876MHz + Boost |
| RAM Clock | 2.5GHz | 2.5GHz | 2.5GHz | 2.5GHz | 2.5GHz |
| RAM Interface | 256-bit | 192-bit | 128-bit | 128-bit | 128-bit |
| RAM Technology | GDDR5 | GDDR5 | GDDR5 | GDDR5 | GDDR5 |
| Maximum RAM | 4GB | 3GB | 2GB | 2GB | 2GB |
| Features | GPU Boost 2.0, Battery Boost, GameStream, ShadowPlay, Optimus, PhysX, CUDA, SLI, GeForce Experience | GPU Boost 2.0, Battery Boost, GameStream, ShadowPlay, Optimus, PhysX, CUDA, SLI, GeForce Experience | GPU Boost 2.0, Battery Boost, GameStream, ShadowPlay, Optimus, PhysX, CUDA, SLI, GeForce Experience | GPU Boost 2.0, Battery Boost, GameStream, ShadowPlay, Optimus, PhysX, CUDA, SLI, GeForce Experience | GPU Boost 2.0, Battery Boost, GameStream, ShadowPlay, Optimus, PhysX, CUDA, GeForce Experience |
At the top, the GTX 880M carries on from the successful GTX 780M, using a fully enabled GK104 chip with 1536 cores. The difference is that thanks to improvements in yields and other refinements, the GTX 880M will launch with a base clock of 954MHz, which is a pretty significant 20% bump over the 797MHz base clock of the GTX 780M. Otherwise, the only real change will be support for Battery Boost. This is really the only chip where we won’t see any major performance improvement relative to the 700M part – we get a theoretical 20% shader performance increase and that’s about it.
GTX 870M follows a slightly different pattern, using the same GK104 core but with one SMX disabled, leaving us with 1344 cores. Along with the loss of one SMX, the GTX 870M cuts the memory interface down to 192 bits. (Interestingly, that’s the same core count as the GTX 775M found in Apple’s iMac, only with a narrower 192-bit memory interface; to my knowledge the GTX 775M never shipped in a notebook.) The previous generation GTX 770M only had 960 cores running at 811MHz + Boost, so overall the 870M should provide a significant boost in performance relative to the previous generation – around 62% more shader processing power along with 25% more memory bandwidth.
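For those who like to check the math, here’s a quick back-of-the-envelope sketch of where those figures come from. Note that the GTX 770M’s 4.0GT/s effective GDDR5 data rate is my assumption based on its published specs rather than anything in NVIDIA’s 800M briefing, and the comparison ignores Boost clocks entirely:

```python
# Back-of-the-envelope check of NVIDIA's generational claims.
# Shader throughput is approximated as cores * base clock (Boost ignored);
# the GTX 770M memory data rate (4.0 GT/s effective) is assumed from its
# published specs, not from the 800M briefing materials.

def shader_throughput(cores, base_clock_mhz):
    # Relative shader throughput in "core-GHz"
    return cores * base_clock_mhz / 1000.0

def mem_bandwidth_gbps(data_rate_gtps, bus_width_bits):
    # Peak memory bandwidth in GB/s
    return data_rate_gtps * bus_width_bits / 8

gtx_870m = (shader_throughput(1344, 941), mem_bandwidth_gbps(5.0, 192))
gtx_770m = (shader_throughput(960, 811), mem_bandwidth_gbps(4.0, 192))

print(f"870M vs 770M shader: +{gtx_870m[0] / gtx_770m[0] - 1:.0%}")    # ~+62%
print(f"870M vs 770M bandwidth: +{gtx_870m[1] / gtx_770m[1] - 1:.0%}")  # ~+25%
print(f"880M vs 780M clock: +{954 / 797 - 1:.0%}")                      # ~+20%
```

Cores-times-clock is only a rough proxy, of course – Kepler and Maxwell don’t deliver identical per-core throughput – but it lines up with NVIDIA’s stated deltas.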
Where things get a little [*cough*] interesting is when we get to the GTX 860M. As we’ve seen in the past, NVIDIA will have two different overlapping models of the 860M available, and they’re really not very similar (though pure performance will probably be pretty close). On the one hand, the Kepler-based 860M will use GK104 with yet another SMX and a memory channel disabled, giving us 1152 cores running at 797MHz + Boost and a 128-bit memory interface. This will probably result in performance relatively close to the previous generation GTX 770M (slightly higher shader performance but slightly less memory bandwidth).
The second GTX 860M will be a completely new Maxwell part, using the same GM107 as the desktop GTX 750/750 Ti with all SMX units active. That gives us 640 cores running at 1029MHz + Boost, which interestingly is faster than the base clock of the desktop GTX 750 Ti (1020MHz). Memory bandwidth doesn’t quite keep up with the desktop card, and of course overclocking will almost certainly give the desktop card more headroom, but in general the Maxwell GTX 860M should perform very much like the GTX 750 Ti – and likely at a lower power envelope as well. Even if both GTX 860M parts deliver a similar level of performance, the Maxwell variant should do so while using less power, so that's the one I'd recommend. NVIDIA states that the 860M will be around 40% faster than the GTX 760M.
Finally, the last GTX part being launched today is the GTX 850M. Yes, that’s right: the “x50M” has now been moved from the (now defunct) GT class to the GTX class. This is partly being done to make the GTX 850M look better (i.e. marketing), but it’s also a way to segment software feature sets – as we’ll see in a moment, the mainstream 800M GPUs do not support GameStream or ShadowPlay. Also note that the GTX 850M is the only GTX part that does not support SLI, as that feature is only present on the GTX 860M and above. (And while I’m discussing the features, GFE is my abbreviation for “GeForce Experience”.) There’s a bit more marketing as well, as NVIDIA compares the performance of the new GTX 850M with the previous generation GT 750M in their slides, where perhaps a better comparison would be the GTX 760M, with the GTX 765M going up against the GTX 860M. It’s not particularly important in the grand scheme of things, however.
Moving past the naming aspect of the GTX 850M, the specifications are basically the same as the Maxwell GTX 860M, only with a lower core clock of 876MHz. One nice benefit of moving to the GTX class is that the 850M will require the use of GDDR5. With previous generation mobile GPUs, NVIDIA often allowed OEMs to use either GDDR5 or DDR3, and while in theory the gaming experience between the two would be “similar”, that really depends on the game and the settings. I know from experience that in some cases a GT 740M GDDR5 laptop can end up performing nearly twice as fast as a GT 740M DDR3 laptop. Basically, DDR3 GPUs really shouldn’t be used in anything with more than a 1366x768 resolution display, and frankly 1366x768 should die a fast death – preferably yesterday, if I had my way. DDR3 will continue to be used in the mainstream 800M GPUs, and it appears GDDR5 is no longer even an option (maybe?); not surprisingly, NVIDIA states that the GTX 850M is on average 70% faster than the 840M. And that brings us to the second tier of mobile 800M GPUs: the "mainstream" class.
NVIDIA GeForce "Mainstream" 800M Specifications | |||
Product | 840M | 830M | 820M |
Process | 28nm | 28nm | 28nm |
Architecture | Maxwell | Maxwell | Fermi |
Cores | ? | ? | 96 |
GPU Clock | ? | ? | 719-954MHz |
RAM Clock | ? | ? | 2000MHz |
RAM Interface | 64-bit | 64-bit | 64-bit |
RAM Technology | DDR3 | DDR3 | DDR3 |
Maximum RAM | 2GB | 2GB | 2GB |
Features |
GPU Boost 2.0 Optimus PhysX CUDA GFE |
GPU Boost 2.0 Optimus PhysX CUDA GFE |
GPU Boost 2.0 Optimus PhysX CUDA GFE |
Obviously, NVIDIA is being a little coy with their specifications for these 800M parts. They were good enough to tell us that both the 840M and 830M will use Maxwell-based GPUs, but that’s as far as they would go. I’d guess we’ll see 512-core Maxwell GM107 parts in both, although perhaps they might drop another SMX and run with 384 cores on one (or both?) of them; we’ll have to wait and see. The 64-bit memory interface is going to be a pretty severe bottleneck as well, and I’m not sure the new 840M will even be able to consistently outperform the previous generation GT 740M – particularly if the 740M used GDDR5. Actually, scratch that; I’m almost certain a GT 740M GDDR5 solution will be faster than the 840M DDR3, though perhaps not as energy efficient.
And just in case you don’t particularly care for having a modern GPU, Fermi rides again in the 820M. NVIDIA didn’t disclose specs in the launch information, but this part has already been launched elsewhere, so we know what’s inside. Of course, this is Fermi in 2014, so really – who cares? 96 cores is at least better than the 48 cores of the 705M, but the 820M is flirting with iGPU levels of performance. In my book, if you don’t need anything more than an 820M, you probably don’t need the 820M!
Wrapping up the specifications overview, NVIDIA was nice and forthcoming with estimates of relative performance. There will likely be exceptions, depending on the game and settings you choose to test, but here’s a nice table summarizing NVIDIA’s estimates:
NVIDIA's Performance Estimates for the 800M Series

| GPU | % Increase Over Next GPU | % of 820M | % of GTX 760M |
|---|---|---|---|
| GTX 880M | 20% | 641% | 227% |
| GTX 870M | 35% | 534% | 189% |
| GTX 860M | 15% | 396% | 140% |
| GTX 850M | 70% | 344% | 122% |
| 840M | 35% | 203% | 72% |
| 830M | 50% | 150% | 53% |
| 820M | N/A | 100% | 35% |
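Incidentally, the "% of 820M" column appears to be nothing more than the per-step uplifts chained together – the following quick check (my arithmetic, not NVIDIA's) reproduces it within rounding:

```python
# Chain the "% Increase Over Next GPU" figures from the table, starting
# at the 820M baseline of 100%, to reproduce the "% of 820M" column.
uplifts = [("830M", 0.50), ("840M", 0.35), ("GTX 850M", 0.70),
           ("GTX 860M", 0.15), ("GTX 870M", 0.35), ("GTX 880M", 0.20)]

relative = 100.0  # 820M baseline
for gpu, uplift in uplifts:
    relative *= 1 + uplift
    print(f"{gpu}: {relative:.1f}% of 820M")
# Prints roughly 150 / 203 / 344 / 396 / 534 / 641 -- matching NVIDIA's column.
```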
This is actually a pretty useful set of estimates (assuming it’s correct), as it allows us to immediately see that all of the new GTX GPUs should be quite a bit faster than the GTX 760M, which was a fairly decent mobile GPU. We can also see that the gulf between the mainstream and GTX classes remains quite large. I’m still a bit skeptical of the 840M with DDR3 actually delivering good performance, as a 64-bit interface is a huge bottleneck. Assuming this is again DDR3-2000 (most GPUs with DDR3 top out at DDR3-2000), we’re talking about a feeble 16GB/s of memory bandwidth – lower than what most desktops and laptops now have for system memory, as DDR3-1600 with a 128-bit interface will do 25.6GB/s. Ouch. Of course it will depend on what settings you want to run at; for me, I don’t mind using medium quality in most games, but the low quality settings can often look quite awful.
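For reference, peak memory bandwidth is simply the effective data rate times the bus width. Here's a minimal sketch of the numbers quoted above; the 5GT/s figure for the GTX parts is my reading of the 2.5GHz GDDR5 entry in the spec table, not an NVIDIA-provided number:

```python
# Peak memory bandwidth in GB/s = data rate (MT/s) * bus width (bits) / 8 / 1000
def bandwidth_gbps(data_rate_mtps, bus_width_bits):
    return data_rate_mtps * bus_width_bits / 8 / 1000

print(bandwidth_gbps(2000, 64))    # 840M, DDR3-2000 @ 64-bit           -> 16.0 GB/s
print(bandwidth_gbps(1600, 128))   # dual-channel DDR3-1600 system RAM  -> 25.6 GB/s
print(bandwidth_gbps(5000, 128))   # GTX 850M/860M GDDR5 (assumed 5GT/s) -> 80.0 GB/s
```

That same gap is why the GDDR5 vs. DDR3 question discussed earlier matters so much on the lower-end parts.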
Comments
ThreeDee912 - Wednesday, March 12, 2014 - link
Very minor typo on the first page: "The second GTX 860M will be a completely new Maxell part"
I'm assuming "Maxell" should be "Maxwell".
/nitpick
gw74 - Wednesday, March 12, 2014 - link
why do I live in a world where Thunderbolt eGPUs for laptops are still not a thing and astronomically expensive, mediocre-performing gaming laptops are still a thing?

willis936 - Wednesday, March 12, 2014 - link
Because outward-facing bandwidth is scarce and expensive. Even with Thunderbolt 2.0 (which has seen a very underwhelming adoption from OEMs) the GPU will be spending a great deal of time waiting around to be fed.

lordmocha - Sunday, March 16, 2014 - link
The few videos on YouTube show that it is possible to run graphics cards over a TB1 4x PCIe link and achieve 90%+ of the card's performance when connected to an external monitor, and 80%+ when feeding back through Thunderbolt to the internal monitor.

So basically eGPU could really be a thing right now, but no one is making a gaming-targeted PCIe Thunderbolt enclosure. (Sonnet's needs an external PSU if you are trying to put a GPU in it.)
rhx123 - Wednesday, March 12, 2014 - link
Because GPU makers can sell mobile chips for a huge premium over desktop chips. A 780M, which is roughly comparable to a desktop 660, nets Nvidia a hell of a lot more cash.
A 660 can be picked up for around £120 these days, whereas on a Clevo reseller site in my country a 770-780M upgrade costs £158.
Nvidia knows that a large percentage of very high-end gaming laptops (780M) just sit on desks and are carried around very infrequently – a market which could easily be undercut price-wise with an eGPU.
My 13-inch laptop, 750 Ti ExpressCard eGPU, and PSU for the eGPU (Xbox 360) can still easily fit into a backpack for taking round to a friend's, and it cost much less than any similar-performing laptop available at the time. And when I don't need that GPU power, I still have an ultraportable 13-inch at my disposal.
CosmosAtlas - Wednesday, March 26, 2014 - link
Because Intel does not give permission for making Thunderbolt eGPUs. I was waiting for a Thunderbolt-based Vidock; however, it will never happen because of this.

willis936 - Wednesday, March 12, 2014 - link
So I take it NVIDIA hasn't hinted at the possibility of G-Sync chips being included in laptop panels? I think they'd make the biggest impact in laptops, where sub-60 fps is practically a given on newer titles.
I asked about this at CES. It's something NVIDIA is working on, but there's a problem in that the display is being driven by the Intel iGPU, with Optimus working in the background and rendering the frames. So NVIDIA would have to figure out how to make the Intel iGPU drive the LCD properly -- and not give away their tech, I suppose. I think we'll see a solution some time in the next year or two, but G-Sync is still in its early stages on desktops, so it will take time.

Hrel - Wednesday, March 12, 2014 - link
I'm surprised you guys didn't say anything about the 850M not supporting SLI. I was expecting a paragraph deriding that decision by Nvidia. I'm really upset. I would have loved to see how energy efficient SLI could get. Lenovo has had that laptop with 750M in SLI for about a year now and I've thought that was kinda stupid. But considering how power-efficient Maxwell is, maybe that could actually be a good idea now.
Maybe they'll still do it with underclocked GTX 860M's.
Hm, I bet that's what Nvidia wanted: to discourage OEMs from buying two cheaper GPUs per laptop instead of ONE hugely expensive one. Prevent them from SLI'ing the best GPU in their lineup.
Yep, pissed. I'm pissed.
Hrel - Wednesday, March 12, 2014 - link
best GPU for SLI* in their lineup.