
  • Pinn - Tuesday, September 1, 2020 - link

    When pushing higher bandwidths, do you chroma subsample or use compression?
  • imaheadcase - Tuesday, September 1, 2020 - link

    You shouldn't have to with the new DisplayPort and HDMI. I might be wrong, though, but I thought that's what everyone was excited about the new cards for.
  • Pinn - Tuesday, September 1, 2020 - link

    Nope. They hit important limits at things like 4k/120 or 8k/60.
  • willis936 - Wednesday, September 2, 2020 - link

    4k120 or 8k30*

    It’s right there in the table. 8k is 4x the pixels of 4k, so an equivalent throughput would have a quarter of the refresh rate. Sorry for the pedantry, but laypeople skimming comments could draw incorrect conclusions.
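
    A rough back-of-envelope sketch of the raw numbers (active pixels only, ignoring blanking overhead, and using commonly cited link payload rates, so treat these as approximations):

    ```python
    # Back-of-envelope uncompressed video bandwidth (active pixels only; real links
    # also carry blanking, so actual requirements are somewhat higher).
    def gbps(width, height, refresh_hz, bits_per_component, components=3):
        return width * height * refresh_hz * bits_per_component * components / 1e9

    for name, w, h, hz in [("4K120", 3840, 2160, 120),
                           ("8K60",  7680, 4320, 60),
                           ("8K30",  7680, 4320, 30)]:
        print(f"{name}: 8-bit RGB ~{gbps(w, h, hz, 8):.1f} Gb/s, "
              f"10-bit RGB ~{gbps(w, h, hz, 10):.1f} Gb/s")

    # For reference, commonly cited payload rates: HDMI 2.1 FRL ~42.7 Gb/s
    # (48 Gb/s raw, 16b/18b coding); DP 1.4 HBR3 ~25.9 Gb/s (32.4 Gb/s raw, 8b/10b).
    ```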
  • Trackster11230 - Wednesday, September 2, 2020 - link

    HDMI 2.1 can support 8k60 @ 4:4:4.

    https://www.hdmi.org/spec/hdmi2_1
    https://www.anandtech.com/show/11003/hdmi-21-annou...
  • Trackster11230 - Wednesday, September 2, 2020 - link

    (Using Display Stream Compression, however; not sure if this was the point of contention you were making.)
  • eddman - Tuesday, September 1, 2020 - link

    Wikipedia exists:

    https://en.wikipedia.org/wiki/HDMI#Refresh_frequen...
  • Ryan Smith - Tuesday, September 1, 2020 - link

    HDMI 2.1 supports Display Stream Compression. You generally won't need it unless you want to do 8K@60Hz with full RGB/4:4:4 chroma.
  • Pinn - Tuesday, September 1, 2020 - link

    Hi Ryan! Well, we all want 10-bit HDR, so that edges things up. If that's a must, do we choose between DSC and chroma subsampling? Maybe DSC does chroma as part of it.
  • jospoortvliet - Tuesday, September 1, 2020 - link

    DSC of course. It is lossless...
  • nevcairiel - Tuesday, September 1, 2020 - link

    DSC is *not* lossless. It's claimed to be "visually lossless", but that's far from the same thing. It's not bit-for-bit identical afterwards.
  • Ryan Smith - Tuesday, September 1, 2020 - link

    The only time you'd use anything but full chroma is when your source signal doesn't provide that much info to begin with, such as a Blu-ray disc or streamed movie. Those are 4:2:0 to begin with, so there's no reason to use full chroma.

    Otherwise for gaming and such, you'll want DSC.
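
    For a rough sense of the trade-off being discussed (approximate per-pixel payloads; DSC's ~3:1 figure is the commonly cited "visually lossless" target, not a guarantee):

    ```python
    # Approximate bits per pixel at 10-bit depth (HDR) for the formats discussed
    # above, plus DSC at its commonly cited ~3:1 "visually lossless" ratio.
    BITS = 10
    formats = {
        "RGB / 4:4:4":        3.0 * BITS,   # three full-resolution components
        "YCbCr 4:2:2":        2.0 * BITS,   # chroma halved horizontally
        "YCbCr 4:2:0":        1.5 * BITS,   # chroma halved both ways (Blu-ray/streams)
        "4:4:4 + DSC (~3:1)": 3.0 * BITS / 3,
    }
    for name, bpp in formats.items():
        print(f"{name:>22}: ~{bpp:.0f} bits/pixel")
    ```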
  • azfacea - Thursday, September 3, 2020 - link

    Wouldn't DSC add latency?? I'll take 1080p over 8K if the latency gets ruined; that will come back and bite you.

    I play Counter-Strike at 1440x1080 (that's 1080p 4:3 with black bars).
  • UglyFrank - Tuesday, September 1, 2020 - link

    I... I just don't see how AMD can combat this.
  • Pinn - Tuesday, September 1, 2020 - link

    Release two consoles. Nvidia copied the CPU-bypassing I/O, so the AMD cards should already have that.
  • whatthe123 - Tuesday, September 1, 2020 - link

    AMD already had the SoCs for both consoles with the ps4 and xbox one. It made no difference.
  • Gigaplex - Tuesday, September 1, 2020 - link

    It made a big difference. AMD got sales from the consoles, which kept them in business.
  • whatthe123 - Tuesday, September 1, 2020 - link

    I meant in terms of market influence for their discrete cards. They're already down in market share even after having the best-value card this gen.
  • Kangal - Saturday, September 5, 2020 - link

    To be honest, Nvidia has thrown a curveball and surprised me.
    That Samsung 8nm is impressive, not much worse than the mainstream 2018 TSMC 7nm lithography. So I'm thinking 2021 +5nm EUV TSMC wafers are going to provide something great to look forward to for the 2023 next next-gen cards too.

    I was expecting them to follow in the footsteps of the current-gen RTX-2000 series, by releasing cards with better Real-Time Raypath Tracing Technology, more AI/DLSS focus, and only mildly better rasterisation and compute performance. And I expected this at a slight price reduction.

    So my intuition for the announcement was: the RTX 3080 would be equal to or slightly faster than the RTX 2080 Ti, and priced at $700. The RTX 3070 I expected to be slightly faster than the RTX 2070-Super, and at $500. With the emphasis being on the new "RTX ON" feature dropping framerates from, say, 90fps down to 70fps, keeping locked above 60fps, instead of the 110fps-to-50fps drop we've come to accept with the 2000-series.

    So, uh, good job Nvidia and please keep this up!
  • Kangal - Saturday, September 5, 2020 - link

    With the above said, I was one of the few people who were quite let down with RDNA1.
    They made a massive lithography jump from TSMC's 14nm to 7nm, and still lost both the performance AND efficiency crown to Nvidia, which was using budget +14nm wafers. Even the comparisons to the Vega VII were quite mixed.

    The divide in architecture between AMD and Nvidia is just that huge. In fact the GTX 980 Ti, 28nm Maxwell from like 2014, is still trading blows with the RX 5600 XT, a 7nm RDNA1 card from 2020, with the major difference being power draw: 310W vs 190W. Just to illustrate the architectural differences. We've been told to wait for Big Radeon (Vega 64) before, and it didn't deliver. Told to wait again for Big Navi (RDNA 1), and it still didn't deliver. So I do not believe the claims that RDNA-2 is double the performance, or +50% faster than RDNA-1. Cry wolf too many times. Things most likely won't be any different, in that RDNA-2 will struggle to deliver against the current RTX 2000-series. I say this because I believe AMD expected the same as me: that the RTX 3000 announcement would go something similar to the hypotheticals listed in the comment above.

    So here's what I was expecting to come from RDNA2; late-2020 launch, based on +7nm TSMC.
    With the RX 7700XT coming in equal to the RTX 2080 Super and launching at $450 (price drop to $400 later), whereas the RX 7700 (non-XT) comes in slightly faster than the RTX 2070-Super and launches at $400 (price drop to $350 later). Both with ray tracing, but slightly inferior to the RTX-2000 series' "RTX On" (not complaining). So against a hypothetical RTX 3080 that's slightly faster than an RTX 2080 Ti and priced at $700... well, then you can make a good case for the hypothetical RX 7700XT at $450. And against a hypothetical RTX 3070 that's slightly faster than the RTX 2070-Super and priced at $500... well, you can make a great case in favour of the hypothetical RX 7700 that's equally fast at a cheaper $400 price.

    Now with the RTX 3000-series, RDNA-2 is kind of dead in the water BEFORE launch, just like what happened with the GTX-1000 series launch. The old argument that "Nvidia's being aggressive because they're scared" is nonsense. The RTX 3070 is the same price as expected, but it is several tiers faster, and the same goes for the RTX 3080. Heck, I am 100% expecting Nvidia to drop an RTX 3060 soon, priced at $350 and offering performance that is equal to or slightly faster than the RTX 2070-Super. The hypothetical/expected RDNA-2 cards above are going to STRUGGLE to compete with used RTX-2000 cards, let alone new RTX-3000 cards.
    I think AMD has been relying on TSMC too much. The improvements since the R9 290 have come more from the node than from the architecture and software, unlike the case with Nvidia (well, huge R&D budgets, duh). As TSMC gets more popular, the price of its wafers goes up, and so the profit margin shrinks for AMD cards. Eventually AMD will have to raise prices, putting them on equal price-to-performance footing with Nvidia. Nope. AMD has to change. They need to start competing properly, and to do that, they need to increase their driver quality and improve their GPU architecture.

    I always thought Nvidia could pull something off like this announcement (but expected them to be greedy), I just don't think AMD has the current capability to pull something off as impressive even though they aren't as greedy. I HOPE that what I wrote as my expectations for RDNA-2 cards are WRONG. I would gladly be PLEASANTLY surprised.
  • Spunjji - Monday, September 7, 2020 - link

    Honestly, this comment's pretty far off-base.

    To start, RDNA is pretty much even with Turing on power/performance - and yes, that's with a node advantage, so yes, it means that architecturally Turing is markedly more efficient than RDNA. That's not in dispute, but you're claiming that there was no efficiency improvement from Vega on 7nm to RDNA on 7nm which is just a lie - the 5700XT nearly matches Radeon VII's performance at ~100W less power.

    Then you compare the 980 Ti to the 5600 XT, which is a funny one, because the 980 Ti also performs like the 1660 Ti - Nvidia's competition for the 5600 XT. In effect that's more a comment on slow progress in the GPU industry up until now, but you frame it as a ding on AMD alone. You also quote the wrong power for the 5600 XT - it's 150W board power by the spec, ~160W measured, so a little higher than the 1660 Ti for a little more performance.

    You then launch off that total mischaracterisation to claim that AMD won't hit their targets for RDNA 2 - even though they hit their targets for RDNA and for every single Ryzen release. You're entitled to not believe them, that's fine, but you're not entitled to your own versions of the facts.

    Despite all that, I really don't disagree with your conclusion. AMD need to start competing properly instead of just going for the value proposition, else they won't be able to make the required margins to compete at the high-end. If they hit their 50% increase in PPW target, though, then their high-end card ought to compete with the 2080 but at a smaller die area / power draw. Whether they actually *do* remains to be seen.
  • Kangal - Monday, September 7, 2020 - link

    It seems like you completely disagreed with me, walked away, and landed at the same position. So I'm not sure how far off-base it really is :\

    I never disputed that AMD will hit their targets. They will. They determine what their targets are and when they'll hit them; it's easy. That said, AMD is being more objective than Intel by creating targets in advance, and in general I believe they do try to hit them with real-world targets. It's just sort of easy to fudge the figures a little here and there to save face during a low point. Hence, I've learned never to listen to the underdog, or the market leader, or any company... but to wait for tests to be done by unbiased enthusiasts (ahem, Anandtech). It's the scientific way after all (peer review).

    I got the power draws from the reviews done here; the figures may not be exact, but that doesn't detract from the point I made. There wasn't slow progress in the GPU industry until, well, Vega 64 happened. Nvidia made a decent leap with Pascal, and AMD was doing okay in the low-end/midrange but delayed "Big Polaris" several times. But after the GTX 1080, Nvidia saw little reason to push further. I won't even blame the cryptocurrency boom. This one is on AMD for not being competitive enough, and also on consumers for not buying AMD when they should have.

    So I think I've done AMD justice in my analysis, not a mischaracterisation. They need to do something about their debt/finances and try to pump more money into their R&D. The architectural differences are definitely there, and I want to see them claw back more ground. I know they're stretched too thin. You've got supercomputers, servers, desktops, laptops, budget chipsets, embedded chips, iGPUs, dGPUs, gaming drivers, storage solutions, etc. It's a lot. Frankly, we're lucky that TSMC and Zen were a success, as were the console sales. Maybe I'm asking for too much, too soon...

    @tamalero
    Face it, the 8nm lithography from Samsung is impressive. I thought it would land in the middle of the 14nm-7nm gap, but I was wrong; it is much, much closer to the 7nm node than to the 14nm wafers. Sure, TSMC's mainstream 7nm from 2018 would've been better, but not by much. So they've done well, and as I said earlier, at least this gives us some runway to look forward to the next GPU cards (TSMC, advanced +5nm, 3D stacking) due in like 2023. And according to rumours floating around, Nvidia is still buying some 7nm wafers from TSMC for certain silicon, but they couldn't get a tender for large yields and stock at a price they wanted. It seems Nvidia was too aggressive on prices, and TSMC doesn't care, since they make more money off smartphone SoCs anyway. Thank goodness for Samsung Foundry, who was ready to catch the fall, so to speak. Hopefully we can see more competition in this market going forward (Intel Fabs and GlobalFoundries, I'm looking at you).
  • Spunjji - Tuesday, September 8, 2020 - link

    @Kangal - As I said, I thought the overall comment itself was off-base, but I agreed with your conclusion. There was never a point of total disagreement, just dissent on specifics.

    Anandtech do Total System Power for their power numbers here, which is why I went elsewhere to check board power numbers (and not the ones reported by the GPU itself).

    I don't think you can entirely blame a lack of competition from AMD for the slowdown. I'm going to give two reasons here:
    1) Nvidia just announced their biggest jump ahead in performance for a long time, and there hasn't really been any more or less competition from AMD of late than there was in the period leading up to Turing. I think AMD's relative market share might even be down on that period when Polaris was relatively fresh.
    2) There are self-evident reasons why Turing didn't move the performance bar forwards much: they were stuck on a similar manufacturing process, and they wanted to introduce RTX. That constrained the possibilities available to them - some of the die was consumed by expensive RT features, and they weren't getting any of that back from a shrink. Compare that to Pascal, where they benefited from a full node shrink and minor architectural changes.

    As for consumers, well, I agree in part, but I feel like that's more complex too. Take the near-total lack of Polaris-based laptops - what was that about? Vega I can sort-of understand due to cost - although the one laptop with a Vega 56 in it actually competed pretty well with its 1070-based cousin - but given the prices Polaris cards sold for on desktop, it should have been no problem at all for even a fairly clunky Polaris-based solution to compete on price alone, if not performance / thermals. At some point it's not people's fault for not buying things if they just aren't *there*, and I genuinely don't know whose fault that is.
  • Sivar - Tuesday, September 8, 2020 - link

    Thank you for this example showing that we can disagree with someone, debate using facts, and yet remain respectful and open to the idea that the opponent's statement can have merits despite its flaws.
  • tamalero - Monday, September 7, 2020 - link

    "8nm impressive". According to who?
    Everyone else is saying that if Nvidia had swallowed its ego, they would have been on the much, much better 7nm from TSMC.

    8nm is not even at the density of 7nm.
  • eddman - Tuesday, September 1, 2020 - link

    They didn't copy anything. They've already had their own solution for a year.

    https://developer.nvidia.com/blog/gpudirect-storag...

    It can now be leveraged in windows, since MS is adding the DirectStorage API to windows.
  • inighthawki - Tuesday, September 1, 2020 - link

    Adding features like this to your hardware isn't something you do in a couple months after the hardware features of next gen consoles were revealed. This type of thing is something you start years in advance. It's pretty clear that there was some sort of collaboration between parties like MS/Sony/Game devs and the desire for hardware based decompression and loading for IO was expected to be a prominent feature of the next generation.
  • tipoo - Tuesday, September 1, 2020 - link

    You're suggesting that since the PS5 tech talk in March of this year, Nvidia architected a competing solution?

    These things take way longer than people seem to imagine. It looks like their views on where next gen games were going were quite similar.
  • Yojimbo - Tuesday, September 1, 2020 - link

    NVIDIA copied it so well they are coming out with it first...
  • Kakkoii - Wednesday, September 2, 2020 - link

    Nvidia has had this feature as part of NVLINK on their server side platforms for many years now... I've been eagerly anticipating its arrival on consumer GPUs.
  • JfromImaginstuff - Tuesday, September 1, 2020 - link

    Same here. Seriously, how can AMD even come up with a product that comes close to the numbers Nvidia is claiming? (Sure, real-world performance may be different, but still, why would they make these claims if they didn't have anything to back them up?)
  • Cooe - Tuesday, September 1, 2020 - link

    You do realize the performance gains that Nvidia are suggesting are only for RTX titles, right???? In pure rasterization performance, the 3090 isn't going to be anywhere even CLOSE to 2x a 2080 Ti. And an 80 CU RDNA 1 card wouldn't be too far off the 3090 in raw rasterization performance, let alone an RDNA 2 card. Outside of ray-tracing performance (which is the big question here), it shouldn't be too hard at all for AMD to get in a roughly similar performance ballpark (+/-20%).
  • whatthe123 - Tuesday, September 1, 2020 - link

    It's an upgrade to 35.7 TFLOPS within the same company, which generally doesn't result in less performance per TFLOP. How would you explain such a large FP performance boost, in addition to bandwidth increases and latency reductions, yet losing so much efficiency?
  • Yojimbo - Tuesday, September 1, 2020 - link

    The 1.8x performance increase NVIDIA is claiming isn't just for RTX titles, but I think NVIDIA will be further away from their theoretical max FLOPS in this generation than the last. Look at the 2080: it has about 10 TFLOPS theoretical max, while the 3080 has about 30 TFLOPS theoretical max. But the 3080 is about twice as fast as the 2080, not 3 times as fast. NVIDIA has rejiggered their architecture in a way that unlocks more performance in general and in the areas they see as most important going forward, but in doing so their max theoretical performance measurement has gone way up, leaving their "real world application compute efficiency" in games lagging behind their earlier architectures.
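
    A quick sanity check on the rough figures quoted above (treating the TFLOPS and the ~2x claim as ballpark numbers):

    ```python
    # Treating the figures above as rough: theoretical FP32 roughly triples
    # (~10 -> ~30 TFLOPS) while claimed gaming performance roughly doubles,
    # so per-TFLOP gaming throughput lands around 2/3 of Turing's.
    turing_tflops, ampere_tflops = 10.0, 30.0
    claimed_speedup = 2.0
    per_flop_ratio = claimed_speedup / (ampere_tflops / turing_tflops)
    print(f"Ampere gaming perf per TFLOP vs Turing: ~{per_flop_ratio:.2f}x")  # ~0.67x
    ```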
  • whatthe123 - Tuesday, September 1, 2020 - link

    This is true, but the poster was suggesting a huge drop in efficiency, considering the 2080 Ti boosts to 13 TFLOPS vs the 3090 boosting to 36 TFLOPS. In order for it to be "nowhere near 2x" performance, the loss of efficiency per FLOP would need to be massive. On a game-by-game basis it will probably not be 2x across the board, but I doubt there's that much loss in general.
  • Alexvrb - Tuesday, September 1, 2020 - link

    If performance is close to double on average, you're talking about extracting about 2/3 the gaming performance per FLOP, give or take. Basically... same story as always - ignore FLOPs across architectures and wait for benchmarks across a wide swath of titles. With that being said it's still a big jump in absolute performance, and the new process gives them room in the future to adjust pricing.

    Personally I'm more interested in the RT increase, especially in a future 3060 and below. RT previously wasn't worth bothering with if you had less than a 2070; it totally butchered performance. Might actually make it worth enabling at least SOME RT features on a mid-range card. I never spend more than ~$300 on a GPU; I can't justify it.
  • Spunjji - Thursday, September 3, 2020 - link

    100% with @alexvrb on this one (except that I'd consider up to $400, but no higher and preferably lower)
  • tamalero - Monday, September 7, 2020 - link

    Actually, it is... the "up to 1.9x" claim was for RTX titles using both DLSS and ray tracing,
    with the average being 60% at most.

    And those are Nvidia powered titles.
  • tamalero - Monday, September 7, 2020 - link

    Vega had monstrous compute power in TFLOPS. Did it really beat Nvidia's cards of the time in gaming? No, it did not, despite being a very powerful compute GPU.
  • xenol - Tuesday, September 1, 2020 - link

    Throw in DLSS 2.0 and the effective performance you get throws another curveball at AMD. And it's not like DLSS 2.0 or any future version seems to need specialized training anymore. It's now TAA on steroids.

    Combine this with VRR with the potential to reconstruct those lost details and AMD has a long road ahead of it to catch up to the effective performance.
  • Kurosaki - Tuesday, September 1, 2020 - link

    DLSS is a handicap. Never use these image-distorting techniques unless you have to!
  • Yojimbo - Tuesday, September 1, 2020 - link

    DLSS is a godsend.
  • whatthe123 - Tuesday, September 1, 2020 - link

    Better stop playing games with subsurface scattering then, since it's basically just a blur pass on top of a transparency layer.
  • SirDragonClaw - Tuesday, September 1, 2020 - link

    Umm. Do you even know what DLSS 2.0 is?

    Clearly not, because it looks better than native.
  • haukionkannel - Wednesday, September 2, 2020 - link

    If DLSS looks better than native, you have to check your eyesight...
    DLSS is an upscaling technique, so 1080p tries to look as close to 4K as possible.
    Real 4K definitely looks better than upscaled 1080p, any day!
  • Spunjji - Thursday, September 3, 2020 - link

    The "better than native" claims are always a giveaway. It invariably means "better than native... with the worst possible form of AA switched on".
  • Yojimbo - Friday, September 4, 2020 - link

    Well if you allow yourself non-realtime AA then sure, native looks better. But since TAA seems to be standard then, yes, DLSS upscaled seems to give mostly better-looking results than TAA rendered at the native resolution.

    What the people who have a problem with "better than native" seem to forget is that "native resolution" is not perfection, nor is it any type of ideal. It is just the non-DLSS best practice. If native resolution were perfection then we wouldn't need any anti-aliasing at all. But if you render something at 4K without any anti-aliasing at all it's going to look awful. It is important to note that we are not dealing with trying to display a single picture, but rather the approximation of a video feed. And there is a certain continuity over time in the ideal video feed (what we want to get as close to as possible) that can be used to help reduce the problem. The strength of DLSS can be said to be that it's so much better at anti-aliasing than other methods which can be done fast enough for low frame times that the resulting image quality is better even if the "original render" is at a lower resolution. I put "original render" in quotes because once you start using temporal techniques the actual input to a final render is a function of both space and time.

    NVIDIA isn't creating the information from nowhere to make a better image-quality 4k render. It's not magic. They are using advanced statistical techniques to more efficiently extract the relevant information from space and time to obtain the information. That's why it looks "better than native".
  • Spunjji - Monday, September 7, 2020 - link

    I'm not arguing that DLSS looks better than TAA (it certainly does), I'm arguing that TAA is a weird choice for comparison because - to my eyes at least - it looks worse than native 4K without AA and runs way slower than an upscale from 2.5K. I play on a high-DPI display, though, so jaggies aren't as noticeable as the fidelity loss from running at a non-native res.

    DLSS on high-quality is mostly better than a basic 2.5K upscale, but with the odd weird artifact. I consider those two to be comparable in practice.

    DLSS rendering at 1080p and output to 4K looks terrible to my eyes.
  • Spunjji - Wednesday, September 2, 2020 - link

    Oh yay, here come the "DLSS changed my life" shills.

    "It's now TAA on steroids" - TAA is shite, so yeah, big whoop. All their comparisons have been artificially hobbled by using computationally expensive and ugly forms of AA.

    Don't get me wrong - DLSS 2.0 is a revelation compared with the original implementation - but in practice it's not dramatically and unequivocally better than other methods of rescaling / sharpening. Anybody claiming otherwise is doing some serious post-hoc rationalisation of their very expensive impulse purchases.
  • imaheadcase - Wednesday, September 2, 2020 - link

    All types of AA are pretty much a "cheat" to make stuff look "better". It's only there as a help to those that don't have the hardware to begin with. If you can play at 4k/60 without AA, do it. I personally don't see jaggies in games, so I don't have a need for any AA.
  • Spunjji - Thursday, September 3, 2020 - link

    @imaheadcase - I notice jaggies quite a lot so I've always preferred *some* sort of AA, but TAA and FXAA just make things look blurry to my eyes, which I do find inferior to 4K without AA at all on a moderately high-density panel.
  • Yojimbo - Friday, September 4, 2020 - link

    So I assume you play all your 4k games with any sort of temporal AA turned off...
  • Spunjji - Monday, September 7, 2020 - link

    @Yojimbo - I play games with the best settings my hardware can manage, so on *very rare* occasions that's 4K with MSAA, but mostly it's 2.5K with MSAA or none at all where that AA method is not supported. TAA looks like weird mush to me. I have a 4K 24" monitor, though, so I'm in a small minority when it comes to how these images look on my actual display.
  • eddman - Thursday, September 3, 2020 - link

    I don't have any personal attachment to any of this, but DF specifically compared DLSS 2 to sharpening and noticed there was detail in the DLSS image that was missing in the sharpened one.
  • Spunjji - Thursday, September 3, 2020 - link

    @eddman - I can believe it, but I'm also sure you'd need to pause and check screenshots to really notice; that or play on an absolutely massive screen. Even then, in motion, if a person "can't tell the difference" between DLSS and native 4K (the dude mentions having played Metro on the 2080Ti using DLSS 1.0 in Basic mode, which looks like junk) then they almost certainly can't readily tell the difference between DLSS and a naive upscale from 2.5K.
  • eddman - Thursday, September 3, 2020 - link

    To some extent, the same can be said about 4K in motion.

    In any case, there are parts in games where you make very small movements, or none at all. Not all games require fast movements all the time; RPGs, strategy games, action-adventure games, adventure games, etc. In these cases you will notice the difference in details between DLSS 2 and sharpening.

    Obviously 4K looks the best all around, but when someone cannot afford a card capable of maintaining an acceptable frame rate, DLSS 2 looks like a good compromise.
  • Spunjji - Monday, September 7, 2020 - link

    @eddman I definitely agree that it's a good compromise, no argument there. I'm just tired of seeing that overstated for dramatic effect - "with DLSS Nvidia will crush AMD", etc.
  • umano - Tuesday, September 1, 2020 - link

    The kind of cards I am interested in have more than 10GB of VRAM. If AMD is going to release something like an evolved Radeon VII, with good drivers this time, maybe I will go RDNA2; if not, the 3090 with 24GB of VRAM at $1,500 is a good deal for me.
  • MrVibrato - Tuesday, September 1, 2020 - link

    "with good drivers this time"
    this made me chortle...
  • Luminar - Tuesday, September 1, 2020 - link

    TSMC's 5nm EUV process should allow AMD to smash the 3090 with even a shrink of their RX580 card. Samsung's 8nm is horrible.
  • whatthe123 - Tuesday, September 1, 2020 - link

    RDNA2 is 7nm. 5nm is still at least a year away.
  • s.yu - Tuesday, September 1, 2020 - link

    At least they claim it's an improved 8nm.
  • Spunjji - Wednesday, September 2, 2020 - link

    Of course they do - everybody knows it's pants so they have to sing a song about how it's not, ackshually.
  • Luminar - Wednesday, September 2, 2020 - link

    I would hope they made improvements sometime in the last 2 years.
  • s.yu - Thursday, September 3, 2020 - link

    I agree, there should be some improvement, even for pre-EUV.
  • BackdoorBeauty - Tuesday, September 1, 2020 - link

    So the AMD from 3 years in the future will beat the current NVIDIA card? Impressive!
  • Gigaplex - Tuesday, September 1, 2020 - link

    NVIDIA might use TSMC's 5nm EUV process.
  • mdriftmeyer - Thursday, September 3, 2020 - link

    Apple and AMD have already bought up the fab capacity; hence why Nvidia signed a multi-year deal with Samsung.
  • Quantumz0d - Tuesday, September 1, 2020 - link

    Try harder next time. "Samsung 8nm is horrible," lmao, where did you get that? From the Ampere power figures? Great, just great, how a simple node shift is going to make Nvidia bleed, right.
  • SirMaster - Tuesday, September 1, 2020 - link

    >You do realize the performance gains that Nvidia are suggesting are only for RTX titles, right????

    I don't know about that.

    https://youtu.be/cWD01yUQdVA?t=509

    Doom Eternal is running 80-100% faster on a 3080 compared to a 2080.

    A 2080Ti is only about 30% faster than a 2080, so a 3080 should be doing like 45% more than a 2080Ti, and then a 3090 should do another 30% on that which compounds to near 2X the 2080Ti.
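
    Working through that compounding explicitly (all inputs are the rough estimates from this thread, including eddman's ~20% compute delta mentioned below):

    ```python
    # Compounding the thread's rough estimates: 3080 ~1.9x a 2080, 2080 Ti ~1.3x
    # a 2080, then a 3090 uplift of either ~30% (SirMaster) or ~20% (eddman).
    speedup_3080_vs_2080 = 1.9
    speedup_2080ti_vs_2080 = 1.3
    r_3080_vs_2080ti = speedup_3080_vs_2080 / speedup_2080ti_vs_2080   # ~1.46x
    for label, uplift in [("+30%", 1.3), ("+20%", 1.2)]:
        print(f"3090 at {label} over the 3080: ~{r_3080_vs_2080ti * uplift:.2f}x a 2080 Ti")
    # Prints ~1.90x and ~1.75x respectively.
    ```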
  • inighthawki - Tuesday, September 1, 2020 - link

    Awesome link, thanks for sharing! Those are some really impressive numbers.
  • Kratos86 - Tuesday, September 1, 2020 - link

    The host of that video starts the video by saying that he was only allowed to test games that were picked by Nvidia and he was not allowed to publish any data related to frame rates or frame timing.

    All the games demoed were running TAA, DLSS or RTX and in some cases various combinations of the three. Doom Eternal has baked in TAA and will be getting RTX support soon which is why it's on the list.

    It'd be awesome to see the 3090 doubling the performance of the 2080 Ti, but if you are looking at your 2080 Ti with disappointment because you can only get to 30 FPS in Microsoft Flight Simulator at 4K, I think you might be expecting a bit much to see the same settings at twice the framerate on the 3090, even if the 3090 costs $1,500, up from $1,200 for the 2080 Ti. I hope I am wrong though; doubling framerates in most games for a relatively small price premium would be a game changer.
  • eddman - Tuesday, September 1, 2020 - link

    "All the games demoed were running TAA, DLSS or RTX "

    No, the first 5 tests were done without DLSS or RT; some of these games do not even support RT or DLSS, like Borderlands 3 and Doom Eternal.

    As for TAA, what of it? It's not RT core or tensor core dependent. Having it enabled is not an issue. It's just a temporal AA.

    SirMaster is probably wrong about 3090's performance though. The difference compared to 2080 Ti will surely be smaller, based on the compute performance delta between 3080 and 3090, which is about 20%, not 30%.

    I guess 3090 would be about 60% to 80% faster than 2080 Ti.
  • Spunjji - Wednesday, September 2, 2020 - link

    "As for TAA, what of it?" - that's just it, though - why have it switch on? It's a rubbish, ugly way to do AA. They always did the TAA comparison with original DLSS to make it look like less of a train-wreck, so the most obvious reason they'd be using it here would be if it - for some reason - shows their latest architecture in a better light.

    It looks like they have some seriously fast cards. I'll be waiting for 3rd-party benchmarks before I make up my mind about how much faster, though. Everything they're doing right now will be calibrated to push as many pre-sales of their absurdly expensive high-end offerings as they can get in before the real benchmarks roll out.
  • eddman - Thursday, September 3, 2020 - link

    IINM DF always run their tests with AA, and since TAA is what we get with most games nowadays, that's what he had on in some of those titles. I highly doubt there is any TAA performance difference between RTX 20 and 30 cards.

    He was comparing the performance, not the visuals. I don't think DF is the kind of channel to deliberately try to skew the results. He said this:

    "The games shown today are chosen by nvidia but... a fair proportion of those games don't have benchmark modes either and I chose the content for analysis on those games based on my experience in playing them. Testing was carried out right here at my place with no nvidia overseers or nvidia supplied hardware."
  • Kakkoii - Wednesday, September 2, 2020 - link

    This isn't true at all. Digital Foundry already did a test in a couple of games with no RTX or DLSS, and the 3080 alone absolutely destroyed the 2080. At 4K Ultra in Doom Eternal, the 3080 was getting between 1.7-2x the performance of the 2080... So you can imagine just how much more perf the 3090 has.
  • alufan - Tuesday, September 1, 2020 - link

    Compare this to the console chip in the PS5 running the Unreal Engine demo; it's actually very unimpressive in comparison, especially as that was showing 4K in real time with, in some cases, 500 statues with hellish levels of triangles - in a console. I'm thinking Big Navi is going to spank Nvidia, but I may be wrong.
  • whatthe123 - Tuesday, September 1, 2020 - link

    Unreal's demo was running mostly at 1440p, upscaled with dynamic resolution, as confirmed by Unreal to Digital Foundry.
  • alufan - Tuesday, September 1, 2020 - link

    It was 8K textures, though, in real time, with billions of triangles and what looked like RT. Again, bear in mind it's a console, not a dedicated graphics chip. I'm still betting on AMD having some serious stuff to show off; Lisa had that same smirk when she was telling the press that Ryzen 3000 was better.
  • whatthe123 - Tuesday, September 1, 2020 - link

    It wasn't billions of triangles. The whole point of their NVMe engine is that it streams geometry; it was rendering about 20 million triangles at a time, streaming in triangles as needed. AMD and Nvidia have similar I/O engines planned for their GPUs (it's in the article, called RTX I/O; I think AMD's is called BCC).
  • eddman - Tuesday, September 1, 2020 - link

    You don't stream triangles. They are created and handled by the GPU. It's a new GPU feature, not storage related.
  • whatthe123 - Tuesday, September 1, 2020 - link

    Yes, you do indeed stream in geometry, a.k.a. triangles, with Unreal's Nanite. That was literally the entire point of their presentation: https://www.unrealengine.com/en-US/blog/a-first-lo...

    "Nanite geometry is streamed and scaled in real time so there are no more polygon count budgets, polygon memory budgets, or draw count budgets; there is no need to bake details to normal maps or manually author LODs; and there is no loss in quality."
  • eddman - Tuesday, September 1, 2020 - link

    I don't know what they mean by that exactly, but all of this is basically mesh shading.

    AFAIK triangles are drawn. What gets streamed is whatever needs to exist within that triangle (read: texture).

    https://developer.nvidia.com/blog/using-turing-mes...

    In any case, rendering just a few million triangles out of billions and disregarding the rest is a GPU feature and not related to storage capabilities.
  • whatthe123 - Tuesday, September 1, 2020 - link

    It's not Nvidia tech... it's Unreal tech, which is virtualized geometry. I literally posted the link where they say, word for word, that the geometry is streamed, not a texture, which allows them to maintain the same level of scene geometry without loss through LOD.

    Geometry streaming has been in the works since id talked about it in 2006. It is new technology, not mesh shading from Turing.
  • whatthe123 - Tuesday, September 1, 2020 - link

    Hell, if you even read my link (or your own link) you'd realize mesh shading relies on shading transitions in LOD levels and eliminating primitives based on FOV, whereas Unreal's method is streamed in real time. Mesh shading does not necessarily require streaming if LOD levels fit in memory, whereas Unreal's method is specifically stream-based geometry.
  • eddman - Wednesday, September 2, 2020 - link

    When I wrote it's basically mesh shading, I didn't mean it's nvidia's mesh shading but something similar to it.

    The unreal team has already stated they leverage PS5's primitive shaders for their nanite path.

    https://www.eurogamer.net/articles/digitalfoundry-...

    "On PlayStation 5 we use primitive shaders for that path which is considerably faster than using the old pipeline we had before with vertex shaders."

    Primitive and mesh shading are not exactly the same but still quite similar.

    I'd seen that Unreal blog post a while back. There isn't enough technical information in there on how Nanite actually works. What do they mean by "Nanite geometry", exactly? All they say there is what it achieves, not how it does it.

    Again, whatever it is, it has to be performed by the GPU.
  • gescom - Tuesday, September 1, 2020 - link

    "confirmed by unreal to digitalfoundry."

    "digitalfoundry" everywhere.
    Nvidia's new PR factory?
  • whatthe123 - Tuesday, September 1, 2020 - link

    What? They're the guys that said the Series X's RDNA2 was already as good as a 2080, for an entire SoC console that will probably cost the same price as a 2080. They're also among the first people touting the consoles for their improved SSD decompression methods, and neither console uses Nvidia hardware. What are you smoking? DF has been around forever.
  • Arbie - Tuesday, September 1, 2020 - link

    I watched the Digital Foundry video. It was very reasonable, with multiple limitations and caveats stated. The guy knows what he's talking about, had a 3080 and could compare it directly to a 2080. Do you expect him to just keep quiet? What?
  • Spunjji - Wednesday, September 2, 2020 - link

    And where exactly do you think he got that 3080 from? Blimey...

    Stating multiple limitations and caveats doesn't sidestep the fact that they're taking part in a PR exercise, which is what anybody publishing a "preview" under strict guidance from Nvidia is doing. That doesn't mean they're lying, it doesn't mean they're 100% wrong, but it does mean they're unlikely to be giving you 100% of the story.
  • imaheadcase - Wednesday, September 2, 2020 - link

    That is just silly logic. If that were the case, EVERY review site that gets a card SUPPLIED by Nvidia would be under the same scrutiny. I don't see ads on this site, but I'm pretty sure Nvidia supports this site and others even if not sponsored directly.
  • Spunjji - Thursday, September 3, 2020 - link

    @imaheadcase You missed my point - and maybe I didn't stress it enough - that I'm cautious because it's a *pre-release PREVIEW*. My logic isn't "anyone who has a card supplied by Nvidia is biased" - it's that granting a lone reviewer access this early wouldn't be done unless they could guarantee the results showed their product in the best possible light.

    As a reviewer, if you're given that kind of extraordinarily rare opportunity, it would be commercial suicide to then do something that might jeopardise any future opportunities. There's no need to assume any corruption or skulduggery - he seems genuinely enthused about the product.
  • quiksilvr - Tuesday, September 1, 2020 - link

    Yes, but the cheapest GPU alone costs as much as the PS5, which includes the CPU, motherboard, I/O, Blu-ray disc drive and 1TB of NVMe storage. It isn't really an apples-to-apples comparison. It just sucks that AMD is not competing in this segment, so now we have a GPU that costs over $1000, which is ridiculous.
  • Ushio01 - Tuesday, September 1, 2020 - link

    I wasn't aware that Sony had revealed how much they are selling the PS5 for, or how much they are losing per console sold.
  • Kratos86 - Tuesday, September 1, 2020 - link

    I think you make an important point. The PS4 came to market at $400, but it was a much less advanced console than the PS5. Nowadays a mid-range GPU costs nearly $500, up from around $350-ish a couple of years ago. The PS5 isn't going to price-match the PS4 or even come close, because it isn't just the GPU in these new consoles that will cost more than ever before. Having said that, $600 is still a very reasonable amount for a system with the PS5's specs, and it should be able to run Disco Elysium and Wasteland 3 just fine, so I can live with that.
  • rabidkevin - Wednesday, September 2, 2020 - link

    A $600 card is not mid-range. The 3070 is the bottom of the high-end market. The 3050 and 3060 will be mid-range cards coming out at a later date.
  • imaheadcase - Wednesday, September 2, 2020 - link

    But people buying one of these cards pretty much already have everything else in the PC already. It's not like people are going out and building a whole new PC around the GPU. That is the fundamental difference between consoles and a PC. I have a 1080 Ti in my system right now, and I'm going to fork over for the 3090 because that is almost 5 years between cards. Makes sense.
  • MrVibrato - Tuesday, September 1, 2020 - link

    I don't think Big Navi spanks Nvidia. If there is some spanking, it will be the Radeon Technologies Group's drivers spanking big navi.
  • Spunjji - Wednesday, September 2, 2020 - link

    🤦‍♂️
  • Quantumz0d - Wednesday, September 2, 2020 - link

    Dude, they said a 2070S would do something similar to that demo showcase on a laptop; google it, man. AMD is not going to spank Nvidia at the 3080 and 3090 class of performance; heck, a 3070 is 2080 Ti equivalent in raster, and with RT it's going to chomp down. AMD is dead unless they have a really competitive idea. Nvidia is just waiting for that to reveal the SUPER cards next year with improved performance, more SMs, and G6X memory.
  • Spunjji - Wednesday, September 2, 2020 - link

    I think you're very wrong, but I still hope it's competitive.
  • allanxp4 - Tuesday, September 1, 2020 - link

    TSMC N7+ vs Samsung 8nm, that's how.

    And RDNA 2 is rumored to fix RDNA 1's power hunger and efficiency issues, so they definitely stand a chance there.

    I don't know why people were surprised by what Nvidia launched and are going "AMD has a problem"; it's exactly what people expected and what the rumors and leaks eventually pointed to, and I don't expect it to be any different with Big Navi.
  • Spunjji - Wednesday, September 2, 2020 - link

    Because the shills need to goad the mooks into pre-ordering before the benchmarks come out; this way they'll spend the rest of the generation aggressively rationalising their purchase and doing free PR for Nvidia, just like all the dildos who bought RTX 20 series GPUs with uselessly slow RT features have been doing for 2 years now.
  • WJMazepas - Tuesday, September 1, 2020 - link

    We have to wait for the RDNA2 reveal, but if the rumors of perf/W being increased by 50% are true, then AMD won't be too far behind.
  • Spunjji - Wednesday, September 2, 2020 - link

    Pretty much. RDNA and Turing are close to parity on perf/W despite the node difference. That makes the section that breaks down Nvidia's perf/W claims quite telling - 34% from 2080 to 3080, based on their numbers. If that's borne out - and AMD's 50% claim proves true - then AMD will be back to having a perf/w advantage over Nvidia for the first time since the 40nm era.

    I've been getting big Fermi vibes from this generation ever since the news of the 3090's cooler leaked out, and these figures are reinforcing that. I'm wagering we'll end up with a similar situation: Nvidia with the heavy hitters, AMD with the value proposition. My fingers are crossed.
  • Kevin G - Tuesday, September 1, 2020 - link

    Not too difficult: chiplets, but for GPUs. That'd be an easy means of throwing lots of silicon at processing; AMD has generally shied away from super-massive chips the past few years. Vega 10 was 486 mm^2, which was AMD's largest GPU over that span. Compare that to the insanely large 828 mm^2 used for the A100 chip. AMD would be fine if they were able to, say, throw four 400 mm^2 dies with some HBM on a package to go up against the A100. Despite the challenges in packaging such a setup, AMD would likely end up cheaper, as yields on an 828 mm^2 die can't be great. This would also enable AMD to leverage the same base 400 mm^2 die by itself for the midrange consumer market and two of them for the gaming enthusiast. (The quad-GPU-die package would be for HPC.)

    So AMD could do this to attain the performance crown, though I highly doubt that they'd go that direction.
  • SaberKOG91 - Wednesday, September 2, 2020 - link

    As much as I hate to say it, Nvidia already have built and tested chiplet GPUs. There must be some problem they ran into that kept them from switching over instead of these huge dies. I forget who it was, but when Navi came out and wasn't chiplet based, AMD's folks said that the programming model would be different and they didn't want to go that route until they had it sorted.

    Realistically though, let's say Navi 2X may have 50% improved perf/W and is built on N7+ with 10% higher clocks and 20% higher density (83% size). If we assume 5120 shaders at say 2.2GHz, that's 22.5 TFLOPS. That's still a much smaller die (~420 mm^2) than Nvidia's, which will help keep the cost down. Power is kind of hard to guess since we don't know if it'll be HBM or GDDR6X, but a rough estimate might come from applying that 50% perf/W figure to the TBP of the 5700 XT, setting us at ~310W (seems a little low; expect 350-375W). If that card sits between the 3080 and 3070 at say 500-600 USD, it'll sell like hotcakes. Then it's just a waiting game for 5nm RDNA3 cards.
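
    A minimal sketch of that estimate; the 5700 XT reference figures (2560 shaders, ~1.905 GHz boost, 225W TBP) are assumptions added here, not from the comment:

    ```python
    # Sketch of the estimate above. The Navi 2X inputs come from the comment; the
    # 5700 XT reference numbers (2560 shaders, ~1.905 GHz, 225 W TBP) are assumed.
    def fp32_tflops(shaders, clock_ghz):
        return shaders * 2 * clock_ghz / 1000  # 2 FP32 ops per shader per clock (FMA)

    navi2x = fp32_tflops(5120, 2.2)     # ~22.5 TFLOPS, as quoted above
    navi10 = fp32_tflops(2560, 1.905)   # ~9.75 TFLOPS (5700 XT, assumed)

    # Scale the 5700 XT's 225 W by the throughput ratio, then divide by the rumored
    # 1.5x perf/W gain; this lands nearer the "expect 350-375W" caveat than ~310W.
    est_power = 225 * (navi2x / navi10) / 1.5
    print(f"~{navi2x:.1f} TFLOPS, estimated board power ~{est_power:.0f} W")
    ```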
  • Gomez Addams - Wednesday, September 2, 2020 - link

    If the programming model changes because of a chiplet design then they didn't do it right.
  • Kevin G - Wednesday, September 2, 2020 - link

    nVidia's research into the topic wouldn't change the model at all: multiple dies on a package would be abstracted by the driver as one large GPU to software. That's a huge win for scalability.
  • Gomez Addams - Wednesday, September 2, 2020 - link

    Exactly!
  • Kevin G - Wednesday, September 2, 2020 - link

    nVidia's usage of chiplets is expected in the future: they published a paper on the topic a few years ago with some data. The main takeaway is that multiple GPU chiplets can be abstracted into one larger virtual GPU for software purposes, and the previous issues with SLI scaling don't exist because SLI isn't used. There is a performance hit involved as hardware resources and schedulers are spread about, and chiplet-to-chiplet bandwidth isn't as great or as low-latency as if it were a single monolithic die.

    Realistically all the players are going this way. nVidia has that research paper, Intel has formally announced that that is how they're starting for HPC and AMD has already used chiplets on the CPU side plus HBM on the GPU side. It is all a matter of who will move first and be daring enough to get this into mainstream consumer products.

    I'm not expecting AMD to go this route for this generation (I'd love to be wrong), but they could next iteration (same with nVidia). Intel could go this direction next year with what they've announced, but so far hasn't committed to anything given their own foundry issues. My main point is that AMD isn't competing with nVidia with equally large dies to scale up performance. nVidia wins easily because they are simply throwing far more resources at the problem in silicon, and for embarrassingly parallel problems like graphics, you get excellent results doing so.
  • SaberKOG91 - Wednesday, September 2, 2020 - link

    I'm aware of all of this. Chiplets are awesome for yields and for exceeding reticle limits, but they need to be paired with a fabric that doesn't destroy latency. I think we are finally there with TSMC's CoWoS process, but it takes years to pivot an architecture that way. It's also unclear how they will decompose the GPU. Nvidia's approach to this looks to basically be a NoC mesh with RISC-V processors as the front-end to small GPUs. But they did this at a very small scale on a last-gen interposer. RDNA4 might finally accomplish this for AMD, but I'm pretty confident RDNA3 will be another monolithic design that takes advantage of 5nm's density improvements.
  • Spunjji - Wednesday, September 2, 2020 - link

    I'd be willing to take the bet on those TFLOP figures at ~310W.

    If they can do that then the inevitable second-tier product would be an absolute killer, too (as is tradition for AMD).
  • SaberKOG91 - Wednesday, September 2, 2020 - link

    I would like to see lower power, but the savings from GDDR6X to HBM2 aren't very good. AMD would need to go with HBM2E and I'm not sure that's yielding well enough to meet that kind of demand.
  • Spunjji - Thursday, September 3, 2020 - link

    It would also kill the value proposition.

    I did a few more calculations after I left that comment yesterday, and leaks/theoretical numbers say we'd be closer to 20TFLOPS on RDNA 2 at ~310W - which would make your power estimates for 22.5TFLOPS more likely given how AMD's architectures tend to scale at the shoulder of their voltage curves.

    That 20TFLOPS at ~310W figure could be mighty competitive with the 3080 if it is indeed running at 2/3 the FLOPS/fps efficiency of Turing.
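
    Spelling that comparison out with the thread's rough numbers (purely illustrative):

    ```python
    # Illustrative only: if Ampere delivers ~2/3 of Turing's gaming performance per
    # TFLOP, a ~30 TFLOPS 3080 behaves like ~20 "Turing-equivalent" TFLOPS, so a
    # ~20 TFLOPS RDNA2 part only needs roughly Turing-like per-FLOP efficiency
    # (which RDNA1 already approximates) to land in the same ballpark.
    ampere_tflops = 30.0
    per_flop_vs_turing = 2 / 3
    rdna2_tflops = 20.0
    print(f"3080 'Turing-equivalent' TFLOPS: ~{ampere_tflops * per_flop_vs_turing:.0f}")
    print(f"Rumored RDNA2 card:              ~{rdna2_tflops:.0f}")
    ```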
  • SaberKOG91 - Friday, September 4, 2020 - link

    Agreed. RDNA2 is looking to be very competitive in perf/W even with Nvidia's peak throughput gains. I think they learned a lot from reining in power consumption of Vega for the 4000 series APUs. I have a theory that the original Navi 2X cards were going to be just bigger Navi 1X dies and still RDNA1. With the delays from yield issues in early 2019 and the pandemic, I think they gave up on releasing those cards and pushed for RDNA2 instead, giving them plenty of time to improve perf/W over RDNA1 with what they learned from the Vega redesign.
  • mdriftmeyer - Thursday, September 3, 2020 - link

    HBM2e yields are wide open [high yields] from Samsung and SK Hynix. Look it up.
  • SaberKOG91 - Friday, September 4, 2020 - link

    SK Hynix only started fully-ramped mass production in July. Samsung was sampling in December, so they're probably at about the same point. Both companies will be dealing with pent-up demand from partners before fulfilling new orders. AMD would have to be an early adopter to already be in on those orders, and I don't really see that happening due to the per-unit cost being much higher this close to the ramp, which would lead us to expensive cards like we ran into with Fury and Vega. There will no doubt be HBM2E on workstation and server cards, but I don't think the value proposition is there over GDDR6X unless AMD are hurting for the extra bandwidth. At which point the performance bump would need to serve as justification for selling a more expensive card.
  • Zingam - Wednesday, September 2, 2020 - link

    You are correct. AMD just doesn't have the guts to charge such high prices!
  • zamroni - Wednesday, September 2, 2020 - link

    It seems Samsung's foundry gives Nvidia a very cheap price.
    AMD's competitiveness will depend on TSMC's pricing.
  • Olternaut - Wednesday, September 2, 2020 - link

    The only way they can is to lower prices even further than they planned. I was going to get the 3070 but I might want to wait to see AMD's response.
  • happiehappie - Wednesday, September 2, 2020 - link

    Same... RDNA1 doesn't have anything to even compete with the 2070S. I really can't imagine them releasing something of equal or better value than $500 for 2080 Ti performance.
  • Spunjji - Thursday, September 3, 2020 - link

    Dude, the 5700 XT very specifically competes with the 2070S (and the 5700 with the 2060S). That's why Nvidia released them - because the OG 2070 and 2060 took a bit of a kicking. I went and checked yesterday because this comment seemed off.
  • Spunjji - Wednesday, September 2, 2020 - link

    By releasing smaller GPUs with competitive mid-to-high-end performance at a lower price than Nvidia can hit, just like they did the last time Nvidia were releasing absurdly bloated power-guzzling monstrosities into the market (GTX 200 / 400 series).

    Seriously though - I don't see why they *need* to do battle with a 350W $1400 GPU. The sort of tool that buys one of those for gaming purposes is buying it for the simple fact that Nvidia made it and it's "the fasterest", not because they actually need it.

    AMD would do well to look at hitting the ~200W TDP mark with a $400 card, maybe release a $600 ~275W amped-up version to slide into that yawning gap between the 3070 and 3080.
  • Samus - Thursday, September 3, 2020 - link

    Yeah, AMD is going to have a hell of a time designing a card capable of 30TFLOPS for $700. But as usual, their GPU market space will obviously not have much presence in the high end.

    Which is unfortunate because this isn't a field where economy of scale = profit. Margins are slim on low-end parts.

    While margins are fantastic on a $1500 video card.
  • Spunjji - Thursday, September 3, 2020 - link

    Margins are slim, but the sales are *so* much higher further down the range. The 1060 is the most popular Pascal card on Steam at 11.18%, and that combined with the 1050Ti and 1050 make up about 24% - that's before you even touch Turing cards.

    The 1080Ti is the first top-end GPU on the list and it stands at 1.55%, with a veritable stack of lower-end cards above it.

    Basically, AMD can and will do fine *if* they can find a way to get their card to punch above its weight in terms of manufacturing costs. I'm optimistic that between their detailed knowledge of TSMC 7nm and the tweaks to RDNA2, they might just manage to produce something that makes them decent margins (unlike the epic sadness that was Vega).
  • azfacea - Friday, September 4, 2020 - link

    The consoles can be unbelievable value props compared to this, but this is a giant jump from Nvidia in perf, perf/Watt, and perf/dollar - absolutely what Intel refused to do for so long, until it was too late. Props to Nvidia.

    I don't want the PC to lose to consoles. The PC brings privacy and freedom; consoles are slavery.
  • yankeeDDL - Tuesday, September 1, 2020 - link

    When did it become OK to burn 350W in a graphic card?
  • blppt - Tuesday, September 1, 2020 - link

    When we demanded max settings at 4k at 60fps.
  • mdriftmeyer - Tuesday, September 1, 2020 - link

    No. That demands an FPGA like Apple's Afterburner to do the heavy lifting, not the overly bloated, poorly thermally coupled design that Nvidia is tossing out as normal.
  • blppt - Tuesday, September 1, 2020 - link

    Wait a minute, are you comparing a video accelerator card to a GPU?
  • Inteli - Tuesday, September 1, 2020 - link

    What? What on earth do FPGAs have to do with this, or a specialized 2D video accelerator card for that matter?
  • tipoo - Tuesday, September 1, 2020 - link

    In what world is the efficiency of a ProRes video accelerating FGPA comparable to a GPU running a game?
  • mdriftmeyer - Thursday, September 3, 2020 - link

    That ProRes video accelerator will be the beast behind Apple's GPGPU in Apple Silicon. The Metal Shading is processed in the FPGA more efficiently than in the GPU.
  • MrVibrato - Tuesday, September 1, 2020 - link

    Poor Nvidia. Not only does it have to put up with the stockholm-syndrome-like bickering of the AMD fan crowd, now Apple fanbois are dissing Nvidia too. Oh man, what a time we live in...
  • Spunjji - Wednesday, September 2, 2020 - link

    MrVibrato, your response is just as daft and partisan as his FPGA ramblings.
  • MrVibrato - Wednesday, September 2, 2020 - link

    Yes!
  • prophet001 - Tuesday, September 1, 2020 - link

    Yo this lol

    facts *throws stack of papers*
  • chrisb2e9 - Tuesday, September 1, 2020 - link

    My AMD 290 takes 300 watts so 350 isn't that much more.
  • tamalero - Monday, September 7, 2020 - link

    Are you talking about full power at the wall, or are you talking about the video card alone? Because in the 3090's case, they mean the VIDEO CARD ALONE.
  • sftech - Tuesday, September 1, 2020 - link

    My 295x2s burned 500W each
  • mdriftmeyer - Tuesday, September 1, 2020 - link

    We moved way past those days. Nvidia dumping back into it is not going to fly like people think. Those cards are DOA in the Enterprise Data Centers.
  • Dex4Sure - Tuesday, September 1, 2020 - link

    I don't think you realize that Nvidia is using 7nm TSMC on its data center GPUs. They're using Samsung 8nm just on consumer products. Nvidia's data center GPUs are more power efficient. Nvidia correctly predicted that consumers care way more about the price than power consumption. And data centers care more about power consumption, but don't mind paying the extra, so they use TSMC 7nm there.
  • Spunjji - Wednesday, September 2, 2020 - link

    "correctly predicted that consumers care way more about the price than power consumption" - seems a bit early to make that judgement, especially as you're just speculating on their motivation.

    Given that Nvidia enthusiasts have been ranting about power efficiency since precisely the day AMD became unable to compete on that metric, it seems a little funny to assume they'll suddenly reverse course, no?

    Ack, who am I kidding. Fanboys sing to whatever tune the piper calls.
  • imaheadcase - Wednesday, September 2, 2020 - link

    Because these are GAMING cards. No gamer considers power at all when building a system. If you want to play a casual game, watch media, and websurf while living in a small apartment with 8 kids and a fat wife crammed into it with just fans for airflow, by all means care about power draw.

    FFS, people who talk about power consumption on a gaming card are like a hippie driving a Hummer.
  • DirtyLoad - Wednesday, September 2, 2020 - link

    That's right, because all serious gamers are single and live in mommy's basement, they get her to buy them whatever they want....watts be damned. /s
  • Spunjji - Thursday, September 3, 2020 - link

    @headcase: I was talking about how *these same "GAMING" enthusiasts* keep changing their tune about power efficiency - not expressing my own personal opinion. But sure, feel free to draw yourself a childish little cartoon character to attack.

    That said, personally? Yeah, I do give a shit, because I like my systems to run cool and quiet. That's why I've always bought whatever card happens to give the best efficiency regardless of brand. It's the people that flip-flop on whether they care that amuse me.
  • Spunjji - Thursday, September 3, 2020 - link

    I did enjoy the gatekeeping of what constitutes a "GAMER", though. I'll be sure to bear that in mind next time I emphatically tell someone that yes, I have been playing video games all my life but no, I'm not a "GAMER" because apparently that requires being some sort of single-issue maniac with no other concerns in life.
  • mdriftmeyer - Thursday, September 3, 2020 - link

    You realize Nvidia lost the El Capitan compute contract because of what the upcoming CDNA can do and Nvidia cannot, right?
  • Mr Perfect - Tuesday, September 1, 2020 - link

    I don't know, but it's making mITX builds that much harder. People have successfully cooled 250W 2080 Tis in mITX cases, but an extra 100W might be too big an ask (assuming you can get that 3090-sized card into the case to begin with).
  • HilbertSpace - Tuesday, September 1, 2020 - link

    Yup, I built an ncase M1 (mITX) with a 1080 GTX (180 W), and couldn't keep temperatures reasonable; gave up and moved it all to a big case with more radiators. 320+ W wouldn't be practical in mITX.
  • Mr Perfect - Wednesday, September 2, 2020 - link

    Lol, I have the same build. Other builders recommend an Accelero IV heatsink, stripping the fans off the Accelero and using the M1's two 120mm bottom case fans to cool the card. I tried it, and it works. If I push it hard, the card will ramp the fans up to 40% (the case fans are attached to the card) and stay in the mid 70s.

    Your water loop probably laughs at those temperatures though.
  • Mr Perfect - Wednesday, September 2, 2020 - link

    Sorry, Accelero III. It's the same heatsink as the IV, but uses front mounted heatsinks for the ram and the VRM. The IV uses a big backplate instead for those, and it doesn't fit in a mITX.
  • Spunjji - Thursday, September 3, 2020 - link

    That sounds like a really neat build!
  • Unashamed_unoriginal_username_x86 - Tuesday, September 1, 2020 - link

    Surely you could just downgrade GPU and save some cash, and still get a small efficiency bump? Or if you wanted, underclock if you need the VRAM and possibly get more efficiency?
  • Spunjji - Wednesday, September 2, 2020 - link

    That would be the smart move!
  • imaheadcase - Wednesday, September 2, 2020 - link

    Why the actual hell would you want to do that in the first place? You're trying to make something work in a situation it's not meant for. Did you also buy a $1000 monitor for that?
  • Bp_968 - Wednesday, September 2, 2020 - link

    I wish Nvidia would stop changing their cards' names around. The 3080 is clearly intended to slot into the 2080 Ti position as the $700 card all of us 1080 Ti owners expected. They tried to confuse the market by sliding the names around during Turing because the cards cost so much to make.

    But everyone talking about "needing" to fit a 3090 in things, or about it being a price increase over Turing, is just confused by Nvidia's bipolar naming conventions. The easiest way to figure out where the cards really slot (beyond price) is by memory. The 3080 is 10GB, so it more closely aligns with the 80 Ti position (though I suspect we will see a 12+GB Ti card when Micron starts making bigger GDDR6X memory chips). The 24GB 3090 is clearly an RTX Titan replacement, since no game needed or wanted double the video RAM. They are clearly aiming for the compute/AI workstation crowd there, with spill-over into the gaming world just like the RTX Titan.

    As for the AMD vs Nvidia nonsense: meh. Like another poster said, most of us buying an xx80+ class GPU don't care about power consumption. And the other poster who said AMD will target the lower-end market is right. Nvidia is an industry juggernaut right now. Their market capitalization is higher than Intel's and dwarfs AMD's. Lisa is focused on getting as much of that enterprise and laptop CPU market share as possible before Intel becomes competitive again. I expect their GPUs to be competitive, but it's unlikely they will be anywhere near a major threat considering the solid lead Nvidia has currently.

    But Nvidia's pricing on the 30xx series GPUs should make it clear to everyone that AMD has something that is at least competitive. Nvidia has shown us it has no problem charging market rates when it can, and yet we see the 30xx series dropping right back to 10xx-class pricing? Even more revealing, the 3090/Titan card gets a substantial price drop? I'll be surprised if AMD's stuff isn't competitive based on those facts alone.
  • Spunjji - Thursday, September 3, 2020 - link

    I'm still stuck on whether I feel like this is a reasonable way to look at the range, or a post-hoc rationalization of their continued price gouging.

    I guess it partly depends on whether they release a Titan later, and partly on how the rest of the range lines up in gaming performance.

    The 3070 certainly *seems* to have the specs to be a 1080 equivalent in this lineup, but it's a bit short on VRAM for that status, and it remains to be seen whether the 3060 will have the clout to be up there as an xx70 equivalent.
  • ksec - Tuesday, September 1, 2020 - link

    If cooling weren't a problem I wouldn't mind 1kW for the CPU and 1kW for the GPU, assuming watts/perf scales linearly.
  • rahvin - Tuesday, September 1, 2020 - link

    People made fun of the 480 for burning 250W and this thing does 350W. That's insane power consumption.
  • gescom - Tuesday, September 1, 2020 - link

    350W only? Let's wait for reviews - I think it's gonna be much more.
  • Spunjji - Wednesday, September 2, 2020 - link

    To be fair, the 480 was extremely late and had to burn that much just to barely stay ahead of the AMD competition. This is unlikely to be closely matched by AMD at any power level.
  • inighthawki - Tuesday, September 1, 2020 - link

    Why wouldn't it be OK? If people are willing to put in the power, why not let them?
  • WJMazepas - Tuesday, September 1, 2020 - link

    It's only OK if you want to get the most powerful card in the market. A lot of people are willing to spend US$1500 and use all that eletricity to get Cyberpunk 2077 running at max settings and 4k with RTX
  • Spunjji - Wednesday, September 2, 2020 - link

    Never, in my book. 0 interest.
  • Lord of the Bored - Thursday, September 3, 2020 - link

    Soon we'll be seeing cases with drive bays again, just so we can install those drive-bay GPU power supplies like it is 2006 again.
  • Rigorm0rt1s - Tuesday, September 1, 2020 - link

    When can we expect your full review?
  • haukionkannel - Tuesday, September 1, 2020 - link

    1 to 1.5 months from now!
  • Ryan Smith - Tuesday, September 1, 2020 - link

    Hopefully on the 17th!
  • boozed - Tuesday, September 1, 2020 - link

    If only video cards were more like bicycles.
    "Spy photos of new MTB, full review tomorrow!"
  • Gomez Addams - Tuesday, September 1, 2020 - link

    It is not stated in the article but if Nvidia continues with 64 cores per SM, as the last several generations have had, that means 164 SMs on the 3090. That is really impressive.
  • anonomouse - Tuesday, September 1, 2020 - link

    It's stated in the article that FP32 FLOPs have doubled per SM. Nvidia "CUDA cores" == Shader ALU, so we're almost certainly looking at 128 "cores" per SM.
  • Gomez Addams - Wednesday, September 2, 2020 - link

    That is not at all certain. Since computation level 7.0 (1000 series) everything they have made is 64 cores per SM. I think they will stay with 64 because it makes thread scheduling easier and divergence has less of an effect.
  • Gomez Addams - Wednesday, September 2, 2020 - link

    Actually, from some things I read at THW it seems it will have 128 cores per SM, so 82 SMs. I have read elsewhere the A100 will have instruction counters for each thread, which means divergence is a non-issue. That could result in a big improvement in performance when threads in a block diverge.
  • Pinn - Tuesday, September 1, 2020 - link

    Massive sweet spot on the 3080.
  • Kjella - Tuesday, September 1, 2020 - link

    Was I the only one wondering who the f... "Dennard Scaling" is and why his death matters so much? That last capitalization threw me for a loop.
  • Srikzquest - Tuesday, September 1, 2020 - link

    Dennard Scaling (TL;DR: as transistors shrink, the power they need shrinks with them, keeping power density roughly constant) and Moore's Law kind of go hand in hand, but with recent Intel foundry problems and Nvidia's recent GPUs asking for more power every generation, they both are kind of dead.
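
    For anyone curious what the rule actually promised, here is a minimal numeric sketch of classic constant-field (Dennard) scaling; the 1.4x shrink factor is just an illustrative value, not tied to any process discussed in this thread.

        # Illustrative sketch of ideal Dennard (constant-field) scaling: shrink linear
        # dimensions by a factor k, and capacitance and voltage ideally shrink with them
        # while frequency rises, so power density stays roughly constant.
        def dennard_scale(k: float) -> dict:
            capacitance = 1 / k              # C scales with device dimensions
            voltage = 1 / k                  # V scaling is the part that eventually broke down
            frequency = k                    # switching speed improves roughly linearly
            area = 1 / k ** 2                # transistor area shrinks quadratically
            power = capacitance * voltage ** 2 * frequency   # dynamic power P ~ C * V^2 * f
            return {
                "power_per_transistor": power,   # -> 1/k^2
                "power_density": power / area,   # -> ~1.0, i.e. unchanged
            }

        print(dennard_scale(1.4))  # a typical full-node shrink
        # Once voltage stalled near ~1 V, power density started rising ~k^2 instead.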
  • edzieba - Tuesday, September 1, 2020 - link

    Dennard Scaling died LONG before then, nearly a decade ago at ~22nm. Gate oxide limit hit, voltage scaling stalled ~1v, cost/transistor started rising.
  • Ryan Smith - Tuesday, September 1, 2020 - link

    Indeed. But GPUs are really only finally starting to really suffer from its death. Their lower clockspeeds meant that for a while, they were still seeing decent reductions in power consumption with new (full) process nodes.
  • PixyMisa - Tuesday, September 1, 2020 - link

    Industry insiders were saying it died at 90nm.
  • Spunjji - Wednesday, September 2, 2020 - link

    Yup - that's exactly when we saw Intel fall short on its Netburst clock speed targets thanks to thermal runaway issues with Presshot. They're not exactly known for being a stupid bunch there, so it's safe to assume they were caught unawares by the changing dynamics of increasingly small transistors.

    After that, both AMD and Intel started focusing more on wider designs with higher IPC.
  • WaltC - Tuesday, September 1, 2020 - link

    That was one of the worst presentations I think I've ever seen. Very disappointing. He spent 9/10th of his time talking about rudimentary software--none of which impressed me, as I was there fore the hardware--and 1/10th of his time talking about the 3000-series hardware. "Ray-traced" Fortnite was horrible looking--but that's putting lipstick on a pig, if you know what I mean!..;) Even the clips he ran were often pre-rendered on non-RTX hardware--as usual! (Stars Wars chrome, for instance--not rendered on RTX at all.) I loved the "starting at" $600, $800 and then $1599 for the monster. What does "starting at" mean, actually...;) I got that the 3070 is supposed to ship in October--but missed the dates for the other products--he spent almost no time at all talking about when they'd be available. I hate to say it but this presentation reminded me of why I steer clear of nVidia. I feel as though I am being accosted by a snake-oil salesman. The only thing he missed at the end was "Beam me up, Scotty!" I hate to tell JHH, but no one is remotely interested to hear about 20 years from now--about which he knows nothing--like all of us!

    Now I'm really looking forward to AMD's RDNA2 presentation! This nVidia presentation was so weak compared to the kinds of things AMD does, that I guess I'm flabbergasted--I expected to see a real product demo--and for some reason I feel I didn't get that at all.

    But apparently, none of the new 8N Samsung-fabbed hardware "works" very well--as JHH never once said, "It just works!"...;) Too bad...;)
  • WaltC - Tuesday, September 1, 2020 - link

    I meant "for" instead of "fore" in the above post. When will Anandtech work out a useful bit of forum software? This is barely better than plaintext.
  • UglyFrank - Tuesday, September 1, 2020 - link

    The 3080 has double the CUDA cores of the 2080 Ti, AMD has no answer for this. I will get a Ryzen 5xxx CPU on release but i'll be using an Ampere GPU.
  • nathanddrews - Tuesday, September 1, 2020 - link

    Double the CUDA for half the price. I didn't think NVIDIA would actually have reasonable prices.
  • eddman - Tuesday, September 1, 2020 - link

    Well, they didn't increase the pricing, and set them at the same level as their direct predecessors, which is a good thing, but to me, as a regular customer, it's still not good enough.

    Now, if they were $600 and $400...

    I'm looking forward to the 3060, but the thought of paying $400 for it is bugging me.

    P.S. I'm well aware of the performance increase, but we've had similar increases in the past and yet lower launch prices.
  • Spunjji - Wednesday, September 2, 2020 - link

    This is the reason I haven't bought a GPU in 4 years; I refuse to participate in this circus.

    I've been holding out for a decent AMD alternative and keep getting disappointed. All I know is I'm not buying Nvidia until they cut this shit out.
  • Spunjji - Wednesday, September 2, 2020 - link

    They don't. I'm getting really goddamn tired of Nvidia pissing on us and *other consumers* telling me it's raining.

    Maxwell had great performance and great prices. Pascal was beginning to take the piss, especially on mobile, but the performance jump was still solid. Turing combined an underwhelming performance increase with an absurd price increase, and apparently you're okay with that pricing as The New Normal. It's breathtakingly deluded.
  • mikeatx - Tuesday, September 1, 2020 - link

    Wait uh, the MSRP is $1500 / $700 / $500. I mean, it is important to state the right prices.

    In my 15 years of buying video cards, I tend to go back and forth between ATI, AMD, NVIDIA. But for $700, the 3080 is a pretty fantastic upgrade for me. Looking forward to see if AMD's offerings can match this price/perf.
  • mdriftmeyer - Tuesday, September 1, 2020 - link

    Every vendor will sell their OEM products at $1600+ / $800+ / $600+, since MSRPs are only suggested retail prices.
  • xenol - Tuesday, September 1, 2020 - link

    What's a "real product demo" anyway?
  • imaheadcase - Tuesday, September 1, 2020 - link

    Um, everything you stated he showed in the demo. Did you just watch it super fast or have ADHD? He listed the prices, and he told you it was real time on Ampere. I'm not sure if you are trolling or just really, really out of touch with what you saw.
  • UglyFrank - Tuesday, September 1, 2020 - link

    Unsure what to expect from the mobility lineup for Ampere.
    The mobile 2080 runs at ~50% of the desktop TDP in the Razer 17; a mobile 3080 would have to run at ~33% of its TDP to fit in that same thermal design.
  • s.yu - Tuesday, September 1, 2020 - link

    That's what I'm worried about too. Maybe the Razers will have to put on some weight.
  • Spunjji - Wednesday, September 2, 2020 - link

    Expect them to get slimmer again, Because Reasons - probably still with Intel 14nm CPUs, so they can do a mid-cycle refresh to Tiger Lake when the 8-core variant finally surfaces.
  • Spunjji - Wednesday, September 2, 2020 - link

    Expect a return to the good old days where the mobile "3080" is more like a desktop 3060. They've already boiled that particular frog with the increasingly ludicrous Max-Q bullshit.

    To be specific, I suspect they'll be using whatever silicon powers the 3070 as the high-end mobile chip and underclocking it until it squeaks.
  • Pinn - Tuesday, September 1, 2020 - link

    PCIE4?
  • imaheadcase - Tuesday, September 1, 2020 - link

    yes
  • shabby - Tuesday, September 1, 2020 - link

    10gb for $700? I dunno...
  • mikeatx - Tuesday, September 1, 2020 - link

    yeah tbh not sure if 10gb for microsoft flight sim 4k ultra is going to cut it..
  • DigitalFreak - Tuesday, September 1, 2020 - link

    I'll wait for real world benchmarks between the 3090 and 3080. I was going to get the 3090, but more than double the price of the 3080 for what will probably be 10-20% more performance? That's insane, even for Nvidia. Will be interesting to see the price on the 3080 20GB cards.
  • Midwayman - Tuesday, September 1, 2020 - link

    It screams "This card only exists to keep the performance crown" to me. 3080 is priced aggressively compared to turing, and it must be where Nvidia expect AMD to compete.
  • Spunjji - Wednesday, September 2, 2020 - link

    "3080 is priced aggressively compared to turing"

    Am I going mad here? It's the exact same price as the 2080 was at launch. That's not "priced aggressively", it's copy/paste. It only looks like good value now because Turing's launch price was such a joke when measured against the performance it offered.
  • HammerStrike - Tuesday, September 1, 2020 - link

    I'm in the same boat as you are. I was thinking 3090 all the way, but a 3080 at 45% the price of the 3090 for only 10%-20% less performance is pretty compelling.

    Of course if I can't get a 3080 on the 17th I may be "forced" to pull the trigger on the 3090 a week later.

    Or sit around waiting to get either. I hope it's not that long as I sold my 2080 on Saturday (got $620 after fees/shipping for it on eBay!) - glad I did, $80 to upgrade to a 3080 is a no-brainer. But the only GPUs I've got right now are a 1650 in my laptop and an old HD 5770 from 11 years ago.
  • Spunjji - Wednesday, September 2, 2020 - link

    "This is such amazing value compared to this complete rip-off" 🙄
  • eddman - Tuesday, September 1, 2020 - link

    I think we'll almost certainly see third-party 3080 and 3070 cards with double the memory. 10 and 8 GB just don't make sense to me. A freaking RX 480 had 8 GB in 2016.
  • Beaver M. - Tuesday, September 1, 2020 - link

    A lot of 3rd party models have leaked or been presented already. They don't have more than 10 GB.
    And I highly doubt Nvidia allows it in the first place.
  • eddman - Tuesday, September 1, 2020 - link

    "They dont have more than 10 GB."

    ... yet.
  • Beaver M. - Tuesday, September 1, 2020 - link

    How long do you want to wait?
    You know, all these vendors would actually name their 10 GB cards "10 GB" or something if a 20 GB model was even remotely close.
    Yeah, MSI does that, but it did that last gen too, so it doesn't count.
  • eddman - Tuesday, September 1, 2020 - link

    So what if we have to wait a bit? I never said we'll have double memory at launch.

    I'll wait till RDNA2 launches. If AMD goes for high memory capacities, then nvidia card vendors would have to counter.

    If not, having games that are already pushing or even surpassing the current memory limits, might encourage them to release double memory models.
  • Beaver M. - Wednesday, September 2, 2020 - link

    It's pretty certain that Nvidia will bring higher-VRAM cards. But they can't until Micron has 16 Gb chips, so it won't be before 2021.
  • eddman - Monday, September 21, 2020 - link

    We just got a teardown of 3090, and there are memory modules on the back of the board too. AIBs could do 20GB 3080 cards right now.
  • Antiflash - Tuesday, September 1, 2020 - link

    Are the new Ampere "CUDA cores" comparable to previous generations? The quantity is DOUBLE what one would expect from the usual generational improvement. I mean, one would expect the new xx80 to have the CUDA core count of the previous xx80 Ti, and the new xx70 to have the previous xx80's quantity of CUDA cores. But here, for example, the 2080 Ti had 4,352 CUDA cores and the new 3080 has 8,704!
    If the new Ampere CUDA cores were comparable with the old ones, this card should offer more than double the performance in non-RTX titles, not the expected 30%-40% generational improvement.
    Is it some renaming of what a "CUDA core" is?
    Or are there architectural changes in what a CUDA core is?
    Or are these GPUs really that much more powerful???
    I mean, doubling the computational units while increasing power by only 70W (250W to 320W going from the 2080 Ti to the 3080) would be too good to be true.
  • Yojimbo - Tuesday, September 1, 2020 - link

    I think they are not comparable. The entire architecture has been rebalanced. Just have to wait for performance benchmarks. I have a feeling there will be some variation on the performance differences compared to Turing depending on how efficiently the new architectural structure is utilized in each particular game and with individual settings turned on or off. Overall, the architecture was probably optimized more for ray tracing and DLSS 2.0 usage compared with the Turing architecture. My guess is that when the only compute coming into play is from the shaders, then in situations where shader performance is the limiting factor (not memory bandwidth, etc.), 1 Ampere CUDA core is probably less performant than 1 Turing CUDA core at the same clock. But until we get an architectural deep dive that's just speculation.
  • Spunjji - Wednesday, September 2, 2020 - link

    Speculation for sure, but pretty reasonable as far as that goes.

    The other possibility is that they're facing power density issues.
  • Ryan Smith - Tuesday, September 1, 2020 - link

    "Are new Ampere "Cuda Cores" comparable to previous generations?"

    Right now you're asking the $64K question. We'll know more once NVIDIA offers technical briefings.
  • BackdoorBeauty - Tuesday, September 1, 2020 - link

    It's a meaningless NVidia-invented term like RAYTRACING to get you to fork over $1500 for 10% actual FPS improvement
  • anonomouse - Tuesday, September 1, 2020 - link

    Uh, sorry, nvidia did not invent the term raytracing, nor did they invent the concept.
  • Spunjji - Wednesday, September 2, 2020 - link

    Every time I think Nvidia or Intel enthusiasts say the silliest things, along comes someone with a clanger like this.
  • Jhud007 - Tuesday, September 1, 2020 - link

    They are not only comparable but claimed to be more efficient as well. Notice the FP32 performance has nearly doubled, along with the CUDA core count.
  • Yojimbo - Tuesday, September 1, 2020 - link

    That's theoretical max performance. That doesn't get more or less efficient for programmable shaders because it's always (for many years, anyway): (#cores) X (clock cycles per second) X 2, where 2 is the number of operations per clock. The question is whether the TFLOPS you get from that calculation using the listed number of CUDA cores and throwing it into the (#cores) variable is more or less efficient in terms of real world application performance. If at the same TFLOPS you get a lower real world performance, then the architecture is less efficient in terms of its theoretical performance.

    In this Ampere generation, it looks like the theoretical TFLOPS are applied less efficiently to real world performance because the 3080 has 3 times the theoretical max TFLOPS of the 2080 but the claimed real world performance advantage is only 2 times. Now the correct way to look at this, whether it is utilization efficiency or whether it is because the CUDA cores are, for example, somehow less independent and therefore not entirely comparable, depends on the architectural details.
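
    To make that bookkeeping concrete, here is a small sketch of the peak-FLOPS arithmetic described above; the core counts and boost clocks are the announced reference figures, and the ~2x "claimed" gain is the marketing number quoted in this thread, so the efficiency ratio at the end is only indicative.

        # Peak FP32 throughput = cores x clock x FMA ops per clock, in TFLOPS.
        def peak_tflops(cuda_cores: int, boost_clock_ghz: float, flops_per_clock: int = 2) -> float:
            return cuda_cores * boost_clock_ghz * flops_per_clock / 1000

        rtx_2080 = peak_tflops(2944, 1.80)   # ~10.6 TFLOPS
        rtx_3080 = peak_tflops(8704, 1.71)   # ~29.8 TFLOPS

        theoretical_gain = rtx_3080 / rtx_2080   # ~2.8x on paper
        claimed_gain = 2.0                       # the "up to 2x the 2080" marketing claim

        # If real games only see ~2x, each Ampere TFLOP delivers roughly
        # claimed_gain / theoretical_gain (~0.7x) of what a Turing TFLOP delivered.
        print(round(rtx_2080, 1), round(rtx_3080, 1), round(claimed_gain / theoretical_gain, 2))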
  • SwAY256 - Wednesday, September 2, 2020 - link

    My guess is that, like in A100 Ampere GPU, NVIDIA is now talking about "mixed precision FP32" which is in fact BFLOAT16, which doubles the theoretical performance. That's why they say "Shader TFLOPS" and not FP32 TFLOPS anymore.

    A100 has 19.5 FP32 TFLOPS and 39 BF16 TFLOPS so the 18/36 TFLOPS of the 3090 would make sense.

    We will be able to confirm (or not) this assumption in a few weeks.
  • Yojimbo - Thursday, September 3, 2020 - link

    That's an interesting theory, but I'm 99.9% sure they mean 32-bit (single precision) FMA operations as performed on the normal shader cores. Any time NVIDIA have meant something else they've indicated it. Firstly, on their web site NVIDIA say "NEW SM 2X FP32 THROUGHPUT". Secondly, they have now released schematics through an AMA that show two data paths for the single precision shader cores. One has only FP32 units and the other has both INT32 and FP32 units. Both data paths can operate simultaneously, so the FP32 units on both data paths can be operated together, or the FP32 units on one path can be operated along with the INT32 units of the other path. When giving the max theoretical TFLOPS for the card they are talking about all SMs using FP32 in both their data paths. Finally, BF16 is its own data type with much lower precision than FP32. There would need to be a data type conversion in the software. I think it would take a massive rewrite of code by game developers or risk very unexpected results.

    The reason the A100 does not have this 2x throughput is because it has a different SM architecture. It doesn't have 2 data paths that both have access to FP32 compute. It only has 64 FP32 units per SM whereas the 30 Series GPUs have 128 FP32 units per SM.

    As an aside, interestingly the A100 has FP16 at quadruple the rate of FP32, something I always found confusing. So it seems that there is some sort of double throughput going on if you are operating on FP16, meaning that they may have some sort of data path 1 and data path 2 that are both able to do FP operations, but it doesn't work with FP32, only FP16. That makes me speculate that it might be possible the RTX 30 Series tensor cores will see a boost in performance in pure FP32 tensor operations, whereas the A100 instead relies on an internal reduced-precision TF32 data type to carry out FP32-typed tensor operations.
  • Yojimbo - Thursday, September 3, 2020 - link

    Oh, incidentally, I still don't know where the extra FP16 execution units come from unless their FP64 units can be split into 4 FP16 units. If that's the case, I wonder if the next generation of data center GPU may allow the FP64 units to also split into FP32 units and the GPU may also allow two FP32 data paths for a 2x FP32 throughput just like the 30 series GPUs. Perhaps then they'd also have boosted pure FP32 tensor core operations.

    It will be interesting to see NVIDIA's whitepaper on the 30 series Ampere architecture to get more info on how this stuff works and what sort of tensor core throughput they are claiming for the different data types.
  • mdriftmeyer - Thursday, September 3, 2020 - link

    They sure have deliberately hid their numbers behind new jargon. I agree with your theory. Show me the white paper.
  • Gomez Addams - Wednesday, September 2, 2020 - link

    I think what they have done is add special instructions involving multiplication where two of them can be launched simultaneously. This theoretical improvement is attained with sustained use of those instructions. The degree to which that is possible depends on the use of those instructions in a particular application. I can think of several common applications for these instructions; one is matrix multiplication.
  • Spunjji - Wednesday, September 2, 2020 - link

    I can guarantee you that claimed performance increase will not be borne out in practice. It's the "why" part of that equation that will be interesting.
  • Yojimbo - Thursday, September 3, 2020 - link

    What do you mean by that? Their claimed performance increase is the 1.7x or 1.8x or whatever it is. The theoretical max performance is just that, it's not a "claimed performance", it's just a calculation based on the architecture.
  • Spunjji - Monday, September 7, 2020 - link

    Sorry, I should have been specific - I did indeed mean the theoretical max performance. It looks like the gap between that and their realised performance in games has widened with this generation.
  • Longtimelurker314 - Tuesday, September 1, 2020 - link

    10496 cuda cores for 3090? Seems like 2x too much
  • DigitalFreak - Tuesday, September 1, 2020 - link

    All the leaks were in the 5000 range, so Nvidia is probably double counting, or they redefined "cuda core".
  • nevcairiel - Tuesday, September 1, 2020 - link

    All the leaks only talked about a 40-50% performance uplift, while early hands-on testing has shown 70-80% without RT, or almost 100% with RT.
  • yeeeeman - Tuesday, September 1, 2020 - link

    Your figures are for 3080 vs 2080. 3090 vs 2080ti will be in that 40-50% better range.
  • SirMaster - Tuesday, September 1, 2020 - link

    That doesn't make any sense.

    If the 3080 is 70-80% better than the 2080, and the 2080Ti is only 29% better than the 2080.
    https://tpucdn.com/review/nvidia-geforce-rtx-2080-...

    Are you suggesting the 3090 won't be 29% faster than the 3080? If it is, then the delta between the 2080 Ti and 3090 will be the same as between the 2080 and 3080.
  • Spunjji - Wednesday, September 2, 2020 - link

    "Early hands-on testing" filling in for "supervised product demos" here. It's promising, but after all the nonsense they promised around RTX and DLSS last time I'd like to wait and see how it looks in the hands of reviewers.
  • SwAY256 - Wednesday, September 2, 2020 - link

    Like I posted in another comment, my guess is that NVIDIA's "Shader TFLOPS" is in fact BFLOAT16 TFLOPS, like they did with the "mixed FP32 precision" in the A100 Ampere GPU.

    The real count of CUDA cores is 5k (as seen in the leaks), but as they can perform 2 BF16 calculations instead of one FP32, NVIDIA says there are 10K CUDA cores. Marketing trick :)
  • mdriftmeyer - Thursday, September 3, 2020 - link

    Yep.
  • Makaveli - Tuesday, September 1, 2020 - link

    I'll just leave this here.

    https://www.youtube.com/watch?time_continue=1&...
  • BackdoorBeauty - Tuesday, September 1, 2020 - link

    But will anyone click it
  • MadManMark - Tuesday, September 1, 2020 - link

    No one clicks links with no explanation given. It's not 1997 anymore
  • prophet001 - Wednesday, September 2, 2020 - link

    Umm you should click it. It's a youtube video about 3080 performance. Or just search "3080 Early Look" on youtube.
  • Spunjji - Wednesday, September 2, 2020 - link

    This.
  • Lord of the Bored - Thursday, September 3, 2020 - link

    I don't understand. All I see is a man singing about how he's no stranger to love.
  • Hxx - Tuesday, September 1, 2020 - link

    so glad i held out with a 2080 for 2 years. The 3080 seems to be the better "value"
  • yeeeeman - Tuesday, September 1, 2020 - link

    So glad I held with a gtx950m for 5 years. Now I can get 2080 super performance with rtx3060. Hopefully
  • GreenReaper - Thursday, September 3, 2020 - link

    So glad I held out with a HD 6310 for nine years. Now I can get an equivalent of the new consoles with AV1 support next year to tide me over for this decade.
  • yeeeeman - Tuesday, September 1, 2020 - link

    Can't wait to buy a new laptop with rtx 3060. Still a long way to go, but next year until March we should have it.
  • haukionkannel - Tuesday, September 1, 2020 - link

    So the 2080 Ti is now a low-end GPU... even the 3070 beats it hands down... if you buy a used 2080 Ti, do not pay more than $300...
  • s.yu - Tuesday, September 1, 2020 - link

    As soon as I realized Nvidia's jumping from 12nm to 8nm, I gave up the thought of buying Turing. It's like buying Note 4 on the brink of Exynos 7420.
  • Kangal - Friday, September 4, 2020 - link

    That analogy isn't too good.
    If you want a better one, it would be like skipping a Zen+ purchase at the Zen 2 announcement. Or, to stick with the phone analogy, a more apt one would be forgoing the HTC One M9 to instead wait for the ZTE Axon 7.

    Why? Because the Note 4 Exynos was one of the best devices in history. It was the best phone of 2014, still has a great custom ROM scene, didn't get left behind by the 64-bit divide, and not to mention it has a User Removable Battery.

    The S6 lineup has been mostly a flop. The S7+ Exynos was a decent upgrade, but it was the beginning of the end: curved displays and sealed batteries became the norm, Exynos lagged behind henceforth, and eventually they even removed the headphone jack and microSD slot.
  • smaciokji - Tuesday, September 1, 2020 - link

    Anyone else notice the huge hole in the lineup? $500, $700, $1500 USD? $700 to $1500 is a pretty big gap! AMD's RDNA 2 will have no answer to the 3090; it's a pretty sure and safe bet. (I am not hating on AMD! I'm stating facts based on the last 5-6 years of history, FYI.) The rumor mill suggests that AMD's big RDNA 2 will have 16GB of VRAM. Who wants to bet that nVidia will launch a 3080 Ti (or 3080 Super) with 20GB of VRAM, costing $1000-$1200, to trade blows with AMD's Big Navi?
  • haukionkannel - Tuesday, September 1, 2020 - link

    That is possible... wait and see...
  • eddman - Tuesday, September 1, 2020 - link

    A 3080 Ti, or whatever they end up calling it, is pretty much a certainty. Jensen himself said that 3080 is the flagship and 3090 is basically a renamed Titan, a halo card.
  • MadManMark - Tuesday, September 1, 2020 - link

    The fact they used the xx90 name means they probably will eventually come out with some kind of Titan too (probably binned + more/faster mem)
  • HammerStrike - Tuesday, September 1, 2020 - link

    While I don't know what the price of GDDR6X is, I'm pretty confident that 10GB of it isn't going to be a $300-$500 uplift. Probably closer to $100-$200 tops.

    It is a big price gap but:

    1. with 24GB of RAM Nvidia can only go so low without cannibalizing their Quadro line of cards (not defending them, just stating a reality).
    2. They can charge an obscene amount for a halo product.
  • rolfaalto - Tuesday, September 1, 2020 - link

    Not obscene if you do compute … this is vastly more powerful than a Titan RTX at a lower price. I'll buy 4 to start.
  • Spunjji - Wednesday, September 2, 2020 - link

    I'm always amazed by the number of commenters on tech sites who "do compute" with consumer cards.

    Some of us still remember when the titan was a $1000 proposition...
  • Gomez Addams - Wednesday, September 2, 2020 - link

    If you can, why not? I have two Titan RTXs in my machine, just for computation. I don't play games with them.
  • Spunjji - Thursday, September 3, 2020 - link

    I have no way to assess the truth value of this statement, though - let alone that of the guy saying he's going to buy 4 of them.

    All I know is that, as someone who uses gaming GPUs for gaming, it's been kind of a pain in my ass seeing the prices balloon thanks to this phenomenon.
  • Klimax - Wednesday, September 2, 2020 - link

    Just because your imagination is too limited doesn't mean it doesn't happen. BTW: Check the prices of professional cards. If they can get away with a slightly limited consumer card, then they can save a hell of a lot of money, and not many organizations will say no to that.

    And BTW: Iray says hi.
  • Spunjji - Thursday, September 3, 2020 - link

    I can *imagine* it, I just don't believe it's as common as tech-site comments would imply.
  • lightningz71 - Tuesday, September 1, 2020 - link

    Why not base the 3080ti or super off the 3090 and use 12GB in normal mode instead of clamshell mode? Split the difference in cost and still be clearly ahead of the 3080.
  • Bluetooth - Tuesday, September 1, 2020 - link

    I think they are starting at this price to catch the eager buyers who want the best, and then later on the price will drop.
  • Kjella - Tuesday, September 1, 2020 - link

    I think any price drop is unlikely. Nvidia needs to balance the number of $1500 sales they can make from enthusiast gamers with the $2500 sales they lose of the Titan RTX. Next year they can get 2GB chips to fill out the gaps, plus probably a 48GB Titan RTX2. I suspect that was their original plan but the timing didn't come together so instead of 16/20/24 GB they had to go with 8/10/24. It's impressive that the RTX 3070 can beat the RTX 2080 Ti in FLOPS but 8GB is not very future proof.
  • yeeeeman - Tuesday, September 1, 2020 - link

    There will be a 20GB 3080. It all depends where Big Navi fits, but I would suspect it fits between the 3080 and 3090, closer to the 3080.
  • Beaver M. - Tuesday, September 1, 2020 - link

    When? I'm not going to wait a year.
    Most likely they will appear when Micron releases 2 GB chips, because there can only be 20 GB versions unless they change the memory bus. But who knows how far off that is and how expensive those new chips will be.

    The 3080 really is tempting, but the 10 GB makes me literally laugh, and then cry when I realize I need a new card soon. That VRAM is FAR too small! The 3070 is even more of a joke. A 2080 Ti-performance card with only 8 GB????
    WHAT ARE THEY THINKING???
    Nvidia hasn't learned anything from their last generation.
  • lightningz71 - Tuesday, September 1, 2020 - link

    Why not base a 3080ti off of the 3090, but down clocked and with only 12 GB VRAM in non-clamshell mode?
  • Spunjji - Wednesday, September 2, 2020 - link

    On the contrary, they have learned that their target audience will buy their products anyway.
  • Yojimbo - Tuesday, September 1, 2020 - link

    But those same rumors put big-RDNA2 at about 19 TF, don't they? That puts it in the 3070 range, not enough to compete with the 3080 let alone a 3080 Ti.

    But let's do it another way, a back-of-the-envelope calculation that will, I think, be generous to AMD. Let's look at the 5700 XT. At 225 W it has, rounding up, 10 TFlops. I am not sure how accurate this is, but I'm going to suppose 150 W of that is from the GPU. Now I'll take AMD's promised 50% increase in perf/watt from RDNA to RDNA 2, and I'll give a 150 W (GPU only) Big Navi 15 TFlops. Now assume AMD increases that power budget by 2/3 to 250 W and gets a linear scale on performance (unlikely but very beneficial to the AMD side of things in our calculations), giving a 25 TFlops GPU. Now we must add that 75 W for the rest of the board power back in, plus, say, another 10 W for the greater power envelope and RAM bandwidth needed (that seems generous to me). So we end up with a 335 W card that pushes 25 TFlops. That sits smack dab between the 3070 and the 3080 in performance with a power draw greater than the 3080. So even using AMD's promised numbers with some generous calculations we don't get Big Navi going up against a hypothetical 3080 Ti.

    There will probably be a 3080 Ti but I don't know if it's going to trade blows with AMD's Big Navi.
    I doubt it.
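
    The same back-of-the-envelope arithmetic, replayed as a sketch using the assumptions stated above (a 150 W GPU-only share of the 5700 XT's 225 W, AMD's promised +50% perf/W, and linear scaling); none of these are measured figures.

        # Replays the commenter's own assumptions; not measured data.
        xt_board_w, xt_gpu_w, xt_tflops = 225, 150, 10        # assumed split for the 5700 XT
        rdna2_perf_per_watt_gain = 1.5                        # AMD's promised +50%

        rdna2_tflops_at_150w = xt_tflops * rdna2_perf_per_watt_gain       # 15 TFLOPS
        scale = 250 / 150                                                 # grow the GPU power budget to 250 W
        big_navi_tflops = rdna2_tflops_at_150w * scale                    # 25 TFLOPS (optimistic, linear scaling)
        big_navi_board_w = 250 + (xt_board_w - xt_gpu_w) + 10             # 335 W total board power

        print(big_navi_tflops, big_navi_board_w)   # 25.0 TFLOPS, 335 W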
  • eddman - Wednesday, September 2, 2020 - link

    Why are you even comparing flops between different architectures?

    Ampere's gaming performance clearly does not line up with its compute performance. 3080 is a 30 tflops card and yet is, at most, twice as fast as the 10 tflops 2080, at least based on DF's tests.
  • Yojimbo - Thursday, September 3, 2020 - link

    Because it's a rough calculation, and although comparing FLOPS between architectures is not especially accurate, it is probably good enough when you're talking about differences of 50%, as in this case. The reason for that is that Turing TFLOPS were "stronger" than RDNA TFLOPS. Ampere TFLOPS are apparently much weaker than Turing TFLOPS. They are probably weaker than RDNA TFLOPS too, but less so. Consider the following, and note that it is just a rough calculation to get a bracket on RDNA2 performance; it is not meant to predict actual performance:

    Looking at the 5700 XT and the 2070, we can draw a rough equivalence of RDNA TFLOPS and Turing TFLOPS, with 9.7 for RDNA being roughly equal to 7.5 for Turing. Then we draw a rough equivalence of the 2080 Ti performance to the 3070 performance and say 13.5 Turing TFLOPS is about 20.4 Ampere TFLOPS. Now we scale the 5700 XT performance up to 2080 Ti level and calculate the equivalent RDNA TFLOPS. (13.5/7.5) * 9.7 = 17.5. So 17.5 RDNA TFLOPS is about 20.4 Ampere TFLOPS. Therefore they are within about 16% of each other, and by coming up with an upper bound of 25 RDNA TFLOPS in my calculation in the previous message, we can say that, accepting the assumptions in the previous message, the maximum level of Big Navi is the 3080 level (if we take 25 TFLOPS and add 20% to it we get at most 30 Ampere TFLOPS, the TFLOPS of the 3080). But that is the upper bound, and the real performance likely sits well below it, in between the 3070 and 3080.
  • Spunjji - Monday, September 7, 2020 - link

    @Yojimbo - Your figures are off because you're comparing the wrong cards.

    In games, the 5700XT at 9.7TFLOPS is closer to the 2070 Super at 9TFLOPS - it loses to the 2070S by about 6%. You get similar results comparing the 5700 vs. the 2060 Super: performance is close with a 5% advantage to the 2060S, and that's at 7.9TFLOPS for the 5700 and 7.2TFLOPS for the 2060S (I'm working with numbers from Techspot here).

    In other words, RDNA needs ~15.5% more "raw" TFLOPS to hit an average in-game performance parity with Turing. That means a hypothetical RDNA scaled up to 2080Ti performance at 13.5TFLOPS needs to achieve 15.6TFLOPS.

    That has big implications for your next calculation. If we assume that RDNA2 is no more computationally efficient than RDNA (unlikely), then a hypothetical Big Navi hits 3080 performance levels with "only" 23.4TFLOPS. Again, though, that's assuming no progress whatsoever in computational efficiency.

    Honestly, the safe money is on AMD having a 3080 competitor in the 22TFLOPS / 310W ballpark.
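
    Restating that chain of numbers as a sketch, using the quoted Techspot deltas and the implicit assumption that the 3080 lands roughly 1.5x above the 2080 Ti; rounding differs slightly from the figures above, and this is a restatement of the commenter's reasoning, not a benchmark.

        xt_tflops, s2070_tflops = 9.7, 9.0
        xt_deficit = 0.06                    # 5700 XT trails the 2070 Super by ~6% in games

        # RDNA TFLOPS needed per unit of Turing game performance (~1.14-1.16)
        rdna_overhead = (xt_tflops * (1 + xt_deficit)) / s2070_tflops

        ti_2080_tflops = 13.5
        rdna_tflops_for_2080ti = ti_2080_tflops * rdna_overhead            # ~15.5 TFLOPS
        rdna_tflops_for_3080 = rdna_tflops_for_2080ti * 1.5                # ~23 TFLOPS, if 3080 ~ 1.5x a 2080 Ti

        print(round(rdna_overhead, 3), round(rdna_tflops_for_2080ti, 1), round(rdna_tflops_for_3080, 1))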
  • Spunjji - Wednesday, September 2, 2020 - link

    Those power calculations are a bit off - Toms calculated board power at between 25W and 35W for Navi, so that's more like 190W for the GPU. The 5700XT is also the least-efficient implementation of Navi - the 5700 and 5600XT are up there with Turing:
    https://www.tomshardware.com/features/graphics-car...

    I'd go for a slightly different calculation. Using the 5700XT's power numbers, the rumoured ~20TFLOPS performance of Big Navi and the claimed 50% perf/W improvement, you end up at about 290W after adding board power back in - we'll call it 300W as the RAM is likely to be drawing more power this time around.

    We already know how RDNA's TFLOPS efficiency compared with Turing (similar, a break from AMD tradition), and now we have an idea how Ampere compares with Turing ("less efficient" in terms of theoretical TFLOPS vs. performance in current games, a break from Nvidia tradition). On that basis, I'd bet that a ~300W RDNA2 card could trade blows with the 3080 when it comes to rasterization performance (I make no bets on RDNA2's RT abilities).

    Much more interesting would be the tier below competing with the 3070 - both cards will benefit greatly from backing away from the bleeding edge of performance, but I think RDNA2 might potentially have the edge.
  • Yojimbo - Thursday, September 3, 2020 - link

    I chose something that was very beneficial to AMD on purpose. I boosted the power AND the TFLOPS by 67%. If I were to subtract less power out for other components and then boost the power by 67% I will get an even higher power usage necessary for the same number of TFLOPS. Example: 225 - 25 = 200. 200 * 1.67 = 334. 334 + 25 = 359 W. So using 25 W for other components, as you suggest, results in the 25 TFLOPS card needing 359 W instead of 335 W like I calculated in my generous calculation before.

    And Big Navi is going to be inefficient (comparatively to lower-wattage cards) just like the 5700 XT since it is only going to draw more power than the 5700 XT. In fact it will probably be less efficient, but I extrapolated linearly from the 5700 XT to get the upper bound. I think when looking at the max performance of Big Navi, 5700 XT efficiency is the proper one to look at.

    Also, you should not confuse power efficiency with computational efficiency with respect to max theoretical FLOPS. It's irrelevant how Turing's power efficiency compares with RDNA. AMD said that RDNA2 will have 50% higher power efficiency than RDNA, and, taking them at their word, I applied that to the 5700 XT, which, as argued above, I believe is the proper power efficiency to use. If you want to calculate what is 150 W RDNA2 card would be like, then use the 5600XT power efficiency. But if you want to know what a 300+ W card that is intended to push the max performance of the GPU would be like, why in the world would you choose the 150 W 5600 XT over the 225 W 5700 XT?!

    BTW, it's similarly irrelevant how RDNA2 TFLOPS compare with RDNA TFLOPS. We are calculating in RDNA FLOPS to get a real-world performance metric we can compare roughly to Ampere real-world performance.
  • Spunjji - Monday, September 7, 2020 - link

    @Yojimbo - how "generous" you're being wasn't my qualm - my issue was entirely with you using the wrong power consumption figures, for better or worse. Bad data in, bad data out. I believe that a hypothetical 25TFLOPS RDNA2 card would indeed need around 360W, but I do disagree with you about how that would perform, for the reasons I outlined in one of my other replies.

    The rest of your comment seems to be talking past me: I'm not confusing power efficiency and computational efficiency, I was talking specifically about what kinds of computation performance can be achieved in a given power level based on AMD's claims.

    Similarly, I didn't use the 5600XT for my numerical comparison, so I'm not sure why you're talking to me like I did - I specifically said "using the 5700XT's numbers". I agree that, at the top end of the architecture, that's where to make the performance comparisons. I raised the 5600XT and 5700 simply to illustrate that in terms of perf/W in games, Navi and Turing are comparable.
  • MadManMark - Tuesday, September 1, 2020 - link

    I think everyone assumes there will be a Ti eventually, just like every previous gen. If there's anyone who takes your bet, I'd like to ask if they will bet me that the sun doesn't rise in the east tomorrow lol
  • Revv233 - Tuesday, September 1, 2020 - link

    Am I the only guy that doesn't care one bit about ray tracing?

    I've yet to see anything done with it that convinces me it is superior enough to be worth the cost/ performance hit.
  • SirMaster - Tuesday, September 1, 2020 - link

    It's still early days; eventually it won't have a performance hit. It's clearly the future, as it's significantly easier to make realistic-looking effects.

    It already has almost no performance hit on the 30 series now.
  • Spunjji - Wednesday, September 2, 2020 - link

    Still has a pretty big cost hit, though.
  • MenhirMike - Tuesday, September 1, 2020 - link

    That's the problem with a lot of the features: On paper, the RTX 2000 and 3000 have a lot to offer, but games need to use it. My hope is that with hardware raytracing on the PS5 and Xbox Series X, more games will make use of it.

    During the RTX 2000 lifetime, Raytracing has been mostly a promise with too few games to really be worth it, but the feature itself is really solid and useful.
  • xenol - Tuesday, September 1, 2020 - link

    It really depends on what games you play and how nit-picky you are about it.

    For me, there are flaws in traditional rendering that are glaring. Like Screen Space Reflections breaking down when you can't see the object in question anymore, or lighting and reflections (via cube maps or something) that can't physically exist. My favorite example of the latter is when I noticed a stream of water reflecting light from the sun... when it was in the shadow of a building.

    Sure, we can fix this with traditional rendering, but it'll be expensive. I think Ryan Smith said it earlier: if you're going to spend a lot of effort to cheat, you may as well do it right.
  • BackdoorBeauty - Tuesday, September 1, 2020 - link

    It is absolute trash.. I was feeling rich 2 Septembers ago so I got SLI RTX 2080Ti. What a waste ;)
  • Arbie - Tuesday, September 1, 2020 - link

    Lots of girls don't care.
  • mdriftmeyer - Tuesday, September 1, 2020 - link

    What are the Double Precision numbers?
  • catavalon21 - Tuesday, September 1, 2020 - link

    After all the talk about 12-pin connectors, the FE cards have either 1 or 2 8-pin connectors. So there's that.
  • catavalon21 - Tuesday, September 1, 2020 - link

    Didn't intend to post that as a reply, sorry. Hopefully the ratio is better than 1/32, but... I'm hoping they'll counter the Radeon VII's impressive FP64 ratio, though I'll believe it when I see it.
  • mdriftmeyer - Thursday, September 3, 2020 - link

    Agreed.
  • webdoctors - Tuesday, September 1, 2020 - link

    No one asked the important question: with Ethereum prices doubling in the last month, can I buy this card, basically get passive income monthly, and pay for the card itself in 1-2 months?

    Please add ETH mining to your benchmark review! Thanks!!
  • BackdoorBeauty - Tuesday, September 1, 2020 - link

    You're still using electricity to propagate that scam?
  • Notmyusualid - Monday, September 14, 2020 - link

    @ Backdoor - Is it a scam if I made money out of it? And I still got lots of h/w paid for too....
  • vladx - Tuesday, September 1, 2020 - link

    If RTX 3090 mines 2x better than RTX 2080 TI you'll get ~$10/day so ROI will be in around 5-6 months.
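
    A quick payback sketch under the assumptions in the comment above ($1,500 card, ~$10/day gross) plus an assumed $0.12/kWh electricity rate; actual mining revenue moves daily, so treat this purely as arithmetic, not a forecast.

        card_price = 1500.0          # USD, RTX 3090 MSRP
        gross_per_day = 10.0         # USD/day, the figure claimed above
        board_power_w = 350
        kwh_price = 0.12             # USD per kWh (assumed)

        power_cost_per_day = board_power_w / 1000 * 24 * kwh_price    # ~$1.01/day
        net_per_day = gross_per_day - power_cost_per_day

        payback_days = card_price / net_per_day
        print(round(power_cost_per_day, 2), round(payback_days))      # ~1.01, ~167 days (~5.5 months)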
  • Supercell99 - Tuesday, September 1, 2020 - link

    Nvidia is going to have a blowout Christmas with 3070s at only $499, yet being 50% faster (by CUDA count) than the 2080 Ti at half the price.
  • Beaver M. - Tuesday, September 1, 2020 - link

    Depends on whether there are enough idiots who don't realize its VRAM is far too small.
    Last generation there were enough, but this is a new generation now and the VRAM hasn't increased at all - it has actually decreased. Maybe more will realize it this time, because it's even more of a problem now than on Turing.
  • domboy - Tuesday, September 1, 2020 - link

    I'm just hoping the xx50 card for this generation goes back down to the price range the 1050 and prior cards had. The 1650's price jump pretty much put it in a different price bracket and it didn't seem like a good replacement for the 1050.
  • Spunjji - Wednesday, September 2, 2020 - link

    That would be nice, but it looks like their intention here is to move all the price categories inexorably upwards. Either way, we'll have to wait a long while to find out.
  • mdriftmeyer - Tuesday, September 1, 2020 - link

    Nvidia doesn't bother to go into the compute spec results on their own site. What gives?
  • MadManMark - Tuesday, September 1, 2020 - link

    The Individual products aren't launched yet. This is just a new line announcement.
  • catavalon21 - Tuesday, September 1, 2020 - link

    Which Compute specs are you looking for? FP64 is missing, yes.
  • oRAirwolf - Tuesday, September 1, 2020 - link

    So does this mean that the 3080 will not have nvlink and only the 3090 will? I am running 1080 ti's in SLI right now and am probably going to buy a single 3080 to replace them, but I like having the option of buying a second one since I use a 43" 4K TV as my computer monitor and may need the additional horsepower for 4K gaming.
  • s.yu - Tuesday, September 1, 2020 - link

    Hmmm, from the perf/W figure a hypothetical 120W 3080 mobile part would only be ~33% faster than 2080 mobile?
  • vladx - Tuesday, September 1, 2020 - link

    Yep, except maybe when RTX is ON.
  • s.yu - Tuesday, September 1, 2020 - link

    Then it's gonna be a hard sell, maybe except the 4kg+ monsters that try to run desktop versions.
  • Spunjji - Wednesday, September 2, 2020 - link

    They *might* be seeing better perf/W scaling at that level than they are at the bleeding edge.

    Either way, the pricing on the 3080 is going to be painful.
  • Spunjji - Wednesday, September 2, 2020 - link

    Mobile 3080, that is.
  • damianrobertjones - Tuesday, September 1, 2020 - link

    Can't wait to buy... a 2070 for a great price once they hit eBay.

    3080 generational leap? Does nVidia mean an extra 20fps, over the previous gen, compared to the usual 10fps (over the previous gen)?
  • BackdoorBeauty - Tuesday, September 1, 2020 - link

    lol so true
  • eddman - Tuesday, September 1, 2020 - link

    What are you on about? 20 fps from what? Going from 140 to 160 (~14%) would be nothing to write home about, but going from 50 to 70 (~40%) could be the difference between playing at locked 60 or not.

    Digital foundry preview tested the 3080 against 2080 and found it to be 60% to 100% faster in non-RT tests. That's a pretty substantial increase.
  • BackdoorBeauty - Tuesday, September 1, 2020 - link

    He's mocking the advertised FPS gains outside of useless benchmarking bullshit; people who have bought into NVidia's last few halos will get the joke. The gains are minimal, but the price is 100%.

    You'll be playing in 4k and still won't be able to turn all settings to ultra without FPS dives during gameplay; just like RTX 2080Ti SLI/NVLINK or GTX 1080Ti SLI. Don't even think about turning on raytracing lol
  • BackdoorBeauty - Tuesday, September 1, 2020 - link

    Worth upgrading SLI RTX 2080Ti to a single 3090?

    Money isn't endless; I could waste it on marijuana concentrates or 10 higher-end golf course days
  • vladx - Tuesday, September 1, 2020 - link

    How much higher performance do you get in SLI compared to a single RTX 2080 TI?
  • BackdoorBeauty - Tuesday, September 1, 2020 - link

    It depends, but 70-80% usually vs a single rtx 2080 ti. It really is a useless overkill PC; i9-7900x @ 4.9, rtx 2080ti sli, an incredible liquid loop(5 360 rads), etc

    It's still not quite enough to push 120 frames on AAA games @ ultra on a 120hz 4k monitor, unfortunately, which is irritating.

    Jensen is going to lie like he always does and say you'll be doing 8k gaming on these new GPUs, but you'll have to lower your settings way down, of course. I bet you can get 8k on low with RTX off! ;)
  • vladx - Tuesday, September 1, 2020 - link

    Wait for 3090 TI then, RTX 3090 won't get you better performance than that.
  • BackdoorBeauty - Tuesday, September 1, 2020 - link

    My math was crap; 65% gain maybe. Thanks for the advice.
  • catavalon21 - Tuesday, September 1, 2020 - link

    DEFINITELY high-end golfing.
  • bananaforscale - Tuesday, September 1, 2020 - link

    The initialism is FLOPS, not FLOPs. FLoating point OPerations per Second.
  • SwAY256 - Tuesday, September 1, 2020 - link

    I strongly suspect that the "Shader FLOPS" are not "2x FP32" but in fact BFLOAT16 FLOPS, as seen in the A100 Ampere GPU. If these were really FP32 FLOPS, they would be labeled as FP32. As Microsoft is always talking about TFLOPS for the next Xbox, I guess that NVIDIA wanted to respond.

    That means that the 10k CUDA cores are a marketing trick to say that there are 5k CUDA cores capable of working on floats that have the same dynamic range as FP32 but far less precision (BFLOAT16).

    If that's correct, I don't know the impact of using BF16 instead of FP32 in game rendering.
  • anonomouse - Tuesday, September 1, 2020 - link

    Nvidia's website literally says "2x FP32 throughput" (https://www.nvidia.com/en-us/geforce/graphics-card...
  • SwAY256 - Wednesday, September 2, 2020 - link

    Yes but that's what they said for A100. Turned out that it was a "mixed precision FP32" (BF16 in fact). Given that A100 has 19.5 FP32 TFLOPS and 39 BF16 TFLOPS, I wouldn't be surprised that GF3090 has 18/36.
  • Spunjji - Wednesday, September 2, 2020 - link

    It's a solid rule of thumb that if a company starts aggressively touting a metric they weren't really pushing before, it's probably because they found a way to get that metric to make them look better (e.g. AMD and Cinebench, Intel and AVX-512).
  • Icehawk - Tuesday, September 1, 2020 - link

    Looking forward to the 17th. I've been nursing a 970 for the last year or so; it still handles 1440p surprisingly well, but it's time to upgrade. I've gotten my money's worth over 5.5 years for sure. I figure if I amortize it across 4 years, the 3080 isn't a terrible value based on how much I game. Power is high, but I'm running a 3900X in this box, which helps offset the overall draw; a 550 W PSU should still be sufficient.
  • Spunjji - Wednesday, September 2, 2020 - link

    You'll be close to the limits of that PSU. If it's 5.5 years old you'll definitely want to give it a good clean, and expect it to make a racket 😁

    The 970 was a cracking card - $330 of ridiculously good value, even with the stupid 3.5GB thing.
  • NRico7 - Tuesday, September 1, 2020 - link

    Will there be an issue with the rtx 3080 on pcie 3.0?
  • Makaveli - Tuesday, September 1, 2020 - link

    The cards are not out yet; how is anyone supposed to answer this question?
  • BackdoorBeauty - Tuesday, September 1, 2020 - link

    It will still work, but will halve the data flow. If you're going to upgrade, I'd make sure you have PCIE 4.0

    16GB/s vs 32GB/s
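
    For context, the raw numbers behind that comparison, as a sketch using the standard per-lane rates and 128b/130b encoding (whether games actually saturate an x16 link is a separate question):

        # Theoretical x16 link bandwidth per direction
        def pcie_x16_gb_per_s(gt_per_s):
            return gt_per_s * (128 / 130) * 16 / 8  # GT/s per lane -> GB/s across 16 lanes

        print(pcie_x16_gb_per_s(8.0))   # PCIe 3.0 x16: ~15.8 GB/s
        print(pcie_x16_gb_per_s(16.0))  # PCIe 4.0 x16: ~31.5 GB/s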
  • Beaver M. - Tuesday, September 1, 2020 - link

    Yes.
  • Spunjji - Wednesday, September 2, 2020 - link

    Nope. Given that the previous cards were fine on PCIe 3.0 x8, even with double the performance the 3080 wouldn't need more than the bandwidth of a 3.0 x16 slot.
  • YaleZhang - Tuesday, September 1, 2020 - link

    Wow, 2x more CUDA cores/SM, but the power draw is too high. Is there going to be an RTX 3080 Super later? The chance of NVIDIA porting the design to TSMC is nil, right? So how much more efficient can Samsung 8nm get?
  • ballsystemlord - Tuesday, September 1, 2020 - link

    Wrong word (not just a spelling error):

    "...but how easily Ampere can fill those additional cores is going to be a critical factor in how well it can extra all those teraFLOPs of performance."

    should read:

    "...in how well it can extract all those teraFLOPs of performance."
  • Gigaplex - Tuesday, September 1, 2020 - link

    "The immediate oddity here is that power efficiency is normally measured at a fixed level of power consumption, not a fixed level of performance. With power consumption of a transistor increasing at roughly the cube of the voltage, a “wider” part like Ampere with more functional blocks can clock itself at a much lower frequency to hit the same overall performance as Turing. In essence, this graph is comparing Turing at its worst to Ampere at its best, asking “what would it be like if we downclocked Ampere to be as slow as Turing” rather than “how much faster is Ampere than Turing under the same constraints”. In other words, NVIDIA’s graph is not presenting us with an apples-to-apples performance comparison at a specific power draw."

    Well... if you use V-Sync/G-Sync/Freesync to eliminate tearing, this can be a valid metric.
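
    To illustrate why that framing flatters the wider chip, here's a toy model with purely illustrative numbers, using the usual approximation that dynamic power scales with frequency times voltage squared, and assuming voltage rises roughly linearly with clock speed:

        # Toy model: perf ~ units * clock; dynamic power ~ units * clock * voltage^2,
        # with voltage assumed to track clock linearly. Illustrative only.
        def perf(units, clock):
            return units * clock

        def power(units, clock):
            voltage = clock                  # normalized: voltage tracks clock
            return units * clock * voltage ** 2

        narrow = (perf(1, 1.0), power(1, 1.0))   # "narrow" chip at full clock
        wide = (perf(2, 0.5), power(2, 0.5))     # 2x-wider chip downclocked to match perf

        print(narrow[0] == wide[0])   # True: same performance
        print(wide[1] / narrow[1])    # 0.25: looks "4x as efficient" at iso-performance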
  • eastcoast_pete - Tuesday, September 1, 2020 - link

    The key for me is the price/performance ratio of the smallest Ampere card: look at the number of CUDA cores for the 3070, its MSRP, and how even this card exceeds the 2080ti, at least by processing power. This basically relegates all current Navi and most other, previously high-end Turing cards to mid-level and lower ranks. AMD better have a really big and fast Navi coming out, or the dGPU battle in 2021 will be NVIDIA vs. Intel, with Intel as the underdog.
  • Spunjji - Wednesday, September 2, 2020 - link

    So you're counting out Navi, but not Intel, who haven't even shown a product with that level of performance and efficiency yet?

    K then.

    I really wouldn't make assumptions about its real-world performance based on the CUDA core count alone.
  • samerakhras - Tuesday, September 1, 2020 - link

    So the RTX 3090 is just 20% faster than the RTX 3080, and Nvidia is asking 115% more for it?

    There is no way people are buying the RTX 3090 at more than double the price of the RTX 3080 just for 6 more TFLOPS... not a chance!

    IMO the RTX 3090's real price should be $1000.
  • Yojimbo - Tuesday, September 1, 2020 - link

    The RTX Titan cost $2,500. The RTX 3090 is twice as fast as an RTX Titan and costs $1,500. They didn't give it 24 GB of RAM just for bragging rights; the card has a market other than gaming, but they are offering it as the halo gaming card as well. They are retiring the Titan name, I guess, and bringing the concept back into the GeForce brand. Maybe they will later offer a 3080 Ti with 11 or 12 GB of RAM that costs $1000.
  • Zingam - Wednesday, September 2, 2020 - link

    For that other market they have the Quadro line. Gaming GPUs are the poor man's deep-learning hardware; pros would get the real deal.
  • Yojimbo - Wednesday, September 2, 2020 - link

    The Quadro and the Titan serve two different markets. A Quadro card with the specs of a Titan costs about 3 times as much. The Quadros come with lots of professional software certifications and, perhaps I am misremembering this, ECC as well. The Titan would be used for, say, machine learning applications where the large VRAM is important but the software certification is not needed. There's not much reason for deep learning people to go for a Quadro that has the same underlying GPU as this RTX 3090 or an RTX Titan, unless it has access to a greater number of NVLink connections.
  • Gigaplex - Tuesday, September 1, 2020 - link

    That's why they're calling the 3080 the flagship, and the 3090 is effectively the Titan replacement.
  • Hrel - Tuesday, September 1, 2020 - link

    Based on this, the RTX 3060 could be just as fast as the 2080 Ti, at least at lower resolutions like 1080p and 1440p, maybe even two monitors at 1080p. If that's the case, then good on Nvidia; that's quite impressive. The only problem is price: if the RTX 3060 is going to cost $400, it's still a bad deal.

    RTX xx60 graphics cards target the same market segment that the 8800 GT targeted. I'm saying this from memory, so it could be a bit off, but that card started right around $200 and made its way down to about $140; I think it even hit $125 after MIR, which is BS, so call it $140, but still, technically.

    To ask that market segment, that could afford around $150, to jump all the way up to $300 or $400 is absolutely batshit insane!
  • Icehawk - Wednesday, September 2, 2020 - link

    You seem to be forgetting inflation; figure 2% per year as a conservative estimate. There's also a significantly larger amount of memory. Look, I'm not thrilled at $700 for the 3080 either, but if it lasts as long as my 970 has, the amortized cost versus how much I game makes it a decent value IMO.
  • Spunjji - Wednesday, September 2, 2020 - link

    Swing and a miss. Accounting only for inflation, a $200 8800GT would be $257 now.
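
    The arithmetic, for the curious (a sketch assuming a flat 2%/year from the 8800 GT's late-2007 launch; a proper CPI lookup lands in the same ballpark):

        # Compound a 2007 launch price forward at 2%/year to 2020
        launch_price = 200   # 8800 GT, late 2007
        years = 13           # 2007 -> 2020
        print(round(launch_price * 1.02 ** years))  # ~259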

    Chip design complexity has gone up, but the market has also grown. Card design complexity has gone up too, though, which doesn't scale that way in terms of costs.

    Even so, you should be talking $300-350 for that performance tier (your 970 launched at $330), dropping to $250 after a year or so. Alas, that is just not how things work anymore.

    $700 for the 3080 is terrible no matter how I look at it. I just can't afford to spend that on a GPU, no matter how thinly I slice it in my head.
  • Tomatotech - Thursday, September 3, 2020 - link

    Don’t then. You’re not the target market. I’m very tempted by the 3070, and I can afford it but it just doesn’t work out for me in terms of the limited time I have for gaming nowadays. The sensible choice for me would be to not buy a new card at all. The ‘treat myself to something nice’ choice would be a used 3060. ‘Splashing out on something rather silly’ would be buying a used 3070.
  • Spunjji - Thursday, September 3, 2020 - link

    I *was* the target market, though - I've been buying gaming GPUs since I first got my greasy hands on a GeForce DDR - but I got out of the pot when it started getting warm (Pascal). Being stuck on older (Maxwell) hardware has had the cumulative effect of me playing fewer new games, which gives me less of an incentive to upgrade. 🤷‍♂️

    That said, given the potential performance on offer, I too might end up with a 3060 at some point - as long as they don't do something silly with it to hobble performance.
  • RTXtech - Friday, September 4, 2020 - link

    The fundamental problem with what you seem to be suggesting is that the goalposts can easily be moved by changing the point of reference.

    A new-generation card can always be made to come up short in some metric, so long as the card you compare it to is the particular generation that wins that metric, whether it's price, performance, power efficiency, or anything else for that matter.

    Setting up a comparison that you know is going to be a loss is just the same as Nvidia itself setting up a comparison to make their cards look better, but only in reverse.

    In any case, different people have different requirements. You, for example, might prefer double the performance at the same price, while real-life market economics may make that nothing more than a fantasy.
  • RTXtech - Friday, September 4, 2020 - link

    If the world were a vacuum, little would prevent Nvidia from making a card with triple the performance at half the price, but it is not. There may be no particular technical reason why this cannot be done as long as they can make even the slightest amount of profit, but unless the competition really, really heats up, I doubt it's going to happen anytime soon.
  • RTXtech - Friday, September 4, 2020 - link

    To be clear, I'm not defending higher prices, but I am arguing that since a variety of factors eventually set the price, you can never really know everything that went into why the price is what it is, and we're just left to speculate.

    I just don't want that speculation to drive a purchase decision; I'd rather wait for benchmarks.
  • Spunjji - Monday, September 7, 2020 - link

    We're actually in agreement here - you took the point I was struggling to make and put it forwards in a more concise manner. I'm just tired of people buying into the hype that "this card provides X more performance than previous gen at X price, so it's a bargain" without any kind of external frame of reference - that's what led to people deciding that Turing was "good value" despite the massive price inflation over Pascal.
  • croc - Tuesday, September 1, 2020 - link

    Dimensions: the 3090 is 313 mm (12.3") long, 138 mm (5.4"!) tall, and 3 slots thick; the 3080 is 285 mm (11.2") long, 112 mm (4.4") tall, and 2 slots thick. Better check your cases...
  • koekkoe - Wednesday, September 2, 2020 - link

    8 and 10 GB of RAM on the 3070 and 3080 might be a problem in the future.
    Nvidia being skimpy in the RAM department has caused its cards to age badly; consider the 680 2GB vs the 7970 3GB, for example.
    I expected at least 12 GB for the 3080.
  • Spunjji - Wednesday, September 2, 2020 - link

    Congratulations, you've hit upon how they'll sell the mid-gen refresh.
  • TheJian - Wednesday, September 2, 2020 - link

    TDP is heat generated, not WATTS PULLED. It's how much heat you need to dissipate from the CPU/GPU etc., not how many watts you are pulling from the wall. Amazing you guys still can't seem to grasp this (your sister site doesn't get it either). TDP on a heatsink is not talking about how many watts it uses; it's how much heat it can dissipate. Or in the case of a CPU, how much heat you are REQUIRED to dissipate to function properly (it isn't telling you how much you need at the wall). It's given so you know what you need for water cooling, your special heatsink, etc. For example, bumpgate (NVgate? whatever) didn't happen because HP etc. wasn't giving NV GPUs enough watts; it was because they were not following TDP recommendations on cooling! Get it? It wasn't wattgate, it's TDPgate. NV said in court, you didn't follow our TDP recommendations, so you heated up and bumpgated! Get it? It's HEATGATE, I guess, not wattgate.

    https://en.wikipedia.org/wiki/Thermal_design_power
    My A+ book from decades ago says the same stuff (even if AMD used to report watts pulled circa 2006 as TDP, it was not using the term correctly as it's aimed at cooling not watts pulled).
    "The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component (often a CPU, GPU or system on a chip) that the cooling system in a computer is designed to dissipate under any workload."

    To further illustrate this a little later on the page:
    "Some sources state that the peak power for a microprocessor is usually 1.5 times the TDP rating." IE, TDP can be far less than watts pulled or this could not be true correct? Read an A+ book. Comptia thinks TDP is how much heat you must dissipate in watts, not how many you use at a wall. HP etc didn't dissipate enough according to NV, so bumpgate. True or not who knows, but the point is NV means how much heat you need to get rid of in your case before they fail in some way shape or form. WATTS NEEDED to function is a whole other topic.

    "The TDP of a CPU has been underestimated in some cases, leading to certain real applications (typically strenuous, such as video encoding or games) causing the CPU to exceed its specified TDP and resulting in overloading the computer's cooling system."

    Again, it failed due to NOT ENOUGH COOLING, not because NOT ENOUGH WATTS GIVEN. Do you see the difference? I've corrected you before ryan, are you ignorant or stupid?

    "Since safety margins and the definition of what constitutes a real application vary among manufacturers, TDP values between different manufacturers cannot be accurately compared. For example, while a processor with a TDP of 100 W will almost certainly use more power at full load than a processor with a 10 W TDP from the same manufacturer, it may or may not use more power than a processor from a different manufacturer that has a 90 W TDP."

    Again, they are telling you TDP doesn't mean watts used. Power can be all over the map depending on scenario, manufacturer, etc., but TDP is what NEEDS TO BE COOLED to function. It may be somewhat related here, but it isn't the same thing. A heatsink can dissipate so many watts of heat; it doesn't care how that heat got created or how many watts were pulled to cause it, it just needs to dissipate that much heat. There, it's said three ways from one page, and all of them meaning NOT WATTS PULLED. Now SDP (Intel crap) is actually attempting to guess watts in a scenario, not heat needed to be dissipated. But that is Intel mucking with terms because of all the confusion (probably not helped by AMD in 2006 reporting TDP as watts pulled... some engineers need to be sent to re-education or something), not TDP suddenly changing definitions. It is still heat needed to be dissipated, period. Watts needed to run: totally different than watts needed to be cooled because of watts needed to run... LOL. C'mon, that's funny.

    One more time, TDP is watts needed to be dissipated (in your case, on your cpu, whatever, DISSIPATED!), not WATTS USED. You need to get that heat OUT or you die (throttle, whatever...you get the point). Again, not talking about how big your PSU needs to be, but how much heat you need to remove from an area before problems come.

    If you say watts pulled, you won't pass the A+ test. Well, you can miss so much it's not even really a test IMHO... ROFL. That said, you will get it wrong on A+ if you think it's watts pulled instead of HEAT NEEDED TO BE DISSIPATED. It was ON my test ;) I got 1 wrong, and I chose the one to get wrong... ROFL. In my defense, at that time they were making people who scored 100% re-take it, thinking they had cheated. When you know your stuff you know you got 100% (I thought maybe I had one on RAID wrong, but not TWO, so I chose to fail one question). That failed one immediately caused another in the same domain (I chose my most knowledgeable topic, knowing this happens), which of course I knew, and it stopped. A+ was variable back then; if you got them all right you could walk in 28 Qs in some cases. I got 58 IIRC, which made me nervous until I saw the results (but I was done in 1/4 of the allotted time, so maybe they just add more if you're fast?). I thought one wrong should be ~30-35, not 58! Whatever... just a tough day on that machine, maybe? I sat there for another half hour just to fake it more! WTH? I figured a 2-hour test should probably not be done in ~28 minutes. But I was literally thinking the answers before I finished reading most of the questions.

    All of my MSFT tests were done in the exact number of questions the test stated, so far, for server or desktop (might be different now, haven't taken one in a while). Not sure what my Network+ test had; I studied to death for that and didn't care. A+ was the same story on study work, I just wasn't as confident back then (at test taking, not doing the job) since it was the first test I took. I really don't like that you have to get one wrong just to prove you are not a paper cert, but whatever. I wouldn't do it today, I'd risk you forcing me to take it again... ROFLMAO. It's a character flaw, I know, I don't like being wrong... Who does? Pfft, we're all flawed, right? :) Only Jesus... oh, never mind.

    THERMAL should be a clue. It's not WDP (watt design power), it's THERMAL... I don't know what you're pulling at the wall, but TDP is how much we need to move out of this place before death. Get it? ...I digress.

    Ryan, 1440p still isn't mainstream. Steam shows 6% using it, 65% still using 1080p, and 4K? ROFL, 2.3%. But 4K, 4K, 4K... LOL. Wake me when 1440p hits 10%, never mind 4K. I'm not talking TVs here, we are talking PC. And no, next-gen consoles won't be doing 4K at maxed details; devs are already complaining like mad about fake 4K and lacking perf. That said, I think the 3090 might have a chance. I'll wait for reviews, but I always said 5nm before 4K for real, and AMD (and Ryan) have been claiming this crap since the 660 Ti.

    I doubt my next monitor will be 4K. I'd rather have 1600p, and you'll have to pry my 1200p from my cold dead hands at this point. Wider isn't better unless I'm using spreadsheets or watching movies; I watch movies on TVs and only see a spreadsheet at work. 16:10 please. Nobody asked for 16:9, it was just cheaper to make. Ask us, we'll say 16:10 on PC. I'll gladly pay $1200 for a 16:10 monitor at 30in or larger with a G-Sync chip inside (not that fake "compatible" crap, I want the chip and the real fix). Bring back 16:10 on a 30in+ and you will sell out at pre-order. We are DYING to get it back! Take a poll.

    Nobody watches movies on a PC when you can get a 65in TV for $500 on Black Friday (heck, daily I think). Just checked: Newegg has a 65in for $599, so yeah, BF $500. Even Samsung has a $700 4K LED. Who watches movies on a PC? You can't afford a $500 TV? Get a better job so I can have my monitor back ;)
  • Bobby3244 - Wednesday, September 2, 2020 - link

    r/iamverysmart
  • Spunjji - Wednesday, September 2, 2020 - link

    100%. I got to the bit where he summarily declared that bumpgate didn't happen and switched off.
  • Tomatotech - Thursday, September 3, 2020 - link

    And also r/iknowhowtomakefriendsandinfluencepeople
  • Gigaplex - Wednesday, September 2, 2020 - link

    Perhaps you should go back to school. TDP is equivalent to power consumed because that power is expended as heat (well, RGB lighting may emit a tiny portion of energy as light, and fans as sound). See the first law of thermodynamics, conservation of energy. If the card is dissipating ~220W of heat at equilibrium, it's drawing ~220W of power from the PSU.
  • Luminar - Wednesday, September 2, 2020 - link

    No, a card dissipating 220 watts is using at least 300 watts. Remember, only some of the power going to the card is dissipated as heat. The remainder is used for actual mechanical work.
  • Icehawk - Wednesday, September 2, 2020 - link

    Last time I checked we don’t live in a perfect thermally efficient world. No idea what the rate is but 100W of heat as a byproduct is coming from a larger draw.
  • Spunjji - Wednesday, September 2, 2020 - link

    A GPU doesn't *do* any "mechanical work" that isn't immediately undone at great speed. All of the electricity that goes in comes out as heat in the end.
  • catavalon21 - Wednesday, September 2, 2020 - link

    "All of the electricity that goes in comes out as heat in the end."

    No, it's not. Some is converted into signals that move within the chip, and between components on the board. Some becomes the signal output to a monitor. Some moves the fans which move air (a mechanical process) and generate noise. Some drives RGB illumination, if so equipped. A LOT of heat is generated in modern high-end cards, but not all input power (energy) is converted into equivalent heat energy.
  • Spunjji - Thursday, September 3, 2020 - link

    Okay, allow me to rephrase: *the majority of energy that goes in comes out as heat*, and what little does not often comes out in other forms of energy that rapidly degrade into heat (RGB illumination, output signal to monitor). I'm pretty sure the only exception there would be the fans, which are going to be in the 6W total range for a triple-fan card - some of which will still be lost as heat in the motor and bearings anyway.
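
    As a rough steady-state energy balance (illustrative figures only; the fan and LED numbers are assumptions, not measurements):

        # Back-of-envelope: where a ~320 W board's input power ends up at steady state
        board_power = 320        # W drawn from PSU + slot (rated board power)
        fan_power = 6            # W, assumed for a triple-fan cooler
        signal_and_led = 1       # W, assumed for display output + RGB
        heat = board_power - fan_power - signal_and_led
        print(heat, round(heat / board_power, 3))  # ~313 W, i.e. ~98% ends up as heat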
  • catavalon21 - Wednesday, September 2, 2020 - link

    "TDP is heat generated not WATTS PULLED....<> Amazing you guys still can't seem to grasp this"

    GPU articles over the years at AT, many by this author, have referred to the power drawn by the card as "TDP", "Total Graphics Power", "Typical Board Power", "Power Consumption", ... such that many readers understand what is implied by "TDP", but you are correct, dissipated power is not consumed power - it is heat.
  • Yojimbo - Thursday, September 3, 2020 - link

    "TDP is heat generated not WATTS PULLED."

    https://youtu.be/6vxHkAQRQUQ
  • Zingam - Wednesday, September 2, 2020 - link

    If consoles were to allow mouse & keyboard games, this madness would end!
  • eddman - Wednesday, September 2, 2020 - link

    They do and there are already X1 and PS4 titles that are playable with mouse & keyboard. The problem is that it's up to the game developers to implement them, and most choose not to.
  • Zingam - Wednesday, September 2, 2020 - link

    Ray-tracing-only accelerators might be cheaper and might allow other players to join in again, plus much better graphics.
  • Yojimbo - Wednesday, September 2, 2020 - link

    I doubt we'll see 100% ray traced games for many years.
  • Zingam - Wednesday, September 2, 2020 - link

    We will see. I think 2025 is the year when we could finally say where the industry is going. Consoles and SoCs supporting ray tracing will be 5 years old by then, which should be enough time to draw conclusions.
    I believe we may see new players in ray tracing, because there should be fewer patents around the hardware, but I may be totally wrong.
    In any case, ray tracing is already more interesting than rasterization.
  • RedOnlyFan - Wednesday, September 2, 2020 - link

    With DirectStorage and data decompression happening on the GPU, does that mean gaming with these cards would reduce the stress on CPUs? Meaning even a decent CPU could be good enough?
  • Zingam - Wednesday, September 2, 2020 - link

    I think the answer is: it depends. You can have an engine that uses a lot of GPU and less CPU, or the other way around, and also an engine that maxes out both. These new techs just give new opportunities; they don't fix your hardware.
  • Zingam - Wednesday, September 2, 2020 - link

    :) sorry about the incomprehensible part...
  • Zingam - Wednesday, September 2, 2020 - link

    The most important features: low power, low cost, VRR, HDMI 2.1, DP 2.0, USB 4, AV1 encoding/decoding, Raytracing/AI/Compute performance.
  • Zingam - Wednesday, September 2, 2020 - link

    How come the 3090 and 3080 have the same number of transistors but totally different CUDA core counts?
    Why the discrepancy? Where did the 3080's transistors go?
  • Ej24 - Wednesday, September 2, 2020 - link

    They're physically the same die, so the number of transistors is identical. Many are dedicated to PCIe communication, the memory controller, etc., not just CUDA cores. The number of fully functioning transistors may ultimately be different, though; thus fewer functioning CUDA cores.

    I'm curious why Nvidia's CUDA core numbers are exactly double what the AIBs had been saying..? That's strange.
  • eddman - Wednesday, September 2, 2020 - link

    Nowhere, they are just disabled. They both use the same chip, so the transistor count is the same.
  • Zingam - Friday, September 4, 2020 - link

    I would think they wouldn't count the disabled transistors. That's an interesting way to boost your numbers for marketing.
    Thanx for the answers!
  • eddman - Friday, September 4, 2020 - link

    It has always been this way, for both nvidia and AMD/ATI.
  • Olternaut - Wednesday, September 2, 2020 - link

    Can you please explain to me how the $1500 price of the 3090, the Titan replacement, being almost $100 L E S S than the last Titan, is somehow reaching new heights in pricing as you put it? You are confusing me.
  • Olternaut - Wednesday, September 2, 2020 - link

    I meant to say $1000 less. Why doesn't your site have an edit button?
  • Luminar - Wednesday, September 2, 2020 - link

    Why can't you proofread your comment before smashing the submit button?
  • Spunjji - Wednesday, September 2, 2020 - link

    It's okay, your comment was wild either way, something which will be conclusively demonstrated when they eventually release a 3000-series Titan.
  • DanielLW - Wednesday, September 2, 2020 - link

    Scarcity. Perhaps 80% of dies end up with enough working CUDA cores to be a 3080. 15% have enough working cores to be a 3090. And maybe 5% have enough to be a (speculating) 30 series Titan.
  • Storris - Wednesday, September 2, 2020 - link

    TDP - ?

    NVidia lists a Power Consumption figure, not a TDP figure.
  • Storris - Wednesday, September 2, 2020 - link

    Also, the CUDA core counts haven't actually changed... depending on how/what exactly a CUDA core even is... they've 'just' increased the amount of FP32 work each one can do. If FP32 were the only thing that CUDA does, then sure, it's effectively doubled, but it isn't.
  • Yojimbo - Thursday, September 3, 2020 - link

    CUDA core denotes an FP execution unit. CUDA is a programming environment. They are two different things. The CUDA core counts have changed. They have put more CUDA cores (FP32 execution units) into the chip.
  • wr3zzz - Wednesday, September 2, 2020 - link

    If the 3090 and 3080 are using the same die, why is the 3090 card so monstrous in size? Does the extra 30 W need that much cooling?
  • DanielLW - Wednesday, September 2, 2020 - link

    Maybe a premium cooling option for a premium card? We will see when the reviews come out, but it will be interesting to see if a 3090 is actually quieter than a 3080...
  • just4U - Wednesday, September 2, 2020 - link

    The 3080 looks somewhat interesting with the additional memory, and should outperform the 2080 Ti for 30% less cost-wise. The TDP is a bit concerning, but... ok. As for the 3090, all I can say is NOPE. Now to see what AMD has to offer up with their Big Navi line.
  • Showtime - Wednesday, September 2, 2020 - link

    Wow, if those numbers are accurate, that's a nice upgrade from my 1080 Ti for about what used 1080 Tis sell for. The current RTX line was priced so high that I never considered buying one.
  • John_Strambo - Wednesday, September 2, 2020 - link

    That MSRP is fishy as hell. A well-known price aggregator here in CH shows the price graphs since launch, and the entry price of the 2080 has mostly been around 950-1000 at resellers, even though the official September 2018 MSRP was 699+. I can't imagine any reseller trying to clear 20xx stock against that 3080 MSRP.
  • ArmedandDangerous - Thursday, September 3, 2020 - link

    I was mostly interested in any NVENC improvements Nvidia may have introduced, and while they didn't say anything during the presentation, their website does show that the RTX3000 series uses Gen 7 NVENC while Turing was Gen 6.
  • Spunjji - Thursday, September 3, 2020 - link

    I understand that AV1 is now covered.
  • catavalon21 - Thursday, September 3, 2020 - link

    Yep.

    https://www.nvidia.com/en-us/geforce/news/rtx-30-s...
  • Yojimbo - Thursday, September 3, 2020 - link

    I don't know much about encoding and decoding but AV1 is decoded in the NVDEC block. There's no AV1 encode in the NVENC block as far as I see.
  • Santoval - Friday, September 4, 2020 - link

    Half covered. AV1 encoding is still not supported.
  • Spunjji - Monday, September 7, 2020 - link

    Damn!
  • Yojimbo - Thursday, September 3, 2020 - link

    According to the NVIDIA website, for the 30 Series the NVDEC was upgraded but NVENC is still the same one as in the 20 Series, which is gen 7.
  • azfacea - Friday, September 4, 2020 - link

    Thank you NVIDIA for restoring the PC/Consoles advantage. The coming console generation is pretty dangerous to our way of life, freedom and privacy.

    I fear useful idiots will line up to buy the cheap consoles without knowing what they surrender.
  • Santoval - Friday, September 4, 2020 - link

    Ampere is a far bigger beast than I expected. So the RTX 3080 effectively succeeds both the RTX 2080 Ti and the RTX 2080, being more than twice as fast as the former and, quite spectacularly, *3* times as fast as the latter(!!), while oddly providing even better (FP32) performance per dollar than the RTX 3070. This must have been the greatest generational increase in performance since the first graphics cards were released. I thought Big Navi had a slim chance to compete with Nvidia's top end cards, but it seems it will rather compete with the RTX 3070, continuing the status quo..

    All that performance in exchange for 1 GB less RAM than the 2080 Ti. I suppose Nvidia plans to release a 3080 Ti later with 12 GB of RAM and ~9500 CUDA cores, in order to bridge the gap (more in price than performance) with the 3090, the new Titan apparently; if they had added 11 GB of RAM to the 3080, the difference with the 3080 Ti would have been too slim.

    I didn't quite get how the RTX 3090 has a 384-bit bus out of 24 chips though. By that : "..RTX 3090 gets 24GB of VRAM, but only by using 12 pairs of chips in clamshell mode on a 384-bit memory bus.." did you mean that all the chips are on one side of the card, with each "pair" of them a stack of two chips that shares 32 bits (thus each chip is allocated only 16 bits)? Or is "clamshell" a reference to each chip being on opposite sides of the card and then -somehow- each "pair" (can two chips be called a pair if they are divided by a PCB?) still sharing 32 bits of the memory bus?
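
    For what it's worth, "clamshell" usually means the second chip of each pair sits on the back of the PCB directly behind its partner, with the two sharing one 32-bit slice of the bus and each chip running in x16 mode. A quick sanity check of the arithmetic, assuming 8 Gb (1 GB) GDDR6X packages:

        # RTX 3090 memory arithmetic: 384-bit bus, clamshell GDDR6X (8 Gb / 1 GB packages assumed)
        bus_width_bits = 384
        slice_bits = 32                              # each 32-bit slice of the bus
        slices = bus_width_bits // slice_bits        # 12
        chips = slices * 2                           # clamshell: two x16 chips per 32-bit slice
        capacity_gb = chips * 1                      # 1 GB per package
        print(slices, chips, capacity_gb)            # 12, 24, 24 GB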
  • Santoval - Friday, September 4, 2020 - link

    edit : "This must *be* the greatest generational increase in performance..."
  • Spunjji - Monday, September 7, 2020 - link

    I don't think it helps much to simply compare the FP32 numbers. The performance figures we have so far indicate that Ampere gets less out of its theoretical TFLOPS than Turing did.
  • alpha754293 - Sunday, September 6, 2020 - link

    I don't understand why Nvidia keeps making their next generation video cards worse and worse.

    The 3090 Ampere double precision performance is around 918 GFLOPS whilst the Nvidia Titan V and Titan V CEO Edition can get over 6000 GFLOPS for double precision performance.

    Stupid, in my opinion.
  • Spunjji - Monday, September 7, 2020 - link

    Not stupid - just market segmentation. The 3090 isn't a Titan, or a Quadro for that matter. At some point they'll probably release a model that isn't hobbled, and they will charge accordingly.
  • futurepastnow - Tuesday, September 8, 2020 - link

    Why is the "Ray Perf." in the comparison chart a "?" for the 2080 Ti? No way to test?
  • Davideo - Thursday, September 10, 2020 - link

    It's a great time to be alive
  • Davideo - Thursday, September 10, 2020 - link

    8k!? These new video cards should be supplied with a built-in smoke detector.
  • alpha gammer - Friday, September 11, 2020 - link

    Hmm, 2x Big Navi RX 6000 in CrossFire on a single PCB with a custom slimline water-cooling solution could possibly beat an RTX 3090!
  • Xinyang - Wednesday, September 16, 2020 - link

    Does anyone have any idea why the A100 actually has more CUDA cores yet delivers less single-precision performance than the RTX 3070?
  • bairlangga - Friday, October 2, 2020 - link

    A couple of days have passed and still there's no review on AnandTech.

    Man, what happened between you and Jensen?
  • tygrus - Sunday, December 20, 2020 - link

    Looking at the details, not all of the specs have increased in line with the SM count, while single-precision TFLOPS and DP TFLOPS stay similar (or 2x the SMs and 0.5x the rate per SM per MHz).
  • Sweepi - Wednesday, March 3, 2021 - link

    Error found: the RTX 2080 Ti specs use the reference clocks for the FP32 performance, but the FE clocks for the Tensor FP16 performance. (Src: nVidia Turing whitepaper, p. 15.) Please settle on one version and add „FE" or „Reference" accordingly in the table header :)
