IanHagen - Saturday, July 29, 2017 - link
That's one pretty GPU!
Tchamber - Saturday, July 29, 2017 - link
Yeah it is! Too bad they're really milking this "trickle marketing campaign" though.
pheno.menon - Saturday, July 29, 2017 - link
Agreed, it looks great. I wonder how much the liquid cooled version will run...
Lord of the Bored - Saturday, July 29, 2017 - link
It probably is. Too bad they stuck it in a generic sheet metal box so we can't see it.
Alexvrb - Sunday, July 30, 2017 - link
Yeah I'm with you, man - heatsinkless exposed GPU cards are the way to go! Toss in some arr gee bee LEDz for the ultimate casemodding!
Lord of the Bored - Monday, July 31, 2017 - link
*rolls eyes*
nevcairiel - Sunday, July 30, 2017 - link
Really? It's just an aluminium brick with a hole for a fan in it. Well, no accounting for taste, I suppose.
IanHagen - Sunday, July 30, 2017 - link
As Chef Alvin would've said, taste is king! Okay, that quote makes no sense. I'd take Vega's industrial design over the bunch of spaceship-esque LED-ridden cards any day.
MrSpadge - Sunday, July 30, 2017 - link
Could be a lot worse ;)
Flunk - Sunday, July 30, 2017 - link
I honestly couldn't care less what it looks like; performance is the only thing that matters.
quantcon - Saturday, July 29, 2017 - link
I kept wondering for a minute what that blue ribbon streaming out of the right GPU was.
cpucrust - Saturday, July 29, 2017 - link
That's the participation award. (trigger me not)
Alexvrb - Sunday, July 30, 2017 - link
LOL. Go watercooling or go home? If these compete with the vanilla 1080, they need to hit the market around $450-500 for the air-cooled version, and a bit more for the watercooled version. Personally, that's still too high for me; I'm hoping there will be a partially-enabled model for substantially less that I might be able to afford.

That is, if mining demand has diminished by then.
ddriver - Sunday, July 30, 2017 - link
Pre-release drivers score around a 1080. I guess they will be able to squeeze out an additional 20% via driver refinement, which would put it close to, but below, the 1080 Ti.

Vega will definitely be superior computationally, so if the price is right, miners will sweep it; if not, nobody save for a few die-hard fanboys will buy it.

Either way, it won't reach gamers in decent quantities, that is, unless the mining fad suddenly dies in the weeks to come.
Alexvrb - Sunday, July 30, 2017 - link
The main driver of the current *GPU* mining craze is Ethereum. When Eth transitions to Proof of Stake, mining demand will plummet. The question isn't if, but rather when, this happens.
Magichands8 - Sunday, July 30, 2017 - link
I'm not convinced the miners will wipe it out. They're releasing cards built just for miners, and buying Vega now, whatever its price, will mean buying it at its most expensive. Not an approach miners are in a rush to take. Not to mention that this is going to be a hot card. Maybe if the hashing performance of Vega is massively superior to all of the bona fide mining cards...
ddriver - Monday, July 31, 2017 - link
Miners can't afford to wait forever. The window of profitability is closing with every passing second. If miners were holding out, you wouldn't see Polaris GPUs pretty much sold out, with the few that are available costing more than their Nvidia counterparts.
Alexvrb - Monday, July 31, 2017 - link
Well, I'm hoping that difficulty ramping and the incoming PoS switch discourage Eth farm investment in Vega. Vega 56 looks like it will offer very good bang for the buck, and I'd bet it overclocks decently given its big brother's clocks (even factoring in binning).
Wardrop - Sunday, July 30, 2017 - link
Same. Didn't realise a guy could have such a large thigh gap.
sor - Saturday, July 29, 2017 - link
Is the GPU-crotch guy holding a Threadripper chip?
extide - Sunday, July 30, 2017 - link
Better/higher res image: https://i.redd.it/f4enyemf6icz.jpg
BenSkywalker - Sunday, July 30, 2017 - link
Given the information we have about the die size and power envelope, it would normally be a safe bet that they would be coming in ~20% faster than the 1080 Ti, but given this really, really odd hype train, they must have something substantially better than that. Clearly, with the way they are rolling this out, we must be looking at 30-50% faster than the 1080 Ti/Titan Xp; nothing else would possibly make sense for this rollout.

Now granted, they have to hit those numbers, but if they do, this will be remembered as an utterly brilliant launch. Worst case scenario, it's only ~20% faster than the 1080 Ti and AMD will look very, very foolish. I'm hoping their PR people aren't that profoundly inept.

Those benches people are trying to float around, putting this in non-Ti 1080 range? Come on. Given the size and power requirements of this part, the timing, the launch hype: AMD would have to be the biggest group of morons on the face of the Earth. A remotely competent company wouldn't dream of releasing anything close to that slow this big, this power hungry, this late, with this extremely lengthy hype train. They would have to employ all of the dumbest people in the industry.

It can't be real. ~30% faster than the Ti is the absolute bottom of the barrel for this rollout not to be laughable.
FaaR - Sunday, July 30, 2017 - link
Are you being sarcastic?
BenSkywalker - Sunday, July 30, 2017 - link
No, analytical. We know the die size is larger than the Ti's, we know the power consumption is higher than the Ti's, and it is launching long after Pascal. Unless we assume that the engineers at AMD are utterly and completely inept, a bottom-of-the-barrel worst-case scenario should have this showing up at ~20% faster than the Ti.

That would be if we were just looking at a normal launch and assuming reasonable levels of competence. Given this absurd hype train, and the Vega benches AMD has been promoting for a long time now, they must have something markedly better than that. Why on Earth would anyone with an iota of PR savvy allow these sorts of shenanigans to draw immense levels of scrutiny onto their launch if they had anything but a monster part that was going to simply blow Pascal out of the water?

There is no other rational reasoning behind it. You have to assume a level of idiocy that, due to the nature of publicly traded companies, would flirt with criminal activity, for any executive to allow this to continue unless they had something that could smash Pascal and smash it hard.

Volta is right around the corner, we know this. Expecting Vega to be closer to Volta than to Pascal in performance, given the timing of its launch, is not only reasonable, it is the only logical conclusion one could make, unless you consider AMD to be the biggest group of idiots in the world.
xype - Sunday, July 30, 2017 - link
You’re reading way too much into this. They need all the PR they can get, and even Vega being 5-10% slower than a 1080 won’t matter much if it’s >25% cheaper.

Going by your logic, companies releasing a product that’s _not_ top of the line would be doing less PR — why? Spend years developing Vega and then _not_ tell the world about it because a fraction of the enthusiast market is very loudly disappointed? That’s what wouldn’t make sense.

Your post just sounds like you need a reason to be overly optimistic. You’ll just end up disappointed that way, though.
nevcairiel - Sunday, July 30, 2017 - link
Actually, being slower than a 1080 would matter, especially at this time. Sure, they can keep offering more value-oriented solutions like they have with Polaris, but if they can't compete in the high-end segment, it's really quite a sad day for GPUs.
xype - Sunday, July 30, 2017 - link
I don’t have the numbers so I can’t say. But if the middle price range is where most sales happen, then as long as Vega offers much better performance than Nvidia at the same price, I don’t think buyers will care much.

But we’ll see. My point was more that the whole "they would only do aggressive marketing if it were zomg 7369 times faster than a 1080 Ti" argument does not seem logical.
Nagorak - Sunday, July 30, 2017 - link
Being slower than a 1080, while drawing twice the power, would honestly be quite pathetic at this point.
xype - Sunday, July 30, 2017 - link
True technology-wise — but do you believe that matters for most people buying a GPU? I think most people on a budget (AMD’s target audience for the most part) might be fine with that as long as it’s much cheaper.
ddriver - Sunday, July 30, 2017 - link
I don't see Vega coming out on top of the 1080 Ti. It uses almost the same power, and Vega will inevitably be less power efficient, since it is more of a "generic" chip that will go into several products, including prosumer accelerators, whereas the 1080 Ti is essentially stripped of everything that is not essential for gaming.

A realistic optimal case is Vega eventually coming within 5-10% below a 1080 Ti as drivers mature.

Then again, as already mentioned, since AMD will be forced to price it very competitively, most Vega cards will go into mining rather than gaming.

Which kinda sucks for AMD. They will have to offer Vega at a good price to retailers, only for retailers to price-gouge like mad dogs and pocket all the premium due to strong demand and weak supply.
Yojimbo - Sunday, July 30, 2017 - link
Yojimbo - Sunday, July 30, 2017 - link
Woah, man. AMD aren't the biggest group of idiots in the world. They are cash-strapped. This is mostly why they are late to market, less energy efficient, and less die-area efficient than NVIDIA.

I think they also made a few poor strategic decisions some years back. First, GCN was designed for their Fusion line, which never ended up working out. Second, they seemed to gamble on HBM, which hasn't lived up to its promise. Third, they seemed to ignore the warning signs that efficiency would become very important for both high-power and low-power devices, leaving AMD to contend only in the middle, which is not a high-margin position to be in.
Alexvrb - Sunday, July 30, 2017 - link
Alexvrb - Sunday, July 30, 2017 - link
Cost is a huge factor, Trollwalker.
Alistair - Sunday, July 30, 2017 - link
The 1080 Ti runs at 20 percent higher clocks, so I'm hoping for 20 percent slower than the 1080 Ti, or 10 percent faster than the normal 1080.
Nagorak - Sunday, July 30, 2017 - link
You can't compare AMD's and Nvidia's architectures based on clock speed. Clock for clock, GCN has done a lot more for a long time now; it just hasn't clocked as high. For example, with the Fury X vs. the 980 Ti, the Fury X was clocked around 13% slower (1050 MHz vs. a stock boost of 1202 MHz), yet could run pretty close to neck and neck with a non-overclocked 980 Ti.

If the 1080 Ti runs 20% higher clocks, I'd guess that worst case Vega should be no more than 10% slower, just based on past instructions-per-clock differences.
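A quick napkin check on the clock arithmetic above, as a rough sketch; the 1050 MHz and 1202 MHz figures are the ones quoted, and the Vega projection is extrapolation from past IPC trends, not a measurement:

```python
# Clock-deficit math for the Fury X vs. 980 Ti comparison above.
fury_x_mhz = 1050       # Fury X stock clock
gtx_980ti_mhz = 1202    # typical 980 Ti stock boost, as quoted

deficit = 1 - fury_x_mhz / gtx_980ti_mhz
print(f"Fury X clock deficit: {deficit:.1%}")  # ~12.6%, i.e. "around 13% slower"

# If a 1080 Ti likewise runs ~20% higher clocks than Vega, and GCN's
# per-clock throughput again closes roughly half that gap, Vega lands
# no more than ~10% behind -- speculation, not a benchmark.
```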
CiccioB - Sunday, July 30, 2017 - link
Comparing frequencies of different architectures is idiotic. What about the number of shaders? Think... a 1050 Ti is clocked much faster than this Vega, and yet it seems it will be 1/3 the performance. So? Is that a meaningful comparison?

Once you have taken into account the shaders and the frequencies (in a word, the theoretical TFLOPS), you have to look at bandwidth (and the internal algorithms and cache hierarchy that spare it), and at the number of TMUs and ROPs and the way they are configured to be exploited.
Once you have done that, you still do not know exactly how the whole architecture is going to behave while managing complex tasks like those needed for a game engine.
In the end, even bringing frequencies into a comparison between two different architectures is simply idiotic.
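For reference, the "theoretical TFLOPS" invoked above is just shaders × 2 FLOPs (one fused multiply-add per clock) × frequency. A minimal sketch with commonly cited boost clocks and shader counts; the Vega 64 figures were still rumors at the time:

```python
def theoretical_tflops(shaders: int, boost_ghz: float) -> float:
    """Peak FP32 throughput: each shader retires one FMA (2 FLOPs) per clock."""
    return shaders * 2 * boost_ghz / 1000.0

# Boost clocks in the same ballpark, wildly different throughput:
print(theoretical_tflops(768, 1.392))  # GTX 1050 Ti:        ~2.1 TFLOPS
print(theoretical_tflops(4096, 1.6))   # Vega 64 (rumored): ~13.1 TFLOPS
```

Two parts with comparable frequencies differ by over 6x in raw throughput here, and even that TFLOPS figure still ignores bandwidth, TMUs, ROPs, and drivers, which is exactly the point being made.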
hapkiman - Sunday, July 30, 2017 - link
"AMD would have be the biggest group of morons on the face of the Earth. A remotely competent company wouldn't dream of releasing anything that was close to that slow this big, this power hungry, this late with this extremely lengthy hype train. They would have to employ all of the dumbest people in the industry."Uh....yeah ok.
Morawka - Sunday, July 30, 2017 - link
Tom's Hardware has already confirmed that it's slower than a regular GTX 1080, but not by much. The only thing that's gonna be saving AMD in regards to Vega is miners and compute-heavy consumers.
eva02langley - Sunday, July 30, 2017 - link
It is a 13 TFLOPS card; at most it will match a Ti, which I believe is possible, though I bet against them on this. I sold my shares when the price skyrocketed this week. I believe Vega will go up against a GTX 1080 and not a Ti. The pricing scheme from AMD is for a display + card bundle, and we all know the displays used in their demo have a ridiculous price disparity of $500. This leaves us with Vega matching a Ti at a more expensive price, or Vega matching or bettering a GTX 1080 at a more expensive price.

Could be a monster card, who knows, but pricing doesn't look good. It was to be expected; it's HBM2 driving the price.
nevcairiel - Sunday, July 30, 2017 - link
AMD typically needs more TFLOPS to match NVIDIA cards in real-world scenarios because NVIDIA can use the power more efficiently, while AMD may come with more raw power but typically isn't able to fully leverage it. At least this was the situation in the past. We'll see about Vega.
Dragonstongue - Sunday, July 30, 2017 - link
Nvidia (GeForce) are very, very good at hiding their faults, because they have gotten very good at using fancy clock gating or whatever to focus on gaming oomph at the most basic levels, whereas AMD (Radeon) give you the full enchilada, which needs code optimization for the best results (hashing has shown this to be true many times over).

Not the best way to put it, but GeForce is like a fast dual core and Radeon is like a modest quad core: with one it is much easier to get at what it has, whereas with the other you need to really pay attention to use everything it has.

Why does that fancy 1080 or whatever need to run at very high clock speeds to "match" the Radeons at lower clock speeds? When you do not have as much on-chip that can "help", when you have removed all the extras, you have to run the engine harder, for lack of a better term.

Bet you that if you took the fastest current single-GPU Radeon and the fastest single-GPU GeForce and compared them clock for clock, performance per watt, and mm² for mm², GCN designs would win hands down (napkin-math sketch below).

GeForce may need a few hundred MHz more to reach a similar performance level. Per clock, yes, the GeForce may "win" on watts, but if it gives less actual performance per clock (when you "hammer" them without fancy tricks), then even though it uses less power in watts it is NOT in fact more efficient, because it needs to run higher clocks for similar work. It is, after all, give and take: chop things away and it takes higher clocks, and therefore more power, to feed the chip. I also do not trust digital power readings, as the results can easily be skewed so you think it is only using, say, 150 W when in fact it is using 175 W or whatever (it would be far from the first time Nvidia has lied about things, wouldn't it).

I know more or less for a fact that mm² for mm² GCN is a superior design. Even comparing Maxwell to Pascal: watt for watt, mm² for mm², and clock for clock, Pascal is comparatively slower, actually, BUT it can ramp clock speeds up while keeping power somewhat lower, so it "appears" faster when it is not. Basically paying more for something that is less :D

There are many sides to these types of things, and there are FEW programs/apps/games that are not biased toward specific architectures, i.e. that cannot be "tricked" into using the hardware other than the way it was built.

There is a reason why AMD graphics cards have done so very well at hashing: there is a bunch of untapped performance to be had, even if the clock speeds are not stupid high. VLIW5, VLIW4, and GCN are more complicated to tap the potential of, but when you do, they are crazy potent.

Speed be damned, I want fast at a decent price, built for the long haul, not fast today and replaced tomorrow, getting slower faster than it should, as has been shown many a time with Nvidia. They have also shown many times a similar mindset when it comes to the "quality" of the end product: build as minimal as possible at the highest price folks might be willing to pay, instead of, as it should be, the best possible at the minimum price you can afford to sell it at.

Anyways, the leverage part of this is factual in a different way: AMD does not build games, they build the product and the drivers. It is the coders/developers that optimize the code for the product, and the devs will often take the least amount of time possible to get the finished product on the shelf. If it takes half as long to get the desired results with GeForce as it does with Radeon, they will use GeForce; or they may be paid extra to optimize for one or the other, etc.
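That "clock for clock, performance per watt, mm² for mm²" bet can at least be framed as actual napkin math. A rough sketch: the die sizes and board powers are the commonly published figures for the Fury X (Fiji) and 980 Ti (GM200), while the performance index of 100 for both is a placeholder standing in for their roughly-tied benchmark averages; you would substitute a real benchmark index:

```python
# Normalize a relative performance index by board power, die area, and clock.
# Die sizes and board powers are commonly published figures; perf=100 for
# both cards is a placeholder for their roughly-tied benchmark averages.
cards = {
    # name         (perf, board_watts, die_mm2, clock_mhz)
    "Fury X":      (100, 275, 596, 1050),
    "GTX 980 Ti":  (100, 250, 601, 1202),  # typical boost, as quoted upthread
}

for name, (perf, watts, mm2, mhz) in cards.items():
    print(f"{name:10s}  perf/W = {perf / watts:.3f}   "
          f"perf/mm2 = {perf / mm2:.3f}   perf/MHz = {perf / mhz:.3f}")
```

On those placeholder numbers the two sit near parity per mm², while the 980 Ti leads per watt and GCN leads per clock; swap in a real benchmark index and the verdict moves with the game selection, which is the trouble with every single-metric argument here.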
eva02langley - Sunday, July 30, 2017 - link
http://wccftech.com/amd-rx-vega-64-pricing-clocks-...
Price leaked
AMD Radeon RX Vega 64 Air: $499
AMD Radeon RX Vega 64 Air Limited Edition: $549
AMD Radeon RX Vega 64 Liquid : $599
AMD Radeon RX Vega 64 Liquid Limited Edition : $649 [UPDATED $649/$699]
AMD Radeon RX Vega 56 : Pending [UPDATED $399]
All in all, I don't believe it will match a Ti; however, pricing looks better than expected.
Dragonstongue - Sunday, July 30, 2017 - link
LOLZ using them as a source of proof, they leak leaks from leaks.
Bullwinkle J Moose - Sunday, July 30, 2017 - link
Not many people seemed to notice when SSDs hit $1 per gigabyte
But they started noticing bigtime when SSDs hit 50 cents/GB
Call me when I can get a graphics card for $1 per Watt
I'd buy a 1080Ti Kingpin for $250
But for AMD, I think I'll wait for 50 cents/Watt
It will be a long wait for sure
WatcherCK - Sunday, July 30, 2017 - link
I can't see what fittings are used or how the fan connects to the motherboard, but I'm guessing they would not be easily modified? I'm curious whether you could get the fan and rad off and attach it to an existing cooling loop...???

JayzTwoCents did a clip recently where he expands a Fractal Design Celsius S36 AIO: https://www.youtube.com/watch?v=fDeDnm3V3j4
It would look fairly dogsballsey and you are mixing who knows how many different types of loop components, you are definitely doing some unfriendly things to your warranty... you need to add a reservoir... on the plus side you are spending $250NZ (no idea what you would need in the final build) and getting a CPU+GPU loop at an intermediate difficulty build level...
And yes, I guess you could just add an S36 and a watercooled Vega to a system and not combine the loops... the GPU loop is a single 140mm (w.a.g.) radiator, so the S36 with its 360mm radiator is going to give a better cooling area for the component that is producing more heat than the less-than-95W produced by the CPU, and a less complicated build than a full custom loop...

And yes, I have never done any kind of liquid cooling, so my napkin math may be completely flawed. Interesting idea though; feasible?
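For what the napkin math is worth, here is a rough face-area comparison of the two radiators; it ignores thickness, fin density, and fan speed, all of which matter at least as much, and the 140mm figure is the guess above:

```python
# Rectangular face-area approximation of the two radiators (mm^2).
area_gpu_rad = 140 * 140   # single-140mm Vega AIO radiator (guessed)
area_s36 = 120 * 360       # Fractal Celsius S36 triple-120 radiator

print(f"S36 face-area advantage: {area_s36 / area_gpu_rad:.1f}x")  # ~2.2x
```

So on face area alone the S36 brings roughly 2.2x the radiator, which supports pointing the bigger loop at the component dumping the most heat.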
zangheiv - Sunday, July 30, 2017 - link
If you're a gamer and nothing else, then perhaps Nvidia is a better choice for you... that's great. But if you want to save a few hundred dollars and don't mind a bit less gaming performance in exchange for higher overall compute performance, then Vega is for you. If you're a data center, then Vega is a great choice/alternative for you. If you provide a cloud-based virtualized GPU service (VDI or virtual gaming consoles, etc.), then Vega is for you. If you're a developer rendering large workloads, Vega is the better choice. If you're developing an APU or SoC (Raven Ridge) that needs a GPU with Infinity Fabric and HSA to capture market share and make mobile devices higher-performance and lower-cost, then Vega is a darn perfect architecture. And if you're a miner then, unfortunately for the rest of us, Vega stocks will probably be swept up by the miners if it were priced like a GTX 1080.

There were a few sacrifices Vega had to make in order to solve a much bigger problem. I believe the TSMC 16nm node is also more efficient than the Samsung 14nm node, so it can be clocked higher and consume less power, but Samsung has much better yields, which in turn affects price. I see this as a fair trade-off.

Vega brings a lot of architectural changes, most of which will compete against Volta. Some folks think Vega is just catching up to Maxwell; that's plain wrong. Driver-wise, Vega has a bunch of features that are not even enabled yet, which is why it's performing like a 14nm Fiji. Give it some time and this beast will start to open up. And once that's done, we'll be ready for 7nm Navi.

I think AMD is actually ahead of Nvidia in the exascale race: put 40,000 Vegas into 10,000 EPYC servers interconnected with Infinity Fabric and HSA APIs, and we have a 1+ exaFLOP environment at lower cost and higher efficiency than an Intel/Nvidia solution.
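The exascale figure works out if you count half-precision throughput, which is an assumption worth flagging: Vega's packed FP16 rate of roughly 25 TFLOPS is double its FP32 rate, and not every workload can use it:

```python
# The exascale napkin math above, at Vega's ~25 TFLOPS packed FP16 rate.
vega_fp16_tflops = 25.0   # ~2x the ~12.5 TFLOPS FP32 figure
gpus = 40_000             # 4 GPUs in each of 10,000 EPYC servers

total_exaflops = gpus * vega_fp16_tflops / 1e6  # 1 exaFLOPS = 1e6 TFLOPS
print(total_exaflops)     # 1.0 -- "1+ ExaFLOP", at FP16 only
```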
ILostIT - Sunday, July 30, 2017 - link
AMD pushed GlobalFoundries 14nm LP to the clock and voltage wall, so it appears to be less efficient than TSMC 16nm FF (because LP processes shine at low voltage and low clocks, i.e. smartphone SoCs).
Bullwinkle J Moose - Sunday, July 30, 2017 - link
Zangheiv says... "Some folks think Vega is just catching up to Maxwell; that's plain wrong. Driver-wise, Vega has a bunch of features that are not even enabled yet, which is why it's performing like a 14nm Fiji."
-------------------------------------------------------------------------------------------------------
Sorry Zangheiv, but Vega will never catch up to Maxwell (Driver-wise)
Windows XP is not supported by Vega Drivers!