One thing I noticed before I return to the reading is the odd bit about chipsets and memory speeds. Pretty sure the memory controller is on the CPU itself as opposed to the chipset, and I've been running DDR4-3200 XMP CL16 on my Ryzen 1 on both X370 and X470 MSI motherboards with no problems--the same DDR4 2x8 config moved from one motherboard to the next.
Guaranteed supported memory speeds and the overclocked speeds memory can generally run at are two very separate things. And yes, that 3200 memory is considered an overclock for the CPU.
Right--so why tie the memory controller to the chipset? Quote: "Some motherboard vendors are advertising speeds of up to DDR4-4400 which until X570, was unheard of. X570 also marks a jump up to DDR4-3200 up from DDR4-2933 on X470, and DDR4-2667 on X370." Almost every X370 and X470 motherboard produced will run DDR4-3200 XMP right out of the box. There's an obvious difference between exceeding JEDEC standards with XMP configurations and overclocking the CPU--which I've also done, but that's beside the point. Pointing out present JEDEC limitations overcome with XMP configurations is a far cry from understanding that the chipset doesn't control the memory speeds--the memory controller on the CPU is either capable of XMP settings or it isn't. Ryzen 1 is up to the task. You can also take a gander at vendor-specific motherboard RAM compatibility lists to see lots of XMP 3200MHz compatibility with Ryzen 1 (and of course the 2k and 3k series).
The new chipset means new boards, to which more stringent trace-routing requirements for DDR can be applied. Same as with the more stringent requirements for PCIe routing for PCIe 4.0.
OK--understood--but improved trace routing, imo, is mainly for PCIe 4.x support with X570--really not for DDR4-3200 support, which has already been supported well on X370/X470 motherboards--which I know from practical experience...;) In my case it was as simple as activating XMP profile #2 in the BIOS, saving the setting, and rebooting. I was simply surprised to see someone tying the memory controller to the chipset! I know that the Ryzen memory controller in the CPU has been improved for the Ryzen 3k series, but that has more to do with attaining much higher clocks > 3200MHz for the RAM, and it is tied to the R 3k series CPUs, as opposed to the X570 chipset, since the memory controller isn't in the X570 chipset. All I wanted to say initially is that both DDR4 3000 and 3200MHz have been supported all the way back to X370 boards, not by the chipset, but by the Ryzen memory controller--indeed, AMD released several AGESA versions for motherboard vendors to implement in their BIOSes to improve compatibility with many different brands of memory, too.
You mentioned 2x8GB. Try with 2x16GB and you might not be as lucky, or you will have to work harder to get the timings right. Motherboards that only seat two DIMMs will be noticeably easier than four-DIMM motherboards.
If AMD did anything to help grease the wheels, I'm sure many users will appreciate that.
WaltC, you're correct. The memory controller is part of the IO die, not the chipset. The chipset is connected to the IO die via 4 PCIe lanes.
While the subsequent iterations of Ryzen have indeed improved memory support alongside the new chipsets, the chipsets themselves have nothing to do with that. I'm assuming the author is using the chipsets to delineate generations of memory improvement, but it could be just as easily (and more clearly) stated by referring to the generation of Ryzen processors.
Well, the thing is that motherboard manufacturers, motherboard revisions, motherboard layout and BIOS versions do play a role as well, though. The memory controller is just one piece of the puzzle. If you have a CPU with a great memory controller, it doesn't mean it performs the same on all boards. And it doesn't mean it performs the same with all RAM either. Sometimes the actual traces on motherboards are crap for certain clockspeeds. Sometimes the BIOS numbers for secondary and tertiary timings are crap at certain clockspeeds and get better in later revisions, seemingly allowing for better memory clockspeeds when it really was just a question of auto vs. manual if you knew what you were doing. Sometimes the SoC voltage is worse on one board vs. another, and that influences things.

The thing is, across the board, X570 motherboards have higher advertised OC clockspeeds for the memory, and Ryzen 3000 has higher guaranteed clockspeeds. And AnandTech believes that is the thing that counts, not whether you can get x clockspeed stable. At least in the vanilla CPU articles. They do separate RAM articles often.
"Some motherboard vendors are advertising speeds of up to DDR4-4400 which until Zen 2, was unheard of. Zen 2 also marks a jump up to DDR4-3200 up from DDR4-2933 on Zen+, and DDR4-2667 on Zen."
How about now? :)
And I believe the authors mean to say that official support is up to 3200 on X570 boards, while older boards were rated lower "officially", corresponding to the generation they launched with. Speeds above that would be listed with (OC) clearly marked in memory support. Anything above the 'rated' speeds, you're technically overclocking the Infinity Fabric until you run in 2:1 mode, which is only on Zen 2 anyhow, so your mileage will definitely vary.
Even the 9900K 'officially' supports only DDR4-2666 but we all know how high it can go without any issues combined with XMP OC.
In Zen and Zen +, the infinity fabric speed was tied to the memory speed. So overclock the RAM and you were also overclocking the infinity fabric. In Zen 2 infinity fabric is independent of the RAM speed.
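To make the 1:1 vs. 2:1 business concrete, here's a quick back-of-the-envelope sketch (my own illustration of the commonly reported behavior, not numbers from the review):

```python
# Zen 2 memory clock (MCLK) vs. Infinity Fabric clock (FCLK), roughly.
# DDR transfer rate is twice the memory clock; above ~DDR4-3600 the fabric
# typically drops to a 2:1 divider (reported behavior, not an official spec).

def zen2_clocks(ddr_rate):
    mclk = ddr_rate / 2                               # DDR4-3600 -> 1800 MHz MCLK
    fclk = mclk if ddr_rate <= 3600 else mclk / 2     # 1:1 up to ~3600, 2:1 above
    return mclk, fclk

print(zen2_clocks(3600))   # (1800.0, 1800.0) -- 1:1 mode
print(zen2_clocks(4400))   # (2200.0, 1100.0) -- 2:1 mode: the fabric actually slows down
```

So chasing DDR4-4400 can backfire: the RAM gets faster while the fabric gets slower.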
I am curious about the DDR4-3200 CL16 memory in the Ryzen test. CL16 RAM is considered the "cheap crap" when it comes to DDR4-3200, and my own Hynix M-die garbage memory is exactly that: G.Skill Ripjaws V 3200CL16. On first-generation Ryzen, getting it to 3200 speeds just hasn't happened, and I know that for gaming, CL16 vs. CL14 is enough to cause the slight loss to Intel (meaning Intel wouldn't have the lead in the gaming tests).
Regardless of whether it's a 'crap' DRAM kit with CL16 or a much more expensive kit with a lower CL rating, it isn't going to make any significant difference in performance. This has been proven again and again.
"CL16 RAM is considered the "cheap crap" when it comes to DDR4-3200"
Since when? Yes, it's cheap(er), but I'd disagree with the "crap" part. I needed 32 GB of RAM, so that's either 2x16 with 16 GB modules usually being double-sided (a crap shoot) or 4x8 with four modules being a crap shoot. Looking at current pricing (not the much higher prices from back when I bought), Newegg has the G.Skill Ripjaws 2x16 CAS 16 kit for $135, while the Trident Z 2x16 CAS 15 is $210 and the CAS 14 Trident Z is $250. So I'd be paying $75 to $115 more... for something that isn't likely to do any better in my real-world configuration. Even if I could hit its advertised CAS 15 or 14, how much is that worth? So I'd say the Ripjaws is not "cheap crap". It's a "value" :)
It's considered "cheap crap" because you can't guarantee that it's Samsung B-die at those speeds while you can with DDR4 3200 MHz CL14 as nothing else is able to reach those speeds and latencies then a good B-die. What that means is that you can actually have a shot at manually overclocking it further while keeping compatibility with Ryzen (if you tweak the timings and sub-timings) while you couldn't really with other memory kids on the first two generations of Ryzen. I don't have a Ryzen 3xxx series of chip so I can't really comment on those...
Since about the 2nd AGESA implementation on my original X370 Ryzen 1 motherboard, my "cheap crap"...;)...Patriot Viper Elite CL16 2x8GB has had no problem with 3200MHz at stock timings. I used the same kit on an X470 motherboard, and now it's running at 3200MHz on my X570 Aorus Master board--no problems.
Right now, there's no indication of what CL / timings are applied to all the other systems. CL16 is indeed bottom of the barrel for DDR4-3200; you would hope there are no shenanigans with Intel getting CL12 DDR4-2666. Why not just run all the systems with the same DDR4-3200? It's not like they can't do it.
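For anyone wanting to compare CL numbers across speeds: the actual first-word latency is CL cycles at the memory clock, which is half the transfer rate. A quick sketch (plain arithmetic, my own illustration):

```python
# First-word latency in nanoseconds: CL cycles at the memory clock (half the DDR rate).
def cas_ns(cl, ddr_rate):
    return cl / (ddr_rate / 2) * 1000   # cycles / MHz -> ns

print(cas_ns(16, 3200))   # 10.0 ns -- DDR4-3200 CL16
print(cas_ns(14, 3200))   # 8.75 ns -- DDR4-3200 CL14
print(cas_ns(12, 2666))   # ~9.0 ns -- DDR4-2666 CL12
```

So CL12 DDR4-2666 would actually have lower absolute latency than CL16 DDR4-3200, which is exactly why the timings on the other systems matter.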
Not to be too rude, but the IMC is on the IO chipLet, not the chipSet. The chipset actually has an important role for memory speed, in that a chipset defines a platform and a platform imposes requirements on the power supply and trace routing. While the IMC in 3rd gen can handle 3200MT/s+ completely fine, it is guaranteed to do so only on X570. Anything older is a dice roll, as those boards were not designed for such speeds (not a requirement for the older platform).
Just as a note for those who haven't been following: This review wasn't written by our usual resident CPU editor, Dr. Ian Cutress, as unfortunately the timing didn't work out for him. We only had a few days with the new Ryzen CPUs; as such, you might notice a few bits and pieces missing in our article that we'll try to address in the next hours and days. We'll be trying to update the piece with more information and data as soon as we can. Thanks. Also huge thanks to Gavin Bonshor, who actually did all the testing and collected all the data for this review--thumbs up to him.
Ha-ha...;) So because AMD has a newer architecture without most of the vulnerabilities that plague Intel's ancient CPU architectures--it should be held against AMD? Rubbish...;) Look, what is unfair about testing both architectures/CPUs with all the mitigations that each *requires*? I can't see a thing wrong with it--it's perfect, in fact.
Well, in a lot of tests the results are close enough to be within the margin of error, such that the mitigations would put AMD in the lead.
The tests should reflect the real world as of when the article is published. Using old results without declaring on every page that Intel doesn't have mitigations applied is the equivalent of falsifying the results, as people will buy based on these tests.
"Using old results without declaring on every page that Intel doesn't have mitigations applied is the equivalent of falsifying the results, as people will buy based on these tests."
Oh, that's just inane. They quite openly state the exact test specification on the "Test Bed and Setup" page, including which mitigations are applied. Arguing that not putting one particular piece of information on every page means it's the equivalent of falsifying the results is completely ridiculous.
Well, at least the Ryzen 3000 CPUs were tested with the latest Windows build that includes the Ryzen optimizations, but tbh I find it a bit "lazy", at least, to not test the Intel CPUs on the latest Windows release, which forces security updates that *do* affect performance negatively.
This may or may not have changed the final results but would be more proper.
Not criticising the review per se, but you see AT staff going wild on Twitter over people accusing them of bias, when simple things like testing both Intel and AMD systems on the same Windows version would be an easy way to protect themselves against criticism.
It's the same as the budget CPU review where the Pentium Gold was recommended due to its price/performance, but many posters pointed out that it simply was not available anywhere for even near the suggested price, and AT failed to acknowledge that.
Zombieload ? Never heard of it.
This is what I mean by lazy - acknowledge these issues or at least give a logical reason why. This is much easier than being offended on Twitter. If you say why you did certain things, there is no reason to post "Because they crap over the comment sections with such vitriol; they're so incensed that we did XYZ, to the point where they're prepared to spend half an hour writing comments to that effect with the most condescending language.", which basically comes down to saying "A ton of our readers are a*holes."
Sure, PC related comment sections can be extremely toxic, but doing things as proper as possible is a good way to safeguard against such comments or at least make those complaining look like ignorant fools rather than actually encouraging this.
Thanks. I appreciate the feedback, as I know first hand it can sometimes be hard to write something useful.
When AMD told us that there were important scheduler changes in 1903, Ian and I both groaned a bit. We're glad AMD is getting some much-needed attention from Microsoft with regards to thread scheduling. But we generally would avoid using such a fresh OS, after the disasters that were the 1803 and 1809 launches.
And more to the point, the timeframe for this review didn't leave us nearly enough time to redo everything on 1903. With the AMD processors arriving on Wednesday, and with all the prep work required up to that, the best we could do in the time available was run the Ryzen 3000 parts on 1903, ensuring that we tested AMD's processor with the scheduler it was meant for. I had been pushing hard to try to get at least some of the most important stuff redone on 1903, but unfortunately that just didn't work out.
Ultimately laziness definitely was not part of the reason for anything we did. Andrei and Gavin went above and beyond, giving up their weekends and family time in order to get this review done for today. As it stands, we're all beat, and the work week hasn't even started yet...
(I'll also add that AnandTech is not a centralized operation; Ian is in London, I'm on the US west coast, etc. It brings us some great benefits, but it also means that we can't easily hand off hardware to other people to ramp up testing in a crunch period.)
But you already had the Intel processors beforehand, so you could have tested them on 1903 without having to wait for the Ryzen CPUs? Your argument is weird.
Exactly. They knew that they needed to re-test the Intel and older Ryzen chips on 1903 to have a level, relevant playing field. Knowing that it would penalize Intel disproportionately to have all the mitigations 1903 bakes in, they simply chose not to.
Sorry, Ryan, but test beds are not your "daily drivers". With 1903 out for more than a month, it should have been possible to do a fresh install of 1903 (the Windows 10 Media Creation Tool comes in handy) with the latest chipset and device drivers and fully re-test the Intel platform with all the latest security patches, BIOS updates, etc. The Intel platform should have been set up and re-benchmarked before the samples from AMD even showed up.
It would have been good to see proper RAM used, because anyone who buys DDR4-3200 RAM with the intention of gaming would go with DDR4-3200CL14 RAM, not the CL16 stuff that was used in the new Ryzen setup. The only reason I went CL16 with my Ryzen setup was because when pre-ordering Ryzen 7 in 2017, it wasn't known at the time how significant CL14 vs. CL16 RAM would be in terms of performance and stability(and even the ability to run at DDR4-3200 speeds).
If I were doing reviews, I'd have DDR4-3200 in various flavors from the various systems being used. Taking the better stuff out of another system to do a proper test would be expected.
"ho buys DDR4-3200 RAM with the intention of gaming would go with DDR4-3200CL14 RAM"
Well, I can tell you who. First I'll address "the intention of gaming": there are a lot of us who couldn't care less about games, and I am one of them. Second, even for those who do play games, if you need 32 GB of RAM (like I do), the difference in price on Newegg between CAS 16 and CAS 14 for a 2x16 kit is $115 (comparing Ripjaws CAS 16 vs. Trident Z CAS 14 - both G.Skill, obviously). That's approaching double the price. So I sort of appreciate reviews that use the RAM I would actually buy. I'm sure for gamers on a budget who either can't or don't want to spend the extra $115, or would rather put it toward a better video card, the cheaper RAM is a good trade-off.
Finally, there are going to be a zillion reviews of these processors over the next few days and weeks. We don't necessarily need to get every single possible configuration covered the first day :) Also, there are many other sites publishing reviews, so it's easy to find sites using different configurations. All in all, I don't know why people are being so harsh on this (and other) reviews. It's not like I paid to read it :)
Thanks for your reply Ryan. I did not intend to be rude when saying "lazy" but rather show that I do not think this is something that was done by intent.
Like I said - mention these things and it helps clear up misunderstandings.
It is definitely very positive that you test the Ryzen CPUs with the latest builds, though. I also like that you mention whether prices include an HSF or not, but it would have been nice to mention the price of the HSF used for the Intel systems (when not boxed), as e.g. the Thermalright True Copper is a rather expensive CPU cooler.
I think you already addressed on Twitter not using a faster NVMe drive (a PCIe 4 version would have been ideal if available - this would also have given an indication of potentially increased system power use for the Ryzen with PCIe 4 drives).
Those are little nitpicks, so not intended to be a criticism of the overall article. It is just that people tend to be rather sensitive when it comes to Intel vs. AMD CPU comparisons, given Intel's history of things they are willing to do to keep mind- and marketshare.
Whether or not it is intentional, AT has had an increasing Intel bias over the last several years. Watch to see how long it takes for an AMD article to get pushed down by rumors or vaporware from Intel or Nvidia.
I think Ryan brings up several salient points, and whether or not you think that they did or did not have the time to do what you wanted (they were also a man down without Dr. Cutress), the fact of the matter is that AMD dropped a bunch of CPUs and GPUs all at once and literally everyone was scrambling to do what they could in order to cover these launches.
I don't think it's coincidence that even in the tech Youtube space, if you watch 10 different reviews you'll largely see 10 different testing methodologies and 10 (somewhat) different results. Every single reviewer I've talked to said that this was one of, if not the most, difficult launch windows they've ever dealt with. Additionally, launching on a weekend with all of its associated complications (not just on reviewers' ends, but partners as well) is a bitch, with everyone scrambling at the last minute on their days off getting in last-minute updates and whatnot.
When AMD tells you at the last minute, "Oh, the brand new Windows 10 update includes something new" you don't necessarily have time to go back and redo all the benchmarks you had already done on the Intel platform.
TL;DR while there may have been flaws in some of the testing, take the details with a grain of salt and compare them to the myriad of other reviews out there for a better overall picture if necessary.
You are making a good point, and this was - unfortunately - a typical AMD CPU launch, with things still being beta. I would assume testers are none too happy about having to re-do their tests.
What I don't get from AMD (even if (and that's a capital IF) it's not their fault, it's their responsibility) is how they cannot see how this makes their products appear in a less favorable light. Let's say the buggy BIOS cost them 5%; a conclusion with 5% better performance would have been even more in Ryzen 3000's favor.
It's a bit like showing up to a job interview wearing the clothes you wore for the previous day's physical activity.
Lazy isn't in it. Intentionally misleading is more like it. On one page, where AMD wins more than it loses in the charts, out of 21 paragraphs, only 2 had something positive to say about AMD or Ryzen 3k without following up with something along the lines of "but we know Intel's got new tech coming, too".
To be sure, they're still valid. The patches for Fallout and ZombieLoad are not out yet (I only mention them because the vulnerabilities have already been announced).
Yes, the Intel CPUs should have been re-benchmarked on 1903, updated after 14 May when the OS-side fixes for the new MDS-class flaws were released. That's only fair and it's quite reasonable to expect that users will apply security updates, not leave their systems unpatched and vulnerable for perhaps a percent or two of performance.
Ryan: how is this not explained in the article? I have been reading this site for more than a decade, and I trust you most to provide such information. I would expect you to update the article with this info.
Is there a compilation test coming for Chromium or another big source tree that would show whether the new IO arch brings wider benefits for such CPU+IO workloads?
But one of the big features is PCIe 4 support, so testing with an NVMe drive as well to show the difference would be important? People spending $490 on a CPU alone are probably going to be buying an NVMe SSD.
Yep, PCIe 4.0 NVMe is going to be beta at this point at best.
Last I read, the first 4.0 NVMe drive to be released is essentially running an overclocked 3.0 interface, and the list of NVMe drives that can saturate 3.0 is pretty short as it is.
For consumers, yes, but the first PCIe 4.0 host system was the IBM POWER9, released ~18 months ago. As such, there are a handful of NICs and accelerators for servers out there today.
The real oddity is that nVidia doesn't support PCIe 4.0. Volta's NVLink has a PHY based upon PCIe 4.0. Turing should as well, though nVidia doesn't pair those chips with the previously mentioned POWER9.
It's hard to get one's head around this, but basically: *all* the Intel benchmarks *do not* include the security patches for the MDS-class flaws. The 9000- and 8000-series tests do include the OS-side Spectre fixes, but that's it. No OS fixes for the other CPUs, and no motherboard firmware fixes for any Intel CPUs.
At the very least, all the Intel CPUs should be retested on Windows 10 1903 which has the OS-side MDS fixes.
Most if not all of the motherboards used for the Intel reviews can also have their firmware upgraded to fix the Spectre and, in most cases, MDS flaws. Do it.
This is sensible and reasonable to do: no sensible and reasonable user would leave their OS vulnerable. Maybe the motherboard, because it's a bit scary to do, but as that can be patched, it should be by reviewers.
This would result in all the Intel scores being lower. We don't know by how much without this process actually being done. But until it is, the Intel results, and thus the review itself, are invalid.
While you're at it, AnandTech, buy the latest $999 GPU for CPU testing each year. Consider it a cost of doing business. Letting the GPU bottleneck the CPU at most of the game resolutions benchmarked is pointless.
Intel getting 5% better FPS in games is really not a win. I'll consider that a tie. In multiple applications AMD gets 20% more performance. That's a win!!
Hopefully "Anal lists" :P see how much of a "win" Ryzen 3k / X5xx / Navi truly are--not only to get AMD margins even higher, but to take more market share from Intel and "stagnate" Nvidia, forcing them to "up the price" to make more $$$$$$ while AMD "seems" to be making "as much if not more" selling for a small amount less per unit. (Keep in mind, AMD is in the next PlayStation and Xbox, which are 99.9% likely to be using the same silicon, so AMD takes a "small hit" to get as many Ryzen gen 3 and Navi chips "in the world" as possible, which drums up market/mindshare--extremely important for any business, and at this stage in the game VITAL for AMD.)
I’m honestly wondering what the point is of the gaming benchmarks for CPU tests anymore.
It seems like the game is either so easy to render that we are comparing values in the hundreds of FPS, or they’re so hard to render that it’s completely GPU dependent and the graph is flat for all CPUs.
In the vast majority of tests here one would have an identical experience with any of the top four CPUs tested.
Game engines are starting to use more cores, and lower resolutions (which do not stress the video card all that much) will show the improvements/benefits of using one CPU or even platform over another. In this review, due to the RAM being used, the gaming benchmarks are almost invalid (DDR4-3200 CL16), since moving to CL14 RAM would potentially eliminate any advantage Intel has in these benchmarks.
Yes, I called it a tie because of the margin of error, and because the patches were not taken into account. Also, Intel gets enormous game support, so really there are many factors; they are not on an equal playing field.
Intel's bad moment has just started. Clearly there are some areas where Intel chips are still doing well; however, the victories are significantly smaller now. Looking at the power metrics, they lost the fab advantage, so they are now at a disadvantage. To top it off, Intel is still charging monopolistic prices on their existing chips. I have not really seen the rumored price cuts, which may be too little and too late.
Intel is waiting for 10nm. Considering the fact that AMD didn't even match Skylake's pre-patch performance... IF Intel fixes 10nm, AMD will be smashed to the ground. Yes, it is a big if, but it is a fact.
AMD actually beat Intel on a clock for clock basis now. What you're seeing is Intel's higher boost clocks saving the day (somewhat).
If Intel can't go past 5GHz with their 10nm due to the new core design, and is only able to get, say, 10-15% more performance per clock, then Gen 3 Ryzen--with its 7nm+ and the improvements AMD isn't done making--will most likely end up in tough competition with it.
Intel won't be doing any smashing anytime soon there, Max. I was damn pleased with the overall value/performance of my 2700X in comparison to my highly overclocked 8700K (4.9GHz) and basically shrugged off Intel's 9 series. The addition of a 12-core with great performance levels really changes the game.
Even if Intel brings something out, it's not going to destroy anything. All we've seen over the past 5 years is small bumps upwards in performance.
Maxiking, Intel has been waiting for 10nm for ages now... and they are still kind of waiting for it. Skylake pre-patch? As in Spectre and Meltdown? Um, you kind of need those fixes/patches in place, even if it means a performance hit. But by all means, get Skylake, don't fix/patch it, and worry about that... and spend more. It's up to you... Either way, Zen 2 looks very good...
What RAM was used in the Intel system? The Ryzen system used DDR4-3200, but it's CL16, not CL14 RAM. That CAS latency difference would be enough for Ryzen to at least tie the 9900K, if not beat it, in the gaming tests.
I don't think so. It's not easy to refine 10nm like you think. How many years has it taken Intel to refine 10nm? It has already been 4 to 5 years, so don't get your hopes up. If the volume ain't there, there is no chance. AMD will surely move to 7nm EUV quicker than to 5nm.
I haven't seen them drop any prices; the i9-9900K went back up at amazon.com. Intel continues to ignore, not respond to, and not care about their competition. They want their margins; that is all this company cares about, not your feelings or desires.
The local AnandTech yield and node experts got hit again. I wonder how many hits you can take before you shut up.
As predicted, Intel is still faster in games, and AMD's OC ability is more or less unchanged, slightly worse even. It is a new node after all. But yeah, you know better, so keep dreaming about those 5GHz on the majority of chips.
That isn't the thing I was talking about. My point was that the local experts, I mean, trolls, know nothing about the manufacturing cost, the yields, or the node in general. As has been shown recently in the reviews, the OC ability of the chips is terrible, and lower-core-count parts tend to perform worse, reaching only 4.1 - 4.2GHz.
It is good enough in terms of competition, and it means we can get things cheaper.
But not when the raw performance is considered. It is a hypothetical scenario, but had there been no 10nm problems for Intel, AMD would have been in the Bulldozer position again.
I haven't owned an AMD CPU since my K500 a very long time ago, but let's call it what it is - AMD has a CPU at the $500 price point that Intel is charging $1200 presently to compete with, and Intel's solution uses far more power. That's a win for AMD in any domain.
The problem is that the PC market is stagnant atm; if you are already an Intel owner, there is absolutely no reason to upgrade to an AMD CPU. Most people who have systems now don't really have any need to upgrade like they used to.
He stated in the article that it took AMD 15 YEARS to finally get this good a CPU out, and he sounded like he was impressed by that?
It's an impressive CPU, but let's be real here: Intel has dominated the market for years already because it has better marketing and better suppliers.
Based on previous article comments, most people are still rocking the 2600K... FROM 2011! It is still a very good CPU.
That's not counting the price difference. While yes, that one Intel CPU is crazy expensive, it's not a normal CPU that most people would go by. If you're a regular user with the previously mentioned 2600K, going the AMD route requires a total system overhaul... which, to be honest, is a bet that a new AMD system is going to last as long as the 2600K did for you.
Most people are still on Sandy Bridge, Ivy Bridge or Haswell. All of these are nothing compared to what the 3900X offers, and also the 3700X. That is the main idea here. There is no point in buying a 9900K just to pay a lot more for a 5% FPS increase at 1080p. That is nitpicking at its best. You are much better off with a 3900X. You get 2950X MT performance, you get more than enough gaming performance, and you get lower power consumption than the 9900K.
"He stated in article it took amd 15 YEARS to get this good CPU finally out and sounded like he was impressed by that?" No. That's why it was awarded a Silver.
Please don't take our current numbers as any sign of overclockability - we didn't have enough time for it and motherboard firmwares are still getting updated.
Those CPUs don't even POST past 4.3GHz, so no, firmware isn't the problem and never was; it never improved OCing, only compatibility, and made systems more stable. They have reached the limit of the node.
"Those CPUs don't even POST past 4.3GHz, so no, firmware isn't the problem and never was; it never improved OCing, only compatibility, and made systems more stable. They have reached the limit of the node."
Like you've reached the limit of your ability to speak in a way that others can make sense of? Perhaps you need to focus on that, rather than on whatever the multinationals you're trying to defend are up to. It will do you more good in the long term, trust that.
Sorry, what? There are benchmarks out showing 5GHz all-core on the 3900X--that is with liquid nitrogen, but I'm expecting at least 4.8GHz is possible.
Intel is worse per clock than AMD with the new node, plus AMD has about 105W to play with on the 3900X to match the all-core power usage of the 9900K.
The only troll here is AMD. They advertise a 4.6GHz boost while reaching 4.2GHz, and 4.3GHz max when manually OCed. This is called false advertising and fraud.
Now I know you're an idiot. That's single-core boost, not all-core. Intel doesn't even state all-core boost... except on a single product, the 9900KS, which is a last-ditch effort to be like "Hey guys, we can hit 5GHz all-core! Don't look at the Ryzen chips... please!"
FYI, AMD hit 5GHz all-core before Intel did, with the terrible FX-9590 or something like that. It was not a good CPU.
Maxiking, still faster by 5%, and probably costing MORE for that 5%? No thanks... Seems Intel is also dreaming about 5GHz. You are criticizing AMD for something they haven't really been able to do in years... so go buy Intel CPUs, and pay a lot more...
Have the tests been re-run on the Intel CPUs? The 9900K seems to be losing more ground to the 9700K than when they launched. Is it the effect of the patches on Intel HT CPUs?
I was hoping that you would clock the 2700X, 3700X and the Intel chip at the same manual clock speeds so that we can see a real IPC comparison between Zen+, Zen 2 and Intel.
CPUs are designed with memory latencies in mind for a certain target clock - the current comparison at slightly different clocks is still perfectly valid for IPC.
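If you want to sanity-check that yourself, IPC normalization is just score divided by clock. A rough sketch with made-up numbers (not figures from this review):

```python
# Performance = IPC x frequency, so relative IPC = score / clock.
# The scores below are made up purely to show the arithmetic.
def relative_ipc(score, clock_ghz):
    return score / clock_ghz

zen2  = relative_ipc(200, 4.4)    # hypothetical single-thread score at 4.4 GHz
intel = relative_ipc(205, 4.9)    # hypothetical single-thread score at 4.9 GHz
print(zen2 / intel)               # ~1.09 -- higher IPC despite losing on raw score
```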
Actually, that's a bad example, because we did see 200-300MHz bumps each generation between Ryzen 1000 and 3000. At that pace it's possible to see a 5GHz turbo next generation, or be close enough.
If you’re saying that everyone expected 5ghz with Ryzen 2000, then yes, if that’s true then those people were being unreasonable. At this point though it’s not a big leap.
It is a perfectly valid example; the bump between 1st gen and 2nd gen Ryzen was 200MHz. Max OC on 1st gen was 4.1GHz, and the max on 2nd gen was 4.3GHz.
There is no bump this year. I am highly skeptical they would be able to reach 4.6GHz on all cores with 7nm+ next year. TSMC nodes are nothing special; you hit the wall and you are done, regardless of the voltage used.
Last year we had 3.7/4.3GHz as the flagship Ryzen 7 desktop part. This year we get 3.9/4.5GHz in Ryzen 7 and up to 4.7GHz single-threaded in Ryzen 9.
These parts are well beyond anything we saw in the 2000 series; to claim there has been no frequency improvement is disingenuous at best. Going to 5GHz is just a small process tweak away this time.
Maxiking, where did you read that we were promised this??? You are bashing AMD for making promises... what about all the promises Intel has made over the years?? I don't see you bashing them for that. Didn't Intel promise 5GHz?? But yet... we only see that in ONE chip, and it's a specially binned chip, in limited quantities... and it practically needs exotic cooling.
It is one chip, sure. The difference is Intel can hold its boost on a single core for an unlimited amount of time, unlike AMD and its sporadic 100ms-long 4.55GHz boosts--and when confronted, they lie on Twitter that there is no such thing as boost in their CPUs anymore, lol. What does Intel have to do with this?
Geeze... drop this BS already. Maybe Intel can, but how much power is it using to achieve this?? 150 watts on what Intel says is a 95-watt CPU?? Just drop this crap already; you obviously have NO real proof other than your own BS words... because if you did, you would have posted links to this garbage.
I'd be really interested in a recap of which CPU includes which optional features. It's been my experience that Intel mostly, but also AMD a bit, play a shell game with advanced multimedia, virtualization, security, etc. extensions, and it's a pain to suss out which CPU supports what.
AFAIK AMD supports Hyper-V; no issues with Docker for me. Check your use case, though; I think there was a small feature AMD did not support, but I can't remember what it was. It didn't affect me for Docker or for rendering video.
Still reading, but one minor complaint about the latency graphs: I can either easily read the key or see the entire graph when clicking the button, but not both.
I have to zoom out to see the entire graph, and then the text gets pretty small.
Not a huge thing, just an ease of access thing. The graphs have extremely interesting info, but it's not easy to read them.
"While going from X370 at 6.8 W TDP at maximum load, X470 was improved upon in terms of power consumption to a lower TDP of 4.8 W." This is the opposite of what the chart right above it says.
Fun fact. You cannot install and/or use Ryzen Master with the Hyper-V role installed. It's not supported, nor does the program run (it's also noted in their Software Installation Guide).
Thanks! Appreciate the fast turn-around of this first deeper dive into the 3000-series Ryzen. The 3700X looks tempting. @Andrei & Gavin & Ian: On another note, I look forward to seeing a review of the Ryzen APUs, especially the 3400G! I am about to build a new $500 potato (HTPC), and the 3400G looks promising. However, before I start building, I'd really like to read a reasonably thorough review of the new APUs before committing to the build. If and when you do, please also report on the HTPC usability of the 3200G and 3400G in detail, especially 10-bit HDR playback and 4K streaming. Surprisingly (or not), PCs have been lagging behind mobile SoCs on this.
For what it's worth, we rarely give out any awards at all. The award tiers are Silver, Gold, Platinum. The 9900K never even got an award, so in our view the new Ryzen chips are overall better products.
Can we get some moderation in here? "Phynaz" seems much more interested in slinging insults at his rival fanboys than in any discussion about tech. He sullied most of the Navi articles' comment sections with the same trash. Learn some social skills, dude.
That's probably reserved for... which other CPU is better in the same price range?! Oh right, none.
This is subjective on my behalf, but these AMD chips are as <gold> as they can be. Sure, <platinum> would entail winning all tests... but <gold> is well deserved.
Nobody's going to go back and check IF the 9900K ever got some award or not. They'll likely be left with the impression that Ryzen 3000 is <silver>, or second best (or even third, by your count).
I feel this is damn subjective of me, but I guess I come with 24 years of experience in this field, including a short 2-year stint in tech journalism, and since that was 8 years ago, I'm allowed a bit of a lack of objectivity.
If my colleagues and I were left with this impression, I'm sure many others are too, and Ryzen 3000 is in no way second best in its class.
Thank you for posting this. As someone who has been in the industry for over 20 years as well, I was taken aback at Silver too.
We've had a decade of Intel fleecing the market. Small gains being parceled out for high cost. Coupled with a platform that never lives beyond the generation it came out for.
The 1st generation Ryzen came out swinging. Was it perfect? No. Was it valid competition in a market in desperate need of it? Absolutely. Here we are a few years later with a product offering essentially the same or better performance vs the competition with much better efficiency at a much lower price. And it gets a "Silver". I really question the objectivity of this site.
No, I've bought both. But you'd have to be very naive to call the "progress" made after Bulldozer flopped and the Core architecture dominated anything but milking the market while moving things at a snails pace. Intel had every chance to continue to boldly innovate, but instead chose to parcel out small incremental changes and charge a hefty premium for them.
I'll buy whatever makes sense. Right now that isn't Intel in my opinion. May change when Sunny Cove hits.
The fact people are still holding onto Sandy Bridge because they don't feel Coffee Lake is a good upgrade should be your evidence that yes, Intel wasn't really doing much. Heck, I'm on my 4790K, and the only CPUs that piqued my interest are the AMD 3900X and 3950X, because those are great-looking processors. If I were more into overclocking, maybe I'd spring for the 9900K, but it's not a processor that piqued my interest...
Well, there was the "benchmarking Sandy Bridge in 2019" article a couple of months back. That showed that a 9700K is about 1.5-2x faster than a 2600K. Yes, that's not the same rate of improvement over seven years that we saw up to Sandy Bridge, but it is still a hell of a lot faster for roughly the same cash price.
It's just that the increments -- a few percent a generation -- have been small. But they have compounded.
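The compounding is easy to underestimate. A rough sketch (the per-generation figure is my own assumption, just to show the math):

```python
# Small per-generation gains compound multiplicatively.
gain_per_gen = 1.07        # assume ~7% average uplift per generation
generations = 7            # roughly Sandy Bridge through the Coffee Lake era
print(gain_per_gen ** generations)   # ~1.61x -- right in the 1.5-2x range above
```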
What more needs to be said about the 3900X relative to its peers? So glad I didn't spring for the i9-9900K a few months back!
The 3700X is in a more interesting position at its price, but it looks like there are still some conditional performance advantages for the i7-9700K (read: single-threaded workloads), though many times there are not. In any case, it's a great value and product. Kudos to AMD.
What I _really_ want to see is the 3600X (and the 3600) vs the i5-9600K. Historically, AMD has been seen as a budget play, but I suspect the single-threaded performance of the Intel CPU will shore it up vs the higher threads of these AMD chips in many user-relevant workloads vs. benchmarks. (Heck, it probably is just as good as the 3900X in gaming due to GPUs not catching up to increases in effects and resolution, right?) There could be very interesting values at these price points. Could you imagine 3 years ago telling someone, "For a good budget gaming PC, the Intel chips are alright, but if you want something REALLY nice once your GPU budget is used, put another $80 toward an AMD CPU!" Amazing.
I too am interested in the 3600X. I don't know if my math is relevant, but the TDP per core is higher for the 3600X than for the 3700X AND 3800X. Isn't it possible that it would then have a higher OC and better gaming performance? I mostly game, but moving from 4 threads on my current system to 12 threads isn't going to hurt either.
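For what it's worth, the math checks out if you go by AMD's rated TDPs (a crude proxy for per-core headroom, since real power draw differs from TDP):

```python
# TDP per core from AMD's rated TDPs -- a rough indicator only.
for name, tdp_w, cores in [("3600X", 95, 6), ("3700X", 65, 8), ("3800X", 105, 8)]:
    print(f"{name}: {tdp_w / cores:.1f} W/core")
# 3600X: 15.8 W/core
# 3700X: 8.1 W/core
# 3800X: 13.1 W/core
```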
This is a great result for AMD. They’ve won some benchmarks, conceded some others to Intel, and embarrassed themselves nowhere. A few thoughts:
1. Was worried about the memory latency graphs at the start, but it was a relief to see how effectively those regressions have been mitigated in actual workloads.
2. In the 3D particle movement with AVX test, the 7920X jumps out to a *huge* lead. While everything else roughly doubles, it shows a 9x speedup. Granted, it is the only AVX-512-capable CPU, but it shouldn't be *that* good unless the revised code is able to brilliantly utilize some of the new -512 features beyond just doubling the vector length.
3. I work in an industry that’s heavily focused on performance and relatively cost-insensitive. That’s traditionally meant Intel but suddenly we are looking very hard at AMD on the server side.
1) Yes, AMD seemingly knew what they were doing and addressed that concern in the end product. I'd be curious what impact reducing the L3 capacity would have on benchmarks, to validate some of the theories (or better yet, AMD could release a larger L3 cache chiplet).
2) There are a couple of other instructions in AVX-512 that can make vector code more efficient beyond just doubling the vector width for a linear increase in performance. Also worth noting that leveraging AVX-512 comes at a clock speed hit, so the gain per clock is even higher.
3) While this is the consumer side, the 12-core results are the most important to look at for projecting Rome performance. That is the one that shares the IO die. (OTOH, a single-CPU-chiplet Rome that is allowed to clock to the moon with a 512-bit memory interface and 130 PCIe 4.0 lanes is an interesting single-threaded proposal.)
4) Agreed, but I would predict a very small downtick in IPC. The 3900 has all the cache enabled with fewer cores competing for it. Everything else looks to be upward.
That's a 2% lead without the exploit patches, which is why I am annoyed at AnandTech for not having applied them.
If it was a 10% difference, sure, then you would definitely know which is better, but at 2% the patch could mean the AMD CPU would win in their benchmark.
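A quick sketch of why a small patch penalty can flip a 2% lead (made-up numbers, purely illustrative):

```python
# With a 2% lead, even a 3% mitigation penalty flips the result.
intel_fps, amd_fps = 102.0, 100.0     # hypothetical unpatched results
mitigation_hit = 0.03                 # assume a 3% hit from the MDS patches
patched_intel = intel_fps * (1 - mitigation_hit)
print(patched_intel, patched_intel > amd_fps)   # 98.94 False -- AMD now ahead
```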
A total waste of time reading this. Didn't Purch Media give you enough shekels to get a better GPU? What's the point of having that GPU and posting 2K and 4K results? Also, your customer, Intel, has to fix PassMark...
The 1080 has bandwidth issues at higher resolutions, if I remember correctly, so that could be influencing the benchmark.
I still find it crazy that AnandTech didn't test for any advantages of PCIe 4, since currently the only way to get it is AMD; it could be a deciding factor for some.
Happy for AMD. Happy for consumers. But for a 90% gamer like me, I'm still happy with my recent 9700K purchase. Despite getting "closer", AMD still plays catch-up, albeit with superior pricing. I want to see AMD surpass the competition WITH good pricing. That day will come soon, I am sure.
Yes, the day will come - if AMD can stay alive. They almost didn't. Giving Intel your business after AMD's miraculous comeback forced them to offer better products is rewarding the company that screwed you for 10+ years and screwing the company that fixed that. For a few FPS, for today.
True. No one should be supporting Intel for pretending 4 cores was enough all this time. People got horny for 5% more FPS, which is not even noticeable. Support AMD already!!
Newsflash for you, spanky: AMD already did surpass Intel. You're just too blind to see it. The question is, why do people like you chase a 7% increase in FPS that can only be realized if you purchase a $1,200 video card and then set your system up ignoring the multitude of patches that need to be installed on your precious Intel CPU: Meltdown/Spectre/ZombieLoad, etc., etc. And to really make your comments laughable, you cannot even notice that 7% improvement in game (really, 150 FPS over 135 FPS... who cares?). All while sucking more power and paying a price premium for performance you can't even see.
It's no longer about playing catch up, it's about AMD giving you better performance where it actually matters.
The pricing is better? Here they are about 70% of the price of the Intel equivalent for just the CPU, not factoring in the cost of a cooler or that AMD motherboards are cheaper.
I am wondering what the dissipation rate is for each cooler used, but I cannot find it anywhere. A fair comparison would use the same cooler on all CPUs, not mix and match.
I have trouble concluding what the purpose of these "tests" is, or what the conclusion of this mess is. A 3-year-old GPU, different bench conditions, half of the Intel security patches missing, PassMark was "weird"...
After looking at the "Test Bed and Setup" page, it appears that none of the results between generations are truly comparable as your margin of error is too large to draw an accurate conclusion. The test beds differ in memory brand, quantity, number of DIMMs and (surely) timings. HSFs are all over the place with Intel getting the benefit of a tower cooler (with mystery fans being used) in the 9X00 series vs stock coolers for the AMD chips. HSFs are also not factored into the price of the AMD chips vs Intel.
It is a shame that you guys did all this work in making everything so precise when the data is truly incomparable. I guess you could argue that this is "better than nothing" to compare dissimilar test benches in different environments. I would argue that this data has an error margin of +/- 5%-10%, rendering most of the graphs useless.
The review was fine, but the Ryzen 3k series deserves a Gold, of course, as Intel is beaten in cost, power consumption, process node, performance (as no games that I know of today are single-threaded), and security--the last a huge win for AMD. A Silver is for 2nd place--and I'll only agree with that if you give Intel a Bronze...;) (But then, who's left?...;))

With all these ancient and creaking Intel-compiler-optimized synthetic benchmarks around--some of these had to be resurrected for this comparison!--it reminds me of the early days of the Athlon versus the Pentium 4 long years ago. For the first few months the benchmarks held that AMD still had some catching up to do--advance the clock a year later and there was almost nothing Intel was winning anymore! Pretty much everything now showed Intel bringing up the rear--both in games and synthetics. The reason for that was that the Athlon/K7 compilers were by then in wide circulation and use--along with the standard Intel compilers--so that game-engine designers and benchmark devs could optimize their work for *both* architectures. AMD walked away with it--and a short time later Intel threw in the towel (after A64, I believe), cancelled that Pentium line entirely, and went back to the drawing board.

I think it's obvious that few if any of the games and synthetics tested were properly optimized for the R 3k series--and possibly not even for Ryzen 1(!), as well. Time will tell...;)
Hey Phynaz, how long has Intel been screwing you Intel fanboys?? Lies about 10nm being on track, keeping the mainstream stuck on quad core, overcharging for its products, making sure every 1 to 2 CPU releases REQUIRE a new mobo as well...
None of the above ever affected me. So you'll excuse me if I'm not enraged that Intel "lied" about 10nm. No one from Intel ever said a thing to me about 10nm, so I was never lied to.
Now, when AMD pulled promised compatibility for Zen 2, that's being lied to. I wonder if AMD has offered compensation to the people that bought those boards/systems for that reason.
Although if you bought them for that reason, well you’re a moron, because this is the THIRD time AMD lied about an upgrade path.
Then you obviously are only here to lie... Intel said that 10nm was on track 3 or 4 YEARS AGO, but yet... 10nm is only now starting to show up... Intel has been on 14nm since around 2014, I think it was, and from their roadmaps from then, they should have switched to 10nm around 2016/2017.
They pulled it... because it wasn't as viable as they originally thought... which is good on them. How angry would people be if they hadn't, and then people upgraded to it and didn't get what AMD said?? I don't think many bought anything for that reason... but who knows for sure...
And how have they lied for the 3rd time about an upgrade path??
Seems the only moron here... is you, as you keep posting lies and BS...
Um, ya, OK... you are just as bad there, Phynaz... an Intel fan talking out of his own butt. You are not even a little pissed that Intel stuck the mainstream at quad core?? Or charged too much? FYI, you need an education. AMD has been with AM4 a lot longer than Intel has with its sockets, and those names you list are the code names, not the products. And QuadFather was way back in the A64 days... what compatibility are you referring to, then???
Please keep up with the times; "retard" isn't acceptable language. "Idiot" and "moron" are short enough that they ought to fit in that teensy tiny cranium of yours.
But Intel sure has rotted your brain. With Intel, you were lucky to be able to use the same board for more than 2 CPUs. On my X99 board, only 2 generations of CPUs were available for it. For AMD, AM4 started with Ryzen 1, then Ryzen 2, and now Ryzen 3--that's 3 generations of Zen. In the case of AM3+, you could use CPUs for AM3+, AM3, and in some cases CPUs for AM2 as well, all in the same socket. QuadFather was AMD's attempt at giving the mainstream a dual-socket platform, without the server price tag or the features that went along with it. Historically, AMD has given a much longer upgrade path than Intel ever has. How did I agree that AMD screwed its customers? For the most part, Intel has screwed its customers, over and over, as I have mentioned in my previous posts.
waltC " For what it's worth, we rarely give out any awards at all. The award tiers are Silver, Gold, Platinum. The 9900K never even got an award, so in our view the new Ryzen chips are overall better products. " seems for AT silver is the top award... not gold.. :-)
Silver is an award for a great product. And it's the highest award we've given to a CPU in quite some time. Conversely, gold awards are very rarely given out. As the final arbiter on these matters, I would have given out a Gold if the Ryzen 3000 had consistently beaten the competition in both MT and ST workloads here.
This time I ended up upgrading to the AMD Ryzen 2700X instead of the 3900X, since with the price/performance difference it was better value to get the 2700X for 249 euros, whereas the 3900X is around 500 euros.
"However there’s a catch: in order to support DDR4 above 3600, the chip will automatically change the memory controller to infinity fabric clock ratio from being 1:1 to 2:1."
Better still, Intel releasing the 8700K, etc., after they already knew about the new security vulnerabilities, allowing customers to buy them while senior staff sold off shares, etc.
Did it stop AMD from releasing their CPUs, which were affected as well? No.
The senior staff had decided to sell the shares before they were aware of the vulnerabilities. It is nothing uncommon; there is a legal procedure required before you can sell them, and then you have to wait several months. All of that happened before the vulnerabilities were discovered and passed to Intel. Now continue trolling at r/amd.
Maxiking, most of the security vulnerabilities affect Intel ONLY. This is proven and known. Stop making things up.
And what are you talking about, AMD not hitting boost clocks?? Seems to me they are. Boost clocks are usually for only a few cores, not all cores...
Phynaz, what's wrong?? The fact that security vulnerabilities like Spectre and Meltdown only affect INTEL, and not AMD?? To quote the article from here: https://www.anandtech.com/show/14525/amd-zen-2-mic... "Another aspect to Zen 2 is AMD’s approach to heightened security requirements of modern processors. As has been reported, a good number of the recent array of side channel exploits do not affect AMD processors, primarily because of how AMD manages its TLB buffers that have always required additional security checks before most of this became an issue. Nonetheless, for the issues to which AMD is vulnerable, it has implemented a full hardware-based security platform for them." But don't let that allow you to think otherwise... because obviously you will continue to post your BS about this...
It works to undermine your security, since it is a fundamentally insecure design that can only apparently be remedied by disabling it entirely. Don't take my word for it, take the word of experts like the OpenBSD team, Apple engineers, etc.
True, I will decide to trust Apple on this; they do not offer any AMD CPUs, so they obviously know what they are doing.
Also, the HT security problems you keep talking about apply only to certain workloads, like HW virtualization, etc. They require tasks being run 24/7 for a long period of time without restarting, plus direct access to the machine. So not my case, but thanks for the heads up; I appreciate that you care, and it makes my heart feel warm and fuzzy.
Maxiking, you obviously have NO idea how the Intel security issues work, how they were fixed, or the performance hit you HAVE to take in order to use the fixes. "They do not offer any AMD CPUs, so they obviously know what they are doing." Or because Intel is giving them a kick-ass deal to use their chips, or, more than likely, AMD didn't have any CPUs at the time that met their performance goals.
Boost is single core, not all core. Same as Intel.
You, that number person and P. just seem to be trolling.
You could have mentioned actual things that Intel is better at, e.g. (I had to think hard) workloads that are L2-intensive, like stock market algorithms, depending on how you write them.
Wow, really? I gave up on this review after I read that they're not only using slow RAM but that they're doing a "let's pretend" about the Intel-only security performance regressions.
Had they actually used FAST RAM, something around 3800+, you would have been complaining about how it was unfair towards AMD, because the IF gets underclocked. It is difficult to please you.
Deal with it: after all these years and security patches, AMD still can't beat Intel's refreshed CPUs, originally introduced to the market in 2015.
Did you miss the non-gaming benchmarks where the 9900K is not even close to the 3900X? They have obliterated Intel's flagship (aside from gaming). At much less power consumption too. And lower price.
It's so satisfying to see the underdog on top again.
"As per our processor testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory running at the manufacturer's maximum supported frequency. This is also typically run at JEDEC subtimings where possible. It is noted that some users are not keen on this policy, stating that sometimes the maximum supported frequency is quite low, or faster memory is available at a similar price, or that the JEDEC speeds can be prohibitive for performance. While these comments make sense, ultimately very few users apply memory profiles (either XMP or other) as they require interaction with the BIOS, and most users will fall back on JEDEC supported speeds - this includes home users as well as industry who might want to shave off a cent or two from the cost or stay within the margins set by the manufacturer. Where possible, we will extend out testing to include faster memory modules either at the same time as the review or a later date."
1) You cite "most users" being lazy as justification for hamstringing the processor with slow RAM. Then, simultaneously, you test with a "premium" motherboard. Maybe all of those lazy computer folk are the ones buying the midrange and low-end boards, not the enthusiasts who are more likely to fork out the cash for a premium board.
2) Most lazy computer users don't read complex articles like this.
3) Citing ordinary stupid lazy people is rarely a good justification for anything, especially when it comes to enthusiast computing.
4) "ultimately very few users apply memory profiles (either XMP or other) as they require interaction with the BIOS"
That is absurd. Oh, the scary BIOS! It takes two seconds to select an XMP profile.
5) "as well as industry who might want to shave off a cent or two from the cost or stay within the margins set by the manufacturer"
6) "Where possible, we will extend out testing to include faster memory modules either at the same time as the review or a later date."
That isn't good enough and you know it. Firstly, the information needs to be out there at the start. Secondly, promises from Anandtech about reviews "coming real soon" often turn out to be vaporware, like the GTX 960. Selective information to further an agenda.
Citing OEMs' practices for basic models, given that they do things like hamstring APUs with single-channel RAM, is a logical failure, especially when you're using premium motherboards.
Your rationalizations for your memory testing paradigm are clearly specious.
Similarly, Anandtech should be ashamed of its overclocking methodology that was described in years past, where serious stress testing wasn't done, actual stability was never found, and voltage extremes were used as the basis for determining that "max stable overclock".
Those kinds of reckless, amateurish practices do one thing and one thing only: they get novices to destroy equipment and corrupt their data, wasting their time and money.
Speaking of laziness. It's lazy not to do what Joel Hruska bothered to do for the day-1 Ryzen 1 review: run the RAM at 3200. He managed that without serious effort, and that was with Zen 1. Here we are and you're using 3200 with slow timings for Zen 2?
It's bizarre to have so much precision testing (like RAM latency parsing) coupled with extremely sloppy (or worse) decision-making, like pretending that the latest Intel-only security vulnerabilities aren't important enough to factor in.
A guy from Hardware Unboxed was overclocking a 3900X, reached 4.3GHz and killed the CPU.
3200MHz without serious effort on Ryzen 1? Do you have any clue what you are talking about? Even today, with Ryzen 2, you are considered lucky if you can run 3200 CL14 without problems. Not to mention, you seem to think it is something common with Ryzen 1, when even 3000MHz CL15 used to be a problem.
Sounds like nonsense; we don't seem to have watched the same review.
That review was also flawed as the 9900K got a $200 cooler and certain tests also had water cooling, versus the 3900X with a stock cooler. Great methodology.
Stop posting here, your posts are all nonsense so far.
There was a Gigabyte engineering-sample board that killed CPUs. Not the CPU; two reviewers have confirmed it was a pre-production Gigabyte board that killed CPUs.
I don't understand why this is only Silver and not Gold, considering price, power, performance, and value.
The only thing that beats it is one SKU from Intel, in single-threaded performance at a higher clock speed. And those Intel tests are without the latest ZombieLoad security patch.
Gold awards are very rarely given out. As the final arbiter on these matters, I would only give out a Gold if the Ryzen 3000 had consistently beaten the competition in both MT and ST workloads here.
Mind using a cheaper cooler on the 9900K so it has a similar thermal envelope to work in? Then also forcing the 9900K to stay at its TDP? No? Then why is the 3900X expected to compete on performance alone when it completely destroys the 9900K in every other metric?
it already is... 95 watts for intel.. is the minimum.. intel's chips have been shown to use a lot more than that.. up to 200 watts.. constrain intel's cpus to 95 watts.. and they lose across the board
Just another unemployed AMD fanboy that can’t handle the truth. Wait till NAFTA kicks in, you’ll be able to afford a used Via CPU with that Monopoly money Canada uses.
whatever phynaz... anandtech did a write-up on the power intel uses for their chips: https://www.anandtech.com/show/13544/why-intel-pro... that link you posted looks like a mistake was made with communication between several different parties, as the poster said; considering this is a new cpu and accompanying mobo/chipset, things like this do happen, even intel has had its own issues with a new platform, and we will have to see how it levels off in the coming few weeks. amd does stay within its TDP limits better than intel.. at least when amd says their cpus use XXX watts, they use around that number, unlike intel, where a 95 watt cpu can use up to 200 watts, as the link i posted shows...
you sure like to throw insults around, don't you? does it make you feel better about yourself? in the end.. maybe it's YOU that can't handle the truth about your beloved intel? face it: compared to zen2/ryzen 3, intel's cpus use more power, amd's cost LESS than intel's equivalent cpus, and amd has IPC parity with intel.
The Silver award seems apt, since the chip certainly lived up to expectations and in some instances surpasses them. Gold if it's the clear winner in everything, Platinum if it beats all expectations.
Using LLVM for C/C++/Fortran code is most likely to result in slower performance than GCC (and even more so compared to Intel's compilers). I do not know if the performance impact is more/less/the same between Intel and AMD CPUs, but I do not really trust the numbers in the first pages of the review.
I'm excited by the 3800X, which, based on this article, may showcase much higher performance (and power) output in heavily multi-threaded applications.
I'm very much looking forward to the inclusion of the 3800X numbers. I would also like to see some game updates with the 2080 and such at 1440p, as most of the tests skipped that resolution and went to 4K. The 4K results mostly showed the GPU bottleneck.
I was really looking forward to reading this review. I look forward to finding out what is going on with your PCMark numbers. I appreciate that you guys are willing to go the extra mile when you see something not looking right. Thank you and keep up the great work guys!
Great review, thanks. Gains are good, but I'm more than happy with my 2700X for now, so I'll likely be waiting for Ryzen 4000. Seems like Intel CPUs are more or less obsolete at their current prices now unless you absolutely need the best possible gaming performance at any cost (more money than sense).
One nitpick, though. I completely disagree with this statement:
"Ultimately, while AMD still lags behind Intel in gaming performance, the gap has narrowed immensely, to the point that Ryzen CPUs are no longer something to be dismissed if you want to have a high-end gaming machine."
Specifically, about "dismissing" AMD Ryzen CPUs for high end gaming machines, I mean the 2nd and 1st gen ones. I have built many "high end" gaming machines, with Ryzen 1800X and 2700X and they are excellent. Anyone that "dismisses" Ryzen 1 or 2 for a high end gaming machine is a tool. (I'm gaming at 144Hz on a 2700X, lol).
But I understand the point being made. Gaming was the last bastion for Ryzen in absolute performance, and now they have sort of cracked it. The 9900K for 480+ bucks is going to be a hard sell with these new chips on the market. Where are these rumoured Intel price cuts? Or is Chipzilla really that arrogant?
I think he said that because most people want to see AMD get close to or beat Intel in order to finally look at the processors as a proper alternative. I am playing on an R5 2600 daily, and in what I need it to do, it performs great. People like us who have long researched and dug into those Ryzens have probably already switched. Now it's time for those like my colleagues who, when I showed them the numbers, went deep into thought about how to plan their next Ryzen builds.
Spelling, grammar, and 2 technical corrections (thus far):
"...meaning for the very vast majority of workloads, you're better off staying at or under DDR4-3600 with a 1:1 MC:IF ratio." Actually, AMD's graph shows DDR4-3733, not DDR4-3600, before the 2:1 IF ratio sets in: "...meaning for the very vast majority of workloads, you're better off staying at or under DDR4-3733 with a 1:1 MC:IF ratio."
"...this put a lot more pressure on the L2 cache capacity, ..." Missing "s": "...this puts a lot more pressure on the L2 cache capacity, ..."
"AMD here has essentially as 60% advantage in bandwidth as the CCX's L3 is much faster than Intel's L3" "a" not "as". Maybe get rid of the "essentially"? "AMD here has essentially a 60% advantage in bandwidth as the CCX's L3 is much faster than Intel's L3"
"The X570 chipset is the first chipset its manufactured in-house using ASMedia's IP, whereas previously with the X470 and X370 chipsets, ASMedia developed and produced it based on its 55nm architecture." This sentence makes absolutely no sense. Have another cup of coffee? :)
"...on top of being able to run them on more memory limited platforms which we plan on to do in the future." Excess "on". "...on top of being able to run them on more memory limited platforms which we plan to do in the future."
"We're seeing quite an interesting match-up against Intel's 9700K here which is leading the all the benchmarks." Extra "the": "We're seeing quite an interesting match-up against Intel's 9700K here which is leading all the benchmarks."
"In our test, we take v1.3.3 of the software with a good sized data set of 84 x 18 megapixel photos and push it through a reasonably fast variant of the algorithms, but is still more stringent than our 2017 test." Replace "is" with "they are", as the word algorithms is plural: "In our test, we take v1.3.3 of the software with a good sized data set of 84 x 18 megapixel photos and push it through a reasonably fast variant of the algorithms, but they are still more stringent than our 2017 test."
"Please note, if you plan to share out the Compression graph, please include the Decompression one. Otherwise you're only presenting half a picture." Excess words, try: "Please note, if you plan to share our Compression graph, please include the Decompression one. Otherwise you're only presenting half a picture."
"but actually also raising the clock frequency at the same time, bringing for some impressive power efficiency benefits." Excess "for" or "bringing": "but actually also raising the clock frequency at the same time, bringing some impressive power efficiency benefits." OR "but actually also raising the clock frequency at the same time, for some impressive power efficiency benefits."
"Not that Zen 2 is soley about memory performance, either." Missing "l": "Not that Zen 2 is solely about memory performance, either."
"We've also seen the core's new 256-bit (AVX2) vector datapaths to work very well." Excess "to": "We've also seen the core's new 256-bit (AVX2) vector datapaths work very well."
"Intel's higher achieved frequencies as well as continued larger lead in memory sensitive workloads are still goals that AMD has to work towards to" Excess "to": "Intel's higher achieved frequencies as well as continued larger lead in memory sensitive workloads are still goals that AMD has to work towards"
"The new design did seemingly make some compromises, and we saw that the DRAM memory latency of this new system architecture is slower than the previous monolithic implementation. However, here is also where things get interesting. Even though this is a theoretical regression on paper, when it comes to actual performance in workloads the regression is essentially non-existent, and AMD is able to showcase improvements even in the most memory-sensitive workloads." Not strictly accurate. AMD is showing a regression in performance compared to themselves in the "3DMark Physics - Ice Storm Unlimited" and "AppTimer: GIMP" benchmarks. GIMP is single-threaded and the 3900X is losing to the 2700X. Again, the same with "Ice Storm Unlimited", though I suspect that we're hitting a performance ceiling there. I suspect that if you deep-dive into the regression in GIMP you'll find something more interesting than just a memory bottleneck.
@andrei I've been closely watching Geekbench since its introduction to the suite, and the ST mode seems to love clock speed. Maybe next time you can give normalized results.
"We're investigating the PCMark results, which seem abnormally high." Aww, c'mon, we all love an AMD shill. :-)
Corona is especially interesting for its magic Intel number: the 9900K does 5 B-RPS, which is the same as its boost clock frequency of 5GHz! The 3700X is doing 4.6 B-RPS and has a boost clock speed of 4.4GHz.
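For what it's worth, the rays-per-clock arithmetic is easy to check (a quick sketch using only the numbers quoted above; it assumes the chips sit at their single-core boost clock during the render, which real all-core runs won't):

```python
# Rays-per-clock from the Corona figures quoted above. The boost
# clocks are the advertised single-core values, so this is only a
# rough sanity check, not a real measurement.
chips = {
    "9900K": {"rays_per_sec": 5.0e9, "boost_ghz": 5.0},
    "3700X": {"rays_per_sec": 4.6e9, "boost_ghz": 4.4},
}

for name, c in chips.items():
    rays_per_clock = c["rays_per_sec"] / (c["boost_ghz"] * 1e9)
    print(f"{name}: {rays_per_clock:.2f} rays/clock")
# 9900K: 1.00 rays/clock
# 3700X: 1.05 rays/clock
```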
Given the X570's vast increase in TDP to 11/15W (which is why ASRock, for one, opted to use the X470 in their server board: https://www.servethehome.com/asrock-rack-x470d4u-a... ), I wonder why AMD didn't choose to bump the chipset to a better process node, since 55nm is positively ancient. Could you ask about this? It seems to be a card that fell off the table when looking at platform efficiency, and it dents AMD's armour when it comes to beating Intel on thermal efficiency...
It seems weird for PCIe 4.0 to take so much more power than 3.0, though it is twice the speed. Maybe they implemented it as an FPGA rather than an ASIC? Though that seems like a weird solution, and I'm not sure there are any FPGAs at 14nm. I guess we'll see if it comes down with the next series. (More conspiracy-theory-wise, maybe they're hiding some high-power feature in there, extra cores...)
How do the pcie 4.0 x4 lanes work between cpu and chipset? Would 2 pcie 3.0 x4 devices be able to run at full speed? Does the chipset aggregate the bandwidth available back to cpu?
Also would it be possible to use all 16 lanes on the chipset at once, just that when it goes back to the cpu they would simply be restricted in speed?
The CPU has 24 PCIe 4.0 lanes available. The chipset has an x4 link to the CPU. The chipset can be configured as one x8, or 2x x4 blocks, each of which can be 1x x4, 2x x2, or 4x SATA. These are all switched within the X570 southbridge (so therefore aggregated to the CPU). However, remember the chip is also giving 20 other native lanes, and each of these is double the bandwidth of PCIe 3.0. A PCIe 3.0 GPU with an x16 connector is actually only x8 PCIe 4.0 lanes in terms of bandwidth available.
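To put rough numbers on the aggregation (a toy model only; the per-lane figures are round approximations of usable bandwidth, not AMD specs):

```python
# Toy model of the X570 uplink: all downstream chipset devices share
# one PCIe 4.0 x4 link to the CPU. Per-lane numbers are rounded
# (~2 GB/s for 4.0, ~1 GB/s for 3.0), so treat results as ballpark.
UPLINK_GBPS = 4 * 2.0  # PCIe 4.0 x4

downstream = {                 # hypothetical devices hanging off X570
    "NVMe SSD (3.0 x4)": 4 * 1.0,
    "NVMe SSD (3.0 x4) #2": 4 * 1.0,
    "SATA + LAN + USB": 1.5,
}

demand = sum(downstream.values())
print(f"uplink {UPLINK_GBPS:.1f} GB/s vs demand {demand:.1f} GB/s")
if demand > UPLINK_GBPS:
    scale = UPLINK_GBPS / demand  # naive fair-share throttling
    for dev, want in downstream.items():
        print(f"  {dev}: {want:.1f} -> {want * scale:.2f} GB/s")
```

So two PCIe 3.0 x4 drives alone can roughly saturate the Gen4 x4 uplink; anything beyond that gets shared, which answers the question above about running everything at once.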
Hmm, so is it correct to say I can either have 4 or 8 PCIe 4.0 x4 lanes to the CPU? 4 is natively reserved by the CPU and the other 4 is general purpose which could be reserved for this purpose?
So given 8 lanes of PCIe 4.0 to the CPU, if the chipset supports, say, the use of 16 lanes, does it mean I can use all 16 lanes at once, just that it will run at PCIe 3.0 speeds?
As I've asked before with Zen as compared with Intel, please post at least one graph of benchmark performance per watt. Just inferring it based on the last page of the article isn't good enough, especially when the legend on every other graph shows the TDP numbers which we know Intel and AMD determine in different ways.
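Something like the sketch below is all that's needed; the numbers are placeholders, and package power would have to be measured under the benchmark load rather than read off the TDP sticker:

```python
# Hedged example: derive a perf-per-watt column from benchmark scores
# plus measured package power. All values below are made up.
results = {
    # name: (benchmark score, measured package watts under load)
    "3900X": (7200, 142.0),
    "9900K": (5000, 168.0),
}

best = max(score / watts for score, watts in results.values())
for name, (score, watts) in results.items():
    ppw = score / watts
    print(f"{name}: {ppw:5.1f} pts/W ({100 * ppw / best:3.0f}% of best)")
```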
Silver is an award for a great product. And it's the highest award we've given to a CPU in quite some time.
Conversely, gold awards are very rarely given out. As the final arbiter on these matters, I would have given out a Gold if the Ryzen 3000 had consistently beaten the competition in both MT and ST workloads here.
And as I stated on another comment of yours like this: completely dumb.
The 3900X is cheaper, has half the power budget, and uses a worse cooler, while the 9900K doesn't have the full mitigation patches and its performance wins are generally within the margin of error. What exactly are you basing this on?
You have just forgotten how much performance new generations brought before. What it was like to upgrade from P4 to Core2. Then from Core2Duo to Core i7. It was BOOST. Now it's just 15% here and there. That's why I agree that Silver is actually fair.
Well, it shouldn't be Silver. It's harder to get IPC gains than before. Write games and apps with multiple cores in mind instead of single-threaded performance and maybe you'll see P4-to-Core2-style performance gains. An 800lb gorilla with all those resources getting slapped around by a tiny fish. Should be GOLD!
Circumstances have changed; that was a decade ago. The performance difference per clock is still substantial. It should have been rated Gold if modern Intel chips get Silver.
I hope people here know that Anandtech is owned by Intel. So do not expect any fair reviews of Intel competitors' products. We have to read other forums to get the truth.
Apparently Anandtech is 100% owned by Intel, AMD, Nvidia, Apple, Samsung, LG, etc., depending on which article you read. It's ridiculous how people have been posting nonsense like this for years.
We all knew that 3rd-gen Zen would have something ep(i/y)c inside, but this is insane. Remember that the current "drivers" might not be at their peak YET, so another 5% + 5% from software support might appear later.
Yes, but we the people have to make AMD's product heard. Because Intel-paid media like this will press the product down even if it beat the competition by 50%.
So, Intel wants them to publish numbers that are favorable for them, but are also okay with them explaining how they're bullshit? In an article that practically praises Ryzen 3000?
Not really. The issue is more that everyone clued up in tech knows about it, so tech journalists don't bother commenting on the fact, while the average person doesn't understand it.
"Meanwhile we should note that while the ZombieLoad exploit was announced earlier this year as well, the patches for that have not been released yet. We'll be looking at those later on once they hit."
Other articles on the net show that both the microcode update from Intel and the OS patches from Microsoft have been released. Are they wrong???
Amidst all the slapfighting between AMD and Intel, ARM have got to be pretty pleased with how their cores compare to these architectures (in spec2006) right now. A76 implemented in a mobile SoC with like 10 times worse memory subsystem and cache than the test setups here is at around ~Zen+ IPC and takes like 1/3 area/core on the same process as a Zen 2 core, and then A77 is coming around the corner with another expected IPC boost....
For the first time, AMD's Zen 2 ties with the i9-9900K in gaming and beats it in almost every aspect of productivity! AMD will be great for streaming + gaming. How could this article not redo the tests on 1903? Why isn't the Intel system patched for the mitigations and security flaws? Is that called ignorance, or something that gives Intel an advantage in these benchmark areas? Please redo them and take your time this time, as AMD is still fixing most of its problems. Thank you!
Originally, I was going to wait for Zen 3, but I think I can consider buying a 3700X to 'get me by' until next year. No way am I buying a new X570 motherboard. Replacing my 2700X with the 3700X is good enough until Zen 3 and the new AM5 socket and new X670(?) chipset are released next year. Of course, the chipset name is hypothetical, so it's my best guess as to what AMD will call next year's new hardware ;-)
Two things I would like to see (maybe in an update): ECC support (I expect that it's like for Ryzen 1XXX: unsupported, but works with Asrock and ASUS boards) and RAM capacity: Can you use 4 of Samsung's 32GB DIMMs for 128GB RAM?
ECC is supported in all Ryzen CPUs as it's part of the built-in memory controller. However, it's up to the motherboard makers to enable support for it in the RAM slots and BIOS and whatnot.
If a motherboard claims support for Ryzen Pro, I believe that's a good indication it supports ECC. Otherwise, you have to dig around in the motherboard manual to find out.
This is quite impressive, as AMD engineers hinted last year in leaks that they feared Zen 2 would be a server- and mobile-only design. TSMC's first-generation 7nm process is heavily optimized for efficiency, and they didn't expect it to scale well past 3.5GHz. Intel had better have its 10nm process in full swing by the end of the year; otherwise they're in for a beating when Rome and the mobile variant launch. It's no secret that chip manufacturers only care about desktop to the extent that they win good press from enthusiasts.
On a side note: what are the potential gains from kernel optimizations similar to what happened a few months after the original Ryzen? This seems to be a similar restructuring of the cache.
It's kinda sad that AMD's releases seem like a "beta fest". While the products themselves are pretty good, issues with BIOS or drivers often seem to be a letdown.
It must stink to put in all the work to bench a system just to have to re-do it again.
Still, seeing how the results are already pretty good, I am hopeful that they will improve further after updates / patches.
Why are the gaming benchmarks at only 720p and 1080p? Is this what most people game at these days? Most gaming benchmarks are 1080p, 1440p and 4K. Oh, the CPU doesn't have much effect above 1080p, you say? Well good, PLEASE SHOW THAT. People need to know this when making CPU decisions. If AMD trounces Intel at everything office-related, and at 1440p & 4K there is no difference, then that will absolutely affect my buying decision. Why are you showing 720p when hardly anyone games at that rez?
*Slow clap* Great work AMD! I have always been a snobbish Intel user. Back in the late '90s and early '00s it was because the audio software I used simply was not stable on AMD (heck, it was barely stable on Intel lol). Then after the Core2 came out Intel was weirdly the best AND the cheapest for a very long time. But now, AMD really has a lot going for itself, and I am truly impressed. Hoping to rebuild my aging (but still pretty great) i7 2600 this fall... mostly because I need it as a home server, not really because I 'need' an upgrade. But I think I am going AMD this time around. I really can't believe how much they have improved in the last 3 years!
Guys... I get you might not want to adjust your testing base. But MDS/Zombieload makes a significant difference when it comes to system calls, such as just about any file or network access. https://www.phoronix.com/scan.php?page=article&...
The reason for this is that the CPU has to perform a crazy sequence of events when accessing privileged data when two threads on a core are involved, essentially yanking the other thread into kernel mode as well, performing the privileged access that the original thread wanted, then flushing all relevant buffers before returning the two threads, so that the other thread can't run a timing attack using data from them.
It's a hack, and the impact is potentially worse the more modern the Intel CPU is, because - aside from the Atom - they have had increasingly bigger output buffers, especially Skylake+.
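If anyone wants to eyeball the syscall cost on their own box, a crude probe like this works (Python inflates the absolute numbers, so only compare the same script run with mitigations on vs. off, e.g. via the Linux `mitigations=off` kernel parameter):

```python
import os
import time

# Crude syscall-rate probe. os.getppid() performs a real syscall on
# Linux each time (getpid() may be cached by libc). MDS/Meltdown
# mitigations add per-syscall overhead, so the rate should drop
# noticeably with mitigations enabled on affected Intel parts.
N = 1_000_000

start = time.perf_counter()
for _ in range(N):
    os.getppid()
elapsed = time.perf_counter() - start

print(f"{N / elapsed / 1e6:.2f} M calls/s, "
      f"{elapsed / N * 1e9:.0f} ns/call (incl. interpreter overhead)")
```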
The OS fixes were out in mid-May when Zombieload was announced, for both Windows and Linux, so I don't know where you're getting "the patches for that have not been released yet".
Maybe you're thinking firmware for your motherboard to load new microcode? This is typically more of an issue for Windows; on Linux you'd just load the appropriate package. But even here, this doesn't make sense, because (for example) your Z370 i7 Gaming (used for the critical Intel 8/9th Gen CPUs) *does* have a newer BIOS: https://www.asrock.com/MB/Intel/Fatal1ty%20Z370%20...
In fact, much newer. The 4.00 is from May 13, so presumably is relevant to MDS. You seem to be on 1.70, from March 26... 2018. There have been five updates since then. Why was it not updated?
I quickly scanned the comments to see if the benchmarks had been performed with all relevant mitigations installed and was not surprised in the least to discover they hadn't, so frankly this entire article is pointless and I won't waste my time reading it. All there is left to say about this article is that whatever difference Anandtech determined between Intel and AMD, it would have been even more in favor of AMD had all Intel mitigations been enabled.
Anandtech, Ryan Smith etc., do yourselves a favor and re-test your Intel CPUs with *all* mitigations enabled, otherwise your Intel benchmarks are just a sham and you will start to lose credibility. Based on the comments for this article and others, your readership is already starting to lose faith in your integrity. Other sites such as phoronix.com are doing a great job detailing the full impact of the mitigations (including ZombieLoad, which you should have tested), so it's hard to take seriously your reasons for not testing with a level, real-world playing field (i.e. full mitigations). Or maybe you just didn't want to give out a Gold award? :)
@TEAMSWITCHER For people that aren't Intel apologists, this stuff matters. Not just because we as consumers want to get an honest review of how the latest AMD hardware stacks up against Intel in a real world situation with all mitigations applied, but also because this elephant in the room is a core credibility issue that Anandtech need to deal with.
The only one being an apologist here is you, CityBlue. In all your rage about Anandtech not testing with mitigations in place, you failed to ever take up the fact that Anandtech also tested the Intel setup with lower RAM speeds than the AMD one. Which is, to use your own words, "hard to take seriously... for not testing with a level, real-world playing field". Changing RAM speed is a simple push of a button with XMP, and both platforms easily support it (not to mention that X570 motherboards aren't something the overwhelming majority of people, for obvious reasons, will buy). Remember, this was a traditional complaint from many users back when Zen 1 came out, and it was tested by various vendors out there (like Gamersnexus) with lower RAM speeds on the Intel counterparts.
@generalako, as I said in a previous comment, this article and its benchmarking are so fundamentally flawed that I'm not willing to invest the time to read the article (I mean, seriously, what's the point?), so forgive me for not mentioning other errors/omissions that may have favoured AMD. But two wrongs do not make a right, especially not when the mitigation omission is so egregious.
This is true for the HEDT X-series motherboard as well. 1.40 is from March 2018. There have been three updates since then, including two new instances of microcode, the last from 6 June 2019: https://www.asrock.com/MB/Intel/X299%20OC%20Formul...
This does *not* apply in quite the same way for the GIGABYTE X170 ECC Extreme used for the 7th- and 6th-gen Intel CPUs... but only because it hasn't been updated *by Gigabyte* since the very first patches for Meltdown and Spectre at the start of 2018: https://www.gigabyte.com/uk/Motherboard/GA-X170-EX...
Some of those benchmark results for the i9-7920X are very fishy. In some cases it is out-performing Intel CPUs with more advanced cores and 2/3rds the core count, yet somehow managing to score 550% better? Please explain.
Well, it *is* an X-series. Perhaps it has a bit more cache? Or AVX-512 support with more modules? But I also see it's using a BIOS from March 2018 - not the latest from June 6 with microcode allowing MDS mitigations to be used by the OS (see my comment in the previous page).
There are multiple errors in the "X570 Motherboards: PCIe 4.0 For Everybody" section. Check the second paragraph and "AMD X570, X470 and X370 Chipset Comparison" table that follows it.
So, any plans to cover the huge fraud and misleading AMD marketing about frequency and boost frequency? The majority of 3900X chips have such poor silicon quality that they can't reach 4.6GHz on a single core.
there is NO fraud about this.. better yet... where is your PROOF about this? post some links to sites that are showing this.. if not, drop it already.. you are just trying to spread lies and BS...
My proof is this and any review on the internet: the advertised 4.6GHz is not being reached on the majority of chips, and if it is, only for 100-200ms and very sporadically. This is called FRAUD. I guess Intel should start selling their CPUs with a 5.3GHz boost, because a few of them would be able to reach it for 100ms after pumping a lot of voltage into them, like AMD does.
Lol, a boost frequency is exactly that. It doesn't mean the chips will sustain the boost for any guaranteed period of time. Do you really think it's just a coincidence that you're the only one that's "outraged" by the expected operation of these chips? Guess what, when Intel ships a chip with a stated frequency, it's also a boost with no guarantee of duration. Stock 9900k's don't run continuously at 5ghz. Lol You look like a complete fool for talking about fraud where there is none.
yea ok sure.. whatever maxiking.. you have NO idea how boost works, OR what the difference between boost and max all-core turbo is. the fact you won't post links to other sites that show this also proves you are just trying to spread lies and BS. the only fraud i am seeing.. is you...
The sites are on my side; they confirm my words in their reviews, forums and tweets. Yet you are blind to the facts. Check der8auer, Gamers Nexus; I'm not gonna spoon-feed you.
let me ask you this... maxiking.. do you know the difference between boost clock and all-core turbo? cause it sure seems you do NOT know the difference.. and others have pointed out that you are also wrong.. i wanted links.. to be sure i am looking at the same sites as you.. in the end.. you are just trolling.. and talking bs... drop this already..
"Check derbauer, gaming nexus"... never heard of these sites.. i can see why Korguz is asking you to post links directly to where you are getting this, and i agree, it smells like BS...
There is a debate going on about this on every internet forum, so no, I am not the only one concerned, and the AMD subreddit has a dedicated thread about it as well.
If Intel ships a chip with any stated boost frequency, the boosts are guaranteed on a per-core-usage basis. A stock 9900K runs at 5.0GHz; the more cores are being used, the lower the frequency. A single core always runs @ 5.0GHz, all cores @ 4.7GHz. I already said it earlier. Sorry my friend, wrong example; the only fool here is you.
The fact is that Ryzen 3rd gen is a worse overclocker than the 2nd one. Or, you know, I will give AMD the benefit of the doubt. Before the reviews were up, the whole internet had been going crazy thinking that 4.6 on all cores was possible, because "look at those boost frequencies man". And that was AMD's intention: to use those sporadic 100-200ms spikes to spread the idea that the final product would be able to reach them on all cores, to misguide. And I must say it WORKED brilliantly. AdoredTV is now the biggest clown on the internet; easy 5GHz+, he said. I wonder if he makes ConLake-style videos about this.
So yeah, that 4.6GHz boost is fraud. The majority can't reach it on a single core, and the rest are capable of doing so only in infrequent 100-200ms spikes.
If Intel did this, oh god, what a shitstorm would be here, just like with their TDP, Meltdown, Spectre. Every review would be full of this. Don't blame me, I am just pointing at facts and making fun of petty suddenly blind amd fans. Don't shoot the messenger.
well.. 1st.. you didn't answer my question whether you know the difference between boost clock and all-core turbo.. 2nd.. you are the only one on here.. that is crying about this.. define every internet forum... also.. you do not seem to be replying to anyone else in this thread about this..
I've seen 4.4GHz without doing any BIOS tweaking on a 3900X on an Asus ROG Crosshair VI Hero. The BIOS is still AGESA 1.0.0.2, so I haven't bothered to even try pushing to 4.6GHz yet. I wouldn't say that a boost to 4.6GHz with a good BIOS version isn't possible, based on what I have seen with X370, and I expect that X570 has a good chance to be better about how it handles the new chips.
As far as Intel and boost speeds go, that is based on cooling as well. Try a 95W-TDP-rated cooler and you will NOT be hitting anywhere close to the 5.0GHz boost; you would need something significantly better. That 95W TDP is a fraud, because it only applies to base speeds, while the AMD TDP figures are in line with what most people will hit.
Did you bother to read Andrei F's Twitter post regarding the BIOS update? It includes a nice graph where you can see the 3900X's cores boosting to what looks like 4.6GHz.
Intel doesn't guarantee boost clocks. It's literally on their website. The only guarantee is base clocks. Boost clocks depend on cooling and power delivery.
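For anyone wondering how a "95W" chip legitimately pulls far more than that, here's a minimal sketch of Intel's PL1/PL2/tau scheme (the numbers are commonly reported 9900K ballpark defaults, not guaranteed values, many boards override them, and the real algorithm uses a moving average rather than a hard cutoff):

```python
# Toy model of Intel turbo power limits: PL1 is the long-term limit
# (the advertised "TDP"), PL2 the short-term boost limit, and tau
# roughly how long the PL2 budget lasts. Ballpark 9900K defaults.
PL1_W, PL2_W, TAU_S = 95.0, 210.0, 28.0

def allowed_power(t_seconds: float) -> float:
    """Power the CPU may draw after sustaining load for t seconds
    (simplified to a hard cutoff; real chips use a weighted average)."""
    return PL2_W if t_seconds < TAU_S else PL1_W

for t in (1, 10, 27, 28, 60, 300):
    print(f"t={t:>3}s -> {allowed_power(t):.0f} W")
# A cooler sized for 95 W only covers the PL1 line; for roughly the
# first 28 s the chip may draw more than twice that.
```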
While 3900X vs i9-7920X and 3700X vs i7-9900K is a no-brainer, I would really want to see how an (overclocked) 3600 performs against this bunch of CPUs. That would help with some interesting decisions for optimizing a budget.
The BIOS in the Intel motherboards tested are from 2018; most appear to only have microcode to handle Meltdown/Spectre (despite the availability of BIOS versions that would work). So... no.
No; they didn't retest the Intels on Windows 10 1903, which includes the OS-side patches for the MDS flaws. The motherboard firmware patches may never come.
This really does invalidate the Intel numbers, but it's not critical: on an up-to-date system, they'll be slower, and Ryzen 3000 even further ahead.
I noticed that at the E3 2019 tech day, AMD recommended DDR4-3600 CL16 RAM, yet 3200MHz RAM has been used in the AMD testbench. I read the description about avoiding overclocking, but 3600MHz RAM comes with a factory clock of 3600MHz, right? I know I am missing something. What am I missing?
Do you plan to run some tests of these CPUs on older, cheaper and cooler motherboards? It would be very interesting to see results with the B450 chipset, and whether it is possible to use DDR4-3600 with tight timings on these older boards. Or at least provide more info about what has more influence on memory speed and timings on the AMD platform: the CPU or the chipset.
I am going to wait a bit to build a PC; however, I am super excited by this launch and disappointed by the video card launch. I expect to go with an AMD chip since Intel has no answer for this, and we shall see on the video cards, but if I were building today I'd probably get an RTX 2070 Super.
Rather, a new rig: it is X470, up to the A.A BIOS, and it is an MSI Gaming Plus. OK, link #2 is here, and I stroked the DDR4 up to 3333MHz. I also stroked the fan to stay sub-70C. Wild OCs will take water at least "in the home", versus a LiqN2 lab.
BTW, where is the bragging thread? My mobo is the MSI X470 Gaming Plus; BIOS A.A makes Ryzen 9 go, BTW. I have yet to up the multiplier, in case you want to know. I wonder what good OCers will get with the right stuff.
You understand that RAM set at 1672 is 1/2 the commonly referred-to speed; 3344MHz is the common nomenclature.
Single-Core Score: 5589 / Multi-Core Score: 47755
Geekbench 4.3.4 Tryout for Windows x86 (64-bit)
Upload Date: July 12 2019 08:16 PM (Views: 2)
Operating System: Microsoft Windows 10 Pro (64-bit)
Model: Micro-Star International Co., Ltd. MS-7B79
Motherboard: Micro-Star International Co., Ltd. X470 GAMING PLUS (MS-7B79)
Memory: 32768 MB DDR4 SDRAM @ 1672MHz
Northbridge: AMD Ryzen SOC 00 / Southbridge: AMD X470 51
BIOS: American Megatrends Inc. A.A0
Processor: AMD Ryzen 9 3900X (1 processor, 12 cores, 24 threads)
Identifier: AuthenticAMD Family 23 Model 113 Stepping 0
Base Frequency: 3.80 GHz / Maximum Frequency: 4.53 GHz
You can't reach 5.0GHz+. You can't even reach the boost frequency on a single core. You can't consistently beat the competitor's older 14nm CPU architecture, which has been on the market since 2016... You can't beat RAM OC'ing records either, because over 3733MHz the IF actually gets downclocked, and due to that, "faster" RAM performs worse unless you OC to 7400MHz, which is not possible even with liquid nitrogen.
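The 3733 crossover is simple to tabulate, by the way (a sketch of the commonly described 1:1 vs 2:1 MEMCLK:FCLK behavior on Zen 2; the ~1867MHz fabric ceiling is the widely reported figure, not an official spec):

```python
# Zen 2 memory vs Infinity Fabric clocks as commonly described:
# MEMCLK is half the DDR4 transfer rate, and FCLK runs 1:1 with
# MEMCLK up to roughly 1867 MHz, after which boards drop to 2:1
# and latency suffers. FCLK_MAX is an assumption, not an AMD spec.
FCLK_MAX = 1867.0  # MHz

def fabric_mode(ddr_rate: int):
    memclk = ddr_rate / 2
    if memclk <= FCLK_MAX:
        return memclk, "1:1 coupled"
    return memclk / 2, "2:1 decoupled (latency penalty)"

for rate in (3200, 3600, 3733, 4000, 4400):
    fclk, mode = fabric_mode(rate)
    print(f"DDR4-{rate}: MEMCLK {rate / 2:.0f} MHz, "
          f"FCLK {fclk:.1f} MHz, {mode}")
```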
The editor's choice awards are a bit strange to me. Zen 1 didn't receive one even though it was the largest CPU performance increase from a company this century. The i7-4950HQ received an editor's choice Silver award even though it had little importance to the industry. And the 3700X, which offers comparable single-threaded performance to competing Intel products at a huge discount and a smaller power budget, gets the same editor's choice level as the i7-4950HQ?
I know it was a different editor at the time, but the selective excitement is a bit of a bummer. eDRAM was exciting to see at the time, and then nothing ever came of it. The enthusiasm for chiplets under the new editor comes through much less. That too is fine. However, if the rating system is what it is, then I don't think it's controversial to argue that chiplets are much more disruptive than eDRAM and are already making much larger waves.
Maxiking, and HOW LONG till intel gets the SAME treatment?? saying a processor uses X watts, but in reality it uses 50 to 100 watts MORE: isn't that FRAUD??? hell, confine intel's cpus to the watts they state, and their performance goes DOWN THE TOILET!!! again.. you KEEP saying amd is a fraud, but you STILL refuse to admit that intel is a fraud as well..
does this guy even acknowledge the issue with intel and the amount of power they "say" their cpus use, versus how much power they REALLY use??
further.. intel doesn't market this, cause they DON'T want the general average user to know the cpu they bought uses MORE power than has been stated; THAT also is false advertising. come on maxiking, go after intel as well, with the same things you are accusing amd of...
You are uneducated. TDP doesn't mean power consumption but the amount of heat dissipated; it informs you how much heat the cooler must be able to dissipate in order to keep the cpu cool enough to run.
Get it? The 1700X's TDP was 95W, yet there were tasks where it managed to consume 120 or even 140W on stock settings. Like, do you even watch reviews? It was the same with the 2700X.
sorry dude.. but YOU are uneducated. amd stays A LOT closer to its stated TDP than intel does; AT even did a review on it. power dissipated also relates to power used. but it also doesn't help that amd and intel both use the term TDP differently. either way.. intel uses more power than amd does. https://www.anandtech.com/show/13544/why-intel-pro...
Again, TDP is not power consumption and it refers to a cooler.
You are uneducated and fabricating because you are an amd fanboy. No one really cares about what is more accurate or not, because it does not say anything about power consumption of the chip.
So keep living in your nice little bubble. It is not my fault that you and other sites have been thinking that TDP = power consumption. I will share something new with you again.. Ever heard about that Frankenstein novel? Frankenstein is not the monster but the doctor; it's his surname.. Shocking, I KNOW!!!
again.. TDP, or Thermal Design Power, does relate to power consumption and how much cooling is needed. You are uneducated and fabricating because you are an intel fanboy. i also notice you like to throw personal insults around when someone disagrees with you, to try to make your opinion valid. so you keep living in your nice little bubble as well; not my fault you don't understand that TDP relates to how much power something uses: the more power a product uses, the more heat it creates that then needs to be removed.
What you just did is just sad. It shows you are a little kid.
TDP is not power consumption. If TDP = 100% of power consumption, it would mean that 100% of the electrical energy is converted into thermal energy, which is impossible, you twat; it would mean a perpetuum mobile. Actually the cpu would be net positive: it would convert 100% of electrical energy into thermal whilst managing to perform another task at no energy cost.
Breaking the laws of physics just because of your AMD fanboyism
i said it RELATES to power consumption. what, you can't read?? can't see past your intel bias?? the more power something uses, the more heat it generates, and therefore the more needs to be dissipated. i also never said anything about 100% power consumption; pulling words out and making things up to try to make yourself sound right? and you are calling me names on top of that; who's the kid here???
hmmm, he doesn't really say amd is being fraudulent; he just doesn't like the idea the chips might not boost or run at what AMD says, but he didn't mention fraud...
and Korguz has a point.. WHY aren't you commenting on the power intel's cpus use, vs what intel says they use?
LOOOOOOL, so we have a guy confirming AMD committed fraud by misleading people about the frequency, and instead of acknowledging the fraud, we're gonna talk about semantics.
Yeah, and if you get sentenced for sexual assault, you should then sue anyone who accused you of rape. Just wow.
"You are uneducated, TDP doesn't mean power consumption or the highest peak but the amount of heat dissipated, it informs you how much of heat the cooler must be able to dissipate in order to keep the cpu cool enough to run. Get it? 1700x TDP was 95W yet there were tasks it managed to consume 120 or even 140w on stock settings. Like do you even watch reviews? It was the same with 2700x."
and yet, you still refuse to admit, that intel has its own issues with fraud and misleading its own customers.
does he actually say it's fraud?? not directly; it seems only YOU keep saying that, and only YOU say amd should be sued for it. again.. i would love to see YOU file a suit against amd for it, considering you are so hung up about it, but you won't, cause you are all talk, no action, and you probably know.. you wouldn't get very far with that lawsuit
I said it a few times... I don't tend to buy amd products, so no, I am not gonna sue anybody.
And as pointed out in the video (in his German one), he works for a retailer selling prebuilt PCs.. People keep returning PCs with AMD cpus because they do not boost to the promised frequency. You see, there are things like laws: if you write 4.6GHz on the box, it must reach it.
You are so knowledgeable, sharp-minded and analytical when it comes to the meaning of words and what people want to say; you should sue Intel on your own, should be easy.
why not?? going by how dead set you are about this.. seems like it would be an easy win for you.. ooooohhhh, in the german one.. i understand now.. too bad i don't speak german, so i can't confirm this... and if someone writes on the box that something uses a certain amount of power.. then it should use that.. not 50 to 100 watts more.. i have a few friends that buy intel's cpus.. they see it uses 95 watts of power.. so they get a HSF that can dissipate that much power.. then wonder why their cpu throttles and runs slow under load... then i point them to the link i just posted, and they are not happy.. and now need to go buy yet another HSF to handle the extra power.
You are so knowledgeable, sharp-minded and analytical when it comes to the meaning of words and what people want to say; you should sue AMD on your own, should be easy. again, too bad you won't.. cause you are all talk. have a good day, sir..
Again, you have once more shown your AMD fanboyism.
It is written there: TDP 95W. I already explained what TDP means. AMD's TDP isn't accurate either.
AMD has 4.6GHz on the box whilst a big number of cpus do not REACH IT AT ALL. There is no "*" next to the 4.6GHz claim, and they do not say that their cpu may not reach the frequency at all. In fact, there is a video from AMD on youtube promising even higher frequencies, lol. Up to 4.75GHz.
So yeah, stop being desperate and forcing Intel into the debate.
Because your childish attempts are futile, this is not about AMD or Intel. It is about us consumers. What will be next? 6 Ghz on the box?
and again, like in another thread, you showed how much you hate amd and are biased against them. you call me an amd fanboy; you are just as much an intel fanboy. FYI, IF you actually READ the link i posted, you would see that intel's 95 watts is pretty much the MINIMUM their chips use; in reality it's more like 50 to 100 watts ABOVE that. and also.. amd is A LOT closer than intel to the TDP they state. but again.. to be fair, amd AND intel use and compute different values for TDP, but you can't see past your hatred for amd to see this.. you are the one that has to resort to name calling, so WHO is being childish?? what will be next, intel claiming their cpus use 100 watts but in reality they use 300?
The ICC compiler is 3x faster than LLVM, and AVX-512 is 2x faster than AVX2. And both were left out of the comparison? Was the comparison designed purely for LLVM compiler users? Used by whom?
ICC is proprietary afaik and Anandtech prefers open compilers. AVX512 should be found in 3DPM and shows utter demolition by the only processor that supports it (7920X).
I considered going with the Ryzen 9 3900X chip and an X570 motherboard for a new rendering system, but since these chips aren't available for less than $820+ anywhere, I guess I'll be back to either the Threadripper or Intel 9000+ series. There is simply no way I'm paying that kind of price for a chip with a Manufacturer's Suggested Retail Price of $499.
@Andrei - I was just digging through reviews again before biting the bullet on a 3900X, and one of the big questions that is not agreed upon in the tech community is gaming performance for PBO vs an all-core overclock, yet you only run 2 benches on the overclocked settings. How can a review be complete with only 2 benches run, neither related to gaming? In a PURELY single-threaded scenario, PBO gives a tiny 2.x percent increase in single-threaded Cinebench. This indicates to me that it is not sustaining the max 4.6 on a single core, or it would have scaled better, so it may not really be comparing 4.6 vs 4.3 even for single-threaded performance. Almost all recent game engines can use at least 4 threads, so I feel your exact same test run through the gaming suite would have shown a consistent winner with the 4.3 all-core OC vs PBO. And in heavily threaded scenarios the gap would keep growing larger, but specifically in today's GAMES, especially if you consider that very few of us have zero background activity, the all-core OC would win hands-down is my guess. We could have better evidence of this if you could run the complete benchmarking suite. (Unless I'm blind and missed it, in which case my apologies :)
I've been messing around with a 3700X, and even with a 14cm Noctua cooling it, it does not sustain the max allowed boost on even a single core with PBO, which is another thing I wish you'd touched on more. During your testing, do you monitor the boost speeds and what percent of the time the chip can stay at max boost over XX minutes?
I am confused by the diagram of the current used by individual cores as the number of threads is increased. Since SMT doesn't double the performance of a core, on the 3900X, for example, shouldn't the number of cores in use increase to all 12 for the first 12 threads, one core for each thread, with all cores then remaining in use as the number of threads continues to increase to 24?
Or is it just that this chart represents power consumption under a particular setting that minimizes the number of cores in use, and other settings that maximize performance are also possible?
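To illustrate what I'd expect the scaling to look like (a toy model; the ~1.3x combined SMT throughput per core is my own assumption, not a measured figure):

```python
# Toy scheduling model for a 12-core/24-thread part: threads get
# their own physical core first, and only past 12 threads do SMT
# siblings share a core. Two threads sharing a core are assumed to
# deliver ~1.3x the throughput of one (an assumption, not a spec).
CORES, SMT_YIELD = 12, 1.3

def throughput(threads: int) -> float:
    """Relative throughput, in units of one thread alone on a core."""
    shared_pairs = max(0, threads - CORES)     # cores running 2 threads
    solo = min(threads, CORES) - shared_pairs  # cores running 1 thread
    return solo + shared_pairs * SMT_YIELD

for t in (1, 6, 12, 18, 24):
    print(f"{t:>2} threads -> {throughput(t):4.1f}x")
```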
FireSnake - Sunday, July 7, 2019 - link
Awesome! I have been waiting for this one.
Let us start reading.
BikeDude - Sunday, July 7, 2019 - link
FWIW, this overclocking guide has helped me a lot: https://www.techpowerup.com/review/amd-ryzen-memor...
mat9v - Sunday, July 7, 2019 - link
Does anyone know if the 3900X has 3 cores per CCX (as in 1 core in each CCX disabled), or does it have two CCXs of 4 cores and two CCXs of 2 cores?
photonboy - Thursday, July 11, 2019 - link
3+3
BLu3HaZe - Tuesday, July 9, 2019 - link
"Some motherboard vendors are advertising speeds of up to DDR4-4400 which until Zen 2, was unheard of. Zen 2 also marks a jump up to DDR4-3200 up from DDR4-2933 on Zen+, and DDR4-2667 on Zen."
How about now? :)
And I believe the authors meant to say that official support is up to 3200 on X570 boards, while older boards were rated lower "officially", corresponding to the generation they launched with. Speeds above that would be listed with (OC) clearly marked in the memory support.
Anything above the 'rated' speeds is technically overclocking the Infinity Fabric, until you run in 2:1 mode, which is only on Zen 2 anyhow, so your mileage will definitely vary.
Even the 9900K 'officially' supports only DDR4-2666 but we all know how high it can go without any issues combined with XMP OC.
Ratman6161 - Wednesday, July 10, 2019 - link
In Zen and Zen+, the Infinity Fabric speed was tied to the memory speed, so overclocking the RAM also overclocked the Infinity Fabric. In Zen 2, the Infinity Fabric is independent of the RAM speed.
Targon - Monday, July 8, 2019 - link
I am curious about the DDR4-3200 CL16 memory in the Ryzen test. CL16 RAM is considered the "cheap crap" when it comes to DDR4-3200, and my own Hynix M-die garbage memory is exactly that: G.Skill Ripjaws V 3200 CL16. On first-generation Ryzen, getting it to 3200 speeds just hasn't happened, and I know that for gaming, CL16 vs. CL14 is enough to cause the slight loss to Intel (meaning Intel wouldn't have the lead in the gaming tests).
Ninjawithagun - Monday, July 8, 2019 - link
Regardless of whether you have a 'crap' DRAM kit with CL16 or a much more expensive kit with a lower CL rating, it isn't going to make any significant difference in performance. This has been proven again and again.
Ratman6161 - Wednesday, July 10, 2019 - link
"CL16 RAM is considered the "cheap crap" when it comes to DDR4-3200"Since when? Yes its cheap(er) but I'd disagree with the "crap" part. I needed 32 Gb of RAM so that's either 2x16 with 16 GB modules usually being double sided (a crap shoot) or 4x8 with 4 modules being a crap shoot. Looking at current pricing (not the much higher prices from back when I bought) New egg has the G-skill ripjaws 2x16 CAS 16 kit for $135 while the Trident Z 2x16 CAS 15 for $210 or the CAS 14 Trident Z for $250. So I'd be paying $75 to $115 more...for something that isn't likely to do any better in my real world configuration. Even if I could hit its advertised CAS 15 or 14, how much is that worth. So I'd say the RipJaws is not "cheap crap". Its a "value" :)
Domaldel - Wednesday, July 10, 2019 - link
It's considered "cheap crap" because you can't guarantee that it's Samsung B-die at those speeds while you can with DDR4 3200 MHz CL14 as nothing else is able to reach those speeds and latencies then a good B-die.What that means is that you can actually have a shot at manually overclocking it further while keeping compatibility with Ryzen (if you tweak the timings and sub-timings) while you couldn't really with other memory kids on the first two generations of Ryzen.
I don't have a Ryzen 3xxx series of chip so I can't really comment on those...
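As a back-of-the-envelope for the CL16 vs. CL14 debate above: absolute CAS latency in nanoseconds is just cycles divided by the I/O clock (half the MT/s rating). A quick sketch using the kits mentioned in this thread (my arithmetic, not benchmark data):

    def cas_ns(cl, ddr_mts):
        return cl / (ddr_mts / 2.0) * 1000.0   # cycles / MHz -> ns

    print(cas_ns(16, 3200))   # 10.0  ns (DDR4-3200 CL16)
    print(cas_ns(14, 3200))   # 8.75  ns (DDR4-3200 CL14)
    print(cas_ns(16, 2666))   # ~12.0 ns (DDR4-2666 CL16)

First-word latency is only part of the picture, of course; subtimings and whether the IMC actually holds the rated speed matter at least as much, which is the B-die point being made here.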
WaltC - Monday, July 15, 2019 - link
Since about the 2nd AGESA implementation, on my original X370 Ryzen 1 mboard, my "cheap crap"...;) Patriot Viper Elite CL16 2x8GB has had no problem with 3200MHz at stock timings. Used the same on an X470 mboard, and now it's running at 3200MHz on my X570 Aorus Master board--no problems.
jgraham11 - Tuesday, July 16, 2019 - link
DDR4-3200 is apparently not an overclock. Says so on AMD's specs page for the 3700X: https://www.amd.com/en/products/cpu/amd-ryzen-7-37...
RoboJ1M - Sunday, July 7, 2019 - link
Wait, the memory controller's on the IO die for Zen 2, right? I'm sure it's on the IO die.
John_M - Sunday, July 7, 2019 - link
Yes. The integrated memory controller is on the IO die, which is part of the Ryzen SoC, not the chipset.
BushLin - Monday, July 8, 2019 - link
Right now, there's no indication what CL / timings are applied to all the other systems. CL16 is indeed bottom of the barrel for DDR4-3200; you would hope there are no shenanigans with Intel getting CL12 DDR4-2666. Why not just run all the systems with the same DDR4-3200? It's not like they can't do it.
profiaudi - Wednesday, July 10, 2019 - link
Not to be too rude, but the IMC is on the io chipLet, not the chipSet. The chipset actually has an important role for the memory speed, in that a chipset defines a platform and a platform imposes requirements on the power supply and trace routing. While the IMC in 3rd gen can handle 3200MT/s+ completely fine, it is guaranteed to do so only on X570. Anything older is a dice roll, as the boards were not designed for such speeds (not a requirement for the older platform).
Andrei Frumusanu - Sunday, July 7, 2019 - link
Just as a note for those who haven't been following: This review wasn't written by our usual resident CPU editor, Dr Ian Cutress as he unfortunately the timing didn't work out. We only had a few days' time with the new Ryzen CPUs; as such, you might notice a few bits and pieces missing in our article that we'll try to address in the next hours and days. We'll be trying to update the piece with more information and data as soon as we can. Thanks.
Also huge thanks to Gavin Bonshor, who actually did all the testing and collected all the data for this review - thumbs up to him.
plonk420 - Sunday, July 7, 2019 - link
hoping a speedy recovery for him! loved his video with Wendell!
loving the article, too. don't suppose you could test cross-CCX latency?
plonk420 - Sunday, July 7, 2019 - link
e.g. pcper.com/2017/06/the-intel-core-i9-7900x-10-core-skylake-x-processor-review/3/
main interest is if it is low enough to be harnessed by RPCS3 (the PS3 emulator)
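For anyone wanting to poke at cross-CCX latency themselves, the basic shape of such a test is two pinned processes bouncing a shared value back and forth. The toy sketch below (Python, Linux-only, my own construction rather than pcper's or AnandTech's methodology) wildly inflates the absolute numbers because of interpreter overhead, but the same-CCX vs. cross-CCX delta should still be visible:

    import multiprocessing as mp
    import os, time

    def pong(flag, core):
        os.sched_setaffinity(0, {core})      # pin this process to one core
        while True:
            v = flag.value
            if v == 1:
                flag.value = 2               # bounce the ping back
            elif v == -1:
                return                       # shutdown signal

    def pingpong_ns(core_a, core_b, iters=20000):
        flag = mp.Value('i', 0, lock=False)  # raw shared int, no lock overhead
        p = mp.Process(target=pong, args=(flag, core_b))
        p.start()
        os.sched_setaffinity(0, {core_a})
        t0 = time.perf_counter()
        for _ in range(iters):
            flag.value = 1
            while flag.value != 2:
                pass                         # spin until the other core replies
        dt = time.perf_counter() - t0
        flag.value = -1
        p.join()
        return dt / iters / 2 * 1e9          # rough one-way latency, in ns

    if __name__ == "__main__":
        print(pingpong_ns(0, 1))   # neighbouring cores (numbering is topology-dependent)
        print(pingpong_ns(0, 6))   # hopefully a core on another CCX/chiplet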
ballsystemlord - Sunday, July 7, 2019 - link
CCX benchmarks would be nice.
IF power benchmarks were also done last time and are probably in the works.
shakazulu667 - Sunday, July 7, 2019 - link
Are the Intel results with or without Spectre et al. mitigations?
Ryan Smith - Sunday, July 7, 2019 - link
They are with Spectre and Meltdown mitigations. The results are not new enough to include anything for Fallout/ZombieLoad.
djayjp - Sunday, July 7, 2019 - link
So results for Intel chips are completely invalid then.
futrtrubl - Sunday, July 7, 2019 - link
You will have to explain that, then. Comparing Intel with mitigations vs AMD with mitigations.
djayjp - Sunday, July 7, 2019 - link
No. Fallout/ZombieLoad does not affect AMD chips.
djayjp - Sunday, July 7, 2019 - link
Intel performance will suffer whereas AMD's won't be affected.
WaltC - Sunday, July 7, 2019 - link
Ha-ha...;) So because AMD has a newer architecture without most of the vulnerabilities that plague Intel's ancient CPU architectures--it should be held against AMD? Rubbish...;) Look, what is unfair about testing both architectures/cpus with all the mitigations that each *requires*? I can't see a thing wrong with it--it's perfect, in fact.
extide - Sunday, July 7, 2019 - link
They tested Intel WITHOUT the Fallout/ZombieLoad mitigations, which would affect them. Probably not by much, though, honestly.
RSAUser - Monday, July 8, 2019 - link
Well, for a lot of tests the results are close enough to margin of error that the mitigations would put AMD in the lead. The tests should reflect the real world as of when the article is published; using old results without declaring that Intel doesn't have mitigation applied on every page is the equivalent of falsifying the results as people will buy based on these tests.
mkaibear - Monday, July 8, 2019 - link
"using old results without declaring that Intel doesn't have mitigation applied on every page is the equivalent of falsifying the results as people will buy based on these tests."Oh, that's just inane. They quite openly state the exact test specification on the "Test Bed and Setup" page, including which mitigations are applied. Arguing that not putting one particular piece of information on every page means it's the equivalent of falsifying the results is completely ridiculous.
RSAUser - Tuesday, July 9, 2019 - link
How many people go through the Test Bed and Setup page?
Meteor2 - Sunday, July 14, 2019 - link
Pretty much everyone reading such an in-depth review, I should think.
Daeros - Monday, July 15, 2019 - link
The only mitigation for MDS is to disable Hyper-Threading. I feel like there would be a pretty significant performance penalty for this.
Irata - Sunday, July 7, 2019 - link
Well, at least the Ryzen 3000 CPUs were tested with the latest Windows build that includes Ryzen optimizations, but tbh I find it a bit "lazy" at least to not test the Intel CPUs on the latest Windows release, which forces security updates that *do* affect performance negatively.
This may or may not have changed the final results, but it would be more proper.
Oxford Guy - Sunday, July 7, 2019 - link
Lazy doesn't even begin to describe it.
Irata - Sunday, July 7, 2019 - link
Thing is, I find this so completely unnecessary.
Not criticising the review per se, but you see AT staff going wild on Twitter over people accusing them of bias, when simple things like testing both Intel and AMD systems on the same Windows version would be an easy way to protect themselves against criticism.
It's the same as the budget CPU review where the Pentium Gold was recommended due to its price/performance, but many posters pointed out that it simply was not available anywhere for even near the suggested price, and AT failed to acknowledge that.
Zombieload? Never heard of it.
This is what I mean by lazy - acknowledge these issues or at least give a logical reason why. This is much easier than being offended on Twitter. If you say why you did certain things, there is no reason to post "Because they crap over the comment sections with such vitriol; they're so incensed that we did XYZ, to the point where they're prepared to spend half an hour writing comments to that effect with the most condescending language." which basically comes down to saying "A ton of our readers are a*holes."
Sure, PC related comment sections can be extremely toxic, but doing things as proper as possible is a good way to safeguard against such comments or at least make those complaining look like ignorant fools rather than actually encouraging this.
John_M - Sunday, July 7, 2019 - link
A good point and you made it very well and in a very civil way.
Ryan Smith - Monday, July 8, 2019 - link
Thanks. I appreciate the feedback, as I know first hand it can sometimes be hard to write something useful.
When AMD told us that there were important scheduler changes in 1903, Ian and I both groaned a bit. We're glad AMD is getting some much-needed attention from Microsoft with regards to thread scheduling. But we generally would avoid using such a fresh OS, after the disasters that were the 1803 and 1809 launches.
And more to the point, the timeframe for this review didn't leave us nearly enough time to redo everything on 1903. With the AMD processors arriving on Wednesday, and with all the prep work required up to that, the best we could do in the time available was run the Ryzen 3000 parts on 1903, ensuring that we tested AMD's processor with the scheduler it was meant for. I had been pushing hard to try to get at least some of the most important stuff redone on 1903, but unfortunately that just didn't work out.
Ultimately laziness definitely was not part of the reason for anything we did. Andrei and Gavin went above and beyond, giving up their weekends and family time in order to get this review done for today. As it stands, we're all beat, and the work week hasn't even started yet...
(I'll also add that AnandTech is not a centralized operation; Ian is in London, I'm on the US west coast, etc. It brings us some great benefits, but it also means that we can't easily hand off hardware to other people to ramp up testing in a crunch period.)
RSAUser - Monday, July 8, 2019 - link
But you already had the Intel processors beforehand, so you could have tested them on 1903 without having to wait for the Ryzen CPUs? Your argument is weird.
Daeros - Monday, July 15, 2019 - link
Exactly. They knew that they needed to re-test the Intel and older Ryzen chips on 1903 to have a level, relevant playing field. Knowing that it would penalize Intel disproportionately to have all the mitigations 1903 bakes in, they simply chose not to.
Targon - Monday, July 8, 2019 - link
Sorry, Ryan, but test beds are not your "daily drivers". With 1903 out for more than one month, a fresh install of 1903 (Windows 10 Media Creation tool comes in handy), with the latest chipset and device drivers, it should have been possible to fully re-test the Intel platform with all the latest security patches, BIOS updates, etc. The Intel platform should have been set and re-benchmarked before the samples from AMD even showed up.
It would have been good to see proper RAM used, because anyone who buys DDR4-3200 RAM with the intention of gaming would go with DDR4-3200CL14 RAM, not the CL16 stuff that was used in the new Ryzen setup. The only reason I went CL16 with my Ryzen setup was because when pre-ordering Ryzen 7 in 2017, it wasn't known at the time how significant CL14 vs. CL16 RAM would be in terms of performance and stability (and even the ability to run at DDR4-3200 speeds).
If I were doing reviews, I'd have DDR4-3200 in various flavors from the various systems being used. Taking the better stuff out of another system to do a proper test would be expected.
Ratman6161 - Thursday, July 11, 2019 - link
"ho buys DDR4-3200 RAM with the intention of gaming would go with DDR4-3200CL14 RAM"Well I can tell you who. First Ill address "the intention of gaming". there are a lot of us who could care less about games and I am one of them. Second, even for those who do play games, if you need 32 GB of RAM (like I do) the difference in price on New Egg between CAS 16 and CAS 14 for a 2x16 Kit is $115 (comparing RipJaws CAS 16 Vs Trident Z CAS 14 - both G-Skill obviously). That's approaching double the price. So I sort of appreciate reviews that use the RAM I would actually buy. I'm sure gamers on a budget who either can't or don't want to spend the extra $115 or would rather put it against a better video card, the cheaper RAM is a good trade off.
Finally, there are going to be a zillion reviews of these processors over the next few days and weeks. We don't necessarily need to get every single possible configuration covered the first day :) Also, there are many other sites publishing reviews, so it's easy to find sites using different configurations. All in all, I don't know why people are being so harsh on this (and other) reviews. It's not like I paid to read it :)
Irata - Monday, July 8, 2019 - link
Thanks for your reply Ryan. I did not intend to be rude when saying "lazy", but rather to show that I do not think this is something that was done by intent.
Like I said - mention these things and it helps clear up misunderstandings.
It is definitely very positive that you test the Ryzen CPU with the latest builds though.
I also like that you mention if prices include an HSF or not, but it would have been nice to mention the price of the HSF used for the Intel systems (when not boxed), as e.g. the Thermalright True Copper is a rather expensive CPU cooler.
I think you already addressed not using a faster NVMe drive on Twitter (a PCIe 4 version would have been ideal if available - this would also have given an indication of potentially increased system power use for Ryzen with PCIe 4 drives).
Those are little nitpicks, so not intended to be a criticism of the overall article. It is just that people tend to be rather sensitive when it comes to Intel vs. AMD CPU comparisons, given Intel's history of things they are willing to do to keep mind- and marketshare.
Daeros - Monday, July 15, 2019 - link
Whether or not it is intentional, AT has had an increasing Intel bias over the last several years. Watch to see how long it takes for an AMD article to get pushed down by rumors or vaporware from Intel or Nvidia.
rarson - Monday, July 8, 2019 - link
I think Ryan brings up several salient points, and whether or not you think that they did or did not have the time to do what you wanted (they were also a man down without Dr. Cutress), the fact of the matter is that AMD dropped a bunch of CPUs and GPUs all at once and literally everyone was scrambling to do what they could in order to cover these launches.
I don't think it's a coincidence that even in the tech YouTube space, if you watch 10 different reviews you'll largely see 10 different testing methodologies and 10 (somewhat) different results. Every single reviewer I've talked to said that this was one of, if not the most, difficult launch windows they've ever dealt with. Additionally, launching on a weekend with all of its associated complications (not just on reviewers' ends, but partners as well) is a bitch, with everyone scrambling at the last minute on their days off getting in last-minute updates and whatnot.
When AMD tells you at the last minute, "Oh, the brand new Windows 10 update includes something new" you don't necessarily have time to go back and redo all the benchmarks you had already done on the Intel platform.
TL;DR while there may have been flaws in some of the testing, take the details with a grain of salt and compare them to the myriad of other reviews out there for a better overall picture if necessary.
Irata - Tuesday, July 9, 2019 - link
You are making a good point, and this was an - unfortunately - typical AMD CPU launch, with things still being beta. I would assume testers are none too happy about having to re-do their tests.
What I don't get from AMD (even if (and that's a capital IF) it's not their fault, it's their responsibility) is how they cannot see how this makes their products appear in a less favorable light. Let's say the buggy BIOS cost them 5%; the conclusion with 5% better performance would have been even more in Ryzen 3000's favor.
It's a bit like showing up to a job interview wearing the clothes you wore for the previous day's physical activity.
Daeros - Monday, July 15, 2019 - link
Lazy isn't in it. Intentionally misleading is more like it. On one page, where AMD wins more than it loses in the charts, out of 21 paragraphs, 2 had something positive to say about AMD or Ryzen 3k without following up with something along the lines of "but we know Intel's got new tech coming, too".
Ryan Smith - Monday, July 8, 2019 - link
To be sure, they're still valid. The patches for Fallout and ZombieLoad are not out yet (I only mention them because the vulnerabilities have already been announced).
RSAUser - Monday, July 8, 2019 - link
They've been out since 14 May, what are you talking about?
djayjp - Monday, July 8, 2019 - link
Don't forget RIDL
Meteor2 - Sunday, July 14, 2019 - link
RIDL and ZombieLoad are the same thing.
Yes, the Intel CPUs should have been re-benchmarked on 1903, updated after 14 May when the OS-side fixes for the new MDS-class flaws were released. That's only fair, and it's quite reasonable to expect that users will apply security updates, not leave their systems unpatched and vulnerable for perhaps a percent or two of performance.
FireSnake - Monday, July 8, 2019 - link
Ryan: how is this not explained in the article? I have been reading this site for more than a decade and I trust you most, and I trust you will provide such information. I would expect you to update the article with this info.
shakazulu667 - Sunday, July 7, 2019 - link
Is there a compilation test coming for Chromium or another big source tree, that would show if the new IO arch brings wider benefits for such CPU+IO workloads?
Andrei Frumusanu - Sunday, July 7, 2019 - link
We'll be re-adding the Chromium compile test in the next few days - there were a few technical hiccups when running it.
shakazulu667 - Sunday, July 7, 2019 - link
Thanks, I'm looking forward to it, especially curious if AMD can utilize NVMe better for this kind of workload.
Andrei Frumusanu - Sunday, July 7, 2019 - link
Unfortunately we don't test the CPU suite with different SSDs for this.
shakazulu667 - Sunday, July 7, 2019 - link
Is there another test in your suite that could show improvements with IO, incl. NVMe?
RSAUser - Monday, July 8, 2019 - link
But one of the big features is PCIe 4 support, so testing with an NVMe drive as well to show the difference would be important? People spending $490 on a CPU alone are probably going to be buying an NVMe SSD.
A5 - Monday, July 8, 2019 - link
There aren't any PCIe 4 SSDs for them to test with.
0ldman79 - Monday, July 8, 2019 - link
Yep, PCIe 4.0 NVMe is going to be beta at this point at best.
Last I read, the first 4.0 NVMe drives to be released are essentially running an overclocked 3.0 interface, and the list of NVMe drives that can saturate 3.0 is pretty short as it is.
RSAUser - Tuesday, July 9, 2019 - link
That's because these are the first PCIe 4 slots that exist; you can't release a product that can't even be used.
Using an overclocked drive in lieu of a native 4.0 one is the proper thing to do.
Kevin G - Tuesday, July 9, 2019 - link
For consumers, yes, but the first PCIe 4.0 host system was the IBM POWER9, released ~18 months ago. As such, there are a handful of NICs and accelerators for servers out there today.
The real oddity is that nVidia doesn't support PCIe 4.0. Volta's NVLink has a PHY based upon PCIe 4.0. Turing should as well, though nVidia doesn't pair those chips with the previously mentioned POWER9.
RSAUser - Thursday, July 11, 2019 - link
Well, tests of the new AMD card show nearly no difference for PCIe 4, so no point yet.
0ldman79 - Sunday, July 7, 2019 - link
I'm not done yet, but you guys have done an excellent job so far.
The only differences I can spot are just in how both of you phrase things; quality is excellent as always.
JustinTeim - Sunday, July 7, 2019 - link
Andrei, what is the status of ECC support for these CPUs?
Yorgos - Sunday, July 7, 2019 - link
He is not even a Dr., to start with.
Ian Cutress - Sunday, July 7, 2019 - link
DPhil awarded from Oxford in Computational Chemistry in 2011. (Oxford call it a DPhil, but it is the old term for a Ph.D.)
mikato - Thursday, July 11, 2019 - link
I read that as Dr Phil at first. So to be clear, you do not have a Dr Phil degree. I'm not sure what that would be, but I know it would be terrible.
XsjadoKoncept - Sunday, July 21, 2019 - link
You'd probably have to be friends with Oprah. Yeck.
WaltC - Sunday, July 7, 2019 - link
Hope all is well for the good Dr. Cutress!
Oxford Guy - Sunday, July 7, 2019 - link
Yes, like all the results that include the performance regressions on Intel from actually dealing with reality.
Reality is that they exist and need to be patched, not ignored.
That includes turning off hyperthreading.
cheshirster - Monday, July 8, 2019 - link
"Dr Ian Cutress as he unfortunately the timing didn’t work out"I bet he just don't want to risk his reputation.
Korguz - Monday, July 8, 2019 - link
huh ???
mkozakewich - Saturday, July 13, 2019 - link
"...as unfortunately the timing didn’t work out."You should increase his voltage a little and reboot, that might help.
Meteor2 - Monday, July 15, 2019 - link
It's hard to get one's head around this, but basically: *all* the Intel benchmarks *do not* include the security patches for the MDS-class flaws. The 9000 and 8000 series tests do include the OS-side Spectre fixes, but that's it. No OS fixes for other CPUs, and no motherboard firmware fixes for any Intel CPUs.
At the very least, all the Intel CPUs should be retested on Windows 10 1903, which has the OS-side MDS fixes.
Most if not all the motherboards used for the Intel reviews can also have their firmware upgraded to fix the Spectre and in most cases MDS flaws. Do it.
This is sensible and reasonable to do: no sensible and reasonable user would leave their OS vulnerable. Maybe the motherboard, because it's a bit scary to do, but as that can be patched, it should be by reviewers.
This would result in all the Intel scores being lower. We don't know by how much without this process actually being done. But until it is, the Intel results, and thus the review itself, are invalid.
While you're at it Anandtech, each year buy the latest $999 GPU for CPU testing. Consider it a cost of doing business. Letting the GPU bottleneck the CPU on most game resolutions benchmarked is pointless.
plonk420 - Sunday, July 7, 2019 - link
mountain time zone, best time zone... 7am 7/7!
exactopposite - Sunday, July 7, 2019 - link
Been waiting on this one for a long time
Eris_Floralia - Sunday, July 7, 2019 - link
It's really nice to see Andrei starting to take part in desktop processor reviews, and Gavin Bonshor's hard work!
mjz_5 - Sunday, July 7, 2019 - link
Intel getting 5% better FPS in games is really not a win. I'll consider that a tie. In multiple applications AMD gets 20% more performance. That's a win!!
Dragonstongue - Sunday, July 7, 2019 - link
hopefully Anal lists :P see how much a "win" the Ryzen 3k / x5xx / Navi truly are, not only to get AMD margins even higher but to take more market share from Intel and "stagnate" Nvidia's needing to "up the price" to make more $$$$$$ when AMD "seems" to be making "as much if not more" selling a small amount less per unit (keep in mind, AMD is in the next PlayStation and Xbox, which are 99.9% likely to be using the same silicon). So AMD takes a "small hit" to get as many Ryzen gen 3 and Navi "in the world", which drums up market/mindshare, which is extremely important for any business and at this stage in the game is VITAL for AMD.
sor - Sunday, July 7, 2019 - link
I'm honestly wondering what the point is of the gaming benchmarks for CPU tests anymore.
It seems like the game is either so easy to render that we are comparing values in the hundreds of FPS, or they're so hard to render that it's completely GPU dependent and the graph is flat for all CPUs.
In the vast majority of tests here one would have an identical experience with any of the top four CPUs tested.
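One way to put those triple-digit FPS numbers in perspective is to convert them into per-frame times; a tiny sketch (my arithmetic, with gaps of the size seen in charts like these):

    def frame_ms(fps):
        return 1000.0 / fps

    for fast, slow in [(150, 135), (500, 450)]:
        delta = frame_ms(slow) - frame_ms(fast)
        print(f"{fast} vs {slow} fps -> {delta:.2f} ms/frame difference")
    # 150 vs 135 fps -> 0.74 ms/frame difference
    # 500 vs 450 fps -> 0.22 ms/frame difference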
Targon - Monday, July 8, 2019 - link
Game engines are starting to use more cores, and at lower resolutions (which do not stress the video card all that much) will show improvements/benefits of using one CPU or even platform over another. In this review, due to the RAM being used, the gaming benchmarks are almost invalid (DDR4-3200 CL16), since moving to CL14 RAM would potentially eliminate any advantage Intel has in these benchmarks.
Tkan215 - Monday, July 8, 2019 - link
True, the future is more cores. People and customers should wake up to the fact that single core isn't the future, it's just a stepping stone. More cores!
Tkan215 - Monday, July 8, 2019 - link
Yes, I called it a tie because of the margin of error and because patches were not taken into account. Also, Intel gets enormous game support, so there are really many factors; they are not on an equal playing ground.
watzupken - Sunday, July 7, 2019 - link
Intel's bad moment just started. Clearly there are some areas where Intel chips are still doing well, but the victories are significantly fewer now. Looking at the power metrics, they lost the fab advantage, so they are now at a disadvantage. To top it off, Intel is still charging monopolistic prices on their existing chips. I have not really seen the rumored price cuts, which may be too little and too late.
StrangerGuy - Sunday, July 7, 2019 - link
IMO the $200 CPU landscape is now: buy the 3600 non-X, or get ripped off by anything Intel, even if the latter is cheaper by $50.
mikato - Thursday, July 11, 2019 - link
Yeah I really wish a 3600 was tested.
Maxiking - Sunday, July 7, 2019 - link
Intel is waiting for 10nm; considering the fact AMD didn't even match Skylake pre-patch performance... IF Intel fixes the 10nm, AMD will be smashed to the ground. It is a big if, but it is a fact.
Mahigan - Sunday, July 7, 2019 - link
AMD actually beats Intel on a clock for clock basis now. What you're seeing is Intel's higher boost clocks saving the day (somewhat).
If Intel can't go past 5GHz with their 10nm, due to the new core design, and is only able to get say 10-15% more performance per clock, then Gen 3 Ryzen, with its 7nm+ and the improvements AMD isn't done making, will most likely end up in tough competition.
just4U - Sunday, July 7, 2019 - link
Intel won't be doing any smashing anytime soon there, Max. I was damn pleased with the overall value/performance of my 2700X in comparison to my highly overclocked 8700K (4.9GHz) and basically shrugged off the 9 series Intel. The addition of a 12 core with great performance levels really changes the game.
Even if Intel brings something out, it's not going to destroy anything. All we've seen over the past 5 years is small bumps upwards in performance.
Korguz - Sunday, July 7, 2019 - link
Maxiking, intel has been waiting for 10nm for years now.. and they are still kind of waiting for it. skylake prepatch? as in Spectre and Meltdown? um.. kind of need those fixes/patches in place, even if it means a performance hit.. but by all means.. get Skylake, dont fix/patch it, and worry about that.. and spend more.. its up to you... either way.. Zen 2 looks very good....
Targon - Monday, July 8, 2019 - link
What RAM was used in the Intel system? The Ryzen system used DDR4-3200, but it's CL16, not CL14 RAM. That CAS latency difference would be enough for Ryzen to at least tie the 9900K if not beat it in the gaming tests.
Tkan215 - Monday, July 8, 2019 - link
I don't think so; it's not easy to refine 10nm like you think. How many years has it taken Intel to refine 10nm? It has already been 4 to 5 years, so don't get your hopes up. If the volume isn't there, there is no chance. AMD will surely move to 7nm EUV quicker, then 5nm.
Tkan215 - Monday, July 8, 2019 - link
I haven't seen them drop any prices; the i9-9900K went back up at amazon.com. Intel continues to ignore, not respond to, and not care about their competition. They want their margin; that is all this company cares about, not your feelings or desires.
TEAMSWITCHER - Sunday, July 7, 2019 - link
It's summer time in Michigan and I have no desire to upgrade right now... I can wait for the flagship 3950X in September.
Maxiking - Sunday, July 7, 2019 - link
Local anandtech yield and node experts got hit again. I wonder how many hits you can take before you shut up.
As predicted, Intel is still faster in games and AMD OC ability is more or less unchanged, slightly worse. It is a new node after all, but yeah, you know better, so keep dreaming about those 5GHz on the majority of chips.
Teckk - Sunday, July 7, 2019 - link
So more cores at the same TDP as 2000 series Ryzen is nothing? Ok.
Maxiking - Sunday, July 7, 2019 - link
That isn't the thing I was talking about. My point was that local experts, I mean, trolls, know nothing about the manufacturing cost, yields, or about the node in general. As has been shown recently in the reviews, the OC ability of the chips is terrible and lower core count parts tend to perform worse, reaching only 4.1 - 4.2 GHz.
Teckk - Sunday, July 7, 2019 - link
Ah, got it. It is an improvement, but not good enough.
Maxiking - Sunday, July 7, 2019 - link
It is good enough in terms of competition and that we can get things cheaper.
But not when the raw performance is considered. It is a hypothetical scenario, but had there been no 10 nm problems for Intel, AMD would have been in the Bulldozer position again.
catavalon21 - Sunday, July 7, 2019 - link
I haven't owned an AMD CPU since my K500 a very long time ago, but let's call it what it is - AMD has a CPU at the $500 price point that Intel is charging $1200 presently to compete with, and Intel's solution uses far more power. That's a win for AMD in any domain.
imaheadcase - Sunday, July 7, 2019 - link
The problem is that the PC market is stagnant atm; if you are already an Intel owner, there is absolutely no reason to upgrade to an AMD CPU. Most people who have systems now don't really have any need to upgrade like they used to.
He stated in the article it took AMD 15 YEARS to get this good CPU finally out and sounded like he was impressed by that?
It's an impressive CPU, but let's be real here, Intel has dominated the market for years already because it has better marketing, better suppliers.
Based on previous article comments, most people are still rocking a 2600K CPU.. FROM 2011! It's still a very good CPU.
That's not counting the price difference. While yes, the one Intel CPU is crazy expensive, it's not a normal CPU most people have to go by; if you're a regular user with the previously mentioned 2600K CPU, going the AMD route requires a total system overhaul... which, to be honest, is a bet that a new AMD system is going to last as long as the 2600K did for you.
catavalon21 - Sunday, July 7, 2019 - link
The 2600K had legs as good as any modern CPU, but I don't agree that "most" people are still using a CPU 6 to 8 years old.
yeeeeman - Monday, July 8, 2019 - link
Most people are still on Sandy Bridge, Ivy Bridge or Haswell. All of these are nothing compared to what the 3900X offers, and also the 3700X. That is the main idea here. There is no point in buying a 9900K just to pay a lot more for a 5% FPS increase at 1080p. That is nitpicking at its best. You are much better off with a 3900X. You get 2950X MT performance, you get more than enough gaming performance, and you get lower power consumption than the 9900K.
Namisecond - Sunday, July 7, 2019 - link
Intel had better marketing, better suppliers, better chipsets, better networking, etc. AMD having a better CPU just doesn't seem to be enough.
just4U - Sunday, July 7, 2019 - link
Better chipsets? AMD just released the X570; what does the Z390 chipset offer that the X570 does not?
Meteor2 - Sunday, July 14, 2019 - link
"He stated in article it took amd 15 YEARS to get this good CPU finally out and sounded like he was impressed by that?" No. That's why it was awarded a Silver.Korguz - Sunday, July 7, 2019 - link
Not according to Maxiking, catavalon21... starting to sound like Maxiking is another HStewart.....
shabby - Wednesday, July 10, 2019 - link
Where is HStewart anyway? Lol
Oliseo - Sunday, July 7, 2019 - link
"But not when the raw performance is tconsidered. It is a hypothetical scenario"How can you take someone seriously when they say this on an article that provides the evidence they claim is "hypothetical".
You simply can't. Either they think you're stupid, or they don't know they are.
It's one or the other. What do you reckon it is?
Andrei Frumusanu - Sunday, July 7, 2019 - link
Please don't take our current numbers as any sign of overclockability - we didn't have enough time for it and motherboard firmwares are still getting updated.
Maxiking - Sunday, July 7, 2019 - link
Your numbers are on par with the rest of the world, so you maxed out those chips.
sor - Sunday, July 7, 2019 - link
Nobody has final motherboard firmwares for these. We will see what they are capable of in the coming weeks.
Maxiking - Sunday, July 7, 2019 - link
Those cpus don't even POST past 4.3ghz, so no, firmware isn't the problem and never was and it never improved OC-ing, only compatibility and made systems more stable. They reached the limit of the node.
Oliseo - Sunday, July 7, 2019 - link
"Those cpus don't even POST past 4.3ghz, so no, firmware isn't the problem and never was and it never improved OC-ing, only compatibility and made systems more stable. They reached the limit of the node."Like you've reached the limit of your ability to speak in a way that others can make sense of? Perhaps you need to focus on that rather than whatever multinationals are up to you're trying to defend. It will do you more good in the long term, trust that.
RSAUser - Monday, July 8, 2019 - link
Sorry, what? There are benchmarks out showing 5 GHz all core on the 3900X; that is with nitrogen, but I'm expecting at least 4.8GHz possible.
Intel is worse per clock than AMD with the new node, plus AMD has about 105W to play with on the 3900X to match the power usage of the 9900K on all core.
TEAMSWITCHER - Tuesday, July 9, 2019 - link
4.8GHz won't happen.
RSAUser - Thursday, July 11, 2019 - link
There are reddit posts showing 4.8 all-core on air; you'll see more posts about that soon.
DigitalFreak - Sunday, July 7, 2019 - link
@Maxiking So you call out trolls while being one yourself.
Maxiking - Sunday, July 7, 2019 - link
The only troll here is AMD. They advertise a 4.6GHz boost while reaching 4.2GHz, and 4.3GHz max when manually OC-ed. This is called false advertising and fraud.
Oxford Guy - Sunday, July 7, 2019 - link
But we'll ignore having to completely disable hyperthreading on Intel's hyperthreading-advertised CPUs.
Phynaz - Sunday, July 7, 2019 - link
You keep repeating this as if by doing so it will somehow become true.
Mugur - Monday, July 8, 2019 - link
You don't know what boost means, then... All-core overclock has nothing to do with boost.
Xyler94 - Monday, July 8, 2019 - link
Now I know you're an idiot. That's single core boost, not all core. Intel doesn't even state all core boost... except on a single product, the 9900KS, which is a last ditch effort to be like "Hey guys, we can hit 5GHz all core! don't look at the Ryzen chips... please!"
FYI, AMD hit 5GHz all core before Intel did, with the terrible FX-9590 or something like that. It was not a good CPU.
Tkan215 - Monday, July 8, 2019 - link
This means AMD can get great clock boosts easily with time. If Intel could go from a 4.0 to a 5.0 GHz wall, AMD most likely can in the future.
sor - Sunday, July 7, 2019 - link
The gaming benchmarks are mostly flat, AMD and Intel within a 1-2% margin of error.
If you look at the one big win for Intel, do you have a 712hz monitor that AMD just can't keep up with at a paltry 655fps?
Korguz - Sunday, July 7, 2019 - link
Maxiking, still faster by 5%, and probably costing MORE for that 5%.. no thanks... seems intel is also dreaming about 5 GHz. you are criticizing AMD for something they haven't really been able to do in years.. so go buy intel cpus, and pay a lot more....
Mahigan - Sunday, July 7, 2019 - link
Wow.. you're angry. I think you take this far too seriously.
wilsonkf - Sunday, July 7, 2019 - link
Were the tests re-run on the Intel CPUs? The 9900K seems to be losing more ground to the 9700K than when they were launched. Is it the effect of the patches on Intel HT CPUs?
RSAUser - Monday, July 8, 2019 - link
In a comment above, AnandTech states there is no ZombieLoad mitigation, plus it's not 1903, which forces those patches to be enabled.
Meteor2 - Sunday, July 14, 2019 - link
Might be related to the Spectre patches. Can't remember the timing between those and the 9000 series.
spaceship9876 - Sunday, July 7, 2019 - link
I was hoping that you would clock the 2700X, 3700X and Intel chip at the same manual clock speeds so that we can see a real IPC comparison between Zen+, Zen 2 and Intel.
Andrei Frumusanu - Sunday, July 7, 2019 - link
CPUs are designed with memory latencies in mind when clocking at a certain clock - the current comparison at slightly different clocks is still perfectly valid for IPC.
RSAUser - Monday, July 8, 2019 - link
Then show a power difference, since the 9900K is double the draw...
Mugur - Monday, July 8, 2019 - link
Done by other sites / YouTube channels (at 4 GHz). Ryzen 3000 is destroying Intel at the same clock.
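For what it's worth, the per-clock arithmetic is easy to do at home: divide each score by the clock it was achieved at. A minimal sketch with made-up numbers (not data from this review):

    def relative_ipc(score_a, ghz_a, score_b, ghz_b):
        return (score_a / ghz_a) / (score_b / ghz_b)

    # Hypothetical: chip A scores 200 at 4.6 GHz, chip B scores 205 at 5.0 GHz.
    print(relative_ipc(200, 4.6, 205, 5.0))   # ~1.06 -> A leads by ~6% per clock

The caveat is the one Andrei gives above: memory latency doesn't scale with core clock, so forcing both chips to the same frequency isn't automatically a fairer comparison.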
isthisavailable - Sunday, July 7, 2019 - link
With an improved 7nm+ process for next gen, AMD can hit 5GHz and take the single core crown.
Maxiking - Sunday, July 7, 2019 - link
Just like they did with the 14nm+ aka 12nm.
Just like they were supposed to do with this gen.
But, but, but..?
sor - Sunday, July 7, 2019 - link
Actually that's a bad example, because we did see 200-300MHz bumps each generation between Ryzen 1000 and 3000. At that pace it's possible to see a 5GHz turbo next generation, or close enough.
If you're saying that everyone expected 5GHz with Ryzen 2000, then yes, if that's true then those people were being unreasonable. At this point though it's not a big leap.
Maxiking - Sunday, July 7, 2019 - link
It is a perfectly valid example; the bump between 1st gen and 2nd gen Ryzen was 200MHz. Max OC on 1st gen was 4.1GHz, the max on the 2nd gen was 4.3GHz.
There is no bump this year. I am highly skeptical they would be able to reach 4.6GHz on all cores with 7nm+ next year. TSMC nodes are nothing special; you hit the wall and you are done, regardless of voltage used.
sor - Sunday, July 7, 2019 - link
Last year we had 3.7/4.3GHz as the flagship Ryzen 7 desktop. This year we get 3.9/4.5GHz in Ryzen 7 and up to 4.7GHz single threaded in Ryzen 9.
These parts are well beyond anything we saw in the 2000 series; to claim there has been no frequency improvement is disingenuous at best. Going to 5GHz is just a small process tweak away this time.
Maxiking - Sunday, July 7, 2019 - link
Yeah, and we were promised the 3900X boosting up to 4.6GHz, and it barely boosts to 4.2GHz and can be manually overclocked to 4.3GHz.
I will believe it when I see it.
So no, there is no frequency boost with these chips.
Mugur - Monday, July 8, 2019 - link
You are wrong, the 3900X is boosting to 4.55-4.6 single core all day.
LordanSS - Tuesday, July 9, 2019 - link
My 2700X, set up for an 85W TDP, single-core boosts to 5GHz, given enough cooling (280mm liquid cooler).
5GHz. Last generation.
RSAUser - Monday, July 8, 2019 - link
You saw better performance per clock; these chips hitting 4.65 is about the same performance as the 2000 series hitting 5GHz.
5080 - Sunday, July 7, 2019 - link
The gaming performance benchmarks really show what a sad state the gaming industry is in, that we still have to rely on single core performance.
Korguz - Sunday, July 7, 2019 - link
Maxiking, where did you read that we were promised this??? you are bashing AMD for making promises.. what about all the promises intel has made over the years?? i dont see you bashing them for that. didnt intel promise 5GHz?? but yet... we only see that in ONE chip, and its a special binned chip, in limited quantities... and it practically needs exotic cooling
Maxiking - Tuesday, July 9, 2019 - link
8700K, 9900K, special edition of the 8700K.
It is one chip, sure. The difference is Intel can reach its boost on a single core for an unlimited amount of time, unlike AMD and its sporadic 100ms-long 4.55GHz boosts; and when confronted, they lie on Twitter that there is no such thing as boost in their CPUs anymore, lol. What does Intel have to do with this?
Korguz - Tuesday, July 9, 2019 - link
geeze.. drop this BS already.. maybe intel can. but how much power is it using to achieve this?? 150 watts on what intel says is a 95 watt cpu?? just drop this crap already, you obviously have NO real proof other than your own BS words.. cause if you did.. you would have posted links to this garbage
StormyParis - Sunday, July 7, 2019 - link
I'd be really interested in a recap of which CPU includes which optional features. It's been my experience that Intel mostly, but also AMD a bit, play a shell game with advanced multimedia, virtualization, security,... extensions, and it's a pain to suss out which CPU supports what.
RSAUser - Monday, July 8, 2019 - link
AFAIK AMD supports Hyper-V; no issues with Docker for me. Check your use case though; I think there was a small feature AMD did not support, but I can't remember what it was. It didn't affect me for Docker or for rendering video.
0ldman79 - Sunday, July 7, 2019 - link
Still reading, but one minor complaint: on the latency graphs, I can either easily read the key or see the entire graph while clicking the button, but not both.
I have to zoom out to see the entire graph, and then the text gets pretty small.
Not a huge thing, just an ease of access thing. The graphs have extremely interesting info, but it's not easy to read them.
futrtrubl - Sunday, July 7, 2019 - link
"While going from X370 at 6.8 W TDP at maximum load, X470 was improved upon in terms of power consumption to a lower TDP of 4.8 W." This is the opposite of what the chart right above it says.Manabu - Sunday, July 7, 2019 - link
The text is correct, the chart is wrong. The X470 indeed runs at 4.8W peak and 1.9W in idle mode.
Please fix it, Anand people.
webmastir - Sunday, July 7, 2019 - link
Fun fact. You cannot install and/or use Ryzen Master with the Hyper-V role installed. It's not supported, nor does the program run (it's also noted in their Software Installation Guide).
julianbautista87 - Sunday, July 7, 2019 - link
Great product! I'm impressed how technology advances so fast. Might buy one for home next month.
eastcoast_pete - Sunday, July 7, 2019 - link
Thanks! Appreciate the fast turn-around of this first deeper dive into the 3000-series Ryzen. The 3700X looks tempting.
@Andrei & Gavin & Ian: On another note, I look forward to seeing a review of the Ryzen APUs, especially the 3400G! I am about to build a new $500 potato (HTPC), and the 3400G looks promising. However, before I start building, I'd really like to read a reasonably thorough review on the new APUs before committing to the build. If and when you do, please also report on the HTPC usability of the 3200G and 3400G in detail, especially 10bit HDR playback and 4K streaming. Surprisingly (or not), PCs have been lagging behind the mobile SoCs on this.
Meteor2 - Monday, July 15, 2019 - link
Seconded
djayjp - Sunday, July 7, 2019 - link
So the slower CPU of the two with less cache per core is faster in single threaded apps...? Makes sense....
GeoffreyA - Sunday, July 7, 2019 - link
The true return of the Athlon 64. Well done, and keep it up. I knew I'd see this day all those years ago.
Thank you for this review as well. Take care.
hsienhsinlee - Sunday, July 7, 2019 - link
Are their respective L3 caches shared across the chiplets or private to each CCD?
Andrei Frumusanu - Sunday, July 7, 2019 - link
Private in each CCX, the same as previous Zen designs.
djayjp - Sunday, July 7, 2019 - link
Wow, so even the fastest consumer CPU can only manage 7 Megarays/s whereas RTX GPUs can do 7 Gigarays/s for about the same price, or 1000x faster.
extide - Sunday, July 7, 2019 - link
Well, I'd hope that was the case -- you're comparing a general purpose CPU core to an array of fixed function hardware.
IGTrading - Sunday, July 7, 2019 - link
We are shocked by the ridiculous "award" granted by AnandTech....
So Intel's 500 USD chip only wins in a few single threaded benchmarks while using over 70% more power than the rated TDP?!?
But AMD's Ryzen 3000 doesn't get the Gold award?!
In this twisted lack of logic, who the heck gets the Gold?!
The overpriced, power hungry 9900K?!? :)))))
Lack of editorial independence is a bitch, isn't it?!
Andrei Frumusanu - Sunday, July 7, 2019 - link
For what it's worth, we rarely give out any awards at all. The award tiers are Silver, Gold, Platinum. The 9900K never even got an award, so in our view the new Ryzen chips are overall better products.
Oxford Guy - Sunday, July 7, 2019 - link
What's the award called for pretending that the Intel-only security flaws don't exist nor come with serious performance regressions?
Phynaz - Sunday, July 7, 2019 - link
What's the award called for being an ignorant AMD fanboy?
sausagefingers - Monday, July 8, 2019 - link
Can we get some moderation in here? "Phynaz" seems much more interested in slinging insults at his rival fanboys than in any discussion about tech.
He sullied most of the Navi articles' comment sections with the same trash.
Learn some social skills, dude.
Phynaz - Monday, July 8, 2019 - link
Never gonna happen. My posts create a lot of page views.
Meteor2 - Monday, July 15, 2019 - link
No they don't, Phynaz. Apart from the post I'm replying to (because it's short), I skip over your replies. I'm sure most do.
Phynaz - Monday, July 15, 2019 - link
Yup, ignorant.
Qasar - Tuesday, July 16, 2019 - link
yes you are
IGTrading - Sunday, July 7, 2019 - link
Oh... so there is a Platinum "award" too?! :)
That's probably reserved for... which other CPU is better in the same price range?! Oh right, none.
This is subjective on my behalf, but these AMD chips are as <gold> as they can be. Sure, <platinum> would entail winning all tests... but <gold> is well deserved.
Nobody's going to go back and check IF the 9900K ever got some award or not. They'll likely be left with the impression that Ryzen 3000 is <silver> or second best (or even third, by your count).
I feel this is very subjective of me, but I guess I come with 24 years of experience in this field, which includes a short 2-year stint in tech journalism, and since that was 8 years ago, I'm allowed a bit of a lack of objectivity.
If my colleagues and I were left with this impression, I'm sure many others are, and Ryzen 3000 is in no way second best in its class.
Mugur - Monday, July 8, 2019 - link
My thoughts exactly...
Meteor2 - Monday, July 15, 2019 - link
I think the "awards" are a bit silly; they don't add anything and Anandtech would be better without them.Phynaz - Monday, July 15, 2019 - link
Need some ointment for that butthurt?
DrKlahn - Monday, July 8, 2019 - link
Thank you for posting this. As someone that has been in the industry over 20 years as well, I was taken aback at Silver too.
We've had a decade of Intel fleecing the market. Small gains being parceled out for high cost. Coupled with a platform that never lives beyond the generation it came out for.
The 1st generation Ryzen came out swinging. Was it perfect? No. Was it valid competition in a market in desperate need of it? Absolutely. Here we are a few years later with a product offering essentially the same or better performance vs the competition with much better efficiency at a much lower price. And it gets a "Silver". I really question the objectivity of this site.
TEAMSWITCHER - Monday, July 8, 2019 - link
"Intel fleecing the market?" - Your hatred is showing.DrKlahn - Monday, July 8, 2019 - link
No, I've bought both. But you'd have to be very naive to call the "progress" made after Bulldozer flopped and the Core architecture dominated anything but milking the market while moving things at a snail's pace. Intel had every chance to continue to boldly innovate, but instead chose to parcel out small incremental changes and charge a hefty premium for them.
I'll buy whatever makes sense. Right now that isn't Intel, in my opinion. May change when Sunny Cove hits.
Xyler94 - Tuesday, July 9, 2019 - link
The fact people are still holding onto Sandy Bridge because they don't feel Coffee Lake is a good upgrade should be your reasoning that yes, Intel wasn't really doing much. Heck, I'm on my 4790K, and the only CPUs that piqued my interest are the AMD 3900X and 3950X, because those are great looking processors. If I were more into overclocking, maybe I'd spring for the 9900K, but it's not a processor that piqued my interest...
Meteor2 - Monday, July 15, 2019 - link
Well there was the "benchmarking Sandy Bridge in 2019" article a couple of months back. That showed that a 9700K is about 1.5-2x faster than a 2600K. Yes that's not the same rate of improvement, over seven years, which we saw up to Sandy Bridge, but it is still a hell of a lot faster for roughly the same cash price.It's just the increments -- a few percent a generation -- have been small. But they have compounded.
Xyler94 - Monday, July 22, 2019 - link
Your 1.5 to 2 times faster was in what, productivity? What about gaming? That's what I was alluding to.
Yaldabaoth - Sunday, July 7, 2019 - link
Thank you so much!
What more needs to be said about the 3900X relative to its peers? So glad I didn't spring for the i9-9900K a few months back!
The 3700X is in a more interesting position at its price, but it looks like there are still some conditional performance advantages for the i7-9700K (read: single threaded workloads), but many times not. In any case, it's a great value and product. Kudos to AMD.
What I _really_ want to see is the 3600X (and the 3600) vs the i5-9600K. Historically, AMD has been seen as a budget play, but I suspect the single-threaded performance of the Intel CPU will shore it up vs the higher threads of these AMD chips in many user-relevant workloads vs. benchmarks. (Heck, it probably is just as good as the 3900X in gaming due to GPUs not catching up to increases in effects and resolution, right?) There could be very interesting values at these price points. Could you imagine 3 years ago telling someone, "For a good budget gaming PC, the Intel chips are alright, but if you want something REALLY nice once your GPU budget is used, put another $80 toward an AMD CPU!" Amazing.
BloodyBunnySlippers - Sunday, July 7, 2019 - link
I too am interested in the 3600X. I don't know if my math is relevant, but the TDP per core is higher for the 3600X than for the 3700X AND 3800X. Isn't it possible that it would then have a higher OC and better gaming performance? I mostly game, but moving from 4 threads on my current system to 12 threads isn't going to hurt either.
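That math checks out against AMD's spec-sheet TDPs (95W for the 3600X, 65W for the 3700X, 105W for the 3800X; note TDP is a rating, not measured power draw):

    parts = {"3600X": (95, 6), "3700X": (65, 8), "3800X": (105, 8)}
    for name, (tdp_w, cores) in parts.items():
        print(f"{name}: {tdp_w / cores:.1f} W of TDP per core")
    # 3600X: 15.8 / 3700X: 8.1 / 3800X: 13.1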
allenb - Sunday, July 7, 2019 - link
This is a great result for AMD. They've won some benchmarks, conceded some others to Intel, and embarrassed themselves nowhere. A few thoughts:
1. Was worried about the memory latency graphs at the start, but it was a relief to see how effectively those regressions have been mitigated in actual workloads.
2. In the 3D particle movement with AVX test, the 7920X jumps out to a *huge* lead. While everything else roughly doubles, it shows 9x speed up. Granted it is the only AVX512-capable cpu but it shouldn’t be *that* good unless the revised code is able to brilliantly utilize some of the new -512 features beyond just doubling vector length.
3. I work in an industry that’s heavily focused on performance and relatively cost-insensitive. That’s traditionally meant Intel but suddenly we are looking very hard at AMD on the server side.
4. Can’t wait to see what the 3950X can do!
Kevin G - Sunday, July 7, 2019 - link
1) Yes, AMD seemingly knew what they were doing and addressed that concern in the end product. I'd be curious what impact reducing the L3 capacity would have on benchmarks, to validate some of the theories (or better yet, AMD could release a larger L3 cache chiplet).
2) There are a couple of other instructions with AVX-512 that can make vector code more efficient beyond just doubling the vector width for a linear increase in performance. Also worth noting that leveraging AVX-512 comes at a clock speed hit, so the gain per clock is even higher.
3) While this is consumer side, the 12 core results are the most important to look at for Rome performance projection. That is the one that shares the IO die. (OTOH, a single CPU chiplet Rome that is allowed to clock to the moon with a 512 bit memory interface and 130 PCIe 4.0 lanes is an interesting single threaded proposal.)
4) Agreed, but I would predict a very small downtick in IPC. The 3900X has all the cache enabled with fewer cores competing for contention. Everything else looks to be upward.
just4U - Sunday, July 7, 2019 - link
It's been a long time since I was actually chomping at the bit to see a review.. A big day for AMD.
guachi - Sunday, July 7, 2019 - link
That power efficiency!!!! Oh, my!
As to gaming: taking only the 1080 results, the 3700X and 3900X are 98% as fast as the 9700K/9900K.
I'll take 98% as fast for all the other benefits. The multi-thread performance is spectacular.
RSAUser - Monday, July 8, 2019 - link
That's a 2% lead without exploit patches; this is why I am annoyed at AnandTech not having applied them.
If it was a 10% difference, sure, then you would definitely know which is better, but at 2% the patch could mean the AMD CPU would win in their benchmark.
A5 - Monday, July 8, 2019 - link
2% is MOE either way. It's a tie.
RSAUser - Thursday, July 11, 2019 - link
It's a tie now without the patch; the ZombieLoad patches seem to show cases of up to 40% I/O hits, so they could influence a lot of benchmarks/games.
Meteor2 - Monday, July 15, 2019 - link
Yes, testing without the May security update is inexplicable.
Yorgos - Sunday, July 7, 2019 - link
A total waste of time reading this.
Didn't Purch media give you enough shekels to get a better GPU?
what's the point of having that gpu and posting 2k and 4k results?
Also, your customer, Intel, has to fix passmark...
Phynaz - Sunday, July 7, 2019 - link
Go away, idiot
Korguz - Sunday, July 7, 2019 - link
Yorgos, who cares what GPU they used.. this is a CPU review.... the video card doesn't matter
RSAUser - Monday, July 8, 2019 - link
The 1080 has bandwidth issues at higher resolutions if I remember correctly, so it could be influencing the benchmark.
Still find it crazy that AnandTech didn't test out any advantages of PCIe 4, since currently the only way to get it is AMD; it could be a deciding factor for some.
Korguz - Monday, July 8, 2019 - link
Korguz - Monday, July 8, 2019 - link
what bandwidth issues ???RSAUser - Tuesday, July 9, 2019 - link
Check the 1080 vs 1080 Ti results at 4k, they are not consistent in terms of scaling with performance.Korguz - Tuesday, July 9, 2019 - link
which results ? this review only uses the gtx 1080RSAUser - Thursday, July 11, 2019 - link
Not here, actual tests showing comparison between the two.Hul8 - Sunday, July 7, 2019 - link
AnandTech, please publish the approximate % delta of- number of AnandTech main page loads; and
- number of distinct clients having loaded the main page
for today, 7th of July, versus:
- average of the four previous Sundays; and
- average of last ten work days (usual days articles are published).
Hul8 - Sunday, July 7, 2019 - link
Would be interesting to know how much increased traffic there was from people like me who kept checking back every 1/2 to 2 hours.Chaython - Sunday, July 7, 2019 - link
The GTX 1080 is a bottleneck shame on youKevin G - Sunday, July 7, 2019 - link
Note the resolutions and settings used in testing.RSAUser - Monday, July 8, 2019 - link
Note the 1080 having bandwidth issues that the 1080 Ti or 2000 series don't.Meteor2 - Monday, July 15, 2019 - link
I don't understand why they bothered with 4K tests with 1080. Really only the 720p tests make the CPU the bottleneck.Chaser - Sunday, July 7, 2019 - link
Happy for AMD. Happy for consumers. But for a 90% gamer like me, still happy with my recent 9700K purchase. Despite getting "closer", AMD still plays catch up and with superior pricing. I want to see AMD surpass the competition WITH good pricing. That day will come soon I am sure.Arbie - Sunday, July 7, 2019 - link
Yes, the day will come - if AMD can stay alive. They almost didn't. Giving Intel your business after AMD's miraculous comeback forced them to offer better products is rewarding the company that screwed you for 10+ years and screwing the company that fixed that. For a few FPS, for today.mjz_5 - Sunday, July 7, 2019 - link
True. No one should be supporting Intel for pretending 4 cores has been enough all this time. People got horny for 5% more FPS, which is not even noticeable. Support AMD already!!
nc0gnet0 - Sunday, July 7, 2019 - link
Newsflash for you, spanky: AMD already did surpass Intel. You're just too blind to see it. The question is, why do people like you chase a 7% increase in FPS that can only be realized if you purchase a $1200.00 video card, and then set your system up ignoring the multitude of patches that need to be installed on your precious Intel CPU: Meltdown/Spectre/ZombieLoad, etc. And to really make your comments laughable, you cannot even notice that 7% improvement in game (really, 150 FPS over 135 FPS... who cares?). All while sucking more power and paying a price premium for performance you can't even see. It's no longer about playing catch up, it's about AMD giving you better performance where it actually matters.
RSAUser - Monday, July 8, 2019 - link
The pricing is better? Here they are about 70% of the price of the Intel equivalent for just the CPU, not factoring in the cost of getting a cooler, and AMD motherboards are cheaper too.
Arbie - Sunday, July 7, 2019 - link
Why aren't the Intel chip costs increased to account for the *required* cooler ?? Looks like you're running a $100 unit. A $50 cooler would probably be fine - but even that's a 10-20% adder. This should not have been left out.
Yorgos - Sunday, July 7, 2019 - link
I am wondering what the dissipation rate is for each cooler used, but I cannot find it anywhere. A fair comparison would be using the same cooler on all CPUs, not mixing and matching.
I have trouble working out the purpose of these "tests" or the conclusion of this mess.
3 year old gpu, different bench conditions, half of the Intel security patches are missing, passmark was "weird"....
Andrei Frumusanu - Sunday, July 7, 2019 - link
I talk about this in the conclusion...
palindrome - Sunday, July 7, 2019 - link
One line in the conclusion is an afterthought. Perhaps it should have been included in the Intel vs AMD comparison tables on the first page...
Arbie - Sunday, July 7, 2019 - link
+1
Andrei Frumusanu - Sunday, July 7, 2019 - link
You're right, I've added it in the tables and a mention.
Arbie - Monday, July 8, 2019 - link
++1
palindrome - Sunday, July 7, 2019 - link
After looking at the "Test Bed and Setup" page, it appears that none of the results between generations are truly comparable, as your margin of error is too large to draw an accurate conclusion. The test beds differ in memory brand, quantity, number of DIMMs and (surely) timings. HSFs are all over the place, with Intel getting the benefit of a tower cooler (with mystery fans being used) in the 9X00 series vs stock coolers for the AMD chips. HSFs are also not factored into the price of the AMD chips vs Intel. It is a shame that you guys did all this work in making everything so precise when the data is truly incomparable. I guess you could argue that this is "better than nothing" to compare dissimilar test benches in different environments. I would argue that this data has an error margin of +/- 5%-10%, rendering most of the graphs useless.
WaltC - Sunday, July 7, 2019 - link
The review was fine, but the Ryzen 3k series deserves a Gold of course, as Intel is beaten in cost, power consumption, process node, performance (as no games that I know of today are single threaded), and security--the last a huge win for AMD. A Silver is for 2nd place--and I'll only agree with that if you give Intel a Bronze...;) (But then, who's left?....;)) With all these ancient and creaking Intel-compiler-optimized synthetic benchmarks around--some of these had to be resurrected for this comparison!--it reminds me of the early days of the Athlon versus the original Pentium long years ago. For the first few months the benchmarks held that AMD still had some catching up to do--advance the clock a year after release and there was almost nothing Intel was winning anymore! Pretty much everything now showed Intel bringing up the rear--both in games and synthetics. The reason for that was that the Athlon/K7 compilers were then in wide circulation and use--along with the standard Intel compilers--so that game-engine designers and benchmark devs could optimize their work for *both* architectures. AMD walked away with it--and a short time later Intel threw in the towel (after A64, I believe) and cancelled the original Pentium entirely and went back to the drawing board. I think it's obvious that few if any of the games and synthetics tested were properly optimized for the R 3k series--and possibly not even for Ryzen 1(!), as well. Time will tell...;)
Phynaz - Sunday, July 7, 2019 - link
Hey Walt, what’s it been, a decade since you AMD fanboys have had something decent? You still can’t seem to understand the concept of writing concisely.
Korguz - Sunday, July 7, 2019 - link
hey Phynaz, how long has intel been screwing you intel fanboys ?? lies about 10nm being on track, keeping the mainstream stuck on quad core, overcharging for its products ?? making sure every 1 to 2 cpu releases REQUIRES a new mobo as well....
Phynaz - Sunday, July 7, 2019 - link
None of the above ever affected me. So you’ll excuse me if I’m not enraged that Intel “lied” about 10nm. No one from Intel ever said a thing to me about 10nm, so I was never lied to. Now when AMD pulled promised compatibility for Zen 2, that’s being lied to. I wonder if AMD has offered compensation to people that bought those boards/systems for that reason.
Although if you bought them for that reason, well you’re a moron, because this is the THIRD time AMD lied about an upgrade path.
Korguz - Monday, July 8, 2019 - link
then you obviously are only here to lie... intel said that 10nm was on track 3 or 4 YEARS AGO, but yet.. 10nm is only now starting to show up... intel has been on 14nm since around 2014 i think it was.. and from their roadmaps from then, they should have switched to 10nm around 2016/2017. they pulled it.. because it wasnt as viable as they originally thought.. which is good on them for that, how angry would people be if they didnt, then people upgraded to it, and didnt get what amd said ?? i dont think many bought anything for that reason.. but who knows for sure...
and how have they lied for the 3rd time about an upgrade path ??
seems the only moron here.. is you, as you keep posting lies and BS.....
Phynaz - Monday, July 8, 2019 - link
Time for your AMD history lesson.
Quadfather
Piledriver
And now Zen 2
AMD fans, taking it in the butt for decades.
My suggestion is to get a job so you can afford a real cpu
Korguz - Monday, July 8, 2019 - link
um ya ok.. you are just as bad there phynaz.... intel fan talking out of his own butt.. you are not even a little pissed that intel stuck the mainstream at quad core ?? or charging too much ? fyi.. you need an education.. amd has been with am4 a lot longer than intel has with its sockets.. and those names you list.. are the code names... not the product.. and quadfather... was way back in the a64 days... what compatibility are you referring to then ???
Phynaz - Monday, July 8, 2019 - link
So you agree that AMD fucked its customers after getting their money. It’s good AMD hasn’t completely rotted your brain.
Xylade - Monday, July 8, 2019 - link
Complete re-tarded intel fanboi. Even after reading the article your demented mind still can't catch up.
Phynaz - Monday, July 8, 2019 - link
Please keep up with the times, retard isn’t acceptable language. Idiot and moron are short enough that they ought to fit in that teensy tiny cranium of yours.
Korguz - Tuesday, July 9, 2019 - link
but intel sure has rotted your brain. with intel, you were lucky to be able to use the same board for more than 2 cpus. my x99 board.. only 2 generations of cpus were available for it. for amd, AM4 started with ryzen 1, then ryzen 2, and now ryzen 3, thats 3 generations of zen.. in the case of am3+, you could use cpus for am3+, am3, and in some cases cpus for am2 as well, all in the same socket. quadfather was amd's attempt at giving the mainstream a dual socket platform, without the server price tag or features that went along with it. historically, amd has given a much longer upgrade path than intel ever has. how did i agree that amd screwed its customers ? for the most part, intel has screwed its customers, over and over, as i have mentioned in my previous post
Korguz - Sunday, July 7, 2019 - link
waltC: "For what it's worth, we rarely give out any awards at all. The award tiers are Silver, Gold, Platinum. The 9900K never even got an award, so in our view the new Ryzen chips are overall better products." seems for AT silver is the top award... not gold.. :-)
Phynaz - Sunday, July 7, 2019 - link
AMD fanboys: Catching up to intel should be the top award.
Rest of world: Umm, nooooo, that’s not the way awards work.
Korguz - Monday, July 8, 2019 - link
AT has explained how their awards work in a few posts in this thread
Korguz - Monday, July 8, 2019 - link
yep.. from Ryan further down this thread: Silver is an award for a great product. And it's the highest award we've given to a CPU in quite some time.
Conversely, gold awards are very rarely given out. As the final arbiter on these matters, I would have given out a Gold if the Ryzen 3000 had consistently beaten the competition in both MT and ST workloads here.
GreenReaper - Monday, July 8, 2019 - link
Room for it to do that with the 3950X.
Xylade - Monday, July 8, 2019 - link
Pure utter re.tard.
karund - Sunday, July 7, 2019 - link
This time I ended up upgrading to the AMD Ryzen 2700X instead of the 3900X, since with the price/performance difference it was better value to get the 2700X for 249 euros, whereas the 3900X is around 500 euros.
WaltC - Sunday, July 7, 2019 - link
Good choice--lots of bang for the buck there!...;) I'll be going with a 3k series cpu, but those deals on Zen+ have certainly been tempting!
karund - Sunday, July 7, 2019 - link
At the moment it wasn't really worth going with the 3900X, as for the same price I also get 16GB of DDR4 memory and a mainboard.
Alexvrb - Sunday, July 7, 2019 - link
"However there’s a catch: in order to support DDR4 above 3600, the chip will automatically change the memory controller to infinity fabric clock ratio from being 1:1 to 2:1."You mean 3733.
Makaveli - Sunday, July 7, 2019 - link
Has that been confirmed? I'm getting mixed reports for 3733 vs 3600, and the last one I saw also said 3600.
Andrei Frumusanu - Sunday, July 7, 2019 - link
No, 3600 is correct. AMD's example of 3733 had the footnote that they set the ratio to 1:1 manually.
The_Assimilator - Sunday, July 7, 2019 - link
Well done, AMD. Well done.
Maxiking - Sunday, July 7, 2019 - link
Those cpus are fraud. They can't even hit advertised boost clocks, I realized it just now; they should be banned. They advertise 4.6GHz boost lol. Nonsense.
Oxford Guy - Sunday, July 7, 2019 - link
Speaking of fraud, how about Intel selling hyperthreading CPUs that have to have hyperthreading disabled to deal with their design failure?
mapesdhs - Sunday, July 7, 2019 - link
Better still, Intel releasing the 8700K, etc., after they already knew about the new security vulnerabilities, allowing customers to buy them while senior staff sold off shares, etc.
Maxiking - Sunday, July 7, 2019 - link
Did it stop AMD from releasing their cpus which were affected as well? No. The senior staff had decided to sell the shares before they were aware of the vulnerabilities; it is nothing uncommon, there is a legal procedure required to be done before you can sell them and then you have to wait several months. All of that happened before the vulnerabilities were discovered and passed to Intel. Now continue trolling at r/amd.
Oxford Guy - Sunday, July 7, 2019 - link
Maxiking, AMD processors are not affected by the hyperthreading security problem. It is an Intel-only problem, and the OpenBSD team, Apple, and others have said hyperthreading must be completely disabled to fix it.
Maxiking - Sunday, July 7, 2019 - link
He, or she, or in case of a tranny, they, did mention the vulnerabilities in general, not just your HT thingy; that is your agenda. The point is that none of them decided to stop selling cpus because of the discovered vulnerabilities, so there were double standards used.
Korguz - Sunday, July 7, 2019 - link
Maxiking most of the security vulnerabilities affect intel ONLY.. this is proven and known.. stop making things up.. and what are you talking about amd not hitting boost clocks?? seems to me.. they are.. boost clocks are usually for only a few cores.. not all cores....
Xylade - Monday, July 8, 2019 - link
Go away imbecile
Phynaz - Sunday, July 7, 2019 - link
Wrong. But you won’t let that stop you.
Korguz - Monday, July 8, 2019 - link
phynaz, whats wrong ?? the fact that security vulnerabilities like spectre and meltdown only affect INTEL, and not amd ?? to quote the article from here: https://www.anandtech.com/show/14525/amd-zen-2-mic... "Another aspect to Zen 2 is AMD’s approach to heightened security requirements of modern processors. As has been reported, a good number of the recent array of side channel exploits do not affect AMD processors, primarily because of how AMD manages its TLB buffers that have always required additional security checks before most of this became an issue. Nonetheless, for the issues to which AMD is vulnerable, it has implemented a full hardware-based security platform for them"
but dont let that allow you to think otherwise.. cause obviously.. you will continue to post your BS about this...
Maxiking - Sunday, July 7, 2019 - link
My dear AMD friend, Intel HT works, unlike that advertised 4.6GHz. You would need liquid nitrogen to reach it. That 5GHz 28 core on a chiller doesn't look so bad now, hah? Karma is free.
Oxford Guy - Sunday, July 7, 2019 - link
"Intel HT works"It works to undermine your security, since it is a fundamentally insecure design that can only apparently be remedied by disabling it entirely. Don't take my word for it, take the word of experts like the OpenBSD team, Apple engineers, etc.
Maxiking - Sunday, July 7, 2019 - link
True, I will decide to trust Apple on this; they do not offer any AMD cpus so they obviously know what they are doing. Also, the HT security problems you keep talking about apply only to certain workloads, like hw virtualization etc. It requires tasks being run 24/7 for a longer period of time, without restarting, and direct access to a machine. So not my case, but thanks for the heads up, I appreciate that you care and it makes my heart feel warm and fuzzy.
Korguz - Sunday, July 7, 2019 - link
Maxiking you obviously have NO idea how the intel security issues work, or how they were fixed, or the performance hit you HAVE to take in order to use them. "they do not offer any AMD cpus so they obviously know what they are doing" or because intel is giving them a kick-ass deal to use them, or more than likely.. amd didnt have any cpus at the time that met their performance goals
Xylade - Monday, July 8, 2019 - link
Another pathetic reply
Phynaz - Monday, July 8, 2019 - link
That’s a real winner you’ve posted yourself.
RSAUser - Monday, July 8, 2019 - link
Boost is single core, not all core. Same as Intel.
You, that number person, and P. just seem to be trolling.
You could have mentioned actual things that Intel is better at, e.g. (had to think hard) workloads that are L2 intensive like stock market algorithms depending on how you write it.
Xylade - Monday, July 8, 2019 - link
Hahahaha. Ur post shows how much of a re-tard you are.
Micha81 - Sunday, July 7, 2019 - link
Any chance to benchmark Dwarf Fortress, including the impact of different memory speeds?
Oxford Guy - Sunday, July 7, 2019 - link
Extremetech, as with Zen 1, made the effort to use a decent RAM speed. Read that review.
Maxiking - Sunday, July 7, 2019 - link
I just checked the review; he used 3600MHz for the AMD Ryzen 3 and only 3200MHz for Intel. Don't see you complaining about that not being fair. You are biased, my friend.
RSAUser - Monday, July 8, 2019 - link
The officially supported speed for the 9900K is 2666; he's already overclocking by going for 3200. Dude, stop posting nonsense.
Oxford Guy - Sunday, July 7, 2019 - link
"The systems have applied Spectre and Meltdown mitigation patches where applicable, but not any newer patches for the newest set of vulnerabilities."Keep not surprising me.
John_M - Monday, July 8, 2019 - link
Are any patches for Zombieload et al actually available yet, or is it simply a case of disabling hyperthreading and hoping for the best?
RSAUser - Monday, July 8, 2019 - link
They already exist, force-enabled in the Windows 1903 update. Released May 14; supposedly it has a huge hit on I/O.
GreenReaper - Monday, July 8, 2019 - link
It sure had a significant impact for several workloads on Linux:
https://www.phoronix.com/scan.php?page=article&...
mapesdhs - Sunday, July 7, 2019 - link
Where are the CB R15 and R20 numbers? I find it difficult to believe such tests were not done.
Oxford Guy - Sunday, July 7, 2019 - link
Wow, really? I gave up on this review after I read that they're not only using slow RAM but that they're doing a "let's pretend" about the Intel-only security performance regressions.
Maxiking - Sunday, July 7, 2019 - link
Had they actually used FAST RAM, something around 3800+, you would have been complaining about how it was unfair towards AMD, because the IF gets underclocked. It is difficult to please you. Deal with it: after all of those years and security patches, AMD still can't beat Intel's refreshed cpus originally introduced to the market in 2015.
Korguz - Sunday, July 7, 2019 - link
Maxiking what are you smoking?? most of your posts here.. are BS and FUD
LMonty - Monday, July 8, 2019 - link
Did you miss the non-gaming benchmarks where the 9900K is not even close to the 3900X? They have obliterated Intel's flagship (aside from gaming). At much less power consumption too. And lower price. It's so satisfying to see the underdog on top again.
tamalero - Monday, July 8, 2019 - link
Ram works differently in the 3000 series.. faster ram sometimes is not better, as it forces the controller to go 2:1 when 1:1 is desirable.
Oxford Guy - Sunday, July 7, 2019 - link
"As per our processor testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory running at the manufacturer's maximum supported frequency. This is also typically run at JEDEC subtimings where possible. It is noted that some users are not keen on this policy, stating that sometimes the maximum supported frequency is quite low, or faster memory is available at a similar price, or that the JEDEC speeds can be prohibitive for performance. While these comments make sense, ultimately very few users apply memory profiles (either XMP or other) as they require interaction with the BIOS, and most users will fall back on JEDEC supported speeds - this includes home users as well as industry who might want to shave off a cent or two from the cost or stay within the margins set by the manufacturer. Where possible, we will extend out testing to include faster memory modules either at the same time as the review or a later date."1) You cite "most users" being lazy as justification for hamstringing the processor with slow RAM. Then, simultaneously, you test with a "premium" motherboard. Maybe all of those lazy computer folk are the ones buying the midrange and low-end boards, not the enthusiasts who are more likely to fork out the cash for a premium board.
2) Most lazy computer users don't read complex articles like this.
3) Citing ordinary stupid lazy people is rarely a good justification for anything, especially when it comes to enthusiast computing.
4) "ultimately very few users apply memory profiles (either XMP or other) as they require interaction with the BIOS"
That is absurd. Oh, the scary BIOS! It takes two seconds to select an XMP profile.
5) "as well as industry who might want to shave off a cent or two from the cost or stay within the margins set by the manufacturer"
6) "Where possible, we will extend out testing to include faster memory modules either at the same time as the review or a later date."
That isn't good enough and you know it. Firstly, the information needs to be out there at the start. Secondly, promises from Anandtech about reviews "coming real soon" often turn out to be vaporware, like the GTX 960. Selective information to further an agenda.
Citing OEMs' practices for basic models, given that they do things like hamstring APUs with single-channel RAM, is a logical failure, especially when you're using premium motherboards.
Your rationalizations for your memory testing paradigm are clearly specious.
Oxford Guy - Sunday, July 7, 2019 - link
Similarly, Anandtech should be ashamed of its overclocking methodology as described in years past, where serious stress testing wasn't done, actual stability was never found, and voltage extremes were used as the basis for determining the "max stable overclock". Those kinds of reckless amateurish practices do one thing and one thing only: they get novices to destroy equipment and corrupt their data, wasting their time and money.
Speaking of laziness: it's lazy to not do what Joel Hruska bothered to do for the day 1 Ryzen 1 review — run the RAM at 3200. He managed to do that without serious effort, and that was with Zen 1. Here we are and you're using 3200 with slow timings for Zen 2?
It's bizarre to have so much precision testing (like RAM latency parsing) coupled with extremely sloppy (or worse) decision-making, like pretending that the latest Intel-only security vulnerabilities aren't important enough to factor in.
Maxiking - Sunday, July 7, 2019 - link
A guy from hw unboxed was overclocking a 3900x, reached 4.3GHz and killed the cpu. 3200MHz without serious effort on Ryzen 1? Do you have any clue what you are talking about? Even today, with Ryzen 2, you are considered lucky if you can run 3200CL14 without problems. Not to mention that you think it is something uncommon with Ryzen 1. 3000MHz CL15 used to be a problem.
r/AyyyyMD is this way >>>>
RSAUser - Monday, July 8, 2019 - link
Sounds like nonsense; we seem to have not watched the same review. That review was also flawed, as the 9900K got a $200 cooler and certain tests also had water cooling, versus the 3900X with a stock cooler. Great methodology.
Stop posting here, your posts are all nonsense so far.
Xylade - Monday, July 8, 2019 - link
Another complete demented re-tard
oleyska - Tuesday, July 9, 2019 - link
There was a gigabyte board engineering sample that killed cpus. Not the cpu; two reviewers have confirmed it was a pre-production gigabyte board that killed cpus.
Phynaz - Sunday, July 7, 2019 - link
Blah, blah, blah. Something, something, something I read on Twitter.
ksec - Sunday, July 7, 2019 - link
I am not understanding why this is only Silver and not Gold, considering price, power, performance, value. The only thing that beats it is one SKU from Intel at single-threaded performance with a higher clock speed. And these Intel tests are without the latest ZombieLoad security patch.
TheUnhandledException - Sunday, July 7, 2019 - link
It has to be manufactured by a company whose name starts with an i to get gold.
Korguz - Sunday, July 7, 2019 - link
OR, for AT, silver is the top tier award.. and gold is 2nd ? i think this was explained by gavin in a post farther up....
John_M - Monday, July 8, 2019 - link
Gold is indeed second tier but silver is third. It's a shame Anand can't use gold, silver and bronze categories that everyone can understand.
Ryan Smith - Sunday, July 7, 2019 - link
Gold awards are very rarely given out. As the final arbiter on these matters, I would only give out a Gold if the Ryzen 3000 had consistently beaten the competition in both MT and ST workloads here.
RSAUser - Monday, July 8, 2019 - link
Mind using a cheaper cooler on the 9900K so it has a similar envelope to work in? Then also forcing the 9900K to stay at TDP? No? Then how is the 3900X supposed to compete on performance alone when it completely destroys the 9900K in every other metric?
Phynaz - Monday, July 8, 2019 - link
Do we get to force the AMD cpu to stay within tdp too?
Korguz - Monday, July 8, 2019 - link
it already is... 95 watts for intel.. is the minimum.. intels chips have been shown to use a lot more than that.. up to 200 watts.. constrain intels cpus to 95 watts.. and they lose across the board
Phynaz - Monday, July 8, 2019 - link
AMD doesn’t stay within TDP under load at any time.
https://www.overclock.net/forum/10-amd-cpus/172875...
Just another unemployed AMD fanboy that can’t handle the truth. Wait till NAFTA kicks in, you’ll be able to afford a used Via CPU with that Monopoly money Canada uses.
RSAUser - Tuesday, July 9, 2019 - link
Page does not load for me, and you're linking to an overclocking site; I'd assume that would not be within TDP. And they're able to go over TDP by like 10W with precision boost, big whoop, that's not even close to double the rated figure.
Korguz - Tuesday, July 9, 2019 - link
what ever phynaz... anandtech did a write up on the power intel uses for their chips: https://www.anandtech.com/show/13544/why-intel-pro...
that link you posted, looks like a mistake was made with communication between several different parties, as the poster said, considering this is a new cpu, and accompanying mobo/chipset, things like this do happen, even intel has had its own issues with a new platform, and we will have to see how it levels off in the coming few weeks. amd does stay within its TDP limits better than intel.. at least when amd says their cpus use XXX watts, it uses around that number, unlike intel, where a 95 watt cpu can use up to 200 watts, as the link i posted shows...
you sure like to throw insults around dont you ? does it make you feel better about your self ? in the end.. maybe its YOU that cant handle the truth about your beloved intel ? face it, compared to zen 2/ryzen 3000, intels cpus use more power, and amds cost LESS than intels equivalent cpu, and amd has IPC parity with intel.
Xyler94 - Wednesday, July 10, 2019 - link
Phynaz... you may wanna rethink your TDP argument there... Intel's i9 9900K's TDP is 95W, yet it regularly hits over 200W without an overclock.
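For context, a rough sketch of the mechanism behind that gap as the AnandTech power piece linked above explains it; the PL2 and tau numbers here are typical board defaults I'm assuming, not official figures:

    # Toy model of Intel's turbo power limits: TDP only bounds PL1, the
    # long-term limit; bursts may run at PL2 until the tau budget expires.
    def allowed_watts(seconds_under_load, pl1=95.0, pl2=210.0, tau=28.0):
        return pl2 if seconds_under_load < tau else pl1

    # Many retail boards ship with tau effectively unlimited, which is how a
    # "95W" part can sit near 200W for an entire all-core benchmark run.
    print(allowed_watts(5), allowed_watts(60), allowed_watts(60, tau=float("inf")))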
just4U - Monday, July 8, 2019 - link
The silver award seems apt, since it certainly lived up to expectations and in some instances surpasses them. Gold if it's the clear winner in everything, platinum if it beats out all expectations..
Meteor2 - Monday, July 15, 2019 - link
Ryzen 3000 beats Intel ST and MT per watt or per dollar, which are the only metrics that matter. Otherwise how are you comparing like with like?
patmanRR - Sunday, July 7, 2019 - link
Using LLVM for C/C++/Fortran code is most likely to result in slower performance than GCC (and even more likely than Intel's compilers). I do not know if the performance impact is more/less/the same between Intel and AMD CPUs, but I do not really trust these numbers in the first pages of the review.
Dragonsteel - Sunday, July 7, 2019 - link
I'm excited by the 3800X, which based on this article may showcase much higher performance (and power) output in heavily multi-threaded applications. I'm very much looking forward to the inclusion of the 3800X numbers. Would also like to see some game updates with the 2080 and such at 1440p, as most of the tests either skipped that resolution or went to 4K. The 4K results mostly showed the GPU bottleneck.
danjw - Sunday, July 7, 2019 - link
I was really looking forward to reading this review. I look forward to finding out what is going on with your PCMark numbers. I appreciate that you guys are willing to go the extra mile when you see something not looking right. Thank you and keep up the great work guys!
AshlayW - Sunday, July 7, 2019 - link
Great review, thanks. Gains are good but I'm more than happy with my 2700X for now, so I'll likely be waiting for Ryzen 4000. Seems like Intel CPUs are more or less obsolete at their current prices now unless you absolutely need the best possible gaming performance at any cost (more money than sense).
One nitpick, though. I completely disagree with this statement:
"Ultimately, while AMD still lags behind Intel in gaming performance, the gap has narrowed immensely, to the point that Ryzen CPUs are no longer something to be dismissed if you want to have a high-end gaming machine."
Specifically, about "dismissing" AMD Ryzen CPUs for high end gaming machines, I mean the 2nd and 1st gen ones. I have built many "high end" gaming machines, with Ryzen 1800X and 2700X and they are excellent. Anyone that "dismisses" Ryzen 1 or 2 for a high end gaming machine is a tool. (I'm gaming at 144Hz on a 2700X, lol).
But I understand the point trying to be made. Gaming was the last bastion for Ryzen in absolute performance and now they have sort of cracked it. The 9900K for 480+ bucks is going to be a hard sell with these new chips on the market. Where are these rumoured Intel price cuts? Or is chipzilla really that arrogant?
GlossGhost - Monday, July 8, 2019 - link
I think he said that because most people want to see AMD close to/or beat Intel in order to finally look at the processors as a proper alternative. I am playing on an R5 2600 daily and in what I need it to perform, it does great. People like us who have long researched and dug into those Ryzens will probably have already switched. Now it's time for those like my colleagues who, when I showed them the performance, went into deep thought about how to plan their next Ryzen builds.
ballsystemlord - Sunday, July 7, 2019 - link
Spelling, grammar, and 2 technical corrections (thus far):
"...meaning for the very vast majority of workloads, you're better off staying at or under DDR4-3600 with a 1:1 MC:IF ratio."
Actually, AMD's graph shows DDR4-3733, not DDR4-3600, before the 2:1 IF ratio sets in.
"...meaning for the very vast majority of workloads, you're better off staying at or under DDR4-3733 with a 1:1 MC:IF ratio."
"...this put a lot more pressure on the L2 cache capacity, ..."
Missing "s":
"...this puts a lot more pressure on the L2 cache capacity, ..."
"AMD here has essentially as 60% advantage in bandwidth as the CCX's L3 is much faster than Intel's L3"
"a" not "as. Maybe get rid of the "essentially"?
"AMD here has essentially a 60% advantage in bandwidth as the CCX's L3 is much faster than Intel's L3"
"The X570 chipset is the first chipset its manufactured in-house using ASMedia's IP, whereas previously with the X470 and X370 chipsets, ASMedia developed and produced it based on its 55nm architecture."
This sentence makes absolutely no sense. Have another cup of coffee? :)
"...on top of being able to run them on more memory limited platforms which we plan on to do in the future."
Excess "on".
"...on top of being able to run them on more memory limited platforms which we plan to do in the future."
"We're seeing quite an interesting match-up against Intel's 9700K here which is leading the all the benchmarks."
Extra "the":
"We're seeing quite an interesting match-up against Intel's 9700K here which is leading all the benchmarks."
"In our test, we take v1.3.3 of the software with a good sized data set of 84 x 18 megapixel photos and push it through a reasonably fast variant of the algorithms, but is still more stringent than our 2017 test."
Replace "is" for "they are" as the word algorithms is plural.
"In our test, we take v1.3.3 of the software with a good sized data set of 84 x 18 megapixel photos and push it through a reasonably fast variant of the algorithms, but they are still more stringent than our 2017 test."
"Please note, if you plan to share out the Compression graph, please include the Decompression one. Otherwise you're only presenting half a picture."
Excess words, try:
"Please note, if you plan to share our Compression graph, please include the Decompression one. Otherwise you're only presenting half a picture."
"but actually also raising the clock frequency at the same time, bringing for some impressive power efficiency benefits."
Excess "for" or bringing:
"but actually also raising the clock frequency at the same time, bringing some impressive power efficiency benefits."
OR
"but actually also raising the clock frequency at the same time, for some impressive power efficiency benefits."
"Not that Zen 2 is soley about memory performance, either."
Missing "l":
"Not that Zen 2 is solely about memory performance, either."
"We've also seen the core's new 256-bit (AVX2) vector datapaths to work very well."
Excess "to":
"We've also seen the core's new 256-bit (AVX2) vector datapaths work very well."
"Intel's higher achieved frequencies as well as continued larger lead in memory sensitive workloads are still goals that AMD has to work towards to"
Excess "to":
"Intel's higher achieved frequencies as well as continued larger lead in memory sensitive workloads are still goals that AMD has to work towards"
"The new design did seemingly make some compromises, and we saw that the DRAM memory latency of this new system architecture is slower than the previous monolithic implementation. However, here is also where things get interesting. Even though this is a theoretical regression on paper, when it comes to actual performance in workloads the regression is essentially non-existent, and AMD is able to showcase improvements even in the most memory-sensitive workloads."
Not strictly accurate. AMD is showing a regression in performance compared to themselves in the "3DMark Physics - Ice Storm Unlimited" and "AppTimer: GIMP" benchmarks. GIMP is single threaded and the 3900X is losing to the 2700X. The same goes for "Ice Storm Unlimited", but I suspect that we're hitting a performance ceiling there.
I suspect that if you deep dive into the GIMP regression you'll find something more interesting than just a memory bottleneck.
Ryan Smith - Monday, July 8, 2019 - link
Thanks, Lord Ball!
ballsystemlord - Wednesday, July 10, 2019 - link
You're welcome. It's nice to be recognized.
ballsystemlord - Sunday, July 7, 2019 - link
@andrei I've been closely watching Geekbench since its introduction to the suite, and the ST mode seems to love clock speed. Maybe next time you can give normalized results.
"We're investigating the PCMark results, which seem abnormally high."
Aww, c'mon, we all love an AMD shill. :-)
Corona is especially interesting for its magic Intel number.
The 9900K does 5 B-RPS, which is the same as its boost clock frequency of 5GHz!
The 3700X is doing 4.6B-RPS and has a boost clock speed of 4.4GHz.
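Working through the coincidence being pointed at, with the figures quoted above (a sanity check only, not a claim about how Corona is implemented):

    # Rays per clock, from the B-RPS and boost-clock numbers in this comment.
    for name, brps, boost_ghz in [("9900K", 5.0, 5.0), ("3700X", 4.6, 4.4)]:
        print(f"{name}: {brps / boost_ghz:.2f} rays per clock")
    # 9900K: 1.00, 3700X: ~1.05 - exactly one ray per clock on the Intel part
    # is the suspiciously tidy "magic number".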
Desierz - Sunday, July 7, 2019 - link
How will the 3xxx CPUs' performance be affected by running on a PCIe 3.0 motherboard?
zzing123 - Sunday, July 7, 2019 - link
With the X570, given the vast increase in TDP to 11/15W (because of which ASRock, for one, has opted to use the X470 in their server board: https://www.servethehome.com/asrock-rack-x470d4u-a... ), I wonder why AMD didn't choose to bump the chipset to a better process node, since 55nm is positively ancient. Could you ask about this, as it seems to be a card that fell off the table when looking at platform efficiency, and it dents AMD's armour when it comes to beating Intel on thermal efficiency...
Mugur - Monday, July 8, 2019 - link
AFAIK the x570 chipset is 14nm, exactly like the uncore of the 3000 cpus.
GreenReaper - Monday, July 8, 2019 - link
Seems weird for PCIe 4.0 to take so much more than 3.0, though it is twice the speed. Maybe they implemented it as an FPGA rather than an ASIC? Though that seems like a weird solution, and I'm not sure there are any FPGAs at 14nm. I guess we'll see if it reduces with the next series. (More conspiracy-theory wise, maybe they're hiding some high-power feature in there, extra cores...)
How do the pcie 4.0 x4 lanes work between cpu and chipset? Would 2 pcie 3.0 x4 devices be able to run at full speed? Does the chipset aggregate the bandwidth available back to the cpu?
Also, would it be possible to use all 16 lanes on the chipset at once, just that when it goes back to the cpu they would simply be restricted in speed?
zzing123 - Sunday, July 7, 2019 - link
The CPU has 24 PCIe 4.0 lanes available. The chipset has an x4 link to the CPU. The chipset can be configured as one x8, or 2x x4 blocks, each of which can be 1x x4, 2x x2, or 4x SATA. These are all switched within the X570 southbridge (and therefore aggregated to the CPU). However, remember the chip is also giving 20 other native lanes, and each of these has double the bandwidth of a PCIe 3.0 lane. A PCIe 3.0 GPU with an x16 connector is actually only x8 PCIe 4.0 lanes in terms of bandwidth available.
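To put rough numbers on the "full speed" question, a back-of-envelope sketch; the per-lane figures are my approximations after encoding overhead, not anything from the article:

    # Approximate usable bandwidth per PCIe lane, in GB/s.
    GBS_PER_LANE = {"3.0": 0.985, "4.0": 1.969}

    uplink = 4 * GBS_PER_LANE["4.0"]        # chipset-to-CPU link: x4 PCIe 4.0
    two_ssds = 2 * 4 * GBS_PER_LANE["3.0"]  # two PCIe 3.0 x4 devices behind it
    print(f"uplink {uplink:.1f} GB/s vs devices {two_ssds:.1f} GB/s")
    # ~7.9 vs ~7.9 GB/s: each gen4 lane carries two gen3 lanes' worth, so two
    # gen3 x4 devices can just about run at full speed simultaneously.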
trenzterra - Sunday, July 7, 2019 - link
Hmm, so is it correct to say I can either have 4 or 8 PCIe 4.0 lanes to the CPU? 4 are natively reserved by the CPU and the other 4 are general purpose, which could be reserved for this purpose?
So given 8 lanes of PCIe 4.0 to the CPU, if the chipset supports, say, the use of 16 lanes, does it mean I can use all 16 lanes at once, just that they will run at PCIe 3.0 speeds?
ABR - Monday, July 8, 2019 - link
As I've asked before with Zen as compared with Intel, please post at least one graph of benchmark performance per watt. Just inferring it based on the last page of the article isn't good enough, especially when the legend on every other graph shows the TDP numbers, which we know Intel and AMD determine in different ways.
Kishoreshack - Monday, July 8, 2019 - link
Why Silver? They deserve a Gold rating from Anandtech.
For the same price as an Intel processor, they are way ahead in performance.
Ryan Smith - Monday, July 8, 2019 - link
Silver is an award for a great product. And it's the highest award we've given to a CPU in quite some time.
Conversely, gold awards are very rarely given out. As the final arbiter on these matters, I would have given out a Gold if the Ryzen 3000 had consistently beaten the competition in both MT and ST workloads here.
RSAUser - Monday, July 8, 2019 - link
RSAUser - Monday, July 8, 2019 - link
And as I stated on another comment you had like this: completely dumb. The 3900X is cheaper, has half the power budget, and uses a worse cooler, with the 9900K not having full mitigation patches and its performance wins generally being within the margin of error. What exactly are you basing this on?
imaskar - Monday, July 8, 2019 - link
You have just forgotten how much performance new generations brought before. What it was like to upgrade from a P4 to Core 2. Then from a Core 2 Duo to a Core i7. It was a BOOST. Now it's just 15% here and there. That's why I agree that Silver is actually fair.
TEAMSWITCHER - Monday, July 8, 2019 - link
I too think that the "Silver" award is the correct one.
madseven7 - Monday, July 8, 2019 - link
Well it shouldn't be Silver. It's harder to get higher IPC gains than before. Write games and apps with multicore in mind instead of single-threaded performance and maybe you'll see P4-to-Core 2 performance gains. An 800lb gorilla with all those resources getting slapped around by a tiny fish. Should be GOLD!
RSAUser - Thursday, July 11, 2019 - link
Change in circumstances; that was a decade ago. The performance difference per clock is still substantially different; it should have been gold rated if modern Intel gets silver.
Phynaz - Monday, July 8, 2019 - link
what a cry baby.
Mugur - Monday, July 8, 2019 - link
Well, maybe you keep the Gold for the 3950X in September? :-) Or Platinum?
madseven7 - Monday, July 8, 2019 - link
They might have if you guys had tested with all the mitigations and 1903.
NewCPUorder - Monday, July 8, 2019 - link
I hope people here know that Anandtech is owned by Intel. So do not expect any fair reviews of Intel competitors' products. We've got to read other forums to get the truth
Teutorix - Monday, July 8, 2019 - link
Where the hell did you get this idea? Intel doesn't own Anandtech; they are owned by a British publishing company.
ThreeDee912 - Monday, July 8, 2019 - link
Apparently Anandtech is 100% owned by Intel, AMD, Nvidia, Apple, Samsung, LG, etc., depending on which article you read. It's ridiculous how people have been posting nonsense like this for years.
Korguz - Monday, July 8, 2019 - link
sorry.. but you have no idea what you are talking about newcpuorder
Phynaz - Monday, July 8, 2019 - link
How does someone as dumb as you manage to function as a person?
deil - Monday, July 8, 2019 - link
We all knew that 3rd-gen Zen would have something ep(i/y)c inside, but this is insane. Remember that the current "drivers" might not be at their peak YET, so another 5% + 5% from software support might appear later.
NewCPUorder - Monday, July 8, 2019 - link
Yes, but we people have to make AMD's product heard. Because Intel-paid media like this will press the product down even if it beat the competition by 50%.
Korguz - Monday, July 8, 2019 - link
newcpuorder, what are you talking about ??? intel doesnt own this site..
RSAUser - Monday, July 8, 2019 - link
I still have an issue with posting TDP next to it if AMD and Intel are not using the same definition.
The 3900X is pulling a max of 105W, while the 9900K is pulling a minimum of 95W; it pulls close to 200W under all-core load.
That difference is huge, and should be shown in the brackets next to the name.
NewCPUorder - Monday, July 8, 2019 - link
Intel owns this media, so they are very carefully publishing exactly what they want.
jordanclock - Monday, July 8, 2019 - link
So, Intel wants them to publish numbers that are favorable for them, but are also okay with them explaining how they're bullshit? In an article that practically praises Ryzen 3000?
Riiiiight.
Phynaz - Monday, July 8, 2019 - link
AMD fanboys are idiots.
RSAUser - Thursday, July 11, 2019 - link
Not really; the issue is more that everyone clued up in tech knows about it, so tech journalists don't bother commenting on the fact, while the average person doesn't understand.
zealvix - Monday, July 8, 2019 - link
Would be good if there are retests performed in the near future with all the latest vulnerabilities patched and the same cooler used.
zealvix - Monday, July 8, 2019 - link
"Meanwhile we should note that while the ZombieLoad exploit was announced earlier this year as well, the patches for that have not been released yet. We'll be looking at those later on once they hit."Other articles on the net shows both microcode update from Intel and OS patches from Microsoft have been released. Are they wrong ???
unclevagz - Monday, July 8, 2019 - link
Amidst all the slapfighting between AMD and Intel, ARM have got to be pretty pleased with how their cores compare to these architectures (in SPEC2006) right now. The A76 implemented in a mobile SoC, with a memory subsystem and cache maybe 10 times worse than the test setups here, is at around Zen+ IPC and takes about 1/3 the area per core on the same process as a Zen 2 core, and then the A77 is coming around the corner with another expected IPC boost....
jjj - Monday, July 8, 2019 - link
So the 3600 is the only one that's not poor value, as AMD goes backwards in perf per dollar.
Tkan215 - Monday, July 8, 2019 - link
For the first time, AMD's Zen 2 ties with the i9-9900K in gaming and beats the i9-9900K in almost every aspect of productivity! AMD will be great for streaming + gaming. How could this article not redo the tests on 1903? Why isn't the Intel system patched for the mitigations and security flaws? Is that ignorance, or what? It gives Intel some advantage in these areas of the benchmarks. Please redo them and take time this time, as AMD is still fixing most of its problems. Thank you!
Ninjawithagun - Monday, July 8, 2019 - link
Originally, I was going to wait for Zen 3, but I think I can consider buying a 3700X to 'get me by' until next year. No way am I buying a new X570 motherboard. Replaceing my 2700X with the 3700X is good enough until Zen 3 and the new AM5 socket and new X670(?) chipset are released next year. Of course, the chipset name is hypothetical, so it's my best guess as to what AMD will call next year's new hardware ;-)
Ninjawithagun - Monday, July 8, 2019 - link
*Replacing
haukionkannel - Tuesday, July 9, 2019 - link
Am4 next year. Am5 two years from now...
Korguz - Tuesday, July 9, 2019 - link
um.. we have AM4 now....
AntonErtl - Monday, July 8, 2019 - link
Thanks for the review.
Two things I would like to see (maybe in an update): ECC support (I expect that it's like for Ryzen 1XXX: unsupported, but works with ASRock and ASUS boards) and RAM capacity: Can you use 4 of Samsung's 32GB DIMMs for 128GB RAM?
The Average - Monday, July 8, 2019 - link
This X570 motherboard states that it supports ECC, so maybe it is indeed up to the motherboard vendors to support it:
https://www.amazon.com/ASUS-Pro-WS-Workstation-Mot...
phoenix_rizzen - Tuesday, July 9, 2019 - link
ECC is supported in all Ryzen CPUs, as it's part of the built-in memory controller. However, it's up to the motherboard makers to enable support for it in the RAM slots and BIOS and whatnot.
If a motherboard claims support for Ryzen Pro, I believe that's a good indication it supports ECC. Otherwise, you have to dig around in the motherboard manual to find out.
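If you do end up on a board that claims ECC, one way to confirm it is actually active is Linux's EDAC sysfs interface (a sketch; the directory only appears once a memory-controller driver has loaded):

    # Count EDAC-registered memory controllers; zero usually means ECC is
    # off or unsupported on this board/CPU combination.
    import glob

    mcs = glob.glob("/sys/devices/system/edac/mc/mc*")
    if mcs:
        print(f"ECC appears active on {len(mcs)} memory controller(s)")
    else:
        print("no EDAC memory controllers found - ECC likely inactive")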
UberHamburgler - Monday, July 8, 2019 - link
This is quite impressive, as AMD engineers hinted last year in leaks that they feared Zen 2 would be a server- and mobile-only design. TSMC's first-generation 7nm process is heavily optimized for efficiency and they didn't expect it to scale well past 3.5 GHz. Intel had better have its 10nm process in full swing by the end of the year, otherwise they're in for a beating when Rome and the mobile variant launch - it's no secret that chip manufacturers only care about desktop to the extent that they win good press from enthusiasts.
On a side note: what are the potential gains from kernel optimizations similar to what happened a few months after the original Ryzen? This seems to be a similar restructuring of the cache.
Irata - Monday, July 8, 2019 - link
Kudos to you guys for re-running the benches.
It's kinda sad that AMD's releases seem like a "beta fest". While the products themselves are pretty good, issues with BIOS or drivers often seem to be a letdown.
It must stink to put in all the work to bench a system just to have to re-do it again.
Still, seeing how the results are already pretty good, I am hopeful that they will improve further after updates / patches.
poohbear - Monday, July 8, 2019 - link
Why are the gaming benchmarks at only 720p and 1080p? Is this what most people game at these days? Most gaming benchmarks are 1080p, 1440p and 4K. Oh, the CPU doesn't have much effect above 1080p, you say? Well good, PLEASE SHOW THAT. People need to know this when making CPU decisions. If AMD trounces Intel at everything office related, and at 1440p & 4K there is no difference, then that will absolutely affect my buying decision. Why are you showing 720p when hardly anyone games at that rez?
tamalero - Monday, July 8, 2019 - link
Because you're testing the CPUs, not the video cards, you clown.
529th - Monday, July 8, 2019 - link
Anyone know how well one chiplet overclocks vs two chiplets? I'm thinking one chiplet would not be as limited by temps versus two chiplets.
acme64 - Monday, July 8, 2019 - link
Is there any word on a performance difference while on x470 vs x570?
haukionkannel - Tuesday, July 9, 2019 - link
There is no difference. Even a 350 motherboard works with the 12-core 3900. There was one test that did that.
CaedenV - Monday, July 8, 2019 - link
*Slow clap*
Great work AMD!
I have always been a snobbish Intel user. Back in the late '90s and early '00s it was because the audio software I used simply was not stable on AMD (heck, it was barely stable on Intel lol). Then after the Core2 came out Intel was weirdly the best AND the cheapest for a very long time. But now, AMD really has a lot going for itself, and I am truly impressed.
Hoping to rebuild my aging (but still pretty great) i7 2600 this fall... mostly because I need it as a home server, not really because I 'need' an upgrade. But I think I am going AMD this time around. I really can't believe how much they have improved in the last 3 years!
GreenReaper - Monday, July 8, 2019 - link
Guys... I get that you might not want to adjust your testing base. But MDS/Zombieload makes a significant difference when it comes to system calls, such as just about any file or network access:
https://www.phoronix.com/scan.php?page=article&...
The reason for this is that the CPU has to perform a crazy sequence of events when accessing privileged data when two threads on a core are involved, essentially yanking the other thread into kernel mode as well, performing the privileged access that the original thread wanted, then flushing all relevant buffers before returning the two threads, so that the other thread can't run a timing attack using data from them.
It's a hack, and the impact is potentially worse the more modern the Intel CPU is, because - aside from the Atom - they have had increasingly bigger output buffers, especially Skylake+.
The OS fixes were out in mid-May when Zombieload was announced, for both Windows and Linux, so I don't know where you're getting "the patches for that have not been released yet".
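On Linux it takes seconds to verify, via the kernel's own vulnerability reporting in sysfs (present since the May 2019 updates):

    # Print the kernel's assessment of the MDS/Zombieload mitigation state.
    from pathlib import Path

    p = Path("/sys/devices/system/cpu/vulnerabilities/mds")
    print(p.read_text().strip() if p.exists()
          else "kernel predates MDS reporting")
    # Typical output: "Mitigation: Clear CPU buffers; SMT vulnerable"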
Maybe you're thinking firmware for your motherboard to load new microcode? This is typically more of an issue for Windows; on Linux you'd just load the appropriate package. But even here, this doesn't make sense, because (for example) your Z370 i7 Gaming (used for the critical Intel 8/9th Gen CPUs) *does* have a newer BIOS:
https://www.asrock.com/MB/Intel/Fatal1ty%20Z370%20...
In fact, much newer. The 4.00 is from May 13, so presumably is relevant to MDS. You seem to be on 1.70, from March 26... 2018. There have been five updates since then. Why was it not updated?
zealvix - Tuesday, July 9, 2019 - link
Yea, saw several articles from other sites saying that both the microcode update from Intel and OS patches from Microsoft have been released.
CityBlue - Tuesday, July 9, 2019 - link
I quickly scanned the comments to see if the benchmarks had been performed with all relevant mitigations installed and was not surprised in the least to discover they hadn't, so frankly this entire article is pointless and I won't waste my time reading it. All there is left to say about this article is that whatever difference Anandtech determined between Intel and AMD, it would have been even more in favor of AMD had all Intel mitigations been enabled.
Anandtech, Ryan Smith etc., do yourselves a favor and re-test your Intel CPUs with *all* mitigations enabled, otherwise your Intel benchmarks are just a sham and you will start to lack credibility. Based on the comments for this article and others, your readership is already starting to lose faith in your integrity. Other sites such as phoronix.com are doing a great job detailing the full impact of the mitigations (including Zombieload, which you should have tested), so it's hard to take seriously your reasons for not testing with a level, real-world playing field (i.e. full mitigations). Or maybe you just didn't want to give out a Gold award? :)
TEAMSWITCHER - Tuesday, July 9, 2019 - link
Really getting tired of comments like this. They should just delete them.
CityBlue - Tuesday, July 9, 2019 - link
@TEAMSWITCHER For people that aren't Intel apologists, this stuff matters. Not just because we as consumers want to get an honest review of how the latest AMD hardware stacks up against Intel in a real-world situation with all mitigations applied, but also because this elephant in the room is a core credibility issue that Anandtech needs to deal with.
generalako - Tuesday, July 9, 2019 - link
The only one being an apologist here is you, CityBlue. In all your rage about Anandtech not testing with mitigations in place, you failed to ever take up the fact that Anandtech has also tested the Intel setup with lower RAM speeds than the AMD one. Which is, to use your own words, "hard to take seriously... for not testing with a level, real-world playing field". Changing RAM speed is a simple push of a button with XMP, and both easily support it (not to mention that x570 motherboards aren't something the overwhelming majority of people, for obvious reasons, will buy). Remember, this was a traditional complaint from many users back when Zen 1 came out, and it was tested by various vendors out there (like Gamersnexus) with lower RAM speeds on the Intel counterparts.
CityBlue - Tuesday, July 9, 2019 - link
@generalako as I said in a previous comment, this article and its benchmarking are so fundamentally flawed that I'm not willing to invest the time to read the article (I mean, seriously - what's the point?), so forgive me for not mentioning other errors/omissions that may have favoured AMD, but two wrongs do not make a right, especially not when the mitigation omission is so egregious.
Meteor2 - Monday, July 15, 2019 - link
CityBlue, you're spot-on. +1.
GreenReaper - Tuesday, July 9, 2019 - link
This is true for the HEDT X-series motherboard as well. 1.40 is from March 2018. There have been three updates since then, including two new instances of microcode, the last from 6 June 2019:
https://www.asrock.com/MB/Intel/X299%20OC%20Formul...
This does *not* apply in quite the same way for the GIGABYTE X170 ECC Extreme used for the 7th- and 6th-gen Intel CPUs... but only because it hasn't been updated *by Gigabyte* since the very first patches for Meltdown and Spectre at the start of 2018:
https://www.gigabyte.com/uk/Motherboard/GA-X170-EX...
MLSCrow - Monday, July 8, 2019 - link
Some of those benchmark results with the i9-7920X are very fishy. In some cases it is outperformed by Intel CPUs with more advanced cores that have 2/3 the core count, yet it somehow manages to score 550% better in others? Please explain.
madseven7 - Monday, July 8, 2019 - link
Seems like Anandtech is becoming PCPerspective.
GreenReaper - Tuesday, July 9, 2019 - link
Well, it *is* an X-series. Perhaps it has a bit more cache? Or AVX-512 support with more modules? But I also see it's using a BIOS from March 2018 - not the latest from June 6 with microcode allowing MDS mitigations to be used by the OS (see my comment on the previous page).
mattkiss - Monday, July 8, 2019 - link
There are multiple errors in the "X570 Motherboards: PCIe 4.0 For Everybody" section. Check the second paragraph and the "AMD X570, X470 and X370 Chipset Comparison" table that follows it.
Ryan Smith - Tuesday, July 9, 2019 - link
Could you please be more specific? I'm thumbing through the specs right now, and I'm not seeing an issue.
Maxiking - Tuesday, July 9, 2019 - link
So any plans to cover the huge fraud and misleading AMD marketing about frequency and the boost frequency? The majority of 3900x chips have such poor silicon quality they can't reach 4.6 GHz even on a single core.
Korguz - Tuesday, July 9, 2019 - link
there is NO fraud about this.. better yet... where is your PROOF about this ? post some links to sites that are showing this.. if not drop it already.. you are just trying to spread lies, and BS...
Maxiking - Tuesday, July 9, 2019 - link
My proof is this and any review on the internet; the advertised 4.6GHz is not being reached on the majority of chips, and if it is, only for 100 - 200ms and very sporadically. This is called FRAUD. I guess Intel should start selling their cpus with a 5.3GHz boost, because a few of them would be able to reach it for 100ms after pumping a lot of voltage into them like AMD does.
PACougar - Tuesday, July 9, 2019 - link
Lol, a boost frequency is exactly that. It doesn't mean the chips will sustain the boost for any guaranteed period of time. Do you really think it's just a coincidence that you're the only one that's "outraged" by the expected operation of these chips? Guess what, when Intel ships a chip with a stated frequency, it's also a boost with no guarantee of duration. Stock 9900Ks don't run continuously at 5GHz. Lol. You look like a complete fool for talking about fraud where there is none.
Korguz - Tuesday, July 9, 2019 - link
yea ok sure.. what ever maxiking.. you have NO idea how boost works, OR what the difference is between boost and max all-core turbo. the fact you wont post links to other sites that show this, also proves you are just trying to spread lies and BS. the only fraud i am seeing.. is you...
Maxiking - Tuesday, July 9, 2019 - link
We are on the site which confirms my words in the review, in the forums, and in their tweets. Yet you are blind to the facts. Check derbauer, gaming nexus; not gonna spoonfeed you.
Korguz - Tuesday, July 9, 2019 - link
let me ask you this... maxiking.. do you know the difference between boost clock and all-core turbo ? cause it sure seems you do NOT know the difference.. and others have pointed out to you that you are also wrong.. i wanted links.. to be sure i am looking at the same sites as you.. in the end.. you are just trolling.. and talking bs... drop this already..
Qasar - Wednesday, July 10, 2019 - link
"Check derbauer, gaming nexus"... never heard of these sites.. i can see why Korguz is asking you to post links directly to where you are getting this, and i agree, it's BS...
Maxiking - Tuesday, July 9, 2019 - link
There is a debate going on about this on every internet forum, so no, I am not the only one concerned, and the AMD subreddit has a dedicated thread about it as well. If Intel ships a chip with any stated boost frequency, the boosts are guaranteed on a per-core-usage basis. A stock 9900K runs at 5.0GHz; the more cores are being used, the lower the frequency: a single core always runs @ 5.0GHz, all cores @ 4.7GHz. I already said it earlier. Sorry my friend, wrong example, the only fool here is you.
The fact is that Ryzen 3rd gen is a worse overclocker than the 2nd one. Or, you know, I will give AMD the benefit of the doubt. Before the reviews were up, the whole internet had been going crazy thinking 4.6 on all cores was possible, because "look at those boost frequencies, man". And that was AMD's intention: to use those sporadic 100-200 ms spikes to spread the idea that the final product would be able to reach them on all cores, to mislead. And I must say it WORKED brilliantly. ADOREDTV is now the biggest clown on the internet; easy 5GHz+, he said. I wonder if he makes ConLake-style videos about this.
So yeah, that 4.6GHz boost is fraud. The majority can't reach it on a single core, and the rest are capable of doing so only in infrequent 100-200 ms spikes.
If Intel did this, oh god, what a shitstorm would be here, just like with their TDP, Meltdown, Spectre. Every review would be full of this. Don't blame me, I am just pointing at facts and making fun of petty suddenly blind amd fans. Don't shoot the messenger.
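For reference, the per-core-usage behaviour being argued over here is essentially a lookup table of turbo bins. A minimal Python sketch, using the commonly reported i9-9900K bins (these numbers are not from this article, so treat them as an assumption):

    # Commonly reported i9-9900K turbo bins, indexed by active core count.
    # None of these are guaranteed sustained speeds; they are upper caps.
    TURBO_BINS_9900K = {1: 5.0, 2: 5.0, 3: 4.8, 4: 4.8,
                        5: 4.7, 6: 4.7, 7: 4.7, 8: 4.7}

    def max_turbo_ghz(active_cores):
        # Highest boost the chip may request for a given load width.
        return TURBO_BINS_9900K[min(max(active_cores, 1), 8)]

    print(max_turbo_ghz(1), max_turbo_ghz(8))  # 5.0 single-core, 4.7 all-core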
Korguz - Tuesday, July 9, 2019 - link
well.. 1st.. you didnt answer my question if you know the difference between boost clock and all-core turbo.. 2nd.. you are the only one on here.. that is crying about this.. define every internet forum... also.. you do not seem to be replying to anyone else in this thread about this..
Targon - Monday, July 22, 2019 - link
I've seen 4.4GHz without doing any BIOS tweaking on a 3900X on an Asus ROG Crosshair VI Hero. The BIOS is still AGESA 1.0.0.2, so I haven't bothered to even try pushing to 4.6GHz yet. I wouldn't say that a boost to 4.6GHz with a good BIOS version isn't possible based on what I have seen with X370, and I expect X570 has a good chance to be better about how it handles the new chips. As far as Intel and boost speeds go, that is based on cooling as well. Try a 95W-TDP-rated cooler and you will NOT be hitting anywhere close to the 5.0GHz boost; you would need something significant. That 95W TDP is a fraud, because it only applies to base speeds, while the AMD TDP figures are in line with what most people will hit.
kd_ - Tuesday, July 9, 2019 - link
Take it easy, Bob
Irata - Tuesday, July 9, 2019 - link
Did you bother to read Andrei F's twitter post regarding the BIOS update? It includes a nice graph where you can see the 3900X's cores boosting to what looks like 4.6 GHz.
Xyler94 - Wednesday, July 10, 2019 - link
Oh god, you're still on about that? Intel doesn't guarantee boost clocks. It's literally on their website. The only guarantee is base clocks. Boost clocks depend on cooling and power delivery.
atl - Wednesday, July 10, 2019 - link
While 3900X vs i9-7920X and 3700X vs i9-9900K is a no-brainer, i would really wanna see how an (overclocked) 3600 performs vs this bunch of CPUs. This will help with making some interesting decisions for optimizing budget.
Mugur - Wednesday, July 10, 2019 - link
Check Hardware Unboxed / Gamers Nexus on Youtube or the Techspot site...
beginning - Wednesday, July 10, 2019 - link
Are these benchmarks of Intel CPUs after applying all the patches released so far for addressing vulnerabilities?
GreenReaper - Wednesday, July 10, 2019 - link
The BIOSes in the Intel motherboards tested are from 2018; most appear to only have microcode to handle Meltdown/Spectre (despite the availability of BIOS versions that would work). So... no.
beginning - Thursday, July 11, 2019 - link
Thank you for your response
Meteor2 - Monday, July 15, 2019 - link
No; they didn't retest the Intel chips on Windows 10 1903, which includes the OS-side patches for the MDS flaws. The motherboard firmware patches may never come. This really does invalidate the Intel numbers, but it's not critical: on an up-to-date system they'll be slower, putting Ryzen 3000 even further ahead.
529th - Wednesday, July 10, 2019 - link
Will there be updated OC results with the new BIOS?
beginning - Thursday, July 11, 2019 - link
I noticed that at the E3 2019 tech day, AMD recommended DDR4-3600 CL16 RAM. I see that 3200 MHz RAM has been used in the AMD testbench. I read the description about avoiding overclocking, but DDR4-3600 RAM comes with a factory clock of 3600 MHz, right? I know I am missing something. What am I missing?
sknaumov - Thursday, July 11, 2019 - link
Do you plan to run some tests of these CPUs on older, cheaper and cooler motherboards? It would be very interesting to see results on the B450 chipset, and whether it is possible to use DDR4-3600 with tight timings on these older boards. Or at least provide more info about which has more priority for memory speed and timings on the AMD platform - the CPU or the chipset.
viperswhip - Thursday, July 11, 2019 - link
I am going to wait a bit to build a PC; however, I am super excited by this launch and disappointed by the video card launch. I expect to go with an AMD chip since Intel has no answer for this, and we shall see on the video cards, but if I were building today I'd probably get an RTX 2070 Super.
PProchnow - Friday, July 12, 2019 - link
Here is jus' a good ol' boy trying out. No OC off the stock multi, but 3333MHz RAM. #1:
https://browser.geekbench.com/v4/cpu/13863634
Rather a new rig: an MSI Gaming Plus on X470, updated to the A.A BIOS.
OK, link #2 is here, and I stroked the DDR4 up to 3333MHz. I also stroked the fan to stay sub-70C. Wild OCs will take water at least "in the home", versus an LN2 lab.
https://browser.geekbench.com/v4/cpu/13865361
BTW, where is the Bragging Thread? My mobo is the MSI X470 Gaming Plus; BIOS A.A makes Ryzen 9 go, BTW.
I have yet to up the MULTI, in case you want to know. I wonder what good OCers will get with the right stuff.
Single-Core Performance
Memory Score 6431
Floating Point Score 5409
Integer Score 5190
Crypto Score 6888
Single-Core Score 5589
You understand that RAM set at 1672 is 1/2 the commonly referred-to speed; 3344MHz is the common nomenclature.
Single-Core Score: 5589
Multi-Core Score: 47755
Geekbench 4.3.4 Tryout for Windows x86 (64-bit)
Result Information
Upload Date July 12 2019 08:16 PM
Views 2
System Information
Operating System Microsoft Windows 10 Pro (64-bit)
Model Micro-Star International Co., Ltd. MS-7B79
Motherboard Micro-Star International Co., Ltd. X470 GAMING PLUS (MS-7B79)
Memory 32768 MB DDR4 SDRAM 1672MHz
Northbridge AMD Ryzen SOC 00
Southbridge AMD X470 51
BIOS American Megatrends Inc. A.A0
Processor Information
Name AMD Ryzen 9 3900X
Topology 1 Processor, 12 Cores, 24 Threads
Identifier AuthenticAMD Family 23 Model 113 Stepping 0
Base Frequency 3.80 GHz
Maximum Frequency 4.53 GHz
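Since the 1672-vs-3344 point above trips people up, here is the double-data-rate arithmetic as a tiny Python sketch (illustrative only; monitoring tools report the physical memory clock, while DDR marketing counts transfers per second):

    def ddr_effective_rate(mem_clock_mhz):
        # DDR transfers data on both clock edges, so the advertised
        # figure is twice the physical memory clock that tools report.
        return mem_clock_mhz * 2

    print(ddr_effective_rate(1672))  # 3344, the commonly quoted speed
    print(ddr_effective_rate(1600))  # 3200, i.e. plain DDR4-3200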
Maxiking - Tuesday, July 23, 2019 - link
Why would anyone brag about something if:
You can't reach 5.0GHz+
You can't even reach the boost frequency on a single core
You can't consistently beat the competitor's older 14nm cpu architecture, which has been on the market since 2016...
You can't beat RAM OCing records either, because over 3733MHz the IF actually gets downclocked, and due to that "faster" RAM performs worse unless you OC to 7400MHz, which is not possible even with liquid nitrogen.
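For context on that 3733MHz figure: Zen 2 runs the Infinity Fabric 1:1 with the memory clock up to DDR4-3733 and drops to a 2:1 divider beyond it. A rough Python sketch (the 1:1 cutoff is AMD's published guidance; the rest is illustrative):

    def zen2_fclk_mhz(ddr_rate_mts):
        mclk = ddr_rate_mts / 2            # physical memory clock
        if ddr_rate_mts <= 3733:
            return mclk                    # 1:1 FCLK:MCLK, lowest latency
        return mclk / 2                    # 2:1 mode: latency penalty

    for rate in (3200, 3600, 3733, 3800, 4400):
        print(f"DDR4-{rate} -> FCLK {zen2_fclk_mhz(rate):.0f} MHz")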
PProchnow - Friday, July 12, 2019 - link
These are my scores with my Ryzen 9 3900X:
https://browser.geekbench.com/v4/cpu/13863634
https://browser.geekbench.com/v4/cpu/13865361
Now you can cross ref with others.
Meteor2 - Monday, July 15, 2019 - link
Nice!
willis936 - Wednesday, July 17, 2019 - link
The editor's choice awards are a bit strange to me. Zen 1 didn't receive one, even though it was the largest CPU performance increase from a company this century. The i7-4950HQ received an editor's choice silver award even though it had little importance to the industry. And the 3700X, which offers comparable SP performance to competing Intel products at a huge discount and a smaller power budget, gets the same editor's choice level as the i7-4950HQ?
willis936 - Wednesday, July 17, 2019 - link
I know it was a different editor at the time, but the selective excitement is a bit of a bummer. eDRAM was exciting to see at the time, and then nothing ever came of it. The enthusiasm for chiplets under the new editor comes through much less. That too is fine. However, if the rating system is what it is, then I don't think it's much of a stretch to argue that chiplets are much more disruptive than eDRAM and are already making much larger waves.
Maxiking - Monday, July 22, 2019 - link
The AMD fraud is finally getting the attention it deserves:
https://www.youtube.com/watch?v=x03FyPQ3a3E
check at 05m25s
Korguz - Monday, July 22, 2019 - link
Maxiking, and HOW LONG till intel gets the SAME treatment?? saying a processor uses x watts, but in reality it uses 50 to 100 watts MORE, isn't that FRAUD ??? hell, confine intels cpus to the watts they state, and their performance goes DOWN THE TOILET !!! again.. you KEEP saying AMD is a fraud, but you STILL refuse to admit that intel is a fraud as well.. does this guy even acknowledge the issue with intel and the amount of power they "say" their cpus use, and how much power they REALLY use ??
Korguz - Monday, July 22, 2019 - link
further.. intel doesnt do any marketing of this, cause they DON'T want the general average user to know the cpu they bought uses MORE power than has been stated. THAT also is false advertising. come on maxiking, go after intel as well, for the same things you are accusing amd of...
Maxiking - Tuesday, July 23, 2019 - link
You are uneducated. TDP doesn't mean power consumption but the amount of heat dissipated; it informs you how much heat the cooler must be able to dissipate in order to keep the cpu cool enough to run. Get it? The 1700X's TDP was 95W, yet there were tasks where it managed to consume 120 or even 140W on stock settings. Like, do you even watch reviews? It was the same with the 2700X.
but mimimimimimi AMD good mimimimimi Intel bad
Korguz - Tuesday, July 23, 2019 - link
sorry dude.. but YOU are uneducated, amd stays A LOT closer to its stated TDP than intel does, AT even did a review on it. power dissipated also relates to power used. but it also doesnt help that amd and intel both use the term TDP differently. either way.. intel uses more power than amd does.
https://www.anandtech.com/show/13544/why-intel-pro...
Maxiking - Tuesday, July 23, 2019 - link
Again, TDP is not power consumption; it refers to a cooler. You are uneducated and fabricating because you are an amd fanboy. No one really cares about which is more accurate, because it does not say anything about the power consumption of the chip.
So keep living in your nice little bubble. It is not my fault that you and other sites have been thinking that TDP -> power consumption. I will share something new with you again.. Ever heard about that Frankenstein novel? Frankenstein is not the monster but the doctor; it's his surname.. Shocking, I KNOW!!!
mimimimimimi AMD good mimimimimi Intel bad
Korguz - Tuesday, July 23, 2019 - link
again.. TDP, or Thermal Design Power, does relate to power consumption and how much cooling is needed to keep something cool. You are uneducated and fabricating because you are an intel fanboy. i also notice you like to throw personal insults around when someone disagrees with you, or to try to make your opinion valid. so you keep living in your nice little bubble as well, not my fault you dont understand that TDP relates to how much power something uses, as the more power a product uses, the more heat it creates that then needs to be removed.
mimimimimimi intel good mimimimimi amd bad
Maxiking - Thursday, July 25, 2019 - link
What you just did is just sad; it shows you are a little kid. TDP is not power consumption. If TDP = 100% of power consumption, it would mean that 100% of the electrical energy is converted into thermal energy, which is impossible; it would mean a perpetuum mobile, you twat. Actually, the cpu would be net positive: it would convert 100% of electrical energy into thermal whilst managing to perform another task at no energy cost.
Breaking the laws of physics just because of your AMD fanboyism
Korguz - Thursday, July 25, 2019 - link
i said it RELATES to power consumption, what, you cant read ?? cant see past your intel bias ?? the more power something uses, the more heat it generates, and therefore the more that needs to be dissipated. and i also never said anything about 100% power consumption, pulling words and making things up to try to make yourself sound right ? and you are calling me names on top of that, who's the kid here ???
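For anyone following this back-and-forth: on the Intel side, the gap between rated TDP and observed draw comes from the PL1/PL2 turbo power limits. A minimal Python sketch with illustrative numbers (real boards frequently raise or ignore these limits, and the real controller uses a weighted power average rather than a hard timer):

    def allowed_package_watts(seconds_into_load, pl1=95.0, pl2=150.0, tau=28.0):
        # Simplified model: the chip may draw up to PL2 for roughly tau
        # seconds, then falls back to PL1, the figure marketed as "TDP".
        return pl2 if seconds_into_load < tau else pl1

    for t in (0, 10, 27, 28, 60):
        print(f"t={t:>2}s -> up to {allowed_package_watts(t):.0f} W")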
Qasar - Tuesday, July 23, 2019 - link
hmmm, he doesn't really say amd is being fraudulent, he just doesnt like the idea the chips might not boost, or run at what AMD says, but he didnt mention fraud... and Korguz has a point.. WHY arent you commenting about the power intels cpus use, vs what intel says they use ?
Maxiking - Tuesday, July 23, 2019 - link
LOOOOOOL, so we have a guy confirming AMD is committing fraud by misleading people about the frequency, and instead of acknowledging the fraud, we gonna talk about semantics. Yeah, if you get sentenced for a sexual assault, you should then sue anyone who has accused you of raping. Just wow.
Brilliant logic, sir.
Maxiking - Tuesday, July 23, 2019 - link
*fraud
Qasar - Tuesday, July 23, 2019 - link
still valid there buddy.. like has been said, you are the only one throwing the word fraud around, and saying that amd should be sued over this. so whatever
Maxiking - Tuesday, July 23, 2019 - link
And again... let me copy paste.
"You are uneducated, TDP doesn't mean power consumption or the highest peak but the amount of heat dissipated, it informs you how much of heat the cooler must be able to dissipate in order to keep the cpu cool enough to run.
Get it? 1700x TDP was 95W yet there were tasks it managed to consume 120 or even 140w on stock settings. Like do you even watch reviews? It was the same with 2700x.
but mimimimimimi AMD good mimimimimi Intel bad"
Korguz - Tuesday, July 23, 2019 - link
and yet, you still refuse to admit that intel has its own issues with fraud and misleading its own customers. does he actually say it's fraud ?? not directly, seems only YOU keep saying that, and only YOU say amd should be sued for it. again.. i would love to see YOU file a suit against amd for it, considering you are so hung up about it, but you wont, cause you are all talk, no action, and probably know.. you wouldnt get very far with that law suit
Maxiking - Tuesday, July 23, 2019 - link
I said it a few times... I don't tend to buy amd products, so no, I am not gonna sue anybody. And as pointed out in the video (in his German one), he works for a retailer selling prebuilt pcs.. People keep returning pcs with AMD cpus because they do not boost to the promised frequency. You know, there are things like laws: if you write 4.6ghz on the box, it must reach it.
You are so knowledgeable, sharp-minded and analytical when it comes to the meaning of words and what people want to say, you should sue Intel on your own, should be easy.
Korguz - Tuesday, July 23, 2019 - link
why not ?? going by how dead set you are about this.. seems like it would be an easy win for you.. ooooohhhh, in the german one.. i understand now.. too bad i dont speak german, so i cant confirm this... and if someone writes on the box that something uses a certain amount of power.. then it should use it.. not 50 to 100 watts more.. i have a few friends that buy intels cpus.. they see it uses 95 watts of power.. so they get a HSF that can dissipate that much power.. then wonder why their cpu throttles and runs slow when under load... then i point them to the link i just posted, and they are not happy.. and now need to go buy yet another HSF to handle the extra power. You are so knowledgeable, sharp-minded and analytical when it comes to the meaning of words and what people want to say, you should sue amd on your own, should be easy. again, too bad you wont.. cause you are all talk. have a good day sir..
Maxiking - Thursday, July 25, 2019 - link
Again, you have once again shown your AMD fanboyism. The box says: TDP 95W. I already explained what TDP means. AMD's TDP isn't accurate either.
AMD has 4.6ghz on the box whilst a big number of cpus do not REACH IT AT ALL. There is no "*" next to the 4.6ghz claim, and they do not say that their cpu may not reach the frequency at all. In fact, there is a video from AMD on youtube promising even higher frequencies, lol. Up to 4.75 ghz.
So yeah, stop being desperate and forcing Intel into the debate.
Your childish attempts are futile, because this is not about AMD or Intel. It is about us consumers. What will be next? 6 GHz on the box?
Maxiking - Thursday, July 25, 2019 - link
AMD has 4.6ghz on the box whilst a big number of cpus do not REACH IT AT ALL under any load or conditions. Typing on a phone is just cancer.
Korguz - Thursday, July 25, 2019 - link
and again, like in another thread, you showed how much you hate amd and are biased against them, and you call me an amd fanboy; you are just as much an intel fanboy. FYI, IF you actually READ the link i posted, you would see that intels 95 watts is pretty much the MINIMUM their chips use; in reality its more like 50 to 100 ABOVE that. and also.. amd is A LOT closer than intel is to the TDP they state, but again.. to be fair, amd AND intel use and come to different values for TDP, but you cant see past your hatred for amd to see this.. you are the one that has to resort to name calling, so WHO is being childish ?? what will be next, intel claiming their cpus use 100 watts, but in reality they use 300 ?
Atom2 - Monday, July 29, 2019 - link
The ICC compiler is 3x faster than LLVM, and AVX512 is 2x faster than AVX2. And both were left out of the comparison? Is the comparison designed purely for LLVM compiler users? Used by whom?
Rudde - Saturday, August 10, 2019 - link
ICC is proprietary afaik, and Anandtech prefers open compilers. AVX512 should be found in 3DPM, which shows utter demolition by the only processor that supports it (7920X).
MasterE - Wednesday, August 7, 2019 - link
I considered going with the Ryzen 9 3900X chip and an X570 motherboard for a new rendering system, but since these chips aren't available for less than $820+ anywhere, I guess I'll be back to either Threadripper or the Intel 9000+ series. There is simply no way I'm paying that kind of price for a chip with a Manufacturer's Suggested Retail Price of $499.
gglaw - Friday, August 23, 2019 - link
@Andrei - I was just digging through reviews again before biting the bullet on a 3900X, and one of the big questions that is not agreed upon in the tech community is gaming performance for PBO vs all-core overclock, yet you only ran 2 benches on the overclocked settings. How can a review be complete with only 2 benches run, neither related to gaming? In a PURELY single-threaded scenario, PBO gives a tiny 2.X percent increase in single-threaded Cinebench. This indicates to me that it is not sustaining the max 4.6 on a single core, or it would have scaled better, so it may not really be comparing 4.6 vs 4.3 even for single-threaded performance. Almost all recent game engines can utilize at least 4 threads, so I feel your exact same test run through the gaming suite would have shown a consistent winner with the 4.3 all-core OC vs PBO. And in heavily threaded scenarios the gap would keep growing larger; but specifically in today's GAMES, especially if you consider very few of us have 0 background activity, my guess is the all-core OC would win hands-down. We could have better evidence of this if you could run a complete benchmarking suite (unless I'm blind and missed it, in which case my apologies :)
I've been messing around with a 3700X, and even with a 14cm Noctua cooling it, it does not sustain max allowed boost on even a single core with PBO, which is another thing I wish you touched on more. During your testing, do you monitor the boost speeds and what percent of the time it can stay at max boost over XX minutes?
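A sketch of how one could measure the boost residency gglaw is asking about, assuming a Linux box with the psutil package (the 4600MHz target and the sampling interval are placeholders, not anything AnandTech uses):

    import time
    import psutil

    TARGET_MHZ = 4600                # 3900X advertised single-core boost
    samples = hits = 0
    for _ in range(600):             # ~60 s at 100 ms intervals
        freqs = psutil.cpu_freq(percpu=True)
        peak = max(f.current for f in freqs)   # fastest core right now
        hits += peak >= TARGET_MHZ * 0.99      # allow ~1% tolerance
        samples += 1
        time.sleep(0.1)
    print(f"at/near {TARGET_MHZ} MHz in {100 * hits / samples:.1f}% of samples")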
Maxiking - Monday, August 26, 2019 - link
Veni, vidi, vici. Yeah, I was right.
I would like to thank my family for all the support I have received whilst fighting amd fanboys.
It was difficult; sometimes I seriously thought about giving up, but the truth cannot be stopped!
The AMD fraud has been confirmed.
https://www.reddit.com/r/pcgaming/comments/cusn2t/...
Ninjawithagun - Thursday, October 10, 2019 - link
Now all you have to do is have all these benchmarks run again after applying the 1.0.0.3 ABBA BIOS update ;-)
quadibloc - Tuesday, November 12, 2019 - link
I am confused by the diagram of the current used by individual cores as the number of threads is increased. Since SMT doesn't double the performance of a core, on the 3900X, for example, shouldn't the number of cores in use increase to all 12 for the first 12 threads, one core for each thread, with all cores then remaining in use as the number of threads continues to increase to 24? Or is it just that this chart represents power consumption under a particular setting that minimizes the number of cores in use, and other settings that maximize performance are also possible?
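quadibloc's question largely comes down to scheduler placement policy. A toy Python model of the two possibilities (the function names are made up for illustration):

    def cores_in_use_spread(threads, cores=12):
        # One thread per physical core first; SMT siblings only after that.
        return min(threads, cores)

    def cores_in_use_packed(threads, cores=12):
        # Fill both SMT threads of a core before waking the next core.
        return min((threads + 1) // 2, cores)

    for t in (1, 6, 12, 18, 24):
        print(f"{t:>2} threads -> spread: {cores_in_use_spread(t):>2} cores,"
              f" packed: {cores_in_use_packed(t):>2} cores")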
SjLeonardo - Saturday, December 14, 2019 - link
Core and uncore get supplied by different VRMs, right?
Parkab0y - Sunday, October 4, 2020 - link
I really want to see something like this about Zen 3 / the 5000 series.