Measuring Real-World Power Consumption

The Equal Workload (EWL) version of vApus FOS is very similar to our previous vApus Mark II "Real-world Power" test. To create a real-world "equal workload" scenario, we throttle the number of users in each VM to a point where you typically get somewhere between 20% and 80% CPU load on a modern dual CPU server. The number of requests is the same for each system, hence "equal workload".

The CPU load is typically around 30-50%, with peaks up to 65%. At the end of the test, load drops to a low 10%, which is ideal for the machine to boost to higher CPU clocks (Turbo) and race to idle. We use the "Balanced" power policy and enable C-states, as the current ESXi settings otherwise make poor use of the C6 capabilities of the latest Opterons and Xeons.

[Chart: vApus FOS EWL power consumption]

We cannot say "mission accomplished", but AMD has made significant progress: 12% to 20% better performance while cutting power consumption by 6% to 8% is pretty good. The 95W TDP Xeons are still the performance-per-watt champs, though. Still, the Opteron looks like a decent alternative for some: its power consumption is about 12-13% higher (6376 vs. E5-2660), but its performance per dollar is slightly better.
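As a back-of-the-envelope illustration, the two ratios work out as in the minimal Python sketch below. Every score, wattage, and system price in it is a hypothetical placeholder chosen only to mirror the relationships described above (Opteron drawing roughly 12-13% more power, with slightly better performance per dollar); none of them are measured results.

    # A minimal sketch of the perf-per-watt and perf-per-dollar comparison.
    # All inputs are hypothetical placeholders, not measured numbers.

    def ratios(score, watts, price):
        """Return (performance per watt, performance per dollar)."""
        return score / watts, score / price

    # Hypothetical full-system scores, wall power, and prices:
    opteron = ratios(score=95.0, watts=400.0, price=4200.0)
    xeon = ratios(score=100.0, watts=355.0, price=4700.0)

    print("Opteron 6376 system: %.4f perf/W, %.5f perf/$" % opteron)
    print("Xeon E5-2660 system: %.4f perf/W, %.5f perf/$" % xeon)

With these placeholder inputs the Xeon system comes out ahead on performance per watt while the Opteron system comes out slightly ahead on performance per dollar, matching the relationship in the text.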

Comments

  • coder543 - Wednesday, February 20, 2013 - link

    You realize that we have no trouble recognizing that you've posted about fifty comments that are essentially incompetent racism against AMD, right?

    AMD's processors aren't perfect, but neither are Intel's. Also, AMD, much to your dismay, never announced they were planning to get out of the x86 server market. They'll be joining the ARM server market, but not exclusively. I'm honestly just ready for x86 as a whole to be gone, completely and utterly. It's a horrible CPU architecture, but so much money has been poured into it that it has good performance for now.
  • Duwelon - Thursday, February 21, 2013 - link

    x86 is fine, just fine.
  • coder543 - Wednesday, February 20, 2013 - link

    totes, ain't nobody got time for AMD. they is teh failzor.

    (yeah, that's what I heard when I read your highly misinformed argument.)
  • quiksilvr - Wednesday, February 20, 2013 - link

    Obvious trolling aside, looking at the numbers, it's pretty grim. Keep in mind that these are SERVER CPUs. Not only is Intel doing the job faster, it's using less energy, and paying a mere $100-$300 more per CPU to cut an average of 20 watts is a no-brainer. These are expected to run 24 hours a day, 7 days a week, without stopping. That power adds up, and if AMD has any chance of making a dent in high-end enterprise datacenters, they need to push even harder.
  • Beenthere - Wednesday, February 20, 2013 - link

    You must be kidding. TCO is what enterprise looks at, and $100-$300 more per CPU on top of the increased cost of Intel-based hardware is precisely why AMD is recovering server market share.

    If you do the math, you'll find that most servers get upgraded long before the difference in power consumption between an Intel and an AMD CPU would pay for itself. The rated wattage per CPU is not the actual wattage used under normal operation, and AMD's power-saving options in their FX-based CPUs are as good as or better than Intel's in IB. The bottom line is that those who write the checks are buying AMD again, and that's what really counts, in spite of the trolling.

    Rory Read has actually done a decent job so far, even though the turnaround isn't over and it has been painful, especially seeing some talented and loyal AMD engineers and execs part ways with the company. This happens in most large company reorganizations; it's unfortunate but unavoidable. Those remaining at AMD seem up for the challenge, and some of the fruits of their labor are starting to show with the Jaguar cores. When the Steamroller cores debut later this year, AMD will take another step forward in servers and desktops.
  • Cotita - Wednesday, February 20, 2013 - link

    Most servers have a long life. You'll probably upgrade memory and storage, but the CPU is rarely upgraded.
  • Guspaz - Wednesday, February 20, 2013 - link

    Let's assume $0.10 per kilowatt-hour. At that rate, a $100 price difference buys 1000 kWh, and a 20W delta takes 50,000 hours to consume that much. The price difference would therefore pay for itself in about six years of 24/7 operation (see the sketch after the comments).

    So yes, the power savings aren't really enough to justify the cost increase. The higher IPC on the Intel chips, however, might.
  • bsd228 - Wednesday, February 20, 2013 - link

    You're only getting part of the equation here. That extra 20W of power consumed mostly turns into heat, which then has to be cooled (requiring more power and more AC infrastructure). A rack can hold over twenty 2U servers with two processors each, which means nearly an extra kilowatt per rack, plus the corresponding extra heat (the sketch after the comments folds this in).

    Also, power costs can vary considerably. I was at a company paying 16-17 cents in Oakland, CA, and 11 cents in Sacramento, but only 2 cents in Central Washington (hydropower).
  • JonnyDough - Wednesday, February 20, 2013 - link

    +as many as I could give. Best post!
  • Tams80 - Wednesday, February 20, 2013 - link

    I wouldn't even ask the NYSE for the time of day.
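To make the back-of-the-envelope math in the thread concrete, here is a minimal sketch of the payback calculation from Guspaz's comment, with bsd228's cooling overhead folded in as a PUE factor. The electricity rates, the PUE value, and the rack layout are assumptions for illustration, not measured data.

    # A sketch of the payback arithmetic from the comments above.
    # Rates, PUE, and rack layout are assumptions, not measured data.

    HOURS_PER_YEAR = 24 * 365

    def payback_years(price_delta_usd, watts_saved, usd_per_kwh, pue=1.0):
        """Years of 24/7 operation until energy savings cover the price delta.

        pue > 1.0 folds in cooling/distribution overhead: every watt saved
        at the CPU also saves (pue - 1) watts elsewhere in the facility.
        """
        kw_saved = watts_saved / 1000.0 * pue
        usd_saved_per_year = kw_saved * usd_per_kwh * HOURS_PER_YEAR
        return price_delta_usd / usd_saved_per_year

    # Guspaz's numbers: $100 premium, 20 W, $0.10/kWh -> ~5.7 years.
    print(payback_years(100, 20, 0.10))

    # bsd228's point: cooling overhead (say, an assumed PUE of 1.8) and the
    # local power rate shift the picture considerably.
    print(payback_years(100, 20, 0.17, pue=1.8))  # ~1.9 years (Oakland-ish)
    print(payback_years(100, 20, 0.02, pue=1.8))  # ~16 years (hydro-cheap)

    # Rack-level delta: 20 servers x 2 CPUs x 20 W = 800 W before cooling.
    print(20 * 2 * 20, "W extra per rack at the plug")

As the numbers suggest, whether the 20W delta matters depends heavily on the facility: at high power rates with cooling overhead included, the payback window shrinks to under two years, while cheap hydropower stretches it well past a typical server's service life.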
