Conclusion

The first impression that the Xeon 7500 series made on the world was seriously blurred. Part of the reason is that the testing platform had a firmware bug that decreased memory bandwidth by 20% or more. Another reason was the odd benchmarking choices of some reviewers. LightWave, Folding@home, and Cinebench were somehow popular measuring sticks, portraying the Xeon X7560 as the more expensive and at the same time slower brother of the Xeon X5670. That kind of software runs mostly on sub-$4000 workstations and cheap 1U server farms, and we seriously doubt that anyone in their right mind would spend $30,000 on a server to run those kinds of workloads.

Our own benchmarking was not complete either, as our virtualization benchmark fell short of giving 32, let alone 64, threads enough work. Still, the impressive SAP S&D numbers, from one of the most reliable and most relevant industry standard benchmarks out there, made it clear to us that we should give the Xeon X7560 another chance to prove itself.

Our new virtualization benchmark, vApus Mark II, shows that we should give credit where it is due: servers based on the X7560 are really impressive when consolidating services using virtualization. A quad Xeon X7560 can offer 2.3 times the performance of the best dual socket systems today! You might even call the performance numbers historic: for the first time, Intel’s multi-socket servers run circles around its dual socket servers. Remember how the quad Xeon 7200 hardly outperformed the dual Xeon 5300 at the end of 2006, and how the quad Xeon 7400 was humiliated by the dual Xeon X5500 in 2009? Even further back in history, the Xeon MP never outperformed the dual socket offerings by a large margin; memory capacity and RAS features were almost always the main selling points. For the first time, scalability is more than a hollow phrase: a Xeon X7560 server can replace two or more smaller servers in terms of both memory capacity and processing power.

The end result is that these servers can be attractive for people who are not the traditional high-end server buyers. Using a few quad Xeon X7560 servers instead of a lot of dual socket servers to consolidate your software services may turn out to be a very healthy strategy. Based on our current data, two quad Xeon X7560 servers ($65k-$70k) are worth about five Xeon 5600 servers ($50k-$65k); a back-of-the-envelope sketch follows the two questions below. The acquisition costs are slightly higher, but you need fewer physical servers, and that lowers the management costs somewhat. Two questions remain:

1) How bad or good is the power/performance ratio?

2) If RAS is not your top priority, does a quad Opteron 6174 make more sense?
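To make the comparison concrete, here is a minimal Python sketch of the consolidation math above. The price midpoints are our assumptions taken from the quoted ranges, and the 2.3x factor is our vApus Mark II result for a quad X7560 versus the best dual socket system:

```python
# Rough consolidation economics. Prices are assumed midpoints of the
# quoted ranges; 2.3x is the measured quad X7560 advantage over the
# best dual socket system (vApus Mark II).
two_x7560_cost = (65_000 + 70_000) / 2   # two quad X7560 servers
five_x5600_cost = (50_000 + 65_000) / 2  # five dual Xeon 5600 servers

x7560_capacity = 2 * 2.3  # in "dual socket server equivalents"
x5600_capacity = 5 * 1.0

for name, cost, capacity in [
    ("2x quad X7560", two_x7560_cost, x7560_capacity),
    ("5x dual X5600", five_x5600_cost, x5600_capacity),
]:
    print(f"{name}: ${cost:,.0f} for {capacity:.1f} equivalents "
          f"(${cost / capacity:,.0f} per equivalent)")
```

Both options deliver roughly five dual socket servers' worth of capacity; the big boxes carry a small acquisition premium per unit of capacity, but you manage two physical servers instead of five.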

A Dell R815 with four twelve-core Opteron 6174 processors has arrived in our labs. So our search for the best virtualization building block continues.

 

A big thanks to Tijl Deneut and Dieter Vandroemme.

Comments

  • Ratman6161 - Wednesday, August 11, 2010 - link

    Many products are licensed on a per-CPU basis. For Microsoft, anyway, what they actually count is the number of sockets. For example, SQL Server Enterprise retails for $25K per CPU. So an old 4 socket system with single cores would be 4 x $25K = $100K. A quad socket system with quad core CPUs would have a total of 16 cores, but the pricing would still be 4 sockets x $25K = $100K. It used to be that Oracle had a complex formula for figuring this out, but I think they have now also gone to the simpler method of just counting sockets (though their enterprise edition is $47.5K).

    If you are using VMware, they also charge per socket (last I knew), so two dual socket systems would cost the same as a single 4 socket system. Thing is, though, you need at least two boxes in order to enable the high availability (i.e. automatic failover) functionality.
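A minimal sketch of the socket-counting arithmetic described in the comment above (the function name is illustrative, not any vendor's tooling; the $25K figure is the retail price the commenter quotes):

```python
# Socket-based licensing: only populated sockets count; cores are free.
SQL_ENTERPRISE_PER_SOCKET = 25_000  # retail price quoted above

def sql_license_cost(sockets: int) -> int:
    # Core counts per socket are irrelevant under this model.
    return sockets * SQL_ENTERPRISE_PER_SOCKET

print(sql_license_cost(4))      # old 4-socket single-core box: $100,000
print(sql_license_cost(4))      # 4-socket quad-core (16 cores): still $100,000
print(2 * sql_license_cost(2))  # two dual-socket boxes: also $100,000
```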
  • Stuka87 - Wednesday, August 11, 2010 - link

    For VMware they have a few pricing structures. You can be charged per physical socket, or you can get an unlimited socket license (which is what we have, running on seven R910s). You just need to figure out if you really need the top tier license.
  • semo - Tuesday, August 10, 2010 - link

    "Did I mention that there is more than 72GHz of computing power in there?"

    Is this ebay?
  • Devo2007 - Tuesday, August 10, 2010 - link

    I was going to comment on the same thing.

    1) A dual core 2GHz CPU does not equal "4GHz of computing power" - unless somehow you were achieving an exact doubling of performance (which is extremely rare if it exists at all).

    2) Even if there was a workload that did show a full doubling of performance, performance isn't measured in MHz & GHz. A dual-core 2GHz Intel processor does not perform the same as a 2GHz AMD CPU.

    More proof that the quality of content on AT is dropping. :(
  • mino - Wednesday, August 11, 2010 - link

    You seem to know very little about the (40-year-old!) virtualization market.
    It flourishes from *commoditising* processing power.

    While clearly meant as a joke, that statement of Johan's is much closer to the truth than most market "research" reports on x86.
  • JohanAnandtech - Wednesday, August 11, 2010 - link

    Exactly. ESX resource management lets you reserve CPU power in GHz. So for ESX, two 2.26 GHz cores are indeed a 4.5 GHz resource.
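A minimal sketch of that accounting, assuming the simple cores-times-clock model Johan describes (illustrative arithmetic, not the actual ESX scheduler):

```python
# ESX-style capacity accounting: a host's CPU pool is cores x clock,
# and VM reservations are carved out of that total in MHz.
def host_cpu_pool_mhz(cores: int, clock_mhz: float) -> float:
    return cores * clock_mhz

print(host_cpu_pool_mhz(cores=2, clock_mhz=2260))  # 4520 MHz: the "4.5 GHz resource"

# Caveat (see the next comment): a single-threaded workload still cannot
# consume more than one core's 2260 MHz, however large the pool is.
```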
  • duploxxx - Thursday, August 12, 2010 - link

    Sure, you can count resources together as much as you want... virtually. But in the end a single process can still only use the max GHz a single CPU core offers; a faster core just finishes the request sooner. That is exactly why Nehalem and Gulftown still hold up against the huge core count of Magny-Cours.
  • maeveth - Tuesday, August 10, 2010 - link

    I have nothing at all against AnandTech's recent articles on virtualization; however, so far all of them have looked at virtualization only from a compute density point of view.

    I am currently the administrator of a VMware environment used for development work, and I run into I/O bottlenecks FAR before I ever run into a compute bottleneck. In fact, computational power is pretty much the LAST bottleneck I run into. My environment currently holds just short of 300 VMs, with a mix of OSes. We peak at approximately 10-12K IOPS.

    From my experience, you always have to look at potential performance in a virtual environment from a much larger perspective. Every bottleneck affects the others in subtle ways. For example, a memory bottleneck, whether host or guest based, will further stress your I/O subsystem, though you should aim to avoid swapping at all. In my opinion, your storage backend is the single most important factor when determining large-scale-out performance in a virtualized environment.

    My environment has never once run into a CPU bottleneck. I use IBM x3650/x3650 M2 servers with dual quad-core Xeons; the M2s use X5570s specifically.

    While I agree that having impressive magnitudes of "GHz" in your environment is kinda fun, it hardly says anything about how that environment will perform under real world workloads. Granted, it is all highly subject to workload patterns.

    I also want to make it clear that I understand that testing on such a scale is extremely cost prohibitive. As such, I am sure AnandTech, and Johan specifically, is doing the best he can with the resources he is given. I just wanted to throw my knowledge out there.

    @ELC
    Yes, software licensing is a huge factor when purchasing ESX servers. ESX is licensed per socket. It's a balancing act that depends on your workload, however. A top end ESX license costs about $5500/year per socket.
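Combining that per-socket figure with the article's consolidation scenario gives a quick licensing comparison (a sketch under the assumption that the commenter's ~$5500/socket/year figure applies equally to both options):

```python
# Annual ESX licensing for the two consolidation options in the article,
# using the ~$5500 per socket per year figure quoted above.
ESX_PER_SOCKET_PER_YEAR = 5_500

quad_setup_sockets = 2 * 4  # two quad-socket X7560 hosts
dual_setup_sockets = 5 * 2  # five dual-socket Xeon 5600 hosts

print(quad_setup_sockets * ESX_PER_SOCKET_PER_YEAR)  # $44,000/year
print(dual_setup_sockets * ESX_PER_SOCKET_PER_YEAR)  # $55,000/year
```

Per-socket licensing is one more factor that tips the scales toward fewer, bigger hosts.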
  • mino - Wednesday, August 11, 2010 - link

    However, IMO storage performance analysis is pretty much beyond AT's budget ballpark by an order of magnitude (or two).

    There is a reason this space is so happily "virtualized" by storage vendors AND customers to a "simple" IOPS number.
    It is a science of its own, often closer to black (empirical) magic than deterministic rules...

    Johan,
    on the other hand, nothing prevents you from mentioning this sad fact:

    Except for edge cases, a good virtualization solution is built from the ground up with:
    1. SLAs
    2. the storage solution
    3. licensing considerations
    4. everything else (like processing architecture) dictated by the previous three
  • JohanAnandtech - Wednesday, August 11, 2010 - link

    I can only agree, of course: in most cases the storage solution is the main bottleneck. However, this is also a result of the fact that most storage solutions out there are not exactly speed demons. Many storage solutions consist of overengineered (and overpriced) software running on outdated hardware. But things are changing quickly now. HP, for example, seems to recognize that a storage solution is very similar to a server running specialized software. There is more: with a bit of luck, Hitachi and Intel will bring some real competition to the table (currently STEC has almost a monopoly on enterprise SSDs). So your number 2 is going to tumble down :-).
