Facebook's "Open Compute" Server tested
by Johan De Gelas on November 3, 2011 12:00 AM EST

Benchmark Configuration
HP ProLiant DL380 G7
| Component | Details |
|---|---|
| CPU | Two Intel Xeon X5650 at 2.66GHz |
| RAM | 6 x 4GB Kingston DDR3-1333 FB372D3D4P13C9ED1 |
| Motherboard | HP proprietary |
| Chipset | Intel 5520 |
| BIOS version | P67 |
| PSU | 2 x HP PS-2461-1C-LF 460W HE |
We have three servers to test. The first is our own standard off-the-shelf server, an HP ProLiant DL380 G7. This server is the natural challenger for the Facebook design, as it is one of the most popular and efficient general-purpose servers.
As this server is targeted at a very broad public, it cannot be as lean and mean as the Open Compute servers.
Facebook's Open Compute Xeon version
| Component | Details |
|---|---|
| CPU | Two Intel Xeon X5650 at 2.66GHz |
| RAM | 6 x 4GB Kingston DDR3-1333 FB372D3D4P13C9ED1 |
| Motherboard | Quanta Xeon Opencompute 1.0 |
| Chipset | Intel 5500 Rev 22 |
| BIOS version | F02_3A16 |
| PSU | Power-One SPAFCBK-01G 450W |
The Open Compute Xeon server is configured to match our HP DL380 G7 as closely as possible.
Facebook's Open Compute AMD version
| Component | Details |
|---|---|
| CPU | Two AMD Opteron 6128 HE (Magny-Cours) at 2.0GHz |
| RAM | 6 x 4GB Kingston DDR3-1333 FB372D3D4P13C9ED1 |
| Motherboard | Quanta AMD Open Compute 1.0 |
| Chipset | |
| BIOS version | F01_3A07 |
| PSU | Power-One SPAFCBK-01G 450W |
The benchmark numbers of the AMD Open Compute server are included for information only; no direct comparison with the other two systems is possible. The AMD system is better equipped than the Intel one, as it has more DIMM slots and uses low-power HE CPUs.
Common storage system
Each server has an Adaptec 5085 PCIe x8 RAID controller (driver aacraid v1.1-5.1[2459] b 469512) connected to six Cheetah 300GB 15,000 RPM SAS disks in a Promise J300s JBOD.
Software configuration
VMware ESXi 5.0.0 (b 469512 - VMkernel SMP build-348481 Jan-12-2011 x86_64). All VMDKs use thick provisioning and are set to independent, persistent mode. The power policy is set to Balanced.
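For reference, settings like these can be scripted from the ESXi shell. Below is a minimal sketch, assuming the esxcli and vmkfstools syntax of ESXi 5.x; the datastore path and disk size are placeholders, not values from our testbed:

```python
import subprocess

def run(cmd):
    """Run a shell command and fail loudly, so misconfigurations are not silent."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Set the host power policy to "Balanced" via the Power.CpuPolicy advanced option.
run(["esxcli", "system", "settings", "advanced", "set",
     "--option", "/Power/CpuPolicy", "--string-value", "Balanced"])

# Create a thick-provisioned (eager-zeroed) VMDK. The path and size below are
# hypothetical placeholders.
run(["vmkfstools", "-c", "50G", "-d", "eagerzeroedthick",
     "/vmfs/volumes/datastore1/testvm/testvm.vmdk"])

# Note: the independent/persistent disk mode is set per disk in the VM's .vmx
# file (e.g. scsi0:0.mode = "independent-persistent"), not via vmkfstools.
```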
Other notes
The servers were fed by a standard European 230V (16A max.) power line. The room temperature was monitored and kept at 23°C.
Comments
iwod - Thursday, November 3, 2011
And I am guessing Facebook has at least 10 times more than what is shown in that image.

DanNeely - Thursday, November 3, 2011
Hundreds or thousands of times more is more likely. FB has grown to the point of building its own data centers instead of leasing space in other people's. Large data centers consume multiple megawatts of power. At ~100W/box, that's 5-10k servers per MW (depending on cooling costs); so that's tens of thousands of servers per data center, and data centers scattered globally to minimize latency and traffic over long-haul trunks.
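DanNeely's servers-per-megawatt figure is easy to reproduce. A back-of-the-envelope sketch: the ~100W/box number is his, while the PUE (facility power divided by IT power) values are illustrative assumptions:

```python
# Rough servers-per-MW estimate: facility power is split between the IT load
# and overhead (cooling, distribution losses), captured by the PUE ratio.
watts_per_server = 100          # DanNeely's ~100W/box figure
facility_watts = 1_000_000      # one megawatt

for pue in (1.2, 1.5, 2.0):     # assumed PUE values, from efficient to poor
    it_watts = facility_watts / pue
    servers = it_watts / watts_per_server
    print(f"PUE {pue}: ~{servers:,.0f} servers per MW")

# PUE 1.2 -> ~8,333 servers; PUE 2.0 -> ~5,000: the 5-10k/MW range quoted above.
```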
pandemonium - Friday, November 4, 2011

I'm so glad there are other people out there - other than myself - who see the big picture of where these 'minuscule savings' go. :)

npp - Thursday, November 3, 2011
What you're talking about is how efficient the power factor correction circuits of those PSUs are, not how power-efficient the units themselves are... The title is a bit misleading.

NCM - Thursday, November 3, 2011
"Only" 10-20% power savings from the custom power distribution????When you've got thousands of these things in a building, consuming untold MW, you'd kill your own grandmother for half that savings. And water cooling doesn't save any energy at all—it's simply an expensive and more complicated way of moving heat from one place to another.
For those unfamiliar with it, 480 VAC three-phase is a widely used commercial/industrial voltage in US power systems, yielding 277 VAC line-to-ground from each of its phases. I'd bet that the light fixtures in the data center photo are off-the-shelf 277V fluorescents of the kind typically used in manufacturing facilities with 480V power. So this isn't a custom power system in the larger sense (although the server-level PSUs are custom) but rather some very creative leverage of existing practice.
Remember also that there's a double saving from reduced power losses: first from the electricity you don't have to buy, and then from the power you don't have to use for cooling those losses.
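Both of NCM's points check out numerically. A minimal sketch: the √3 relationship between line-to-line and line-to-ground voltage in a balanced three-phase system is standard, while the loss and cooling figures below are assumed for illustration, not taken from the article:

```python
import math

# 480V three-phase: line-to-ground voltage is line-to-line divided by sqrt(3).
line_to_line = 480.0
line_to_neutral = line_to_line / math.sqrt(3)
print(f"line-to-ground: {line_to_neutral:.0f} V")   # ~277 V

# The "double saving": every watt not lost in distribution is a watt you don't
# buy AND a watt you don't have to cool. Assuming 0.5W of cooling power per
# watt of heat removed, a 100W reduction in losses saves 150W at the meter.
saved_losses = 100.0      # watts of distribution losses avoided (assumption)
cooling_per_watt = 0.5    # cooling power per watt of heat (assumption)
total_saved = saved_losses * (1 + cooling_per_watt)
print(f"total saving at the meter: {total_saved:.0f} W")
```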
npp - Thursday, November 3, 2011
I don't remember arguing that 10% power savings are minor :) Maybe you should've posted your thoughts as a regular post, and not as a reply.

JohanAnandtech - Thursday, November 3, 2011
Good post, but probably meant to be a reply to erwinerwinerwin ;-)

NCM - Thursday, November 3, 2011
Johan writes: "Good post, but probably meant to be a reply to erwinerwinerwin ;-)"

Exactly.
tiro_uspsss - Thursday, November 3, 2011
Is it just me, or does placing the Xeons *right* next to each other seem like a bad idea in regards to heat dissipation? :-/

I realise the aim is performance/watt but, ah, is there any advantage, power usage-wise, if you were to place the CPUs further apart?
JohanAnandtech - Thursday, November 3, 2011
No. The most important rule is that the warm air of one heatsink should not enter the stream of cold air of the other. So placing them next to each other is the best way to do it, and placing them serially (one behind the other) the worst.

Placing them further apart will not accomplish much IMHO. Most of the heat is drawn away to the back of the server, and the heatsinks do not get very hot. You would also lower the airspeed between the heatsinks.