AMD Discloses Bobcat & Bulldozer Architectures at Hot Chips 2010
by Anand Lal Shimpi on August 24, 2010 1:33 AM EST

Three years ago AMD told me about two architectures that would be the future of the company: Bobcat and Bulldozer. Here are some excerpts from an article I wrote after that meeting with AMD.
“Due out in the first half of 2009, AMD's Bulldozer core is the true revolutionary successor to the K8 architecture. While Barcelona and Shanghai are both evolutionary improvements to the current core, Bulldozer is the first ground-up redesign since the K7.”
“If Bulldozer is the architecture that will compete with Nehalem, Bobcat is what will compete with Silverthorne. Bobcat is yet another ground-up design from AMD, also due out in the 2009 timeframe, but it will address a more power constrained portion of the market. Systems that require a 1 - 10W TDP will use Bobcat, while Bulldozer is limited to the 10 - 100W range (obviously with some overlap between the two).”
Well, 2009 didn’t happen. Nor will 2010. Bobcat is the closest, with production in Q4 2010 and system availability in Q1 2011. Bulldozer is strictly 2011. The long road to a major redesign isn’t unusual, and although we’re nowhere near the point of measuring performance of these parts, we’re getting closer.
AMD has Bobcat and Bulldozer silicon back in its labs, and things apparently look good. Later today at Hot Chips 22, AMD will present further details on both of its next-generation architectures. What we have here is a sneak peek at what AMD is going to unveil at the conference later today.
The Three-Chip Roadmap
While AMD is committed to a two-architecture roadmap going forward (Bobcat and Bulldozer), we’ll see three fairly different chips addressing the various market segments in 2011.
Bobcat will do low-end/low-power (think netbooks and nettops), Llano will do mainstream notebooks (e.g. MacBook, HP Envy equivalents) and Bulldozer will be used for high-end desktops and servers. Llano actually uses a Phenom II-derived core, so it’s technically a third architecture, but I’d expect its market to eventually be split between Bobcat- and Bulldozer-based designs.
I’m going to start with Bobcat, as it’s the closest to production.
76 Comments
Dustin Sklavos - Tuesday, August 24, 2010 - link
If you're encoding using Adobe software, ditch AMD until Bulldozer. Adobe's software makes heavy use of SSE 4.1 instructions, which current AMD chips lack, and the extra two cores don't pick up the slack compared to a fast i7.
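(A minimal illustrative sketch, not Adobe's actual code, of the kind of SSE 4.1 code path at issue: the DPPS dot-product instruction is an SSE 4.1 addition that Phenom II-era AMD chips lack, forcing a fallback to slower scalar or SSE2 sequences.)

    /* Compile with: gcc -msse4.1 dpps_demo.c */
    #include <smmintrin.h>  /* SSE 4.1 intrinsics */
    #include <stdio.h>

    int main(void) {
        __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);  /* lanes: 1,2,3,4 */
        __m128 b = _mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f);  /* lanes: 5,6,7,8 */
        /* DPPS, mask 0xF1: multiply all four lanes, store the sum in lane 0 */
        __m128 dot = _mm_dp_ps(a, b, 0xF1);
        printf("dot = %.1f\n", _mm_cvtss_f32(dot));     /* prints 70.0 */
        return 0;
    }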
flyck - Tuesday, August 24, 2010 - link
"From the design of Bulldozer's FPU it is clear that AMD wants the multithreaded FPU to run OpenCL."
Not sure what you mean by that? (It is true they want to exploit that in the future with Fusion.) But at this moment I see: Sandy Bridge, 2 threads -> one FPU; Bulldozer, 2 threads -> one FPU.
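(Putting flyck's ratio into numbers, with a four-module Bulldozer and a four-core Sandy Bridge assumed purely for illustration:)

    \text{Sandy Bridge (4 cores + HT):}\ \frac{8\ \text{threads}}{4\ \text{FPUs}} = 2
    \qquad
    \text{Bulldozer (4 modules):}\ \frac{8\ \text{integer cores}}{4\ \text{shared FPUs}} = 2

Either way, two hardware threads end up contending for each floating-point unit.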
BitJunkie - Tuesday, August 24, 2010 - link
I think he's picking up on the point that this general-purpose design is going to favour integer operations over floating point. Looking at this architecture from the perspective of someone wanting to perform a lot of floating-point matrix calculus, the performance improvement of each "core" is going to be proportionally less than for integer calcs.

So what he's saying is that quite clearly AMD believes general-purpose CPUs are just that, and has designed for a well-defined balance of FP and integer operations, i.e. if you want more FLOPS, go talk to the GPU?
stalker27 - Tuesday, August 24, 2010 - link
"And if Bulldozer comes any later, it will be up against the die shrink of SandyBridge, Ivy Bridge. Things dont look so good in here."Basically, you've contradicted yourself right here:
"Most of us dont need SUPER FAST computer."
True, and true... Ivy Bridge will probably be faster than Bulldozer (speculatively), as Nehalem is to Stars, but most people, i.e. the "cash cows", won't buy these expensive products. Instead they'll focus on mid- to low-end computers whose performance is more than enough for their needs.
So things might not look good in reviews and benchmarks, but in the stores and on people's bank balances they will look pretty good.
jabber - Tuesday, August 24, 2010 - link
Hooray! I'm glad at last some folks are waking up to the fact that having the fastest or most expensive CPU means absolutely jack!
All the latest fastest CPU stuff just means a little bit more internet traffic for tech review sites.
The rest of the world doesn't give a damn.
All the real world is interested in is the best CPU for the buck in a $400 PC box to run W7 and Office on. AMD needs to get a proper marketing dept to start telling folks that.
All AMD has to do is produce good-performing chips for a good price. It doesn't need a CPU that beats the best of Intel.
The real world lost interest in CPU performance the minute dual cores arrived and they could finally run IE/Office and a couple of mainframe sessions without it grinding to a halt.
I bet Intel gives out more review samples of its top CPU than it sells.
JPForums - Tuesday, August 24, 2010 - link
"All the real world is interested in is the best CPU for the buck in a $400 PC box to run W7 and Office on. AMD needs to get a proper marketing dept to start telling folks that.""The real world lost interest in CPU performance the minute dual cores arrived and they could finally run IE/Office and a couple of mainframe sessions without it grinding to a halt."
Apparently we engineers aren't part of "the rest of the world".
Try running products from the likes of Mentor Graphics, Cadence, and Synopsys for reasonably large designs. Check out what a difference each new CPU makes in Pro/E (assuming sufficient GPU horsepower). Run some large MATLAB simulations, Visual Studio compilations, and Xilinx builds. You don't even have to get out of college before you run into many of these scenarios.
Trust me when I say that we care about the next greatest thing.
An extra $1,000 on a CPU is easily justified when companies are billing $100+ per engineering hour (not to be confused with take-home pay).
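(A back-of-the-envelope break-even on those figures, taken at face value:)

    \frac{\$1000\ \text{CPU premium}}{\$100\ \text{per billed engineering hour}} = 10\ \text{hours}

If the faster machine saves an engineer even one hour a week, it pays for itself inside a quarter.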
BitJunkie - Tuesday, August 24, 2010 - link
Exactly so. An example would be a 24-hour calculation to perform a detailed 3D finite element analysis. This is not unusual using highly spec'd Xeon workstations from your vendor of choice.

It might take 5 to 10 days to set up a model, including testing of different aspects: mesh density, discretisation errors, boundary effects, parametric studies. The setup time, with numerous supporting pre-analysis runs, is what really costs. Anything we can do to reduce this is worthwhile.
The above would be the typical process BEFORE considering a batch job on an HPC cluster if we wanted to look at a series of load cases etc.
Time is money.
mapesdhs - Tuesday, August 24, 2010 - link
I know a number of movie studios who love every extra bit of CPU muscle they can get their hands on. Rendering really hammers current hardware. One place has more than 7,000 Xeon cores, but it's never enough. Short of writing specialised software to exploit shared-memory machines built on Nehalem-class Xeons (which has its own costs), the demand for ever higher processing speed will always persist. Visual effects complexity constantly increases as artists push the boundaries of what is possible. And this is just one example market segment. As BitJunkie suggests, these issues surface everywhere.

Another good example: the new Cosmos machine in the UK, which contains 128 six-core Xeons (Nehalem-EX) with 2TB of RAM (i.e. 768 cores in total). This is a _single system_, not a cluster (SGI Altix UV). Nothing less is good enough for running modern cosmological simulations. Those using the system will put much effort into achieving good efficiency with 512+ cores; at the moment many HPC tasks don't scale well beyond 32 to 64 cores. Point being, improving the performance of a single core is just as important as general core scaling for such complex tasks. SGI's goal is to produce a next-gen UV system which will scale to 262,144 cores in a single shared-memory system (32,768 8-core CPUs).

You can never have enough computing power. :D

Ian.
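(The scaling wall described above is essentially Amdahl's law. A quick worked example, with a 97%-parallel workload assumed purely for illustration:)

    S(N) = \frac{1}{(1-p) + p/N}, \qquad p = 0.97
    S(64) \approx 22, \qquad S(512) \approx 31

Eight times the cores buys less than 1.5x the speedup, which is why faster individual cores matter as much as core counts.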
stalker27 - Wednesday, August 25, 2010 - link
You're 1% of the market... for you, Intel and AMD have reserved cherry-picked chips that they can charge you $1K for, but at the same time offer you that needed speed. How's that?

BTW, he said "the real world", not "the rest of the world". That makes you somewhat of an illusion. But don't take it the bad way... it's more that most of us would dream of working in an environment full of hot setups, big projects and big bucks, unlike the real world, where you have to mop the floor after debugging for 8 hours straight... if they don't force you to work an extra two hours without pay; never mind that before you start the workday you have to visit various bureaucratic public clerk offices to deal with stuff that was supposed to be taken care of by secretaries... who were fired for no apparent reason some time ago.
So stop moaning... you have it good, even as the 1%.
Makaveli - Tuesday, August 24, 2010 - link
lol, if AMD and Intel followed your logic we would all still be running Pentium IIs and Socket A Athlons, silly boy.

You make yourself look like an ass when you make a generalized statement like that, as if you were speaking for the rest of the world.
As that other guy pointed out, some of us do more than just office work on our PCs!