Gondalf - Monday, August 11, 2014 - link
Ummmm, SRAM cells: 0.0588 µm² versus 0.07 µm² for TSMC's 16nm. The Intel process is about 16% denser in SRAM cells than TSMC's (as a TSMC exec conceded months ago); the lead in logic density could be even larger thanks to the full 14nm backend.

I think Intel won't need eDRAM in its server CPUs on the 14nm node; we'll see about 10nm.
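(A quick sanity check on that 16% figure, since "denser" can be read two ways; a minimal sketch using only the two cell areas quoted above:)

```python
# Rough check of the quoted SRAM density gap, using only the two
# bitcell areas cited above; everything else is illustrative.
intel_cell_um2 = 0.0588  # Intel 14nm SRAM bitcell area, as quoted
tsmc_cell_um2 = 0.0700   # TSMC 16nm SRAM bitcell area, as quoted

# "Denser" as smaller cell: Intel's cell is ~16% smaller.
print(f"Cell area reduction: {1 - intel_cell_um2 / tsmc_cell_um2:.0%}")
# "Denser" as more cells per area (inverse of cell area): ~1.19x.
print(f"Bitcells per mm^2 ratio: {tsmc_cell_um2 / intel_cell_um2:.2f}x")
```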
Gondalf - Monday, August 11, 2014 - link
Ha! The fins are rectangular (finally!!!!)

BugblatterIII - Monday, August 11, 2014 - link
I know Moore's Law is about transistor density, not performance. However, it's still disappointing that the fastest x86 CPUs currently available are less than twice as fast as my 4.5-year-old i7 920.

Seems the process benefits have mostly gone into power efficiency, which is great for data centres and mobile devices (ok, and possibly the planet) but rather less interesting for a desktop.
abrowne1993 - Monday, August 11, 2014 - link
Agreed. I have no reason to upgrade from Sandy Bridge to Broadwell. I guess we'll see if Skylake is worth it, but I doubt it.

I guess it's nice that this CPU has lasted so long, but I'd like to see major performance improvements.
whistlerbrk - Monday, August 11, 2014 - link
Correct me if I'm thinking of this incorrectly, but I think what you're observing is a result of software not having fully caught up with multiple cores, whereas data center software is able to take advantage of them today.

nathanddrews - Monday, August 11, 2014 - link
Yeah, I'd say it's a combination of software optimization with the fact that Intel isn't only going for X more performance, but also X fewer watts, so overall better perf/W. I'm sure there are places where Haswell is 2X faster while also using half the power.

TiGr1982 - Monday, August 11, 2014 - link
Not exactly.

1) The fastest x86 desktop CPU currently available (Core i7-4960X) is around 2.5 times faster than the i7-920 (considering both at their stock frequencies) in fully multi-threaded tasks - when all the transistors are at work, since we are talking about transistors here. This happens because of:
a) two extra cores
b) higher frequencies
c) CPU core arch is two-three generations ahead (depending on how you count)
Check the benchmarks yourself (e.g. Cinebench R11.5).
2) The new 8-core Haswell-E Extreme, being released very soon, will be at least 3 times faster than the Core i7-920.
But, yes, they cost $1000. That's a lot.
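(Treating those three factors as multiplicative gives a rough sanity check on the ~2.5x claim; a sketch where only the core counts are hard numbers, and the clock and IPC ratios are illustrative assumptions:)

```python
# Rough multiplicative model of the i7-4960X vs. i7-920 gap in
# well-threaded workloads. Core counts are real; the clock and IPC
# ratios below are assumptions for illustration, not measurements.
cores = 6 / 4        # a) six Ivy Bridge-E cores vs. four Nehalem cores
clock = 3.6 / 2.66   # b) assumed effective clocks in GHz (illustrative)
ipc   = 1.25         # c) assumed IPC gain over 2-3 core generations

speedup = cores * clock * ipc
print(f"Estimated multi-threaded speedup: ~{speedup:.1f}x")  # ~2.5x
```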
Senti - Wednesday, August 13, 2014 - link
Let's be a bit more fair and compare that $1k i7-4960X to a Gulftown CPU and overclock both. I doubt it would be even 1.3x faster than the 4-year-old CPU in regular applications...

StevoLincolnite - Sunday, August 17, 2014 - link
Compare the i7 4960X against its true first-generation i7 competitor, the hex-core Core i7 980X.

Suddenly that 2.5x performance lead vanishes...
I'm still sitting on an "old" Core i7 3930K (Sandy Bridge-E) and really, once overclocked, it's still one of the fastest consumer processors money can buy.
menting - Monday, August 11, 2014 - link
I have the same i7-920... but I have to say, keeping all else equal (like power used), the latest x86 CPUs are definitely more than twice as fast as the i7-920. If Intel wants to make a Haswell that draws as much power as the i7-920, I'm sure it'll be plenty fast.

TiGr1982 - Monday, August 11, 2014 - link
Intel releases Haswell-E soon, which is supposed to have a 140W TDP, thus roughly matching the Core i7-920 in power. But Haswell-E will have far more performance than the i7-920 Nehalem.

Senti - Wednesday, August 13, 2014 - link
Let's compare at equal power with Atoms! That would be so fair.

shabby - Monday, August 11, 2014 - link
Totally agree; it almost seems like Intel is slacking off because AMD isn't giving them much competition. Would love to see another P4 > Conroe type of performance gain, but I doubt we'll see that anytime soon.

TiGr1982 - Monday, August 11, 2014 - link
Or even ever in the future...

Number 6 - Tuesday, August 12, 2014 - link
As chips have gotten faster and as computing power has gotten more mobile, the percentage of customers needing still more performance has shrunk and the percentage needing more performance per watt has increased. IMHO, Intel understands and is responding to its customer base quite well.

melgross - Monday, August 11, 2014 - link
Mobile devices as in the two thirds, or more, of computers sold a year that are notebooks? One would think that they are a much more important market these days than desktops. Intel has to follow the money.

Morawka - Monday, August 11, 2014 - link
Actually, IPC on your 920 is up to 40% lower than on these Broadwell chips. Sure, you've got a big 8MB cache and QPI, but you're missing out on a lot of perf per watt and newer instruction sets, and wasting electricity.

I also had my 920 for 4 years, and the single-threaded performance increase going to a 4770K was remarkable.
R3dox - Tuesday, August 12, 2014 - link
Well, when you look at benchmarks this may seem to be the case, but depending on your use case, it may not be. I also had a 920 in my main PC for 4.5 years, but for Battlefield 4 multiplayer it was not good enough anymore. I first upgraded to a GTX 780, but this didn't change all that much in crowded MP games. Upgrading to a 4770 made a huge difference, and now I can play at 120 fps even on a crowded server. I don't understand what the bottleneck exactly was, but what a difference!

Frenetic Pony - Tuesday, August 12, 2014 - link
Because no one cares about the desktop, of course. Well, no one but enthusiasts :(

Can't blame them for going where the money is, I guess.
AnnonymousCoward - Tuesday, August 12, 2014 - link
Compared to the 7-year-old original Conroe, today's CPUs are only about 1.7x faster in single-threaded performance.

nand - Wednesday, August 13, 2014 - link
Multithreading is just about twice as fast for less power: http://www.anandtech.com/bench/product/47?vs=1260

RussianSensation - Wednesday, August 13, 2014 - link
Good comparison, but less practical in the real world because the i7 920 can be easily overclocked to 4-4.4GHz. On AT, an enthusiast site, the proper comparison should really be an i7 920 @ 4.0-4.2GHz vs. an i7 4790K @ 4.7-4.8GHz. Obviously the latter would win, but nowhere near by 2x in performance, and in games the gap would be very small. The i7 920 @ 4.2GHz would use a heck of a lot more power, though. I am looking forward to seeing (A) what a 5820K OC can do on X99 and (B) how much overclocking headroom Skylake has next year on 14nm, as the i7 3770K and 4770K were pretty disappointing compared to the 2600K.

r3loaded - Wednesday, August 13, 2014 - link
Ahem: http://www.anandtech.com/bench/product/47?vs=1260

IUU - Friday, August 15, 2014 - link
I don't think this is necessarily true. The Core i7 920 (I have one too) was a high-end processor running at 130W. If you take into account that practically the only difference from the i7 960 was the clock speed, then you realize that the i7 920 was a closer analogue to the 4960X or 4930K than to the 4820. The fact that those cost more is only relevant to the market (the artificial human rules). Physically, it shows that it's possible to have twice the computing power in about the same space with about the same energy requirements, and that is the only thing that really matters, if you want to be objective.

A 4930K or 4960X (the true high end of today; the 4770K is high-end only for those with short memory) is definitely 2x or maybe more where it matters (loads that can be multithreaded). If you say that multithreading doesn't offer much benefit because most programs can't take advantage of more cores, you are practically right, and yet somewhat wrong. I mean, what is the meaning of threading in a "hello world" program? Many apps today, no matter how complex they might seem, are "hello world" programs, meaning it would be pointless for them to be multithreaded even if they could be.

My opinion is known. I don't think these chips have no place. We are just in a transitional period. We still think of computing as office suites, games and media manipulation; these are the glory apps of a time past. Just wait a little and see: speech recognition, image recognition, language comprehension - and then come and tell me multicores are useless. This might not come right away, true, and you should keep your money and buy when you get the biggest benefit; but that doesn't mean progress has stopped in the slightest - market availability has shrunk instead.

Maybe they will actually reach a dead end after some years, but they haven't, yet...
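(Amdahl's law formalizes that intuition about which loads benefit from more cores; a minimal sketch:)

```python
# Amdahl's law: speedup on n cores when a fraction p of the work
# parallelizes. Shows why "hello world"-like apps gain nothing from
# extra cores while renderers and recognizers gain a lot.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.10, 0.50, 0.95):  # parallel fraction (illustrative values)
    print(f"p={p:.2f}: 6 cores -> {amdahl_speedup(p, 6):.2f}x")
# p=0.10 -> ~1.09x, p=0.50 -> ~1.71x, p=0.95 -> ~4.80x
```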
nissangtr786 - Saturday, January 17, 2015 - link
http://www.anandtech.com/bench/product/47?vs=1260

i7 920 vs. i7 4790K. The fact is the i7 4790K takes considerably less power to run at 4GHz+ and more than doubles the integer performance of an i7 920, and in terms of FPU (floating point unit) performance it is probably 5-10x faster than your i7 920.
name99 - Monday, August 11, 2014 - link
"The end result is that while Intel’s cost per transistor is not decreasing as quickly as the area per transistor, the cost is still decreasing and significantly so."Maybe...
The problem is, is Intel saying this as (honest) engineers or as (somewhat less honest) business people? Every IP business has enormous flexibility in how it defines costs and where it places them. nV's complaint reflects the cost it pays, which ultimately reflects some sort of aggregated cost for TSMC over not just per-wafer manufacturing costs, but the costs of R&D, of equipment, of financing, of various salaries, etc etc.
Intel, in a graph like this, has the flexibility to define basically whatever it likes as "$/transistor". On the one hand, it could be an honest reckoning (basically the TSMC price), but on the other, it could be a bare "cost of materials and processing", omitting everything from capital expenditures to prior R&D.
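(To make that accounting flexibility concrete, here is the kind of spread involved; a sketch where every number is invented purely for illustration:)

```python
# Illustrative only: how the choice of which costs to amortize moves
# a "$/transistor" figure. All numbers below are made up.
wafer_process_cost = 7_000    # $/wafer, bare materials + processing (assumed)
amortized_capex_rnd = 3_000   # $/wafer share of equipment + R&D (assumed)
good_dies_per_wafer = 400     # after yield (assumed)
transistors_per_die = 1.4e9   # (assumed)

def dollars_per_transistor(per_wafer_cost: float) -> float:
    return per_wafer_cost / (good_dies_per_wafer * transistors_per_die)

bare = dollars_per_transistor(wafer_process_cost)
full = dollars_per_transistor(wafer_process_cost + amortized_capex_rnd)
print(f"bare reckoning: {bare:.2e} $/transistor")  # ~1.25e-08
print(f"full reckoning: {full:.2e} $/transistor")  # ~1.79e-08, ~43% higher
```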
ZeDestructor - Monday, August 11, 2014 - link
If they're doing an engineer's count, they're including fabrication R&D over the lifetime of the fab, raw materials, mask-building costs and possibly the salaries of the fab staff; no more, no less. I.e., how much cash does it take to go from this pile of vector masks to a chip, assuming the fab is always at full capacity?

TSMC, on the other hand, is presumably including a profit margin in its prices, and that may or may not make the difference between profitable and not.
ol1bit - Tuesday, August 12, 2014 - link
Maybe it's my age, but with my first processor being a 6809E in a TRS-80 Color Computer, followed by an Atari with a 6502, I can't help but be fascinated that Intel can build a CPU at 14nm on a massive scale and be better than ever. At 48, I wonder if I will see the last advancement in size in my lifetime, or if it will continue almost forever. If you haven't seen The Last Mimzy, it will make you think about the future. Thanks for the write-up!

mkozakewich - Thursday, August 14, 2014 - link
I honestly think about that scene every time I see new die shots.

bhima - Monday, August 18, 2014 - link
We will all see the last advancement in size for silicon. But by then, we will be on to more innovative materials, and the cycle to shrink and get more per watt will continue ad infinitum.

R3dox - Tuesday, August 12, 2014 - link
While I appreciate the technical info, I'm a bit surprised to see this being called exclusively Intel's achievement. Afaik, the main struggle here was ASML's, making the machines used to make the chips. From what I heard, ASML works quite closely with chipmakers (considering all the IP of theirs it gets to see) to build these machines for the process they need.

kuroxp - Tuesday, August 12, 2014 - link
Do you work for ASML? :D What about TEL? AMAT? Nikon? Any other big names in equipment manufacturing? Not to mention materials - photoresist, gas, chemicals, the list goes on. Maybe we can have one big party. :)

R3dox - Wednesday, August 13, 2014 - link
No :p. But from what I understood, they are the market leader (by a pretty large margin) and were specifically working together with Intel to get 14nm working, as it was quite problematic. Maybe I'm wrong, I'm not in that industry ;).

daScribe - Monday, August 25, 2014 - link
There are at least 20 different major equipment manufacturers who work hard to meet the specs for the next-generation tools as required by Intel. Once they deliver, the battle actually begins: getting the design done (things don't just get drawn as-is from the previous generation at a reduced size), the masks made, and the whole process for the silicon to work out. Every single cog in the huge machinery of Intel and its OEMs has to fire just right to make these chips work. I hope you get a sense from this as to how difficult and amazing a feat it is for Intel to produce these chips with high yield.

markbanang - Friday, August 15, 2014 - link
I worked for a company which produced one of the first EUV lithography tools, and worked on the first EUV actinic imaging mask defect inspection tool, and I can tell you that there is very close cooperation between semiconductor manufacturers and their machine suppliers.

Although we had to do a lot of original research to overcome the engineering challenges of the brief from Intel/Sematech, only once the machines fulfil the spec and are handed over does the *real* research start. Process window optimisation is the real challenge here. It's one thing to lay down 14nm features, but getting from mask set to processor is an entirely different level of challenge.
markbanang - Friday, August 15, 2014 - link
I would love to know the details of the technology used to produce silicon at the 14nm node.

The Wikipedia pages on EUV lithography and immersion lithography suggest that these chips might be being produced using double or triple patterning with 193nm immersion lithography. If that's the case, then it is truly remarkable that 193nm light can be used to print features this small, when you would expect this to be way beyond the diffraction limit of such a system.
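(The Rayleigh criterion puts numbers on that intuition; a sketch where k1 and NA are typical published values rather than Intel-specific data, and the 42nm fin pitch is the figure reported for Intel's 14nm process:)

```python
# Rayleigh criterion: minimum printable half-pitch = k1 * lambda / NA.
# k1 ~0.28 is a commonly cited practical single-exposure floor, and
# NA = 1.35 is the usual water-immersion maximum; treat both as
# typical assumed values.
wavelength_nm = 193        # ArF excimer laser
numerical_aperture = 1.35  # water immersion
k1_floor = 0.28            # assumed practical single-exposure limit

min_half_pitch = k1_floor * wavelength_nm / numerical_aperture
print(f"Single-exposure limit: ~{min_half_pitch:.0f} nm half-pitch")  # ~40 nm

# Intel's reported 14nm fin pitch is 42 nm, i.e. a 21 nm half-pitch,
# well below the single-exposure limit; hence multiple patterning.
fin_half_pitch_nm = 42 / 2
needs_multi = fin_half_pitch_nm < min_half_pitch
print(f"42 nm fin pitch -> {fin_half_pitch_nm:.0f} nm half-pitch; "
      f"multi-patterning needed: {needs_multi}")
```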
c plus plus - Saturday, August 16, 2014 - link
Go, Intel! Slay all CPU makers except AMD, 'cause we need them. But please double single-thread performance with Broadwell.

sandeep patil - Thursday, August 28, 2014 - link
Hi guys, sorry, but I couldn't figure out what the 14 in 14nm technology refers to.