9 Comments
SarahKerrigan - Tuesday, August 21, 2018 - link
So, went from effectively two FMA pipes (2xMUL and 2xADD, for chained ops) to three, plus doubling SP throughput and significantly increasing ADB size. Not too shabby!
Yojimbo - Wednesday, August 22, 2018 - link
Maybe an American company can come up with something like this so that the DOE can have the second architecture they are looking for while Intel is mucking about.

As far as the Vector Engine being much cheaper than a V100 goes, I'm betting they are comparing the cost to build a Vector Engine with the cost to buy a V100. If so, that's a bad comparison. They aren't making their money by selling Vector Engines; they are making their money on the supercomputer. And the V100s that go into a supercomputer aren't bought at the price an enterprise customer would pay. The real comparison is the price/performance of a supercomputer that uses Vector Engines versus one that uses V100s. I doubt implementing the Vector Engines is much cheaper than implementing the V100s for comparable performance.
iwod - Wednesday, August 22, 2018 - link
Why is it not widely used if it was that good?
Yojimbo - Wednesday, August 22, 2018 - link
It's a specialty card for an HPC supercomputer. I guess it could be used in other HPC supercomputers if it's successful. The thing is, I think it's designed to run a pre-existing code base that developed around its predecessor chips. Most HPC labs would need to do a lot of work on their code base to optimize it for the architecture.
Yojimbo - Wednesday, August 22, 2018 - link
Oh, and of course you have the fact that it's Japanese. The big government labs in the US want to use US technology, the big government-funded labs in Europe want to use European technology, etc. And it's those big government labs that are probably in the best position to reformulate their code bases. It's probably not a great option for a smaller university lab.
eastcoast_pete - Wednesday, August 22, 2018 - link
Yes, and that's one reason why I found it interesting that the second author's affiliation was listed as "NEC Germany". Certainly not an accident. Might help NEC get over the NIH (not invented here) problem.
Yojimbo - Wednesday, August 22, 2018 - link
It does seem like they are selling servers based on it on the open market. We'll have to see how much uptake there is of it. I dunno if there is a big system planned to use it or not.
SarahKerrigan - Wednesday, August 22, 2018 - link
It is pretty widely used. If you look at the HPCG results list, you'll see quite a few Japanese supercomputer sites with SX, and performance (at least on the SX-ACE) is absolutely superb on that benchmark.
HStewart - Wednesday, August 22, 2018 - link
Not to say it's a bad thing to have specialized chips - but I would like to see more details on Intel's testing. I found the code for the test, but I see no mention that it was compiled to use AVX-512. Not even sure if the tests use multiple threads and multiple CPUs.
http://accc.riken.jp/en/supercom/himenobmt/downloa...