At its Data-Centric Innovation Summit in Santa Clara today, Intel unveiled its official Xeon roadmap for 2018 – 2019. As expected, the company confirmed its upcoming Cascade Lake, Cooper Lake-SP and Ice Lake-SP platforms.

Later this year Intel will release its Cascade Lake server platform, which will feature CPUs that bring support for hardware security mitigations against side-channel attacks through partitioning. In addition, the new Cascade Lake chips will support AVX512_VNNI instructions for deep learning (originally expected to be part of the Ice Lake-SP chips, but pulled into an existing design a generation earlier).

Moving on to the next generation, Intel's Cooper Lake-SP will be launched in 2019, several quarters ahead of what was reported several weeks ago. Cooper Lake processors will still be made using a 14 nm process technology, but will bring some functional improvements, including support for the bfloat16 data format. By contrast, the Ice Lake-SP platform is due in 2020, just as expected.

One thing to note about Intel's Xeon launch schedule is that Cascade Lake will ship in Q4 2018, several months from now. Normally, Intel does not want to create internal competition by releasing new server platforms too often. With that in mind, it sounds like we should expect Cooper Lake-SP to launch in late 2019 and Ice Lake-SP to hit the market in late 2020. To be clear: Intel has not officially announced launch timeframes for its CPL and ICL Xeon products, and the aforementioned periods should be considered educated guesses.

Intel's Server Platform Cadence
Platform        | Process Node | Release Year
Haswell-E       | 22nm         | 2014
Broadwell-E     | 14nm         | 2016
Skylake-SP      | 14nm+        | 2017
Cascade Lake-SP | 14nm++?      | 2018
Cooper Lake-SP  | 14nm++?      | 2019
Ice Lake-SP     | 10nm+        | 2020

While Cascade Lake will largely rely on the Skylake-SP hardware platform introduced last year (albeit with some significant improvements when it comes to memory support), Cooper Lake and Ice Lake will use a brand-new hardware platform. As discovered a while back, that Cooper Lake/Ice Lake server platform will use the LGA4189 CPU socket and will support an eight-channel per-socket memory subsystem.

Intel has long understood that one size does not fit all, and that many of its customers need customized/optimized Xeon chips to run their unique applications and algorithms. Google was the first company to get a semi-custom Xeon back in 2008, and today over half of Intel's Xeon processors are customized for particular workloads at particular customers. To that end, many of Intel's future Xeons will feature unique capabilities only available to select clients. Since those clients want to keep their IP confidential, such chips will be kept off Intel's public roadmap. Meanwhile, Intel's CPUs and platforms should both be ready for various kinds of customization, whether that is silicon IP, binning for extra speed, or adding discrete special-purpose accelerators.

Overall there are several key elements to the announcement.

Timeline and Competition

What is not clear is the timeline. Intel has historically been on a 12-18 month cadence when it comes to new server processor families. As it stands, we expect Cascade Lake to hit in Q4 2018. If Cooper Lake is indeed due in 2019, then even at the lower bound of that 12-18 month gap we would still be looking at Q4 2019. Step forward to Ice Lake, which Intel has listed as 2020. Again, this sounds like another 12-month jump, at the edge of that typical 12-18 month gap. This tells us two things:

Firstly, Intel is pushing the server market to update, and to update quickly. Typical server markets have a slow update cycle, so Intel will be pushing its new products in the hope of offering something special over the previous generation. Aside from the features listed below, and depending on how the product stack looks, there is nothing else listed about the silicon that should drive those updates.

Secondly, if Intel wants to keep revenues high, it will have to increase prices for those customers that can take advantage of the new features. Some media have reported that the prices of the new parts will be increased to compensate for the reduced incentive to upgrade, keeping overall revenue high.

Security Mitigations

This is going to be a big question mark. With the advent of Spectre and Meltdown, and other side-channel attacks, Intel and Microsoft have scrambled to fix the issues mostly through software. The downside of these software fixes is that they sometimes cause performance slowdowns – in our recent Xeon W review (using Skylake-SP cores), we saw 3-10% performance decreases. At some point we expect the processors to implement hardware fixes, and one of the questions will be what effect those fixes have on performance.

The fact that the slide mentions security mitigations is confusing – are they hardware or software? (Confirmed hardware) What is the performance impact? (None to next-to-none) Will this require new chipsets to enable? Will this harden against future side channel attacks? (Hopefully) What additional switches are in the firmware for these?

We have updated these questions with answers from our interview with Lisa Spelman. The full interview will be posted next week (probably).

New Instructions

In line with the roadmap, the new instructions will be VNNI for Cascade Lake and bfloat16 for Cooper Lake. It is likely that Ice Lake will have new instructions too, but those have not been mentioned at this time.

VNNI, or Vector Neural Network Instructions, essentially adds support for 8-bit integer (INT8) operations using the AVX-512 units. This is one step towards accelerating machine learning, which Intel cited as improving performance (along with software enhancements) by 11x compared to when Skylake-SP was first launched. VNNI4, a variant of VNNI, was seen in Knights Mill, and VNNI was originally meant to arrive with Ice Lake, but it would appear that Intel is moving it into Cascade Lake. It does make me wonder exactly what was needed to enable VNNI on Cascade Lake compared to what wasn't possible before, or whether this was just part of Intel's expected product segmentation.
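As a rough illustration (a sketch of our own in plain Python, not Intel code or actual intrinsics), the per-lane behaviour of a VNNI instruction such as vpdpbusd boils down to four INT8 multiplies summed and folded into a 32-bit accumulator in a single step:

    # Plain-Python model of the per-lane operation behind AVX-512 VNNI
    # (e.g. vpdpbusd): multiply four unsigned 8-bit values by four signed
    # 8-bit values, sum the products, and add the result to a 32-bit
    # accumulator. On pre-VNNI AVX-512 this takes a short sequence of
    # instructions; VNNI fuses it into one.

    def vnni_dot_accumulate(acc, a_bytes, b_bytes):
        """acc: 32-bit accumulator; a_bytes: four unsigned 8-bit values;
        b_bytes: four signed 8-bit values."""
        assert len(a_bytes) == len(b_bytes) == 4
        return acc + sum(a * b for a, b in zip(a_bytes, b_bytes))

    # One lane of an INT8 inner loop (e.g. a convolution):
    acc = vnni_dot_accumulate(0, [12, 200, 7, 45], [3, -1, 127, -8])
    print(acc)  # 12*3 + 200*(-1) + 7*127 + 45*(-8) = 365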

Also on the cards is support for bfloat16 in Cooper Lake. bfloat16 is a 16-bit floating point data format, used most recently by Google, that works like a standard 16-bit float but is arranged differently. The letter 'b' in this case stands for brain, as the format is aimed at deep learning. Where it differs from a standard 16-bit float is in how the number is defined.

A standard float splits its bits into a sign, an exponent, and a fraction. The value is given as:

  • value = (-1)^<sign> × (1 + <fraction>) × 2^(<exponent> − bias)

For a standard IEEE 754 half-precision (16-bit) number, there is one bit for the sign, five bits for the exponent (stored with a bias of 15), and ten bits for the fraction. The idea is that this gives a good mix of precision for fractional values while also covering a large enough range of numbers to work with.
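As a quick check on that formula, here is a minimal decoder of our own for normal half-precision values (subnormals, infinities and NaNs are left out for brevity):

    # Worked example of the formula above for IEEE 754 half precision:
    # 1 sign bit, 5 exponent bits (bias 15), 10 fraction bits.

    def decode_float16(bits):
        sign = (bits >> 15) & 0x1
        exponent = (bits >> 10) & 0x1F   # 5 bits, stored with a bias of 15
        fraction = bits & 0x3FF          # 10 bits, implicit leading 1
        return (-1) ** sign * (1 + fraction / 1024) * 2 ** (exponent - 15)

    print(decode_float16(0x3C00))  # 1.0
    print(decode_float16(0xC000))  # -2.0
    print(decode_float16(0x4248))  # 3.140625 (pi, to half precision)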

What bfloat16 does is use one bit for the sign, eight bits for the exponent, and seven bits for the fraction. This data type is meant to give float32-style range, but with reduced precision in the fraction. As machine learning workloads are resilient to this loss of precision, code that would have used a 32-bit float can often use a 16-bit bfloat16 instead.

These can be represented as:

Data Type Representations
Type     | Bits | Exponent | Fraction | Precision | Range | Speed
float32  | 32   | 8        | 23       | High      | High  | Slow
float16  | 16   | 5        | 10       | Low       | Low   | 2x Fast
bfloat16 | 16   | 8        | 7        | Lower     | High  | 2x Fast
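To make the table concrete, here is a minimal sketch of our own showing how bfloat16 relates to float32, simply keeping the top 16 bits (real hardware would typically round rather than truncate):

    import struct

    # bfloat16 is effectively the top 16 bits of a float32: same sign bit
    # and 8-bit exponent, with the fraction cut from 23 bits to 7.

    def float32_to_bfloat16_bits(x):
        bits32 = struct.unpack('<I', struct.pack('<f', x))[0]
        return bits32 >> 16              # keep sign, exponent, top 7 fraction bits

    def bfloat16_bits_to_float32(bits16):
        return struct.unpack('<f', struct.pack('<I', bits16 << 16))[0]

    # The last two values are outside float16's range but survive in bfloat16:
    for value in (3.14159265, 1e-30, 3.4e38):
        roundtrip = bfloat16_bits_to_float32(float32_to_bfloat16_bits(value))
        print(value, '->', roundtrip)    # e.g. 3.14159265 -> 3.140625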

This is breaking news and will be updated as we receive more information.

Comments

  • Cooe - Wednesday, August 8, 2018 - link

    This. The 10nm shipping to consumers next year is NOT the 10nm Intel had originally planned & spec'ed out. The new "node" is more like 12nm (using Intel's own measurement definitions). Either way, it's ALL bad news.
  • iwod - Thursday, August 9, 2018 - link

    This is going to be an unpopular opinion on anandtech.

    TSMC's 7nm is better than Intel's 14nm+++, but it is not miles ahead as you would imagine from the numbers. Intel's 10nm, on paper and in its initial chips, is actually much better than TSMC's 7nm. But of course that is an apples to oranges comparison because Intel's 10nm isn't even ready for HVM.

    The Apple 7nm chips are also custom and made to yield, so they are not exactly the same as the normal TSMC 7nm everyone else is getting.

    So there is no denying that Apple and TSMC, for the first time ever, will likely have better transistors than Intel, but it is still not a 14 vs 7 difference. One shouldn't be all hyped up and ignore all the technical and business details behind it.
  • Wilco1 - Thursday, August 9, 2018 - link

    I hate to burst your bubble but Intel hasn't had the best transistors for some time now. Centriq on 10nm was shown to be both faster and far more power efficient than Skylake. 7nm is only going to widen the gap further...
  • Carl Bicknell - Wednesday, August 8, 2018 - link

    Has there been any confirmation of the number of cores in the Xeon SP, for Cascade Lake, Cooper Lake and Ice Lake respectively?

    I read somewhere Cascade Lake is due to get 28 cores which is no improvement at all. I find it surprising they wouldn't try to add a few more.
  • siberian3 - Wednesday, August 8, 2018 - link

    With this news they will be competing with Epyc 3, which is the next one after Rome. Sad news for Intel. I think they are in the worst situation I can remember. The only hope is Jim Keller pulling another rabbit from his hat, but that's for 2021 or later.
  • Elstar - Wednesday, August 8, 2018 - link

    Anybody know what the 'b' in bfloat16 stands for? Normally, I'd guess "binary", but IEEE floating point already has naming conventions. For binary floating point, we have "half", "single", "double", and "quad" for 16-, 32-, 64-, and 128-bit binary floating point respectively. For decimal floating point, we have "decimal32", "decimal64", etc.
  • Ian Cutress - Wednesday, August 8, 2018 - link

    b is for 'brain' I believe. It's related to a different exponent/mantissa config vs a standard 16-bit value, optimized for machine learning
  • boeush - Wednesday, August 8, 2018 - link

    "...Cascade Lake server platform, which will feature CPUs that bring support for hardware security mitigations against side-channel attacks through partitioning."

    I'd love to see some more details regarding this...
  • Frenetic Pony - Wednesday, August 8, 2018 - link

    Meanwhile ex CEO Brian enjoys his quick, sneaky retirement with all benefits still included.
  • abufrejoval - Wednesday, August 8, 2018 - link

    Sounds like this here: https://software.intel.com/sites/default/files/man...

    And there is a critique here: https://lwn.net/Articles/758284/

    ARM goes for address tagging: https://www.qualcomm.com/media/documents/files/whi...
