Intel shows its new Xeon CPUs with 64 GB of HBM2e on a 5,700 mm² package!


It is no secret, given everything that has leaked about the future Xeon server CPUs codenamed Sapphire Rapids, that Intel intends to use HBM memory, a type of memory so far reserved for HPC graphics cards but slowly making its way into server CPUs. At Hot Chips, Intel gave new details about HBM in Sapphire Rapids-SP.

The upcoming Intel Sapphire Rapids with HBM memory has already been formally announced by the firm now led by Pat Gelsinger. The model in question is Intel Sapphire Rapids-SP, which differs from the conventional Intel Sapphire Rapids in its use of HBM2e memory. Specifically, Intel has chosen to employ four stacks of eight dies each, which allows it to reach the largest possible memory capacity; such stacks are, however, significantly more expensive than the typical four-die stacks.
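The 64 GB figure in the headline follows from simple arithmetic, assuming the common 16 Gbit (2 GB) HBM2e die; the die capacity is an assumption on our part, not something Intel has stated:

```python
# Back-of-the-envelope check of the 64 GB total, assuming
# standard 16 Gbit (2 GB) HBM2e dies -- an assumption, not an Intel spec.
DIE_CAPACITY_GB = 2   # one HBM2e die: 16 Gbit = 2 GB
DIES_PER_STACK = 8    # the 8-Hi stacks described above
STACKS = 4            # four stacks around the CPU

total_gb = DIE_CAPACITY_GB * DIES_PER_STACK * STACKS
print(total_gb)  # 64
```

With conventional four-die stacks the same arithmetic would yield only 32 GB, which is why Intel pays the premium for 8-Hi stacks.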

HBM2e memory

There is no bandwidth difference between four-die and eight-die memory stacks; the difference is limited to storage capacity. Stacks of eight dies integrated vertically are also within the JEDEC standard for this type of memory, so this is not a special variant of HBM2e. It is exactly what we have seen in recent years in high-performance GPUs such as NVIDIA's Tesla and AMD's Instinct lines.
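The reason stack height changes capacity but not bandwidth is that every HBM2e stack exposes the same 1024-bit interface regardless of how many dies it contains. A minimal sketch, assuming the 3.2 Gb/s per-pin speed that the JEDEC HBM2e standard allows at its top end (Intel has not confirmed the pin speed):

```python
# Why an 8-Hi stack is no faster than a 4-Hi stack: bandwidth is set
# by the interface width and pin speed, not by the number of dies.
# The 3.2 Gb/s pin speed is an assumed value, not an Intel spec.
BUS_WIDTH_BITS = 1024   # per-stack interface width (JEDEC HBM2e)
PIN_SPEED_GBPS = 3.2    # assumed per-pin data rate

def stack_bandwidth_gbs(dies_per_stack):
    # dies_per_stack is deliberately unused: stacking more dies adds
    # capacity behind the same interface, not more bandwidth.
    return BUS_WIDTH_BITS * PIN_SPEED_GBPS / 8

print(stack_bandwidth_gbs(4))  # 409.6 GB/s for a 4-Hi stack
print(stack_bandwidth_gbs(8))  # 409.6 GB/s for an 8-Hi stack
```

Under these assumptions, four stacks would total roughly 1.6 TB/s of aggregate bandwidth either way; only the capacity doubles.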

In any event, the use of HBM memory instead of DDR is not a new development in the server CPU industry. There are already processors based on the ARM instruction set architecture that use HBM memory rather than DDR. HBM becomes the optimal form of memory for these CPUs, particularly for users who work with large volumes of data.

Sapphire Rapids-SP HBM

HBM memory, unlike traditional DDR memory, does not communicate with the processor over traces on the motherboard; instead, it requires an interposer on which both the processor and the HBM memory stacks sit. These combinations are known as 2.5D ICs, a newer sort of packaging and hence a different way of constructing a chip than the standard approach.

Intel will employ its EMIB technology to produce Sapphire Rapids, using it to interconnect the four tiles or chiplets that make up Sapphire Rapids-SP, as well as the HBM memory stacks. The processor will use DDR5 as its primary RAM, but EMIB also allows Intel to build a version with HBM memory.

While the normal version includes 10 EMIB interconnects and sits on a massive 4,446 mm² package, the HBM version raises the number of EMIB interconnects from 10 to 14 and increases the package size to 5,700 mm². This could imply that the HBM version of Sapphire Rapids-SP will require a different socket.

The presence of HBM memory in Sapphire Rapids-SP is driven by the tensor units in the processor, specifically Intel's new AMX units, which are critical for machine learning, a subfield of artificial intelligence. These units process massive amounts of data, and their performance depends on external memory bandwidth. As a result, the Intel Sapphire Rapids-SP with HBM memory is aimed squarely at the artificial intelligence market.
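The bandwidth argument can be made concrete with a rough roofline-style calculation: the fewer operations a workload performs per byte loaded, the more memory bandwidth it needs to keep the matrix units busy. All figures below are illustrative assumptions, not Intel specifications:

```python
# Roofline-style sketch: minimum arithmetic intensity (ops per byte)
# needed to keep an assumed 100 TOPS of matrix compute fully fed.
# Both bandwidth figures are illustrative assumptions.
PEAK_TOPS = 100       # assumed peak matrix throughput of the chip
DDR5_BW_GBS = 300     # assumed aggregate DDR5 bandwidth
HBM_BW_GBS = 1600     # assumed aggregate HBM2e bandwidth (4 stacks)

def min_intensity(peak_tops, bw_gbs):
    # Operations per byte required so that memory, not compute,
    # is no longer the bottleneck: peak_ops / bandwidth.
    return peak_tops * 1000 / bw_gbs   # TOPS -> GOPS over GB/s

print(round(min_intensity(PEAK_TOPS, DDR5_BW_GBS), 1))  # 333.3 ops/byte
print(round(min_intensity(PEAK_TOPS, HBM_BW_GBS), 1))   # 62.5 ops/byte
```

Under these assumptions, HBM lowers the intensity threshold by more than 5x, so far more machine-learning workloads can run at full speed than with DDR5 alone.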
