Intel Sapphire Rapids – 56 Cores, DDR5 + HBM RAM

DeBuffAndy
How Intel will fight against AMD in CPUs: 56 Cores and DDR5 + HBM RAM

This year’s Intel Architecture Day is giving us a lot to talk about. In addition to publicly presenting the features of its dedicated gaming GPU, Alchemist, and discussing its GPU architecture for server accelerators, the company has also shared details of its next generation of Xeon processors, known as Sapphire Rapids. More specifically, it has explained how these chips will manage HBM + DDR5 memory, and it does so in a way that is strikingly reminiscent of the first-generation AMD EPYC.

The memory interface topology of Intel’s upcoming Sapphire Rapids Xeon processors closely mirrors that of the first-generation AMD EPYC “Naples”: a modular design built from several chips in one processor package. Back in 2017, the Skylake-SP Xeons still used a monolithic die, but that era has passed, and multi-tile chips appear to be the future of modern computing.

Intel’s Sapphire Rapids memory topology, with HBM + DDR5

Back then, Intel touted Skylake-SP’s monolithic 6-channel DDR4 memory interface as a competitive advantage over AMD’s first Zen-based enterprise processor, EPYC “Naples,” whose memory controllers were split across several dies. Each of Naples’ eight-core “Zeppelin” chiplets carries its own 2-channel DDR4 memory interface, so CPU cores on any of the four dies can access memory and I/O controlled by any of the other dies, in a topology akin to “4P on a stick.”

Intel is taking a suspiciously similar approach with Sapphire Rapids: instead of a monolithic die it uses four compute tiles (dies), which Intel claims helps with scalability in both directions. Each compute tile carries its own 2-channel DDR5 or 1024-bit HBM memory interface, which adds up to an 8-channel DDR5 interface for the processor as a whole.
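To put that channel count in perspective, here is a back-of-envelope C sketch that turns the 4-tile × 2-channel layout into a theoretical peak DDR5 bandwidth figure. The DDR5-4800 data rate is an assumption for illustration only; Intel has not confirmed the final supported memory speeds.

```c
#include <stdio.h>

/* Back-of-envelope peak-bandwidth estimate for an 8-channel DDR5 interface.
 * The DDR5-4800 transfer rate below is an assumed figure, not a confirmed
 * Sapphire Rapids specification. */
int main(void)
{
    const int channels       = 8;      /* 4 tiles x 2 DDR5 channels each        */
    const int bus_width_bits = 64;     /* per DDR5 channel (2 x 32-bit subchannels) */
    const double mt_per_sec  = 4800.0; /* assumed DDR5-4800 transfer rate       */

    double bytes_per_transfer = bus_width_bits / 8.0;
    double gb_per_sec = channels * bytes_per_transfer * mt_per_sec / 1000.0;

    printf("Estimated peak DDR5 bandwidth: %.1f GB/s\n", gb_per_sec);
    return 0;
}
```

At those assumed speeds the eight channels work out to roughly 307 GB/s of theoretical peak bandwidth, which is the figure the HBM stacks would then sit on top of.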

Sapphire Rapids memory

According to Intel, the CPU cores on each tile have equal access to memory, last-level cache, and I/O controlled by any die. Inter-tile communication is handled physically by EMIB (55-micron bump-pitch wiring); UPI 2.0 serves as the interface between sockets, with each of the four compute tiles providing 24 UPI 2.0 links running at 16 GT/s.

Intel has not yet specified how this memory is presented to the operating system or how the NUMA hierarchy is organized. Still, a significant portion of Intel’s engineering effort appears to be focused on making this disaggregated memory and I/O behave as if “Sapphire Rapids” were a monolithic die. According to the company, high cross-sectional bandwidth and consistently low latency are a common denominator across the SoC.
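Until Intel documents how the tiles are exposed, the practical way to find out on a running system is simply to ask the OS. A minimal libnuma sketch (Linux, built with -lnuma) that lists the NUMA nodes the kernel reports and how much memory each holds might look like this; nothing here is Sapphire Rapids specific, it is just where any sub-NUMA or HBM regions would show up.

```c
#include <stdio.h>
#include <numa.h>   /* libnuma: build with -lnuma */

/* Enumerate the NUMA nodes the kernel exposes and report their memory sizes.
 * On a multi-tile CPU, this is where any per-tile or HBM memory regions
 * would become visible to software. */
int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int max_node = numa_max_node();
    for (int node = 0; node <= max_node; node++) {
        long long free_bytes = 0;
        long long size_bytes = numa_node_size64(node, &free_bytes);
        if (size_bytes < 0)
            continue; /* node not present */
        printf("node %d: %lld MiB total, %lld MiB free\n",
               node, size_bytes >> 20, free_bytes >> 20);
    }
    return 0;
}
```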

Sapphire Rapids HBM DDR5 memory

HBM support is another intriguing feature of the “Sapphire Rapids” Xeons, and it could be a game-changer for the CPU in high-density computing and HPC. Specific Sapphire Rapids Xeon models may therefore include HBM on the CPU package itself.

This memory may be used as a sacrificial cache in front of the compute tiles’ caches, greatly boosting the memory subsystem’s effective capacity; as the primary, standalone main memory; or as flat main memory alongside DDR5, with both exposed as flat memory regions.

Intel refers to the caching arrangement as software-transparent HBM + DDR5 mode and to the flat arrangement as software-visible HBM + DDR5 mode.
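In the software-visible (flat) mode, the HBM would most plausibly appear as its own memory-only NUMA node, letting an application place hot data on it explicitly. The C sketch below, using a hypothetical node number for the HBM region (the real node layout on Sapphire Rapids has not been disclosed), shows how such an allocation would look with libnuma.

```c
#include <stdio.h>
#include <string.h>
#include <numa.h>   /* libnuma: build with -lnuma */

#define HBM_NODE 2  /* hypothetical NUMA node for the HBM region; the actual
                       numbering on Sapphire Rapids has not been published */

int main(void)
{
    if (numa_available() < 0 || HBM_NODE > numa_max_node()) {
        fprintf(stderr, "NUMA node %d not present on this system\n", HBM_NODE);
        return 1;
    }

    size_t len = 64UL << 20;                       /* 64 MiB working buffer   */
    void *buf  = numa_alloc_onnode(len, HBM_NODE); /* bind pages to that node */
    if (!buf) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    memset(buf, 0, len);  /* touch the pages so they are actually placed */
    /* ... bandwidth-hungry work on buf would go here ... */

    numa_free(buf, len);
    return 0;
}
```

In the software-transparent mode, by contrast, no such code is needed: the HBM acts as a cache and ordinary allocations benefit from it automatically.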
