5/19/2025 at 1:52:11 PM
It's a shame the article chose to compare solely against AMD CPUs, because AMD and Intel have very different L3 architectures. AMD CPUs have their cores organised into groups, called CCXs, each of which has its own small L3 cache. For example, the Turin-based 9755 has 16 CCXs, each with 32 MB of L3 cache: far less cache per core than the mainframe CPU being described. In contrast, Intel uses an approach that's a little closer to the Telum II CPU being described: a Granite Rapids AP chip such as the 6960P has 432 MB of L3 cache shared between 72 physical cores, each with its own 2 MB L2 cache. This is still considerably less cache, but it's not quite as stark a difference as the picture painted by the article.

This doesn't really detract from the overall point: stacking a huge per-core L2 cache and using cross-chip reads to emulate L3 with clever saturation metrics and management is very different to what any x86 CPU I'm aware of has ever done, and I wouldn't be surprised if it works extremely well in practice. It's just that it would have made a stronger article IMO if it had instead compared dedicated L2 + shared L2 (IBM) against dedicated L2 + shared L3 (Intel), instead of dedicated L2 + sharded L3 (AMD).
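To make the per-core math concrete, here's a quick back-of-the-envelope sketch. The cache sizes are the ones quoted above; the 128-core count for the 9755 (8 cores per CCX) is my assumption, so treat the output as illustrative:

    /* Cache-per-core arithmetic for the chips named above.
     * Cache sizes are from the comment; the 9755 core count
     * (128 cores = 16 CCXs x 8 cores) is an assumption. */
    #include <stdio.h>

    int main(void) {
        /* AMD Turin 9755: 16 CCXs, 32 MB L3 each */
        double amd_l3_total = 16 * 32.0;              /* 512 MB */
        double amd_l3_per_core = amd_l3_total / 128;  /* 4 MB/core, confined to a 32 MB CCX shard */

        /* Intel Granite Rapids 6960P: 432 MB shared L3, 72 cores, 2 MB private L2 each */
        double intel_l3_per_core = 432.0 / 72;        /* 6 MB/core, all of it reachable by any core */

        printf("AMD   9755 : %.0f MB L3 total, %.1f MB L3/core (within a 32 MB shard)\n",
               amd_l3_total, amd_l3_per_core);
        printf("Intel 6960P: 432 MB L3 total, %.1f MB L3/core + 2 MB private L2\n",
               intel_l3_per_core);
        return 0;
    }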
by jfindley
5/19/2025 at 3:49:37 PM
Granite Rapids is also a better example because it's an enterprise processor with a huge monolithic die (almost 600 square mm).

A key distinction, however, is latency. I don't know about Granite Rapids, but sources show that Sapphire Rapids had an L3 latency around 33 ns: https://www.tomshardware.com/news/5th-gen-emerald-rapids-cpu.... According to the article, the L2 latency in the Telum II chips is just 3.8 ns (about 21 clock cycles at 5.5 GHz). Sapphire Rapids has an L2 latency of about 16 clock cycles.
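Spelling out the ns-to-cycles conversion (cycles = latency_ns x clock_GHz; the 3.8 ns, 5.5 GHz, 33 ns, and 16-cycle figures are from above, while the ~2.5 GHz Sapphire Rapids clock used for the L3 conversion is my assumption):

    /* Converting the latency figures above between ns and cycles.
     * Telum II numbers are from the article; the 2.5 GHz Sapphire
     * Rapids clock is an assumed ballpark base frequency. */
    #include <stdio.h>

    int main(void) {
        double telum2_l2_ns = 3.8, telum2_ghz = 5.5;
        printf("Telum II L2: %.1f ns * %.1f GHz = %.0f cycles\n",
               telum2_l2_ns, telum2_ghz, telum2_l2_ns * telum2_ghz);  /* ~21 */

        double spr_l3_ns = 33.0, spr_ghz = 2.5;
        printf("Sapphire Rapids L3: %.0f ns ~= %.0f cycles at %.1f GHz\n",
               spr_l3_ns, spr_l3_ns * spr_ghz, spr_ghz);  /* ~83: roughly 5x its ~16-cycle L2 */
        return 0;
    }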
IBM's cache architecture enables a different trade-off in balancing L2 versus L3. In Intel's architecture, the shared L3 is inclusive, so it has to be at least as large as the combined L2 caches it backs (and preferably a lot larger). That weighs in favor of making L2 smaller, so most of your on-chip cache is actually L3. But L3 is always going to have higher latency. IBM's design improves single-thread performance by allowing most of the on-chip cache to be lower-latency L2.
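For what it's worth, latency numbers like these are conventionally measured with a dependent pointer chase, where each load's address comes from the previous load so the prefetcher can't hide the latency. A minimal generic sketch (not IBM's or Intel's methodology; size the buffer to target the cache level you care about):

    /* Dependent-pointer-chase latency sketch: the generic technique
     * behind per-level latency numbers. 512 KB targets a mid-size L2;
     * grow ENTRIES to spill into L3 or DRAM. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ENTRIES (1 << 16)          /* 64K pointers * 8 B = 512 KB */
    #define CHASES  (100 * 1000 * 1000L)

    int main(void) {
        void **buf = malloc(ENTRIES * sizeof(void *));
        size_t *perm = malloc(ENTRIES * sizeof(size_t));
        /* Build one random cycle through the buffer so the hardware
         * prefetcher can't predict the next load address. */
        for (size_t i = 0; i < ENTRIES; i++) perm[i] = i;
        for (size_t i = ENTRIES - 1; i > 0; i--) {  /* Fisher-Yates shuffle */
            size_t j = (size_t)rand() % (i + 1);
            size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
        }
        for (size_t i = 0; i < ENTRIES; i++)
            buf[perm[i]] = &buf[perm[(i + 1) % ENTRIES]];

        void **p = &buf[perm[0]];
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < CHASES; i++)
            p = (void **)*p;            /* each load depends on the last */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        /* Printing p keeps the compiler from eliding the chase. */
        printf("%p: ~%.2f ns per dependent load\n", (void *)p, ns / CHASES);
        return 0;
    }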
by rayiner