Intel's Cache Aware Scheduling work for the Linux kernel has shown performance gains on Xeon 6 "Granite Rapids" processors. Intel engineers developed the functionality to optimize task placement on systems with multiple last-level cache domains, and benchmarks on a dual Xeon 6980P server demonstrate benefits across a variety of workloads.
Over the past year, Intel engineers have advanced Cache Aware Scheduling in the Linux kernel. This yet-to-be-merged code enables the kernel to group tasks that share data onto the same last-level cache domain, minimizing cache misses and data bouncing between cores. While Intel led the effort, the feature benefits processors from other vendors with multiple cache domains as well.
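To give a sense of what the scheduler is doing, below is a rough userspace analogue sketched in C: it reads which CPUs share cpu0's last-level cache from sysfs and pins the calling process to that set with sched_setaffinity(). This is purely illustrative and not code from the patch series; it assumes a Linux sysfs layout where /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list describes the LLC, which holds on typical x86 servers but is not guaranteed on every platform. Cache Aware Scheduling performs this kind of grouping automatically and dynamically inside the kernel, without any manual pinning.

```c
/* Illustrative userspace analogue of LLC-aware placement: read which CPUs
 * share cpu0's last-level cache from sysfs and pin this process to them.
 * The cache-aware scheduler does comparable grouping automatically and
 * per-task inside the kernel; this is only a manual approximation.
 * Assumes index3 corresponds to the LLC, as on typical x86 servers. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list";
    FILE *f = fopen(path, "r");
    if (!f) {
        perror("fopen shared_cpu_list");
        return 1;
    }

    char list[4096];
    if (!fgets(list, sizeof(list), f)) {
        fclose(f);
        return 1;
    }
    fclose(f);
    printf("cpu0's LLC is shared by CPUs: %s", list);

    /* Parse the cpulist (e.g. "0-42,128-170") into a cpu_set_t. */
    cpu_set_t set;
    CPU_ZERO(&set);
    for (char *tok = strtok(list, ",\n"); tok; tok = strtok(NULL, ",\n")) {
        int lo, hi;
        if (sscanf(tok, "%d-%d", &lo, &hi) == 2) {
            for (int cpu = lo; cpu <= hi; cpu++)
                CPU_SET(cpu, &set);
        } else if (sscanf(tok, "%d", &lo) == 1) {
            CPU_SET(lo, &set);
        }
    }

    /* Restrict this process (and any threads it spawns) to that LLC domain,
     * so cooperating tasks keep their shared data in one cache. */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* Real work sharing data between threads would follow here. */
    return 0;
}
```

The manual approach only works when you know in advance which tasks share data; the appeal of the in-kernel feature is that it identifies such groups on its own and adapts as workloads change.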
Back in October, tests on a dual AMD EPYC 9965 server showed improvements in several workloads. Similar evaluations now turn to Intel's Xeon 6 "Granite Rapids" series: a Gigabyte R284-A92-AAL1 server with two Xeon 6980P processors and 24 x 64GB DDR5-8800 MRDIMMs was used to benchmark the latest iteration of the code.
The tests used the cache-aware-v2 Git branch, which is aligned with Linux 6.18-rc7, and compared it against the mainline Linux 6.18.7 kernel without the scheduling enhancement. The system ran Ubuntu 25.10 with its default packages, including GCC 15.2, aside from the kernel swap. Together the results show how Cache Aware Scheduling can improve efficiency on high-end server hardware like Granite Rapids and help make the case for its eventual upstream integration.