EROFS advances container efficiency with page cache sharing

The Enhanced Read-Only File System (EROFS) has introduced page cache sharing to significantly reduce memory usage in containerized environments. The feature allows multiple containers backed by the same file system image to share cached pages, reportedly cutting memory waste by 40% to 60%.

Originally created by Huawei for mobile devices, EROFS has evolved into a key tool for container deployments in cloud and edge settings. The new page cache sharing feature lets multiple mounts of identical file system images share a single in-memory cache, avoiding the redundant copies that inflate resource demands. In container-heavy workloads, such as Kubernetes clusters, this addresses the duplicative caching that hampers performance during rapid pod spin-ups.

Kernel contributors, including Hongzhen Luo and Hongbo Li, have driven this development through patch series submitted to the Linux kernel mailing list. The latest revision, v11, refines earlier prototypes by fixing bugs, adding readahead support, and improving compatibility with fscache mode and anonymous files. The patches build on work from earlier in the year and leverage the kernel's folio infrastructure, merged in version 5.16, for efficient memory management.

Benchmarks demonstrate clear benefits. Tests with Android container images showed significant memory reductions when sharing caches across mounts. For example, deploying similar TensorFlow containers on one node achieved up to 20% memory savings, while broader container scenarios yielded 40% to 60% cuts during peak loads like boot storms. Phoronix reports highlight improved read throughput alongside lower memory consumption, especially for overlapping data in machine learning workflows.
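A rough, generic way to observe this effect on a node is to compare the kernel's reported page cache size before and after launching a second container from the same image. The helper below simply parses the `Cached:` line of a meminfo-format file; it is a sketch for illustration, not part of the EROFS tooling or the benchmarks cited above:

```shell
# Print the page cache footprint in KiB from a meminfo-format file
# (defaults to /proc/meminfo). With cache sharing in effect, starting
# a duplicate container should add comparatively little to this number.
cached_kib() {
  awk '/^Cached:/ {print $2}' "${1:-/proc/meminfo}"
}
cached_kib
```

Running it before and after a duplicate container launch gives a crude but quick signal of how much new page cache the second instance actually consumed.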

The 'sharecache' mount option activates this capability; because EROFS images are read-only, the shared pages are immutable, which keeps concurrent access across mounts safe. Community discussion on platforms like X has praised its potential, with one post noting it could "cut container memory waste by 40-60%," reducing costs for hyperscale operators. Adoption is expanding beyond Huawei, attracting contributors from Alibaba and others, as EROFS competes with alternatives like SquashFS on compression and caching.
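Assuming the option lands as described in the patch series, enabling it would look like an ordinary filesystem-specific mount flag. The image name and mount points below are placeholders:

```shell
# Hypothetical usage: mount the same read-only EROFS image at two
# mount points with page cache sharing enabled, so identical file
# data occupies a single set of cached pages. Requires root.
mount -t erofs -o ro,sharecache app-image.erofs /run/containers/a
mount -t erofs -o ro,sharecache app-image.erofs /run/containers/b
```

In a container runtime, each such mount would typically back one container's lower layer, so every instance spawned from the image reads through the same cache.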

Challenges include securing shared caches against data leaks between containers, with maintainers debating edge cases on the mailing list. Future integrations with cgroups and tools like CRI-O or Docker could enhance density in microservices and IoT gateways, promoting sustainable computing in data-intensive environments.

