The Linux Foundation has secured $12.5 million in grants from AI companies to bolster open source software security. The funding aims to help maintainers overwhelmed by AI-generated vulnerability reports, and will be managed by Alpha-Omega and the Open Source Security Foundation.
The Linux Foundation announced $12.5 million in grants on March 19, 2026, aimed at strengthening open source software security. The initiative, managed by its security-focused projects Alpha-Omega and the Open Source Security Foundation (OpenSSF), targets a growing problem: open source maintainers are struggling with a surge of security findings from AI tools, some legitimate, others hallucinated, arriving at a scale they cannot handle alone. Contributing AI companies include Anthropic, Google, Google DeepMind, GitHub, Microsoft, and OpenAI. The projects plan to work directly with maintainers to build practical security tooling that integrates into existing workflows, helping them keep pace with rising demands without being overwhelmed.

Greg Kroah-Hartman, a Linux Foundation Fellow and Linux kernel maintainer, confirmed the problem is real, pointing to a prior incident. In 2025, cURL's bug bounty program on HackerOne was flooded with AI-generated reports that lacked proper research. cURL creator Daniel Stenberg warned that submitters of such reports would be publicly named, ridiculed, and banned, but the warning did not deter them. By January 2026, the program had received 20 such submissions in its first few weeks alone, and it was shut down entirely.

Proponents view the grants as a constructive step, though not a full solution to the AI-generated noise burdening open source security efforts.