The Linux kernel project has officially documented its policy on AI-assisted code contributions with the release of Linux 7.0. The guidelines require human accountability, disclosure of AI tool use, and a new 'Assisted-by' tag for patches involving AI. Sasha Levin formalized the consensus reached at the 2025 Maintainers Summit.
At the 2025 Maintainers Summit, Sasha Levin advocated for clear rules on AI tools in kernel development. The resulting policy emphasizes that human contributors must take full responsibility for any AI-generated code, including its compliance with the GPL-2.0-only license. Purely machine-generated submissions are not accepted, and AI agents cannot sign off on patches with Signed-off-by tags, because the Developer Certificate of Origin requires a human to vouch for every contribution.

Levin committed to documenting these principles rather than building enforcement mechanisms, and the new 'AI Coding Assistants' guidelines now sit in the kernel's process documentation alongside the other contribution rules. The policy builds on earlier discussions in which Linus Torvalds questioned the need for a dedicated tag, suggesting that changelogs would suffice. The community nevertheless opted for an 'Assisted-by' tag, formatted as 'Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]'. An example given for a patch involving multiple tools is 'Assisted-by: Claude:claude-3-opus coccinelle sparse'.

Greg Kroah-Hartman, the stable kernel maintainer, has already applied this approach in his 'clanker' branch: he used AI-assisted fuzzing on the ksmbd and SMB code, identified issues, and submitted fixes with explicit instructions for reviewers to verify the results independently.

Other projects have taken harder lines. Gentoo banned AI-generated contributions outright in 2024, citing copyright, quality, and ethical concerns, while NetBSD labels LLM-generated code as 'tainted' and requires core developer approval before it can be merged. Linux takes a more permissive stance, relying on human reviewers to validate AI output.
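Put together, the trailer block of a patch under the new policy might look like the following sketch. The subject line, author name, and email are illustrative placeholders, not from a real submission; the Assisted-by line uses the example format quoted above:

```
ksmbd: example fix for a parser issue (hypothetical subject line)

<changelog body describing the change and how it was verified>

Assisted-by: Claude:claude-3-opus coccinelle sparse
Signed-off-by: Jane Developer <jane@example.com>
```

The Signed-off-by line must still come from the human submitter, since the Developer Certificate of Origin cannot be asserted by a tool.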