LLVM implements AI policy requiring human oversight

The open-source LLVM project has introduced a policy allowing AI-generated code in contributions, provided humans review and understand the submissions. This human-in-the-loop approach ensures accountability while addressing community concerns about transparency. The policy, developed with input from contributors, balances innovation with reliability in software development.

LLVM, a foundational collection of compiler and toolchain components that underpins projects such as Clang, Rust, and Swift and is used to build the Linux kernel, has adopted a policy on AI tool use in contributions. Published on January 22, 2026, the guidelines permit developers to employ any AI tools but hold them fully accountable for the submitted work.

Under the policy, contributors must disclose the AI tool used, whether in the pull request description, the commit message, or the authorship details. They are required to review and understand their submissions well enough to justify them confidently during review and to ensure they merit a maintainer's attention. The rules clarify that violations will be handled through existing community processes.
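For illustration, a disclosure placed in a commit message could look like the sketch below. The commit subject and the "Assisted-by:" trailer name are hypothetical; the policy requires that the tool be disclosed, but the exact phrasing shown here is this article's assumption, not a format mandated by LLVM.

```text
[ADT] Simplify SmallVector growth logic    (hypothetical commit subject)

This change was drafted with an AI coding assistant, then reviewed,
tested, and edited by hand before submission.

Assisted-by: <AI tool name>    (hypothetical trailer; any clear
                                disclosure in the PR description,
                                commit message, or authorship
                                details would satisfy the policy)
```

The same information could equally be stated in prose in the pull request description; what matters under the policy is that reviewers can see which tool was involved.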

The development process involved extensive community engagement. An LLVM member highlighted discrepancies between the project's handling of AI, its code of conduct, and actual practice, referencing a notable pull request discussed on Hacker News in which AI use was admitted after submission but not initially disclosed.

LLVM maintainer Reid Kleckner spearheaded the effort. His initial draft, inspired by Fedora's AI policy, proposed restrictions such as limiting newcomers to 150 lines of non-test code. After feedback from community meetings and forums, the final version shifted to more explicit requirements, focusing on review readiness and the ability to answer questions rather than on vague ownership clauses.

The updated AI Tool Use Policy is now available on LLVM's documentation site, including examples of acceptable AI-assisted work and violation guidelines. This move aligns LLVM with other open-source initiatives adapting to AI's growing role in development.
