LLVM implements AI policy requiring human oversight

The open-source LLVM project has introduced a new policy permitting AI-generated code in contributions, provided the humans submitting it review and understand the work. This 'human in the loop' approach keeps accountability with the contributor while addressing community concerns about transparency. The policy, developed with input from contributors, aims to balance adoption of new tooling with the reliability the project depends on.

LLVM, a foundational collection of compiler and toolchain components that underpins Clang, serves as the code-generation backend for languages such as Rust and Swift, and can be used to build the Linux kernel, has adopted a policy on AI tool use in contributions. Published on January 22, 2026, the guidelines permit developers to employ any AI tools they choose but emphasize full accountability for the submitted work.

Under the policy, contributors must disclose which AI tool they used, whether in the pull request description, the commit message, or the authorship details. They are required to review and understand their submissions well enough to justify them during code review and to ensure they merit a maintainer's attention. The rules clarify that violations will be handled through existing community processes.

The development process involved extensive community engagement. An LLVM community member had highlighted discrepancies between the project's handling of AI contributions, its code of conduct, and actual practice, pointing to a notable pull request discussed on Hacker News in which AI use was acknowledged only after submission rather than disclosed up front.

LLVM maintainer Reid Kleckner spearheaded the effort. His initial draft, inspired by Fedora's AI policy, proposed restrictions such as limiting newcomers to 150 lines of non-test code. After feedback from community meetings and forum discussions, the final version shifted away from those restrictions toward more explicit requirements, focusing on whether a change is ready for review and whether the contributor can answer questions about it, rather than on vague ownership clauses.

The updated AI Tool Use Policy is now available on LLVM's documentation site, including examples of acceptable AI-assisted work and guidance on how violations are handled. This move aligns LLVM with other open-source projects adapting to AI's growing role in development.

Related Articles


Linux Foundation launches Agentic AI Foundation


The Linux Foundation has launched the Agentic AI Foundation to foster open collaboration on autonomous AI systems. Major tech companies, including Anthropic, OpenAI, and Block, contributed key open-source projects to promote interoperability and prevent vendor lock-in. The initiative aims to create neutral standards for AI agents that can make decisions and execute tasks independently.

Linus Torvalds, creator of the Linux kernel, has criticized efforts to create rules for AI-generated code submissions, calling them pointless. In a recent email, he argued that such policies would not deter malicious contributors and urged focus on code quality instead. This stance highlights ongoing tensions in open-source development over artificial intelligence tools.


The Linux developer community has shifted from debating AI's role to integrating it into kernel engineering processes. Developers now use AI for project maintenance, though questions persist about writing code with it. Concerns over copyright and open-source licensing remain.

The b4 kernel development tool for Linux is now internally testing its AI agent designed to assist with code reviews. This step, known as dogfooding, marks a practical application of the AI feature within the tool's own development process. The update was reported by Phoronix, a key source for Linux news.


OpenClaw, an open-source AI project formerly known as Moltbot and Clawdbot, has surged to over 100,000 GitHub stars in less than a week. This execution engine enables AI agents to perform actions like sending emails and managing calendars on users' behalf within chat interfaces. Its rise highlights the project's potential to simplify crypto usability while raising security concerns.

Bandcamp has prohibited music generated wholly or substantially by AI on its platform, aiming to safeguard the human element in music creation. The policy, announced on January 14, 2026, allows users to flag suspected AI content for review and removal. This move contrasts with other streaming services grappling with an influx of AI-produced tracks.


Music labels and tech companies are addressing the unauthorized use of artists' work in training AI music generators like Udio and Suno. Recent settlements with major labels aim to create new revenue streams, while innovative tools promise to remove unlicensed content from AI models. Artists remain cautious about the technology's impact on their livelihoods.
