LLVM implements AI policy requiring human oversight

The open-source LLVM project has introduced a policy allowing AI-generated code in contributions, provided a human reviews and understands each submission. This 'human in the loop' approach preserves accountability while addressing community concerns about transparency. The policy, developed with contributor input, aims to balance innovation with reliability in software development.

LLVM, a foundational collection of compiler and toolchain components used in projects like Clang, Rust, Swift, and the Linux kernel, has adopted a policy on AI tool use in contributions. Published on January 22, 2026, the guidelines permit developers to employ any AI tools but emphasize full accountability for the submitted work.

Under the policy, contributors must disclose the AI tool used, whether in the pull request description, the commit message, or the authorship details. They must review and understand their submissions well enough to justify them confidently during review and to ensure they merit a maintainer's attention. The rules clarify that violations will be handled through existing community processes.
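Assuming a conventional Git workflow, a commit-message disclosure might look like the following sketch. The patch subject, wording, and the `Assisted-by:` trailer are all illustrative; the policy describes where to disclose, not an exact format:

```
[ADT] Simplify SmallVector growth check (hypothetical example)

First draft produced with an AI coding assistant; I have
reviewed and tested every change and can explain it in review.

Assisted-by: <name of AI tool>
```

Putting the disclosure in a Git trailer keeps it machine-readable alongside conventional trailers such as `Co-authored-by:`, though a plain sentence in the pull request description would satisfy the policy equally well.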

The development process involved extensive community engagement. An LLVM member highlighted discrepancies between the project's stated AI guidance, its code of conduct, and actual practice, referencing a notable pull request discussed on Hacker News in which AI use was admitted after submission but not initially disclosed.

LLVM maintainer Reid Kleckner spearheaded the effort. His initial draft, inspired by Fedora's AI policy, proposed restrictions such as limiting newcomers to 150 lines of non-test code. After feedback from community meetings and forums, the final version shifted to more explicit requirements, focusing on review readiness and question-answering ability rather than vague ownership clauses.

The updated AI Tool Use Policy is now available on LLVM's documentation site, including examples of acceptable AI-assisted work and violation guidelines. This move aligns LLVM with other open-source initiatives adapting to AI's growing role in development.

Related news

[AI-generated image: tech leaders announcing the Linux Foundation's AI-powered cybersecurity initiative for open-source software with major partners.]

Linux Foundation announces AI security initiative with tech partners

AI-reported · AI-generated image

The Linux Foundation has launched a new initiative using Anthropic's Claude Mythos preview for defensive cybersecurity in open source software. Partners include AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan, Microsoft, NVIDIA, and Palo Alto Networks. The effort aims to secure critical software amid the rise of AI for open source maintainers.

The Linux kernel project has officially documented its policy on AI-assisted code contributions with the release of Linux 7.0. The guidelines require human accountability, disclosure of AI tool use, and a new 'Assisted-by' tag for patches involving AI. Sasha Levin formalized the consensus reached at the 2025 Maintainers Summit.


Greg Kroah-Hartman, maintainer of the Linux kernel, stated that AI-driven code review tools have become genuinely useful. He told The Register that the technology reached an inflection point about a month ago, leading to actionable bug reports.

The release of version 7.0 of the open-source Python library chardet has sparked controversy over whether an AI-assisted rewrite can change its original restrictive license. Maintainer Dan Blanchard used Anthropic's Claude tool to create a faster, MIT-licensed version, but original author Mark Pilgrim argues it violates the LGPL terms. The case highlights emerging legal and ethical questions in AI-generated code.


Researchers have used artificial intelligence to identify a significant performance boost in Linux's IO_uring subsystem. The discovery reveals a 50-80x improvement in efficiency. This finding highlights AI's role in optimizing open-source software.

Ethereum cofounder Vitalik Buterin has suggested using personal AI agents to automate voting in decentralized autonomous organizations, aiming to boost participation and protect privacy. The proposal, shared on social media platform X, addresses issues like low voter turnout and power concentration among large token holders. It incorporates cryptographic tools to safeguard sensitive data and prevent coercion.
