LLVM implements AI policy requiring human oversight

The open-source project LLVM has introduced a new policy allowing AI-generated code in contributions, provided humans review and understand the submissions. This 'human in the loop' approach ensures accountability while addressing community concerns about transparency. The policy, developed with input from contributors, balances innovation with reliability in software development.

LLVM, a foundational collection of compiler and toolchain components that powers Clang and backs languages such as Rust and Swift, and which can also be used to build the Linux kernel, has adopted a policy on AI tool use in contributions. Published on January 22, 2026, the guidelines permit developers to employ any AI tools but emphasize full accountability for the submitted work.

Under the policy, contributors must disclose which AI tool they used, whether in the pull request description, the commit message, or the authorship details. They are required to review and understand their submissions, to be able to confidently justify them during review, and to ensure they merit a maintainer's attention. The rules clarify that violations will be handled through existing community processes.
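The article does not reproduce the policy's exact disclosure syntax, so the following commit message is only an illustrative sketch; the subject line, wording, and "Assisted-by" trailer are assumptions, not LLVM's prescribed format (the leading "#" line is a git comment and would be stripped on commit):

    # Illustrative example only - not LLVM's mandated disclosure format
    [Docs] Clarify wording in the AI tool use section

    Parts of this patch were drafted with the help of an AI coding
    assistant. I have reviewed and tested the change and can explain
    and justify every line during code review.

    Assisted-by: <AI tool name and version>

Whatever the exact form, the intent is the same: the reviewer can see that a tool was involved and that a human stands behind the change.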

The development process involved extensive community engagement. An LLVM member highlighted discrepancies between the project's handling of AI, its code of conduct, and actual practice, referencing a notable pull request discussed on Hacker News in which AI use was admitted only after submission rather than disclosed up front.

LLVM maintainer Reid Kleckner spearheaded the effort. His initial draft, inspired by Fedora's AI policy, proposed restrictions such as limiting newcomers to 150 lines of non-test code. After feedback from community meetings and forums, the final version shifted to more explicit requirements, focusing on review readiness and question-answering ability rather than vague ownership clauses.

The updated AI Tool Use Policy is now available on LLVM's documentation site, including examples of acceptable AI-assisted work and guidance on handling violations. The move aligns LLVM with other open-source projects adapting to AI's growing role in development.

Related articles


Linux Foundation launches Agentic AI Foundation


The Linux Foundation has launched the Agentic AI Foundation to foster open collaboration on autonomous AI systems. Major tech companies, including Anthropic, OpenAI, and Block, contributed key open-source projects to promote interoperability and prevent vendor lock-in. The initiative aims to create neutral standards for AI agents that can make decisions and execute tasks independently.

Linus Torvalds, creator of the Linux kernel, has criticized efforts to create rules for AI-generated code submissions, calling them pointless. In a recent email, he argued that such policies would not deter malicious contributors and urged focus on code quality instead. This stance highlights ongoing tensions in open-source development over artificial intelligence tools.


The Linux developer community has shifted from debating AI's role to integrating it into kernel engineering processes. Developers now use AI for project maintenance, though questions persist about writing code with it. Concerns over copyright and open-source licensing remain.

The b4 kernel development tool for Linux is now internally testing its AI agent designed to assist with code reviews. This step, known as dogfooding, marks a practical application of the AI feature within the tool's own development process. The update was reported by Phoronix, a key source for Linux news.


OpenClaw, an open-source AI project formerly known as Moltbot and Clawdbot, has surged to over 100,000 GitHub stars in less than a week. This execution engine enables AI agents to perform actions like sending emails and managing calendars on users' behalf within chat interfaces. Its rise highlights the potential to simplify crypto usability, while also raising security concerns.

Bandcamp has prohibited music generated wholly or substantially by AI on its platform, aiming to safeguard the human element in music creation. The policy, announced on January 14, 2026, allows users to flag suspected AI content for review and removal. This move contrasts with other streaming services grappling with an influx of AI-produced tracks.


Music labels and tech companies are pursuing the unauthorized use of artists' works in the training of AI music generators such as Udio and Suno. Recent settlements with major labels aim to create new revenue streams, while innovative tools are intended to remove unlicensed content from AI models. Artists remain wary of the technology's impact on their livelihoods.

Saturday, January 24, 2026, 07:33

AerynOS rejects LLM use in contributions over ethical concerns

Tuesday, January 20, 2026, 20:48

Ethics checklist demanded ahead of AI summit

Thursday, January 15, 2026, 14:22

Bandcamp bans AI-generated music on its platform

Wednesday, January 14, 2026, 14:22

Linus Torvalds uses AI for personal coding project

Tuesday, January 13, 2026, 19:07

Games Workshop bans AI in Warhammer creative processes

Friday, December 26, 2025, 01:16

Commentary urges end to anthropomorphizing AI

Wednesday, December 24, 2025, 10:12

AI boosts scientific productivity but erodes paper quality

Wednesday, December 24, 2025, 04:08

How AI coding agents function and their limitations

Saturday, December 20, 2025, 03:32

Gemini AI yields sloppy code in Ubuntu development helper script

Monday, December 15, 2025, 03:11

GNOME bans AI-generated extensions from shell store
