LLVM implements AI policy requiring human oversight

The open-source project LLVM has introduced a new policy allowing AI-generated code in contributions, provided humans review and understand the submissions. This 'human-in-the-loop' approach ensures accountability while addressing community concerns about transparency. The policy, developed with input from contributors, balances innovation with reliability in software development.

LLVM, a foundational collection of compiler and toolchain technologies that underpins projects such as Clang, Rust, and Swift, as well as Linux kernel builds, has adopted a policy on AI tool use in contributions. Published on January 22, 2026, the guidelines permit developers to use any AI tools they choose but hold them fully accountable for the work they submit.

Under the policy, contributors must disclose which AI tool they used, whether in the pull request description, the commit message, or the authorship details. They are required to review and understand their submissions well enough to justify them during code review and to ensure they merit a maintainer's attention. Violations will be handled through existing community processes.
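For illustration, such a disclosure can be as simple as a sentence or trailer in the commit message. The patch described below is invented, and the 'Assisted-by:' trailer is one plausible convention rather than a format the policy mandates:

```
[Support] Fix off-by-one in buffer size check

The previous bound allowed a one-byte overread when the input was
exactly the maximum buffer size.

This patch was drafted with the help of an AI coding assistant. I have
reviewed and tested the change and can answer questions about it.

Assisted-by: <AI tool name and version>
```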

The development process involved extensive community engagement. An LLVM member had highlighted discrepancies between the project's handling of AI contributions, its code of conduct, and actual practice, citing a notable pull request discussed on Hacker News in which AI use was only admitted after submission rather than disclosed up front.

LLVM maintainer Reid Kleckner spearheaded the effort. His initial draft, inspired by Fedora's AI policy, proposed restrictions such as limiting newcomers to 150 lines of non-test code. After feedback from community meetings and forums, the final version dropped the vaguer ownership language in favor of explicit requirements: contributions must be ready for review, and authors must be able to answer questions about them.

The updated AI Tool Use Policy is now available on LLVM's documentation site, complete with examples of acceptable AI-assisted contributions and guidance on handling violations. The move aligns LLVM with other open-source projects adapting to AI's growing role in development.

Related articles


Linux Foundation launches Agentic AI Foundation


The Linux Foundation has launched the Agentic AI Foundation to foster open collaboration on autonomous AI systems. Major tech companies, including Anthropic, OpenAI, and Block, contributed key open-source projects to promote interoperability and prevent vendor lock-in. The initiative aims to create neutral standards for AI agents that can make decisions and execute tasks independently.

Linus Torvalds, creator of the Linux kernel, has criticized efforts to create rules for AI-generated code submissions, calling them pointless. In a recent email, he argued that such policies would not deter malicious contributors and urged focus on code quality instead. This stance highlights ongoing tensions in open-source development over artificial intelligence tools.


The Linux developer community has shifted from debating AI's role to integrating it into kernel engineering processes. Developers now use AI for project maintenance, though questions persist about writing code with it. Concerns over copyright and open-source licensing remain.

b4, a kernel development tool for Linux, is now internally testing its AI agent designed to assist with code reviews. This step, known as dogfooding, marks a practical application of the AI feature within the tool's own development process. The update comes from Phoronix, a key source for Linux news.


OpenClaw, an open-source AI project formerly known as Moltbot and Clawdbot, has surged past 100,000 GitHub stars in less than a week. The execution engine enables AI agents to perform actions such as sending emails and managing calendars on users' behalf from within chat interfaces. Its rise highlights the technology's potential to simplify crypto use while raising security concerns.

Bandcamp has prohibited music generated wholly or substantially by AI on its platform, aiming to safeguard the human element in music creation. The policy, announced on January 14, 2026, allows users to flag suspected AI content for review and removal. This move contrasts with other streaming services grappling with an influx of AI-produced tracks.


Music labels and tech companies are addressing the unauthorized use of artists' work in training AI music generators like Udio and Suno. Recent settlements with major labels aim to create new revenue streams, while innovative tools promise to remove unlicensed content from AI models. Artists remain cautious about the technology's impact on their livelihoods.

Saturday, 24 January 2026, 07:33:30

AerynOS rejects LLM use in contributions over ethical concerns

Tuesday, 20 January 2026, 20:48:40

Before AI summit, an ethics checklist urged

Thursday, 15 January 2026, 14:22:35

Bandcamp bans AI-generated music from its platform

Wednesday, 14 January 2026, 14:22:25

Linus Torvalds uses AI for personal coding project

Tuesday, 13 January 2026, 19:07:17

Games Workshop bans AI in Warhammer creative processes

Friday, 26 December 2025, 01:16:14

Commentary urges end to anthropomorphizing AI

Wednesday, 24 December 2025, 10:12:48

AI boosts scientific productivity but erodes paper quality

Wednesday, 24 December 2025, 04:08:04

How AI coding agents function and their limitations

Saturday, 20 December 2025, 03:32:45

Gemini AI yields sloppy code in Ubuntu development helper script

Monday, 15 December 2025, 03:11:00

GNOME bans AI-generated extensions from shell store