AerynOS rejects LLM use in contributions over ethical concerns

AerynOS, an alpha-stage Linux distribution, has implemented a policy banning large language models in its development and community activities. The move addresses ethical issues with training data, environmental impacts, and quality risks. Exceptions are limited to translation and accessibility needs.

AerynOS, a Linux distribution focused on atomic updates and still in its alpha phase, has updated its contributing guidelines to prohibit the use of large language models (LLMs) throughout the project. This decision, announced on Reddit, applies to all aspects of development and community engagement, including source code, documentation, issue reports, and artwork.

The policy stems from several key concerns. Developers highlight ethical problems with how LLMs are trained, including the sourcing of data. They also point to the high environmental costs, such as excessive electricity and water consumption involved in building and operating these models. Additionally, there are worries about how LLM-generated content could degrade the overall quality of contributions and raise potential copyright issues.

While the ban is comprehensive, AerynOS allows narrow exceptions. Contributors may use LLMs only to translate text into English for issues or comments, and the project may consider further allowances for accessibility purposes. For user support, the team advises against relying on AI chatbots instead of the official documentation. Support requests based on inaccurate LLM output risk being ignored, as maintainers do not want to spend time debugging errors introduced by third-party tools.

The policy is intended to ensure that every contribution is reviewed and understood by a human, upholding the project's technical standards and reliability. It reflects a growing trend among open-source projects to scrutinize AI use amid broader debates over its implications.

Related articles


Linux Foundation launches Agentic AI Foundation


The Linux Foundation has launched the Agentic AI Foundation to foster open collaboration on autonomous AI systems. Major tech companies, including Anthropic, OpenAI, and Block, contributed key open-source projects to promote interoperability and prevent vendor lock-in. The initiative aims to create neutral standards for AI agents that can make decisions and execute tasks independently.

The open-source project LLVM has introduced a new policy allowing AI-generated code in contributions, provided humans review and understand the submissions. This 'human in the loop' approach ensures accountability while addressing community concerns about transparency. The policy, developed with input from contributors, balances innovation with reliability in software development.


Linus Torvalds, creator of the Linux kernel, has criticized efforts to create rules for AI-generated code submissions, calling them pointless. In a recent email, he argued that such policies would not deter malicious contributors and urged focus on code quality instead. This stance highlights ongoing tensions in open-source development over artificial intelligence tools.

Linus Torvalds, the creator of the Linux kernel, has strongly criticized discussions about AI-generated content in kernel documentation. He called talk of 'AI slop' pointless and stupid. The comments highlight ongoing tensions around AI in open-source development.


A CNET commentary argues that describing AI as having human-like qualities such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.

The b4 kernel development tool for Linux is now internally testing its AI agent designed to assist with code reviews. This step, a form of dogfooding, marks a practical application of the AI feature within the tool's own development process. The update was reported by Phoronix, a key source for Linux news.


A Guardian report has revealed that OpenAI's latest AI model, GPT-5.2, draws from Grokipedia, an xAI-powered online encyclopedia, when addressing sensitive issues like the Holocaust and Iranian politics. While the model is touted for professional tasks, tests question its source reliability. OpenAI defends its approach by emphasizing broad web searches with safety measures.

Wednesday, 28 January 2026, 15:59:15

Linux Foundation launches workshop on small language models

Monday, 26 January 2026, 00:51:57

Hackers are using LLMs to build next-generation phishing attacks

Saturday, 24 January 2026, 06:44:08

Experts highlight AI threats like deepfakes and dark LLMs in cybercrime

Thursday, 22 January 2026, 06:54:19

cURL scraps bug bounties due to AI-generated slop

Friday, 16 January 2026, 00:49:36

Wikimedia Foundation partners with AI firms for Wikipedia data access

Monday, 12 January 2026, 19:05:07

Linus Torvalds uses AI tool for personal audio project

Saturday, 10 January 2026, 12:20:46

Larian elaborates on machine learning for Divinity amid generative AI ban

Wednesday, 24 December 2025, 10:12:48

AI boosts scientific productivity but erodes paper quality

Saturday, 20 December 2025, 03:32:45

Gemini AI yields sloppy code in Ubuntu development helper script

Friday, 12 December 2025, 20:51:19

AI embeds deeply in Linux kernel workflows
