AI rewrite of chardet library ignites open-source licensing debate

The release of version 7.0 of the open-source Python library chardet has sparked controversy over whether an AI-assisted rewrite can change its original restrictive license. Maintainer Dan Blanchard used Anthropic's Claude tool to create a faster, MIT-licensed version, but original author Mark Pilgrim argues it violates the LGPL terms. The case highlights emerging legal and ethical questions in AI-generated code.

The chardet library, first developed by Mark Pilgrim in 2006 and released under the GNU Lesser General Public License (LGPL), detects character encodings in text. Dan Blanchard assumed maintenance in 2012 and last week unveiled version 7.0, describing it as a complete rewrite under the more permissive MIT license. Built with assistance from Anthropic's Claude coding tool, the update promises a 48-fold performance improvement and greater accuracy, achieved in about five days.
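For readers unfamiliar with the library, chardet exposes a simple detection API: you hand it raw bytes and it returns a best-guess encoding with a confidence score. A minimal sketch (assumes the third-party chardet package is installed via `pip install chardet`):

```python
# Minimal sketch of chardet's detection API.
# Requires the third-party package: pip install chardet
import chardet

# detect() takes raw bytes and returns a dict with the guessed
# encoding, a confidence score in [0, 1], and a language hint.
sample = "Grüß dich, Welt".encode("latin-1")
print(chardet.detect(sample))

# Plain ASCII input is detected with full confidence.
print(chardet.detect(b"Hello, world!"))  # encoding 'ascii', confidence 1.0
```

The same interface is what version 7.0 preserves while replacing the implementation behind it.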

Blanchard aimed to make chardet suitable for inclusion in the Python standard library by addressing issues with its license, speed, and accuracy. He started with an empty repository, drafted a design document outlining the architecture, and instructed Claude to avoid basing the code on LGPL- or GPL-licensed material. After generation, Blanchard reviewed, tested, and iterated on every part without hand-writing the code.

However, a GitHub commenter using the name Mark Pilgrim contested the relicensing, claiming the new version derives from the original LGPL code despite the rewrite. "Their claim that it is a ‘complete rewrite’ is irrelevant, since they had ample exposure to the originally licensed code (i.e., this is not a ‘clean room’ implementation)," Pilgrim wrote. "Adding a fancy code generator into the mix does not somehow grant them any additional rights. I respectfully insist that they revert the project to its original license."

Blanchard acknowledged his familiarity with the prior codebase but maintained that the AI output is structurally independent: similarity analysis with JPlag found at most 1.29 percent overlap between version 7.0 files and their predecessors, compared with up to 80 percent in earlier updates. He conceded two potential complications, however: the new version reuses metadata files from older releases, and Claude's training data includes public code that may well contain chardet itself.

The dispute has fueled broader discussions in the open-source community. Free Software Foundation Executive Director Zoë Kooyman told The Register, "There is nothing ‘clean’ about a Large Language Model which has ingested the code it is being asked to reimplement." Open-source developer Armin Ronacher argued in a blog post that discarding all original code creates a new work, likening it to the Ship of Theseus. Italian coder Salvatore “antirez” Sanfilippo suggested adapting to AI's transformative impact on software, while evangelist Bruce Perens warned of profound economic shifts, comparing it to the printing press's effects.

