Python foundation accepts Anthropic funding after rejecting US grant

The Python Software Foundation has secured $1.5 million from Anthropic, the company behind Claude AI, for a two-year partnership focused on enhancing Python ecosystem security. This follows the foundation's rejection of similar funding from the US government last year over concerns about diversity, equity, and inclusion policies. The investment aims to protect the Python Package Index from supply chain attacks and support ongoing operations.

Python has become essential to modern AI development, powering frameworks like TensorFlow and PyTorch due to its accessibility and rich libraries. On January 15, 2026, the Python Software Foundation (PSF) announced a $1.5 million investment from Anthropic over the next two years.

Last year, the PSF rejected a comparable $1.5 million grant from the National Science Foundation (NSF). The decision stemmed from a clause allowing the NSF to reclaim funds if the PSF violated the US government's anti-DEI policies. Loren Crary of the PSF explained the decision in a public statement laying out those concerns.

Anthropic's funding targets security improvements for the Python ecosystem, particularly the Python Package Index (PyPI). PyPI hosts hundreds of thousands of packages and serves millions of developers worldwide but remains vulnerable to malicious open-source uploads. The partnership will develop automated review tools for uploaded packages, shifting from reactive measures to proactive detection.

Key initiatives include creating a dataset of known malware to train detection tools that spot suspicious patterns. This approach could extend to other open-source repositories. Beyond security, the funds will sustain PyPI operations, the Developers in Residence program for CPython contributions, and community grants.
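To illustrate the kind of check such tooling might perform, here is a minimal, hypothetical sketch in Python that flags one pattern commonly seen in malicious uploads (executing dynamically decoded code). It is only an illustration of pattern-based scanning under assumed heuristics; it is not the PSF's or Anthropic's planned tooling.

```python
import ast

# Illustrative red flags only: direct exec/eval calls, especially on
# freshly decoded data, are a common pattern in malicious PyPI uploads.
SUSPICIOUS_CALLS = {"exec", "eval"}
DECODER_HINTS = {"b64decode", "a85decode", "unhexlify"}


def flag_suspicious(source: str, filename: str = "<package>") -> list[str]:
    """Return human-readable warnings for suspicious patterns in one file."""
    findings = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in SUSPICIOUS_CALLS:
                # Does any part of the argument expression call a decoder?
                uses_decoder = any(
                    isinstance(sub, ast.Call)
                    and isinstance(sub.func, ast.Attribute)
                    and sub.func.attr in DECODER_HINTS
                    for arg in node.args
                    for sub in ast.walk(arg)
                )
                detail = " on decoded data" if uses_decoder else ""
                findings.append(
                    f"{filename}:{node.lineno}: call to {node.func.id}(){detail}"
                )
    return findings


if __name__ == "__main__":
    sample = "import base64\nexec(base64.b64decode('cHJpbnQoMSk='))\n"
    for warning in flag_suspicious(sample, "setup.py"):
        print(warning)
```

Real scanners combine many such signals with metadata checks and, as described above, models trained on a corpus of known malware; a single static rule like this would produce both false positives and easy evasions.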

Anthropic's contribution underscores its reliance on Python for its own operations, blending self-interest with community support. As AI firms increasingly depend on open-source infrastructure, such investments highlight the need for sustainable funding models amid longstanding concerns that companies benefit from open source without contributing back.

Related news


Linux Foundation launches Agentic AI Foundation


The Linux Foundation has launched the Agentic AI Foundation to foster open collaboration on autonomous AI systems. Major tech companies, including Anthropic, OpenAI, and Block, contributed key open-source projects to promote interoperability and prevent vendor lock-in. The initiative aims to create neutral standards for AI agents that can make decisions and execute tasks independently.

The Linux Foundation announced the creation of the Agentic AI Foundation (AAIF) on December 9, 2025, in San Francisco, to foster open-source development of AI agents. Co-founded by Anthropic, Block, and OpenAI, the initiative includes donations of key projects: Anthropic's Model Context Protocol (MCP), Block's goose framework, and OpenAI's AGENTS.md. The foundation aims to promote interoperability and prevent fragmentation in AI agent technologies.


Anthropic's official Git MCP server contained security vulnerabilities that could be chained together into more serious attacks. The issues were highlighted in a recent TechRadar report, which detailed the potential risks to the AI company's infrastructure.

Anthropic has introduced Cowork, a new tool that extends its Claude AI to handle general office tasks by accessing user folders on Mac computers. Designed for non-developers, it allows plain-language instructions to organize files, create reports, and more. The feature is available as a research preview for Claude Max subscribers.


Beijing-based AI-for-science firm DP Technology has raised more than 800 million yuan (US$114 million) in Series C financing to expand research and development and hire talent. The round attracted backing from a mix of state-linked and venture investors. This comes as interest grows in using AI to speed up scientific discovery.

AI coding agents from companies like OpenAI, Anthropic, and Google enable extended work on software projects, including writing apps and fixing bugs under human oversight. These tools rely on large language models but face challenges like limited context processing and high computational costs. Understanding their mechanics helps developers decide when to deploy them effectively.


AerynOS, an alpha-stage Linux distribution, has implemented a policy banning large language models in its development and community activities. The move addresses ethical issues with training data, environmental impacts, and quality risks. Exceptions are limited to translation and accessibility needs.
