Python foundation accepts Anthropic funding after rejecting US grant

The Python Software Foundation has secured $1.5 million from Anthropic, the company behind Claude AI, for a two-year partnership focused on enhancing Python ecosystem security. This follows the foundation's rejection of a similar grant from the US government last year, which came with conditions tied to the administration's restrictions on diversity, equity, and inclusion programs. The new investment aims to protect the Python Package Index from supply chain attacks and to support ongoing operations.

Python has become essential to modern AI development, serving as the primary language for frameworks like TensorFlow and PyTorch thanks to its accessibility and rich library ecosystem. On January 15, 2026, the Python Software Foundation (PSF) announced a $1.5 million investment from Anthropic, to be paid out over the next two years.

Last year, the PSF rejected a comparable $1.5 million grant from the National Science Foundation (NSF). The decision stemmed from a clause that would have allowed the NSF to claw back the funds if the PSF were found to violate the US government's anti-DEI policies. Loren Crary, the PSF's Deputy Executive Director, explained the decision in a public statement outlining the foundation's concerns.

Anthropic's funding targets security improvements for the Python ecosystem, particularly the Python Package Index (PyPI). PyPI hosts hundreds of thousands of packages and serves millions of developers worldwide, but it remains a target for malicious uploads. The partnership will fund automated review tools for newly uploaded packages, shifting PyPI's defenses from reactive takedowns to proactive detection.
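The announcement does not specify how that review tooling will work. As a rough sketch of what proactive, pre-publication screening can look like, the hypothetical checker below flags patterns that frequently appear in malicious install scripts; the function name and pattern list are invented for illustration and are not PyPI's actual tooling.

```python
import re
from pathlib import Path

# Heuristics loosely modeled on common malware tells in package code:
# payloads decoded at install time, dynamic code execution, or network
# and shell activity inside setup scripts. All patterns are illustrative.
SUSPICIOUS_PATTERNS = [
    (re.compile(r"base64\.b64decode"), "base64-decoded payload"),
    (re.compile(r"\bexec\s*\("), "dynamic exec of generated code"),
    (re.compile(r"\beval\s*\("), "dynamic eval of generated code"),
    (re.compile(r"urllib\.request|requests\.(get|post)"), "network call in package code"),
    (re.compile(r"subprocess\.(run|call|Popen)"), "shell command in package code"),
]

def scan_package(package_dir: str) -> list[tuple[str, int, str]]:
    """Return (file, line, reason) for each suspicious line under package_dir."""
    findings = []
    for path in Path(package_dir).rglob("*.py"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            for pattern, reason in SUSPICIOUS_PATTERNS:
                if pattern.search(line):
                    findings.append((str(path), lineno, reason))
    return findings

if __name__ == "__main__":
    for file, lineno, reason in scan_package("./candidate_package"):
        print(f"{file}:{lineno}: {reason}")
```

Real-world scanners go much further, with behavioral sandboxing and metadata or typosquatting checks, but the shape is the same: score a package before it reaches users, not after.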

Key initiatives include creating a dataset of known malware to train detection tools that spot suspicious patterns. This approach could extend to other open-source repositories. Beyond security, the funds will sustain PyPI operations, the Developers in Residence program for CPython contributions, and community grants.
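Again as an illustration rather than the PSF's stated plan: once a labeled corpus of known malware exists, even a simple supervised model can learn to score code for malware-like traits. The toy dataset and feature choices below are invented; character n-grams are chosen because they survive the identifier renaming common in obfuscated code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in for the kind of labeled corpus the article
# describes: package source text labeled 1 (known malware) or 0 (benign).
sources = [
    "import base64\nexec(base64.b64decode(blob))",
    "import requests\nrequests.post(url, data=open('/etc/passwd').read())",
    "def add(a, b):\n    return a + b",
    "import json\nconfig = json.load(open('config.json'))",
]
labels = [1, 1, 0, 0]

# Character n-grams rather than word tokens: renamed identifiers and
# light obfuscation change the words but leave short substrings intact.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(sources, labels)

suspect = "import base64\nexec(base64.b64decode(payload))"
print(f"malware probability: {model.predict_proba([suspect])[0][1]:.2f}")
```

A production system would need far more data and richer features, but the pipeline, a shared corpus in and a risk score out, is what would let the approach extend to other repositories.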

Anthropic's contribution underscores its own reliance on Python, blending self-interest with community support. As AI firms increasingly depend on open-source infrastructure, such investments highlight the need for sustainable funding models amid long-standing concerns that corporations consume open-source software without paying to maintain it.

Related articles


Linux Foundation launches Agentic AI Foundation


The Linux Foundation has launched the Agentic AI Foundation to foster open collaboration on autonomous AI systems. Major tech companies, including Anthropic, OpenAI, and Block, contributed key open-source projects to promote interoperability and prevent vendor lock-in. The initiative aims to create neutral standards for AI agents that can make decisions and execute tasks independently.

The Linux Foundation announced the creation of the Agentic AI Foundation (AAIF) on December 9, 2025, in San Francisco, to foster open-source development of AI agents. Co-founded by Anthropic, Block, and OpenAI, the initiative includes donations of key projects: Anthropic's Model Context Protocol (MCP), Block's goose framework, and OpenAI's AGENTS.md. The foundation aims to promote interoperability and prevent fragmentation in AI agent technologies.


Anthropic's official Git MCP server contained security vulnerabilities that, according to a recent TechRadar report, could be chained together to severe effect, exposing potential risks to the AI company's infrastructure.

Anthropic has introduced Cowork, a new tool that extends its Claude AI to handle general office tasks by accessing user folders on Mac computers. Designed for non-developers, it allows plain-language instructions to organize files, create reports, and more. The feature is available as a research preview for Claude Max subscribers.


Beijing-based AI-for-science company DP Technology has announced the completion of a Series C financing round exceeding 800 million yuan, earmarked for hiring and R&D. The round drew backing from several state-owned and venture capital institutions and is attracting attention as interest grows in using AI to accelerate scientific discovery.

AI coding agents from companies like OpenAI, Anthropic, and Google can work on software projects for extended stretches, writing apps and fixing bugs under human oversight. These tools rely on large language models but face challenges such as limited context windows and high computational costs. Understanding their mechanics, sketched below, helps developers decide when to deploy them effectively.
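Those mechanics reduce, in essence, to a loop in which the model proposes an action, a harness executes it, and the observation is fed back in. The sketch below is a generic illustration, not any vendor's implementation; call_llm and run_tool are placeholder names for a real model API and a sandboxed executor.

```python
# A minimal sketch of the propose-act-observe loop at the core of coding
# agents. Only the control flow is the point; the two helpers are stubs.

def call_llm(history: list[dict]) -> dict:
    """Placeholder for a chat-completion call returning either a tool
    request ({'tool': name, 'args': ...}) or a final answer ({'content': ...})."""
    raise NotImplementedError

def run_tool(name: str, args: dict) -> str:
    """Placeholder for running a tool (edit a file, run tests) in a sandbox."""
    raise NotImplementedError

def agent_loop(task: str, max_steps: int = 20) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # capped: context windows and compute are finite
        reply = call_llm(history)
        if "tool" not in reply:  # model considers the task done
            return reply["content"]
        result = run_tool(reply["tool"], reply["args"])
        history.append({"role": "assistant", "content": str(reply)})
        history.append({"role": "tool", "content": result})  # observation
    return "step budget exhausted"
```

The step cap and the ever-growing history list are exactly where the article's two challenges bite: each iteration costs a model call, and the transcript must fit in the model's context window.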


AerynOS, an alpha-stage Linux distribution, has implemented a policy banning large language models in its development and community activities. The move addresses ethical issues with training data, environmental impacts, and quality risks. Exceptions are limited to translation and accessibility needs.
