Python foundation accepts Anthropic funding after rejecting US grant

The Python Software Foundation has secured $1.5 million from Anthropic, the company behind Claude AI, for a two-year partnership focused on enhancing Python ecosystem security. This follows the foundation's rejection of similar funding from the US government last year over concerns about diversity, equity, and inclusion policies. The investment aims to protect the Python Package Index from supply chain attacks and support ongoing operations.

Python has become essential to modern AI development, powering frameworks like TensorFlow and PyTorch due to its accessibility and rich libraries. On January 15, 2026, the Python Software Foundation (PSF) announced a $1.5 million investment from Anthropic over the next two years.

Last year, the PSF rejected a comparable $1.5 million grant from the National Science Foundation (NSF). The decision stemmed from a clause allowing the NSF to reclaim funds if the PSF violated the US government's anti-DEI policies. Loren Crary, the PSF's Deputy Executive Director, explained the decision in a public statement outlining the foundation's concerns.

Anthropic's funding targets security improvements for the Python ecosystem, particularly the Python Package Index (PyPI). PyPI hosts hundreds of thousands of packages and serves millions of developers worldwide but remains vulnerable to malicious open-source uploads. The partnership will develop automated review tools for uploaded packages, shifting from reactive measures to proactive detection.

Key initiatives include creating a dataset of known malware to train detection tools that spot suspicious patterns. This approach could extend to other open-source repositories. Beyond security, the funds will sustain PyPI operations, the Developers in Residence program for CPython contributions, and community grants.
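To make the proactive-detection idea concrete, here is a minimal sketch of the kind of pattern-based scanner such tooling might start from. The pattern names and regexes below are illustrative assumptions based on common PyPI malware tropes (obfuscated `exec` calls, hidden imports, download-and-run hooks), not the PSF's or Anthropic's actual ruleset:

```python
import re

# Hypothetical heuristic patterns; real tooling would be trained on a
# curated dataset of known malware rather than hand-written rules.
SUSPICIOUS_PATTERNS = {
    "obfuscated_exec": re.compile(r"exec\s*\(\s*base64"),
    "hidden_os_import": re.compile(r"__import__\s*\(\s*['\"]os['\"]"),
    "download_and_run": re.compile(r"urlopen\([^)]*\)\.read\(\)"),
}

def scan_source(source: str) -> list[str]:
    """Return the names of suspicious patterns found in a file's source."""
    return [name for name, pat in SUSPICIOUS_PATTERNS.items()
            if pat.search(source)]

benign = "import json\nprint(json.dumps({'ok': True}))\n"
malicious = "import base64\nexec(base64.b64decode(payload))\n"

print(scan_source(benign))     # → []
print(scan_source(malicious))  # → ['obfuscated_exec']
```

A production system would layer statistical models trained on the known-malware dataset on top of rules like these, flagging uploads for human review rather than blocking them outright.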

Anthropic's contribution underscores its reliance on Python for its own operations, blending self-interest with community support. As AI firms increasingly depend on open-source infrastructure, such investments highlight the need for sustainable funding models amid concerns that corporations free-ride on the open-source projects they build upon.

Related articles


Linux Foundation announces AI security initiative with tech partners


The Linux Foundation has launched a new initiative using Anthropic's Claude Mythos preview for defensive cybersecurity in open source software. Partners include AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan, Microsoft, NVIDIA, and Palo Alto Networks. The effort aims to secure critical software as AI reshapes the workload of open source maintainers.

The Linux Foundation has secured $12.5 million in grants from AI companies to bolster open source software security. The funding addresses maintainers overwhelmed by AI-generated vulnerability reports. It will be managed by Alpha-Omega and the Open Source Security Foundation.


Anthropic has launched the Anthropic Institute, a new research initiative, and opened its first Public Policy office in Washington, DC, this spring. These steps follow the AI company's recent federal lawsuit against the US government over a Defense Department supply chain risk designation tied to a contract dispute.

In the wake of Anthropic's unveiling of its powerful Claude Mythos AI—capable of detecting and exploiting software vulnerabilities—the US Treasury Secretary has convened top bank executives to highlight escalating AI-driven cyber threats. The move underscores growing concerns, as access to the model is restricted to a tech coalition via Project Glasswing.


The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

A federal judge in San Francisco issued a preliminary injunction on March 27, 2026, blocking the Trump administration's designation of AI company Anthropic as a military supply chain risk—a label applied three weeks earlier amid disputes over the firm's limits on its Claude AI models for military uses like autonomous weapons.


Global investors are questioning the returns on massive tech spending in artificial intelligence. Christopher Wood, from Jefferies, identifies Anthropic as a standout in the evolving AI landscape. The AI boom has boosted US equities, but concerns grow over its sustainability.
