Anthropic reportedly agrees to pay Google $200 billion over five years

Anthropic has reportedly agreed to pay Google $200 billion over the next five years for access to chips and cloud servers. The deal, reported by The Information, follows an earlier agreement granting the Claude AI creator access to Google's infrastructure. It highlights the massive investments fueling the AI sector.

Google and Anthropic struck a deal earlier this month providing the AI startup with cloud servers and chips, according to reports. The Information detailed on Monday that Anthropic committed to paying Google a staggering $200 billion over five years as part of this arrangement.

The five-year pact underscores the enormous financial flows between AI firms and tech giants amid the ongoing boom. Similar multi-billion-dollar deals include Anthropic's recent arrangement with Amazon, contributing to a combined revenue backlog of $2 trillion from agreements with Anthropic and OpenAI across Amazon, Google, Microsoft, and Oracle.

Cloud providers invested early in the AI surge, betting on startups' growing demand for computing resources, and those contracts have so far delivered returns. Projections had pegged Anthropic's 2026 server costs at $20 billion, with OpenAI facing $45 billion. Experts note that such circular deals, alongside investments like NVIDIA's in OpenAI, are driving the AI expansion but straining resources such as data centers and RAM supplies.

Related Articles


Anthropic sues US defense department over supply chain risk designation


Anthropic has filed a federal lawsuit against the US Department of Defense, challenging the department's recent designation of the AI company as a supply-chain risk. The dispute stems from a contract disagreement over the use of Anthropic's Claude AI for military purposes, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates its free speech and due process rights.

AI company Anthropic has sparked buzz on social media with a chart showing its revenue run rate surging from zero to $14 billion in just three years. This stands in stark contrast to the stagnant revenues of Indian IT companies over the same period.


Global investors are questioning the returns on massive tech spending in artificial intelligence. Christopher Wood, from Jefferies, identifies Anthropic as a standout in the evolving AI landscape. The AI boom has boosted US equities, but concerns grow over its sustainability.

The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.


US President Donald Trump has directed all federal agencies to immediately cease using Anthropic's AI tools amid a dispute over military applications. The move follows weeks of clashes between Anthropic and Pentagon officials over restrictions on AI for mass surveillance and autonomous weapons. A six-month phase-out period has been announced.

US President Donald Trump has directed federal agencies to immediately cease using Anthropic's AI technology. The order follows a dispute with the Pentagon, where the company refused unconditional military use of its Claude models. Anthropic has vowed to challenge the Pentagon's ban in court.


After Anthropic CEO Dario Amodei said in late February that the company would not allow its Claude models to be used for mass domestic surveillance or fully autonomous weapons, senior Pentagon officials responded that they have no intention of using AI for domestic surveillance, and insisted that private firms cannot set binding limits on how the U.S. military employs AI tools.
