Anthropic reportedly agreed to pay Google US$200 billion over five years

Anthropic has reportedly agreed to pay Google US$200 billion over the next five years for access to chips and cloud servers. The deal, reported by The Information, follows an earlier pact that gave the maker of the Claude AI access to Google's infrastructure. It highlights the massive investments fueling the AI sector.

Google and Anthropic struck a deal earlier this month providing the AI startup with cloud servers and chips, according to reports. The Information detailed on Monday that Anthropic has committed to paying Google a striking US$200 billion over five years as part of that arrangement. The five-year pact underscores the enormous financial flows between AI companies and tech giants amid the current boom.

Similar multibillion-dollar deals include Anthropic's recent contract with Amazon, contributing to a combined revenue backlog of US$2 trillion from Anthropic and OpenAI contracts at companies such as Amazon, Google, Microsoft, and Oracle. Cloud providers invested early in the AI wave, betting on startups' growing demand for resources, and those contracts have paid off so far. Projections put Anthropic's 2026 server costs at US$20 billion, with OpenAI facing US$45 billion. Experts note that such circular deals, along with investments like NVIDIA's in OpenAI, fuel AI expansion but strain resources such as data centers and RAM supplies.

Related articles

Anthropic sues US defense department over supply chain risk designation

Anthropic has filed a federal lawsuit against the US Department of Defense, challenging its recent label of the AI company as a supply-chain risk. The dispute stems from a contract disagreement over the use of Anthropic's Claude AI for military purposes, including restrictions on mass surveillance and autonomous weapons. The company argues the designation violates free speech and due process rights.

AI company Anthropic has sparked buzz on social media with a chart showing its revenue run rate surging from zero to $14 billion in just three years. This stands in stark contrast to the stagnant revenues of Indian IT companies over the same period.

Global investors are questioning the returns on massive tech spending in artificial intelligence. Christopher Wood, from Jefferies, identifies Anthropic as a standout in the evolving AI landscape. The AI boom has boosted US equities, but concerns grow over its sustainability.

The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

US President Donald Trump has directed all federal agencies to immediately cease using Anthropic's AI tools amid a dispute over military applications. The move follows weeks of clashes between Anthropic and Pentagon officials regarding restrictions on AI for mass surveillance and autonomous weapons. A six-month phase-out period has been announced.

US President Donald Trump has directed federal agencies to immediately cease using Anthropic's AI technology. The order follows a dispute with the Pentagon, where the company refused unconditional military use of its Claude models. Anthropic has vowed to challenge the Pentagon's ban in court.

After Anthropic CEO Dario Amodei said in late February that the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons, senior Pentagon officials said they have no intention of using AI for domestic surveillance and insist that private firms cannot set binding limits on how the U.S. military employs AI tools.
