OpenAI's head of robotics resigns amid defense partnership concerns

Caitlin Kalinowski, OpenAI's head of robotics, has resigned, citing insufficient deliberation on ethical guardrails in the company's recent deal with the Department of Defense. She expressed concerns over potential surveillance and autonomous weapons in a post on X. OpenAI acknowledged her departure and reiterated its commitments against domestic surveillance and lethal autonomous systems.

Caitlin Kalinowski, who had served as OpenAI's head of robotics since joining the company from Meta in late 2024, announced her resignation in a post on X. In the post, she criticized the speed of OpenAI's partnership with the Department of Defense, stating that "surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." She further noted in a reply that "the announcement was rushed without the guardrails defined," describing it as a "governance concern first and foremost."

OpenAI confirmed the resignation in a statement, expressing understanding of differing views on the matter and committing to ongoing discussions with stakeholders, while making clear that it does not agree with Kalinowski's characterization of the deal. According to the statement, "We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons."

This departure follows OpenAI's agreement with the Department of Defense, a move that drew scrutiny after Anthropic declined to relax its AI safeguards related to mass surveillance and fully autonomous weapons. OpenAI CEO Sam Altman has indicated willingness to adjust the deal to explicitly bar spying on Americans. Kalinowski's exit represents a notable reaction to the partnership's ethical implications.

Related articles


Anthropic resumes Pentagon talks as OpenAI military deal faces backlash


Following last week's federal ban on its AI tools, Anthropic has resumed negotiations with the US Defense Department to avert a supply chain risk designation. Meanwhile, OpenAI's parallel military agreement is under fire from employees, rivals, and Anthropic CEO Dario Amodei, who accused it of misleading claims in a leaked memo.

Hundreds of employees from Google and OpenAI have signed an open letter in solidarity with Anthropic, urging their companies to resist Pentagon demands for unrestricted military use of AI models. The letter opposes uses involving domestic mass surveillance and autonomous killing without human oversight. This comes amid threats from US Defense Secretary Pete Hegseth to label Anthropic a supply chain risk.


The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has insisted on hard limits against fully autonomous weapons and mass domestic surveillance. The dispute stems from the Pentagon's desire to apply AI models in warfighting scenarios, a use Anthropic has declined to support.

A mass shooting in British Columbia has drawn attention to OpenAI CEO Sam Altman's push for privacy protections for AI conversations. The shooter reportedly discussed gun violence scenarios with ChatGPT months before the attack, but OpenAI did not alert authorities. Canadian officials are questioning the company's handling of the matter.


The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

US President Donald Trump stated on Friday that he is directing government agencies to stop working with Anthropic. The Pentagon plans to declare the startup a supply-chain risk, marking a major blow following a showdown over technology guardrails. Agencies using the company's products will have a six-month phase-out period.


Florida Attorney General James Uthmeier has initiated a criminal investigation into OpenAI, examining whether the company bears liability for ChatGPT providing advice to a suspected gunman in last year's Florida State University mass shooting. The shooting killed two people and wounded six others. OpenAI maintains that its chatbot only shared publicly available information and is not responsible.

