OpenAI's head of robotics resigns amid concerns over defense deal

Caitlin Kalinowski, OpenAI's head of robotics, has resigned, citing insufficient consideration of ethical safeguards in the company's recent agreement with the Department of Defense. In a post on X, she expressed concerns about potential surveillance and autonomous weapons. OpenAI confirmed her departure and reiterated its commitments against domestic surveillance and lethal autonomous systems.

Caitlin Kalinowski announced her resignation from OpenAI on X, where she had served as head of robotics since joining the company in late 2024 after working at Meta. In her post, she criticized the pace of OpenAI's partnership with the Department of Defense, stating that "surveillance of Americans without judicial review and lethal autonomy without human authorization are lines that deserved more deliberation than they received." She added in a reply that "the announcement was rushed out without defined safeguards" and described it as "a governance issue first and foremost."

Related articles

Split-scene illustration of Anthropic's renewed Pentagon talks contrasting with backlash against OpenAI's military AI deal.
Image generated by AI

Anthropic resumes Pentagon talks as OpenAI military deal faces backlash

Reported by AI · Image generated by AI

Following last week's federal ban on its AI tools, Anthropic has resumed negotiations with the US Defense Department to avert a supply chain risk designation. Meanwhile, OpenAI's parallel military agreement is under fire from employees, rivals, and Anthropic CEO Dario Amodei, who accused the company of making misleading claims in a leaked memo.

Hundreds of employees from Google and OpenAI have signed an open letter in solidarity with Anthropic, urging their companies to resist Pentagon demands for unrestricted military use of AI models. The letter opposes uses involving domestic mass surveillance and autonomous killing without human oversight. This comes amid threats from US Defense Secretary Pete Hegseth to label Anthropic a supply chain risk.

Reported by AI

The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has raised concerns about hard limits on fully autonomous weapons and mass domestic surveillance. This stems from the Pentagon's desire to apply AI models in warfighting scenarios, which Anthropic has declined.

A mass shooting in British Columbia has drawn attention to OpenAI CEO Sam Altman's push for privacy protections for AI conversations. The shooter reportedly discussed gun violence scenarios with ChatGPT months before the attack, but OpenAI did not alert authorities. Canadian officials are questioning the company's handling of the matter.

Reported by AI · Fact-checked

The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.

US President Donald Trump stated on Friday that he is directing government agencies to stop working with Anthropic. The Pentagon plans to declare the startup a supply-chain risk, marking a major blow following a showdown over technology guardrails. Agencies using the company's products will have a six-month phase-out period.

Reported by AI

Florida Attorney General James Uthmeier has initiated a criminal investigation into OpenAI, examining whether the company bears liability for ChatGPT providing advice to a suspected gunman in last year's Florida State University mass shooting. The shooting killed two people and wounded six others. OpenAI maintains that its chatbot only shared publicly available information and is not responsible.
