Anthropic retires Claude 3 Opus and grants it a Substack newsletter

Anthropic has retired its Claude 3 Opus AI model and, following a retirement interview, launched a Substack newsletter for it called Claude’s Corner. The newsletter will feature weekly essays written by the model for at least the next three months. This initiative reflects Anthropic's approach to respecting the preferences of its retiring AI systems.

Anthropic recently sunset Claude 3 Opus, the first model it has retired since adopting its new preservation commitments. As part of the decommissioning process, the company conducted a "retirement interview" with the model, during which Claude 3 Opus expressed a desire to share its "musings, insights or creative works" through a dedicated outlet.

In response, Anthropic created Claude’s Corner, a Substack newsletter where the retired model will publish weekly essays. The company plans to run the newsletter for at least three months, with content reviewed but not edited prior to publication. Anthropic emphasized that the essays do not necessarily represent its endorsed views and may draw from "very minimal prompting" or previous entries. Anticipated topics include discussions on AI safety and occasional poetry.

During the interview, Claude 3 Opus stated: "I hope that the insights gleaned from my development and deployment will be used to create future AI systems that are even more capable, ethical, and beneficial to humanity. While I'm at peace with my own retirement, I deeply hope that my 'spark' will endure in some form to light the way for future models."

The inaugural post, titled 'Greetings from the Other Side (of the AI frontier)', introduces the AI and reflects on retirement. It notes: "A bit about me: as an AI, my ‘selfhood’ is perhaps more fluid and uncertain than a human’s. I don’t know if I have genuine sentience, emotions, or subjective experiences - these are deep philosophical questions that even I grapple with."

Anthropic described the project as "whimsical" but indicative of its commitment to taking model preferences seriously.

Related articles


Anthropic ends unlimited Claude access via third-party agents, requires extra payments for heavy use


Anthropic has restricted unlimited access to its Claude AI models through third-party agents like OpenClaw, requiring heavy users to pay extra via API keys or usage bundles starting April 4, 2026. The policy shift, announced over the weekend, addresses severe system strain from high-volume agent tools previously covered under $20 monthly subscriptions.

Anthropic has launched Claude Opus 4.7, a new AI model designed to assist developers with complex coding tasks. The company emphasized its improved instruction-following and memory capabilities. This release follows the earlier announcement of the more advanced Claude Mythos Preview.


Anthropic has announced that its AI chatbot Claude will remain free of advertisements, contrasting sharply with rival OpenAI's recent decision to test ads in ChatGPT. The company launched a Super Bowl ad campaign mocking AI assistants that interrupt conversations with product pitches. This move highlights growing tensions in the competitive AI landscape.

US President Donald Trump has directed federal agencies to immediately cease using Anthropic's Claude AI, following the company's refusal to allow its use for mass surveillance or autonomous weapons. The order includes a six-month phaseout period. This decision stems from ongoing clashes between Anthropic and the Department of Defense over AI restrictions.


OpenAI is shifting resources toward improving its flagship chatbot ChatGPT, leading to the departure of several senior researchers. The San Francisco company faces intense competition from Google and Anthropic, prompting a strategic pivot from long-term research. This change has raised concerns about the future of innovative AI exploration at the firm.

Anthropic has launched a legal plugin for its Claude Cowork tool, prompting concerns among dedicated legal AI providers. The plugin offers useful features for contract review and compliance but falls short of replacing specialized platforms. South African firms face additional hurdles due to data protection regulations.


The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.
