Anthropic pledges ad-free Claude amid AI rivalry

Anthropic has announced that its AI chatbot Claude will remain free of advertisements, contrasting sharply with rival OpenAI's recent decision to test ads in ChatGPT. The company launched a Super Bowl ad campaign mocking AI assistants that interrupt conversations with product pitches. This move highlights growing tensions in the competitive AI landscape.

On February 4, 2026, Anthropic declared that its chatbot Claude would stay ad-free, emphasizing a commitment to user-focused interactions without commercial interruptions. In a blog post, the company stated, “There are many good places for advertising. A conversation with Claude is not one of them.” This stance directly challenges OpenAI, which began testing banner ads in January 2026 for free users and ChatGPT Go subscribers in the US. OpenAI specified that these ads appear at the bottom of responses, do not influence answers, and avoid sensitive topics like mental health and politics, while paid tiers remain ad-free.

Anthropic's Super Bowl commercial illustrates the issue through a humorous scenario: a man seeks workout advice from an AI fitness instructor, only for the assistant to insert a supplement advertisement, leaving him confused. The ad avoids naming OpenAI but clearly implies criticism. OpenAI CEO Sam Altman responded on X, calling the ads funny but inaccurate, noting, “We would obviously never run ads in the way Anthropic depicts them. We are not stupid and we know our users would reject that.”

The debate stems from financial pressures in the AI sector. OpenAI faces significant costs, expecting to burn $9 billion in 2025 while generating $13 billion in revenue, with only 5% of its 800 million weekly users subscribing. Altman had previously described ads in AI as "uniquely unsettling" in a 2024 interview. Anthropic is also unprofitable but progressing toward profitability faster through enterprise contracts and tools like Claude Code, which has gained traction among developers, including at Microsoft; it relies on subscriptions that generate at least $1 billion in revenue.

Anthropic argues that ads could conflict with helpful advice, citing examples like sleep issues where an ad-supported AI might steer toward sales. “Users shouldn’t have to second-guess whether an AI is genuinely helping them or subtly steering the conversation towards something monetizable,” the company wrote. This positioning underscores differing business models in a fiercely competitive field, where AI coding agents like Claude Code challenge OpenAI's Codex.

Related articles

[Image: Smartphone displaying ChatGPT with a test advertisement banner for free users, illustrating OpenAI's new ad testing initiative. AI-generated image.]

OpenAI to test ads in ChatGPT for free and Go tier users

Reported by AI · AI-generated image

OpenAI has announced plans to begin testing advertisements in its ChatGPT app for free users and the new $8-per-month Go subscription tier in the United States. The company aims to diversify revenue amid significant financial pressures, while ensuring ads do not influence the AI's responses. Higher-paid tiers will remain ad-free.

Anthropic has upgraded its Claude AI chatbot's free plan by adding previously paid features, positioning it as an ad-free alternative to OpenAI's ChatGPT. The enhancements include file creation, connectors to third-party services, and custom skills, amid OpenAI's plans to introduce ads in its free tier. This move follows Anthropic's Super Bowl advertisements criticizing the ad strategy.


OpenAI has started testing advertisements in its ChatGPT chatbot for users on free and low-cost plans in the United States. Paid subscribers remain unaffected, while the company emphasizes privacy protections and user controls. This move aims to fund broader access to AI features amid industry competition.

US Defense Secretary Pete Hegseth has threatened Anthropic with severe penalties unless the company grants the military unrestricted access to its Claude AI model. The ultimatum came during a meeting with CEO Dario Amodei in Washington on Tuesday, coinciding with Anthropic's announcement to relax its Responsible Scaling Policy. The changes shift from strict safety tripwires to more flexible risk assessments amid competitive pressures.


Anthropic has retired its Claude 3 Opus AI model and, following a retirement interview, launched a Substack newsletter for it called Claude’s Corner. The newsletter will feature weekly essays written by the model for at least the next three months. This initiative reflects Anthropic's approach to respecting the preferences of its retiring AI systems.

A CNET commentary argues that describing AI as having human-like qualities such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.


Building on its January Cowork feature, Anthropic has launched a research preview of its Claude Code and Cowork tools that lets Pro and Max subscribers' Claude AI directly control Mac desktops: pointing, clicking, scrolling, and navigating screens to open files, use browsers and developer tools, and interact with apps such as Google Calendar and Slack. Safeguards address the security risks, and the release arrives amid competition from tools like OpenClaw.
