Anthropic restricts free Claude access via third-party tools

Anthropic has ended free access to its Claude AI for users of third-party applications like OpenClaw. Starting 3 p.m. ET on April 4, such users must purchase a usage bundle or obtain a Claude API key. The change addresses engineering constraints amid rising demand.

Boris Cherny, head of Claude Code at Anthropic, announced the policy shift on X, stating that Claude subscriptions no longer support free usage through third-party tools. Users of apps like OpenClaw, an open-source AI assistant that automates tasks such as managing emails and calendars, must now meet new requirements to keep using Claude as their language-model backend, one option alongside ChatGPT or Google Gemini. Cherny attributed the decision to capacity management. “We’ve been working hard to meet the increase in demand for Claude, and our subscriptions weren't built for the usage patterns of these third-party tools,” he wrote. “Capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products and API.” Affected users can buy discounted usage bundles or switch to alternatives including xAI, Perplexity, or DeepSeek. Anthropic offers its own tool, Claude Cowork, for similar workflow tasks.

Related Articles

Illustration of Claude AI controlling a Mac desktop, with open apps like Slack and Calendar, highlighting new research preview features.
Image generated by AI

Anthropic's Claude AI Gains Full MacOS Desktop Control in Research Preview

Reported by AI. Image generated by AI.

Building on its January Cowork feature, Anthropic has launched a research preview of its Claude Code and Cowork tools that lets Pro and Max subscribers' Claude AI directly control Mac desktops: pointing, clicking, scrolling, and navigating screens. The assistant can open files, use browsers and developer tools, and interact with apps such as Google Calendar and Slack. Safeguards address security risks as the feature competes with tools like OpenClaw.

Anthropic has upgraded its Claude AI chatbot's free plan by adding previously paid features, positioning it as an ad-free alternative to OpenAI's ChatGPT. The enhancements include file creation, connectors to third-party services, and custom skills, amid OpenAI's plans to introduce ads in its free tier. This move follows Anthropic's Super Bowl advertisements criticizing the ad strategy.


Anthropic is temporarily doubling usage limits for its Claude AI chatbot during off-peak hours from March 13 to March 27. The promotion applies to Free, Pro, Max, and Team plan users, excluding Enterprise plans. It activates automatically across web, desktop, mobile, and integrated apps.

Anthropic's AI tool Claude Cowork has triggered a sharp decline in the shares of Infosys, TCS, and other SaaS companies, which lost hundreds of billions of dollars in market value. The sell-off was driven by the rise of AI.


Anthropic has confirmed the leak of more than 512,000 lines of source code for its Claude Code tool. The disclosure reveals disabled features hinting at future developments, including a persistent background agent called Kairos. Observers examining the code also found references to stealth modes and a virtual assistant named Buddy.

Music rights company BMG has filed a lawsuit against AI firm Anthropic, alleging that the Claude chatbot was trained on song lyrics without permission. The complaint claims the infringement dates back to Anthropic's founding and affects works by artists such as Justin Bieber and Bruno Mars. BMG is seeking damages of up to $150,000 per infringed work.

Reported by AI. Fact-checked.

The Pentagon has formally notified AI company Anthropic that it is deemed a “supply chain risk,” a rare designation that critics say is typically aimed at adversary-linked technology. The move follows a breakdown in negotiations over whether the U.S. military can use Anthropic’s Claude models for all lawful purposes, versus contractual limits the company says are needed to prevent fully autonomous weapons and mass domestic surveillance.
