Anthropic adds memory feature to Claude's free plan

Anthropic has extended its memory capability to the free tier of its Claude AI chatbot, allowing users to reference past conversations. The company also released a tool to import memories from competing chatbots like ChatGPT and Gemini. This update coincides with Claude's surge in popularity amid a dispute with the US Department of Defense.

Anthropic announced on March 2, 2026, that the memory feature for its Claude AI chatbot is now available on the free plan. Previously a paid option, the feature enables Claude to reference previous conversations to inform its responses. Anthropic first introduced memory capabilities last August, followed by compartmentalization of memories in the fall.

Users can enable memory during chats with Claude. If they choose to disable it, options include pausing the feature to preserve memories for later use or deleting them entirely from Anthropic's servers.

Complementing this, Anthropic launched a memory import tool on the same day. The tool extracts context and memories from other AI chatbots and generates a text prompt that users can copy and paste into Claude. Supported chatbots include ChatGPT, Gemini, and Copilot. Importing takes about 24 hours; once complete, the results are visible via the "See what Claude learned about you" button. In the "Manage memory" section, users can edit what Claude remembers. Anthropic notes that Claude prioritizes "work-related topics to enhance its effectiveness as a collaborator" and may not retain unrelated personal details.

Claude's popularity has grown rapidly: the app recently reached the top spot in the App Store's free apps chart, surpassing ChatGPT. The rise coincides with Anthropic's ongoing contract dispute with the US government over AI safeguards. On Friday, US Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk" after the company refused a Pentagon contract involving mass surveillance of Americans and fully autonomous weapons. Anthropic has vowed to challenge the label. Meanwhile, OpenAI is reportedly assuming Anthropic's former role with the Department of Defense, prompting some users to boycott ChatGPT and cancel their subscriptions.

Related articles

Illustration: Anthropic imposing a paywall on Claude AI, blocking third-party agents from overloaded servers. (Image generated by AI)

Anthropic ends unlimited Claude access via third-party agents, requires extra payments for heavy use

Reported by AI. Image generated by AI.

Anthropic has restricted unlimited access to its Claude AI models through third-party agents like OpenClaw, requiring heavy users to pay extra via API keys or usage bundles starting April 4, 2026. The policy shift, announced over the weekend, addresses severe system strain from high-volume agent tools previously covered under $20 monthly subscriptions.

Anthropic has upgraded its Claude AI chatbot's free plan by adding previously paid features, positioning it as an ad-free alternative to OpenAI's ChatGPT. The enhancements include file creation, connectors to third-party services, and custom skills, amid OpenAI's plans to introduce ads in its free tier. This move follows Anthropic's Super Bowl advertisements criticizing the ad strategy.


Anthropic has announced that its AI chatbot Claude will remain free of advertisements, contrasting sharply with rival OpenAI's recent decision to test ads in ChatGPT. The company launched a Super Bowl ad campaign mocking AI assistants that interrupt conversations with product pitches. This move highlights growing tensions in the competitive AI landscape.

US President Donald Trump has directed federal agencies to immediately cease using Anthropic's Claude AI, following the company's refusal to allow its use for mass surveillance or autonomous weapons. The order includes a six-month phaseout period. This decision stems from ongoing clashes between Anthropic and the Department of Defense over AI restrictions.


Anthropic has limited access to its Claude Mythos Preview AI model due to its superior ability to detect and exploit software vulnerabilities, while launching Project Glasswing—a consortium with over 45 tech firms including Apple, Google, and Microsoft—to collaboratively patch flaws and bolster defenses. The announcement follows recent data leaks at the firm.

The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has raised concerns about hard limits on fully autonomous weapons and mass domestic surveillance. This stems from the Pentagon's desire to apply AI models in warfighting scenarios, which Anthropic has declined.

Reported by AI. Fact-checked.

After Anthropic CEO Dario Amodei said in late February that the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons, senior Pentagon officials said they have no intention of using AI for domestic surveillance and insist that private firms cannot set binding limits on how the U.S. military employs AI tools.
