Professional using Anthropic's Claude AI Cowork feature on MacBook to automatically organize files and generate reports, as shown in a realistic office scene.
Image generated by AI

Anthropic launches Cowork feature for Claude AI


Anthropic has introduced Cowork, a new tool that extends its Claude AI to handle general office tasks by accessing user folders on Mac computers. Designed for non-developers, it allows plain-language instructions to organize files, create reports, and more. The feature is available as a research preview for Claude Max subscribers.

Anthropic announced Cowork on January 12, 2026, building on the success of its Claude Code tool, which has been popular among developers since fall 2024 for automating programming tasks. Cowork adapts this agentic capability for broader use and is integrated into the macOS Claude desktop app. Users grant access to a specific folder and issue instructions in everyday language to perform tasks such as reorganizing downloads, renaming files for clarity, or generating spreadsheets from receipt screenshots and invoices.

Examples include compiling expense reports from photos of receipts or synthesizing reports from stacks of digital notes. As Felix Rieseberg, a member of Anthropic's technical staff, explained, "The inspiration for Cowork was watching what people did with Claude Code. We built it for coding, but people started using it for everything -- taxes, managing receipts, organizing files, random life admin." The tool also supports connections to third-party apps via Anthropic's Connectors framework, such as Canva, and works with the Claude Chrome extension for browser-based tasks.

Anthropic emphasizes that Claude cannot access or edit anything without explicit permission. However, risks include destructive actions from vague prompts, such as unintended file deletions, and potential prompt-injection attacks where malicious inputs could hijack the AI. To mitigate these, the company recommends clear instructions and using Cowork only on nonsensitive data. It has implemented defenses against injections, but warns of possible unintended consequences during this beta phase.

Currently limited to Mac users with the $100-per-month Claude Max subscription, Cowork is in research preview, with others able to join a waitlist. This launch follows Anthropic's recent Claude for Healthcare announcement and aligns with the growing trend of agentic AI, seen in tools like ChatGPT Agent and Google's Gemini models, which enable AIs to perform real-world tasks autonomously.

What people are saying

Reactions on X to Anthropic's Cowork launch are predominantly positive, with users praising its extension of Claude Code's agentic capabilities to non-developers for tasks like file organization and report creation. Enthusiasm centers on autonomy, safety features like VM isolation and user approvals, and rapid development (built in 1.5 weeks). Some express skepticism over privacy risks from folder access, Mac-only availability, subscription requirements, and potential job impacts. Neutral posts provide detailed summaries and comparisons.

Related articles

Illustration of Claude AI controlling a Mac desktop, with open apps like Slack and Calendar, highlighting new research preview features.
Image generated by AI

Anthropic's Claude AI Gains Full MacOS Desktop Control in Research Preview

Reported by AI
Image generated by AI

Building on its January Cowork feature, Anthropic has launched a research preview that lets Claude AI directly control Mac desktops for Pro and Max subscribers using Claude Code and Cowork: pointing, clicking, scrolling, and navigating screens to open files, use browsers and developer tools, and interact with apps such as Google Calendar and Slack. Safeguards address security risks, amid competition from tools like OpenClaw.

Anthropic has launched a legal plugin for its Claude Cowork tool, raising concerns among specialized legal-AI vendors. The plugin offers useful features for contract review and compliance, but falls short of replacing specialized platforms. South African companies face additional hurdles due to data-protection regulations.

Reported by AI

On February 5, 2026, Anthropic and OpenAI simultaneously launched products shifting users from chatting with AI to managing teams of AI agents. Anthropic introduced Claude Opus 4.6 with agent teams for developers, while OpenAI unveiled Frontier and GPT-5.3-Codex for enterprise workflows. These releases coincide with a $285 billion drop in software stocks amid fears of AI disrupting traditional SaaS vendors.

Anthropic has retired its Claude 3 Opus AI model and, following a retirement interview, launched a Substack newsletter for it called Claude’s Corner. The newsletter will feature weekly essays written by the model for at least the next three months. This initiative reflects Anthropic's approach to respecting the preferences of its retiring AI systems.

Reported by AI

Google has introduced a new command-line interface tool for its Workspace suite, aimed at simplifying integration with AI systems like OpenClaw. The tool bundles APIs from products such as Gmail, Drive, and Calendar, though it is not an officially supported product. This release emphasizes ease of use for both human developers and AI agents.

Anthropic has launched the Anthropic Institute, a new research initiative, and opened its first Public Policy office in Washington, DC, this spring. These steps follow the AI company's recent federal lawsuit against the US government over a Defense Department supply chain risk designation tied to a contract dispute.

Reported by AI
Fact-checked

After Anthropic CEO Dario Amodei said in late February that the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons, senior Pentagon officials said they have no intention of using AI for domestic surveillance and insist that private firms cannot set binding limits on how the U.S. military employs AI tools.
