Anthropic restricts Claude Mythos AI release and launches Project Glasswing over cybersecurity risks
Reported by AI. Image generated by AI.
Anthropic has limited access to its Claude Mythos Preview AI model because of its superior ability to detect and exploit software vulnerabilities. At the same time, the company has launched Project Glasswing, a consortium of more than 45 tech firms including Apple, Google, and Microsoft, to collaboratively patch flaws and bolster defenses. The announcement follows recent data leaks at the firm.
Anthropic has reportedly agreed to pay Google $200 billion over the next five years for access to chips and cloud servers. The deal, reported by The Information, follows an earlier agreement granting the Claude AI creator access to Google's infrastructure. It highlights the massive investments fueling the AI sector.
Following last week's unveiling, which sparked global alarm, Anthropic has restricted its powerful Mythos AI, adept at finding cybersecurity vulnerabilities, to select firms under Project Glasswing, including Amazon Web Services, Apple, and Google, after an accidental leak raised national security concerns.
Building on its January Cowork feature, Anthropic has launched a research preview of its Claude Code and Cowork tools, letting the Claude AI of Pro and Max subscribers directly control Mac desktops: pointing, clicking, scrolling, and navigating screens to open files, use browsers and developer tools, and interact with apps such as Google Calendar and Slack. Safeguards address the security risks, and the release arrives amid competition from tools like OpenClaw.
Music rights company BMG has filed a lawsuit against AI firm Anthropic, alleging unauthorized use of song lyrics to train its Claude chatbot. The complaint claims infringement dates back to Anthropic's founding and involves works by artists including Justin Bieber and Bruno Mars. BMG seeks damages up to $150,000 per infringed work.
The Python Software Foundation has secured $1.5 million from Anthropic, the company behind Claude AI, for a two-year partnership focused on enhancing Python ecosystem security. This follows the foundation's rejection of similar funding from the US government last year over concerns about diversity, equity, and inclusion policies. The investment aims to protect the Python Package Index from supply chain attacks and support ongoing operations.
A new Anthropic research paper reveals that large language models exhibit some introspective awareness of their internal processes, but that this ability is highly inconsistent and unreliable. Published on November 3, 2025, the study, titled "Emergent Introspective Awareness in Large Language Models," uses novel methods to test AI self-description. Despite occasional successes, failures of introspection remain the norm.
Anthropic expands Claude AI connectors to lifestyle apps
April 17, 2026 19:24 · Anthropic launches Claude Design research preview
April 16, 2026 04:27 · Anthropic releases Claude Opus 4.7 AI model
April 08, 2026 23:36 · Anthropic launches Claude Managed Agents for AI builders
April 07, 2026 18:43 · Linux Foundation announces AI security initiative with tech partners
March 16, 2026 04:30 · Anthropic doubles Claude usage limits during off-peak hours
March 13, 2026 00:22 · Anthropic previews interactive visuals for Claude chatbot
March 03, 2026 00:25 · Anthropic adds memory feature to Claude's free plan
February 26, 2026 21:33 · Anthropic retires Claude 3 Opus and grants it a Substack newsletter
January 25, 2026 03:21 · Anthropic details Linux container for Claude Cowork AI assistant