Anthropic restricts Claude Mythos AI release and launches Project Glasswing over cybersecurity risks
Reported by AI · AI-generated image
Anthropic has limited access to its Claude Mythos Preview AI model due to its superior ability to detect and exploit software vulnerabilities, while launching Project Glasswing—a consortium with over 45 tech firms including Apple, Google, and Microsoft—to collaboratively patch flaws and bolster defenses. The announcement follows recent data leaks at the firm.
Anthropic has reportedly agreed to pay Google $200 billion over the next five years for access to chips and cloud servers. The deal, reported by The Information, follows an earlier agreement granting the Claude AI creator access to Google's infrastructure. It highlights the massive investments fueling the AI sector.
Following last week's unveiling that sparked global alarms, Anthropic has restricted its powerful Mythos AI—adept at finding cybersecurity vulnerabilities—to select firms under Project Glasswing, including Amazon Web Services, Apple, and Google, after an accidental leak raised national security concerns.
Building on its January Cowork feature, Anthropic has launched a research preview of its Claude Code and Cowork tools that lets Claude directly control Mac desktops for Pro and Max subscribers: pointing, clicking, scrolling, and navigating screens to open files, use browsers and developer tools, and interact with apps such as Google Calendar and Slack. Safeguards address the associated security risks, amid competition from tools like OpenClaw.
Music rights company BMG has filed a lawsuit against AI firm Anthropic, alleging unauthorized use of song lyrics to train its Claude chatbot. The complaint claims infringement dates back to Anthropic's founding and involves works by artists including Justin Bieber and Bruno Mars. BMG seeks damages up to $150,000 per infringed work.
The Python Software Foundation has secured $1.5 million from Anthropic, the company behind Claude AI, for a two-year partnership focused on enhancing Python ecosystem security. This follows the foundation's rejection of similar funding from the US government last year over concerns about diversity, equity, and inclusion policies. The investment aims to protect the Python Package Index from supply chain attacks and support ongoing operations.
A new Anthropic research paper reveals that large language models exhibit some introspective awareness of their internal processes, but this ability is highly inconsistent and unreliable. Published on November 3, 2025, the study titled 'Emergent Introspective Awareness in Large Language Models' uses innovative methods to test AI self-description. Despite occasional successes, failures of introspection remain the norm.
Anthropic expands Claude AI connectors to lifestyle apps
Apr 17, 2026, 19:24 · Anthropic launches Claude Design research preview
Apr 16, 2026, 04:27 · Anthropic releases Claude Opus 4.7 AI model
Apr 8, 2026, 23:36 · Anthropic launches Claude Managed Agents for AI builders
Apr 7, 2026, 18:43 · Linux Foundation announces AI security initiative with tech partners
Mar 16, 2026, 04:30 · Anthropic doubles Claude usage limits during off-peak hours
Mar 13, 2026, 00:22 · Anthropic previews interactive visuals for Claude chatbot
Mar 3, 2026, 00:25 · Anthropic adds memory feature to Claude's free plan
Feb 26, 2026, 21:33 · Anthropic retires Claude 3 Opus and grants it a Substack newsletter
Jan 25, 2026, 03:21 · Anthropic details Linux container for Claude Cowork AI assistant