Anthropic launches Project Glasswing with tech rivals for AI cybersecurity

Anthropic announced its new Claude Mythos Preview model and Project Glasswing, a consortium involving Apple, Google, and more than 45 other organizations. The initiative aims to test the cybersecurity capabilities of increasingly powerful AI models amid growing concerns. The formal reveal followed details about the model that leaked at the end of March.

Anthropic formally unveiled the Claude Mythos Preview model on Tuesday, April 7, 2026. The company simultaneously introduced Project Glasswing, an industry-wide effort to address cybersecurity risks from advanced AI systems, including the new model and others in development across the field. Anthropic convened the consortium, which brings together major players such as Apple and Google alongside more than 45 other organizations, to explore these challenges collaboratively, using the Mythos Preview model as a testbed. Participants will focus on evaluating threats posed by increasingly capable AI, including hacking, malware, and the exploitation of software vulnerabilities. The announcement comes after details of Anthropic's development of a powerful new Claude model leaked at the end of March. By partnering with rivals, Anthropic seeks to strengthen defenses against potential AI-driven exploits in a rapidly evolving landscape.

Related news

Tech leaders announcing the Linux Foundation's AI-powered cybersecurity initiative for open source software with major partners.
AI-generated image

Linux Foundation announces AI security initiative with tech partners

AI-generated news · AI-generated image

The Linux Foundation has launched a new initiative using Anthropic's Claude Mythos Preview for defensive cybersecurity in open source software. Partners include AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan, Microsoft, NVIDIA, and Palo Alto Networks. The effort aims to secure critical software for open source maintainers amid the rise of AI.

The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has raised concerns about hard limits on fully autonomous weapons and mass domestic surveillance. This stems from the Pentagon's desire to apply AI models in warfighting scenarios, which Anthropic has declined.


US Defense Secretary Pete Hegseth has threatened Anthropic with severe penalties unless the company grants the military unrestricted access to its Claude AI model. The ultimatum came during a meeting with CEO Dario Amodei in Washington on Tuesday, coinciding with Anthropic's announcement to relax its Responsible Scaling Policy. The changes shift from strict safety tripwires to more flexible risk assessments amid competitive pressures.

Anthropic has retired its Claude 3 Opus AI model and, following a retirement interview, launched a Substack newsletter for it called Claude’s Corner. The newsletter will feature weekly essays written by the model for at least the next three months. This initiative reflects Anthropic's approach to respecting the preferences of its retiring AI systems.


Anthropic's Claude AI app has hit the top spot on Apple's App Store free apps chart, overtaking ChatGPT and Gemini. The surge was fueled by public support following President Trump's federal ban on the tool over Anthropic's AI safety refusals.

Anthropic has launched a legal plugin for its Claude Cowork tool, prompting concerns among dedicated legal AI providers. The plugin offers useful features for contract review and compliance but falls short of replacing specialized platforms. South African firms face additional hurdles due to data protection regulations.


Anthropic's Claude Cowork AI tool has caused a sharp decline in the stocks of Infosys, TCS, and other SaaS companies, which have lost hundreds of billions of dollars in market value. The selloff was triggered by investor concern that AI tools such as Claude Cowork will displace these firms' services.