
Anthropic launches Claude Sonnet 4.5 AI model

30 September 2025
Reported by AI

Anthropic has released its latest AI model, Claude Sonnet 4.5, claiming it excels in real-world applications. The model demonstrated sustained focus for up to 30 hours on complex, multistep tasks. Benchmarks, including an evaluation conducted by OpenAI, show it outperforming rivals in practical job scenarios.

Anthropic, a leading AI research company, announced the availability of Claude Sonnet 4.5, positioning it as a top performer for practical uses like coding, computer interaction, and agent-based systems. According to Anthropic's reports, the model maintained focus for 30 hours while handling multistep tasks, a significant advancement in AI endurance for prolonged workflows.

The release highlights Claude Sonnet 4.5's strengths in real-world agents, where it enables more reliable automation of complex processes. TechRadar described it as "the best AI model in the world for real-world agents, coding, and computer use," emphasizing its immediate availability to users via Anthropic's platforms.

Further validation came from an OpenAI study, which tested AI models on real-world job tasks. In these evaluations, Claude outperformed competitors including GPT-5, Gemini, and Grok, particularly in scenarios requiring practical application and sustained performance. TechRepublic covered the launch, noting its implications for developers and businesses seeking robust AI tools.

Background on Anthropic underscores its focus on safe and interpretable AI systems. Founded by former OpenAI executives, the company has iteratively improved its Claude series, with Sonnet 4.5 building on prior versions to address limitations in long-duration task handling. Sources give no release date more specific than "now available," though Ars Technica places the launch in September 2025.

Quotes from Anthropic representatives were not directly provided in the sources, but the company's statements emphasize reliability: the model is designed for "multistep tasks" without losing coherence over extended periods. Sources agree on its edge in coding and agent tasks, though broader implications for AI ethics and deployment remain areas for future scrutiny.

This launch occurs amid intensifying competition in AI, where models are increasingly judged by real-world utility rather than raw benchmarks. Anthropic's approach prioritizes focused, agentic capabilities, potentially influencing how enterprises integrate AI into daily operations.
