Cursor has introduced Composer, its first competitive coding model, alongside version 2.0 of its IDE featuring a new multi-agent interface. The model emphasizes speed and is built using reinforcement learning and a mixture-of-experts architecture. This launch aims to challenge leading AI models from major companies.
Cursor, known for its IDE that resembles Visual Studio Code but integrates large language models deeply into workflows with a focus on 'vibe coding,' has long relied on external models from providers like OpenAI, Google, and Anthropic. Previous trials of its own built-in models fell short of these frontier options. Now, with Composer, Cursor claims to offer 'a frontier model that is 4x faster than similarly intelligent models.'
The model was trained not on static datasets but on interactive development challenges involving agentic tasks, aiming for accuracy and adherence to best practices. In Cursor's internal Cursor-Bench, Composer trails the 'best frontier' models in intelligence but surpasses top open-weight models and speed-optimized frontier models. Where it clearly leads is tokens per second, reflecting its priority on rapid performance.
To encourage adoption, Cursor paired Composer with a multi-agent interface in its 2.0 IDE update. This feature allows users to 'run many agents in parallel without them interfering with one another, powered by git worktrees or remote machines.' Developers can deploy multiple models simultaneously on the same task, compare outputs, and select the best result.
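The git-worktree mechanism behind that isolation can be sketched in a few commands. This is an illustrative example of how worktrees give each agent its own checkout of the same repository, not Cursor's actual implementation; the repository path and branch names are made up.

```shell
# Minimal sketch of worktree-based isolation: each agent gets its own
# checkout and branch, so parallel edits to one repo never collide.
set -e
repo=$(mktemp -d)/demo            # throwaway demo repository
git init -q "$repo" && cd "$repo"
git -c user.email=bot@example.com -c user.name=bot \
    commit -q --allow-empty -m "init"

# One isolated working tree (and branch) per agent:
git worktree add ../agent-1 -b agent-1
git worktree add ../agent-2 -b agent-2

# Each agent now edits only its own directory. Afterwards the outputs
# can be compared, the preferred branch merged, and the rest discarded:
git worktree list
git worktree remove ../agent-2
```

Because every worktree shares the same underlying object store, this costs far less disk and setup time than cloning the repository once per agent.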
Early feedback from a non-representative sample of developers suggests Composer works reasonably well but is seen as too expensive for what it delivers compared to models like Anthropic's Claude. Whether Composer can compete with established frontier models remains to be seen, as developers may stick with proven options. Additional features and fixes appear in Cursor's 2.0 changelog.