A CNET experiment compared Google's Gemini 3 Pro and Gemini 2.5 Flash models for vibe coding, a casual approach to generating code via AI chat. The reasoning-focused Gemini 3 Pro proved easier to work with and more thorough, while the faster Flash model required more manual intervention. The results suggest the choice of model significantly affects the development experience.
Vibe coding involves using AI chatbots like Gemini, Claude, or ChatGPT to create functional code based on high-level ideas, making programming accessible to non-experts. In a recent test, the author explored this method by building a web app displaying horror movie posters with clickable details, adapting a suggested "Trophy Display Case" project.
Using Gemini 3 Pro, the more advanced reasoning model, the process unfolded over nearly 20 iterations. This model broke down complex tasks, such as integrating movie data, and offered unprompted suggestions to enhance the app, like a 3D wheel effect or a random movie picker. It also handled errors transparently: when embedding YouTube trailers proved troublesome, it explained the issue and settled on a simpler link-based solution, and after multiple attempts it fixed a non-functional exit button. Gemini 3 Pro consistently provided full code rewrites after changes, simplifying updates for users.
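The random movie picker Gemini 3 Pro volunteered is, at its core, a one-line feature. A minimal sketch of the idea in plain JavaScript, with placeholder movie data standing in for whatever the generated app actually stored:

```javascript
// Hypothetical sketch of the random-picker feature; the movie entries
// here are placeholders, not the data from the article's app.
const movies = [
  { title: "Halloween", year: 1978 },
  { title: "The Shining", year: 1980 },
  { title: "Alien", year: 1979 },
];

// Return one entry chosen uniformly at random from the list.
function pickRandomMovie(list) {
  return list[Math.floor(Math.random() * list.length)];
}

const pick = pickRandomMovie(movies);
console.log(`Tonight's movie: ${pick.title} (${pick.year})`);
```

In the generated app, the result would be wired to a button click and used to highlight the chosen poster rather than logged to the console.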
In contrast, Gemini 2.5 Flash prioritized speed but demanded more user effort. It suggested manually gathering images and details rather than automating the task via The Movie Database API unless specifically asked. Even then, it struggled: after an API key was added, it filled the app with mostly incorrect posters and required further fixes. Updates came as isolated code snippets with instructions to replace sections by hand, which could disrupt the casual vibe. When asked to rewrite the entire code, it called the request "a huge ask."
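For context, the automation Flash avoided is a fairly small amount of code. A hedged sketch of how a poster lookup against The Movie Database (TMDB) v3 API might look; the API key is a placeholder, and the exact code either model produced is not shown in the article:

```javascript
// Assumption: the reader supplies their own TMDB API key.
const TMDB_API_KEY = "YOUR_TMDB_API_KEY";

// Build a TMDB v3 movie-search URL for a given title.
function searchUrl(title) {
  const q = encodeURIComponent(title);
  return `https://api.themoviedb.org/3/search/movie?api_key=${TMDB_API_KEY}&query=${q}`;
}

// TMDB serves poster images from a separate host; w500 is one of its
// standard image widths, and poster_path comes from a search result.
function posterUrl(posterPath) {
  return `https://image.tmdb.org/t/p/w500${posterPath}`;
}

// Fetch the poster URL for the first search match (browser or Node 18+).
async function fetchPosterUrl(title) {
  const res = await fetch(searchUrl(title));
  const data = await res.json();
  const first = data.results && data.results[0];
  return first && first.poster_path ? posterUrl(first.poster_path) : null;
}
```

A wrong poster, as the author saw with Flash, typically means the first search result was the wrong film; matching on release year as well would tighten the lookup.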
Both models produced workable results, but Gemini 3 Pro elevated the project with deeper reasoning and proactive help, while Flash's shortcuts demanded vigilant prompting. Google has since replaced 2.5 Flash with Gemini 3 Flash, but the core trade-off remains: depth versus speed in AI-assisted coding.