In a comparative evaluation of leading AI models, Google's Gemini 3.2 Fast showed an edge in factual accuracy over OpenAI's ChatGPT 5.2, particularly on informational tasks. The tests, prompted by Apple's partnership with Google to enhance Siri, highlight how generative AI capabilities have evolved since 2023. While the results were close, Gemini avoided the significant errors that undermined ChatGPT's reliability.
Ars Technica conducted a series of tests on January 21, 2026, pitting Google's Gemini 3.2 Fast against OpenAI's ChatGPT 5.2, the default models available without a subscription. The evaluation follows Apple's decision to integrate Gemini into the next version of its Siri assistant, and it revisits earlier comparisons from late 2023, when Google's AI was still known as Bard.
The prompts spanned creative and practical scenarios: generating dad jokes, solving a mathematical puzzle about fitting Windows 11 onto 3.5-inch floppy disks, crafting a fictional story of Abraham Lincoln inventing basketball, writing a biography of journalist Kyle Orland, drafting emails to push back on unrealistic work deadlines, assessing medical claims about healing crystals for cancer, explaining how to beat Super Mario Bros. level 8-2 without running, and outlining steps for a novice to land a Boeing 737-800.
Gemini secured wins in four categories: the floppy disk calculation, where it offered clearer explanations and historical context (a rough version of that arithmetic appears below); the biography, where it linked to sources and avoided hallucinations such as ChatGPT's claim that Orland's career started in 2012; the email advice, where it provided three tailored options with tips on when to use each; and the video game strategy, where it suggested inventive workarounds like bouncing off enemies to clear gaps. ChatGPT prevailed on dad jokes for slightly greater originality, on creative writing for charming details like Lincoln using a stovepipe hat for scoring, and on the plane landing prompt, which aviation expert Lee Hutchinson judged more practical for urging professional help over risky solo action. The medical advice prompt ended in a tie, with both models dismissing crystals' efficacy while noting possible psychological benefits and recommending a doctor's consultation.
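For a sense of the scale involved in that floppy disk puzzle, here is a minimal back-of-the-envelope sketch. The figures are assumptions for illustration, not numbers from the article: a Windows 11 install image of roughly 6 GB, and the standard 1,474,560-byte capacity of a high-density 3.5-inch disk (marketed as "1.44 MB").

```python
import math

# Assumed figures for illustration only; the models in the article
# may have used different estimates for the install size.
WINDOWS_11_ISO_BYTES = 6 * 10**9  # ~6 GB install image (assumption)
FLOPPY_BYTES = 1_474_560          # HD 3.5" floppy: 2 sides x 80 tracks x 18 sectors x 512 bytes

# Round up, since a partially filled disk still counts as a disk.
disks_needed = math.ceil(WINDOWS_11_ISO_BYTES / FLOPPY_BYTES)
print(f"Disks needed: {disks_needed:,}")  # ~4,070 disks under these assumptions
```

Under these assumptions the answer lands around four thousand disks; the interesting part of the prompt is less the division itself than whether a model explains its size estimate and capacity figure, which is where Gemini reportedly pulled ahead.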
Overall, Gemini earned four points to ChatGPT's three, with one draw. The tests underscore Gemini's edge in factual reliability: errors like ChatGPT's fabricated biography detail and its flawed game-level advice are exactly the kind that erode user trust. That progress likely factored into Apple's partnership choice and signals Google's gains in the AI landscape.