A study that applied Chile's university entrance exam, the PAES 2026, to AI models shows several systems scoring high enough to enter selective programs such as Medicine and Civil Engineering. Google's Gemini led with averages near 950 points, outperforming rivals such as ChatGPT. The experiment underscores AI progress and raises questions about the efficacy of standardized testing.
The study, by Professor Jonathan Vásquez, who holds a Ph.D. in Computer Science from the University of Valparaíso, and Sebastián Cisterna, a Harvard MBA and professor at Universidad Adolfo Ibáñez, assessed how AI models perform on the PAES 2026. The researchers had the models answer the official tests and, treating each as a real applicant, determined which degree programs its scores would unlock.
Google led with Gemini 3 Flash, which averaged 957.38 points and scored a perfect 1,000 in History and Social Sciences, Biology, Physics, Reading Competency, and Mathematical Competency 1 (M1). Its Pro version averaged close to 950 points, enough to qualify for any program at Chilean universities. 'Gemini surpassed ChatGPT,' the authors noted, with even the lighter models showing unexpected maturity.
All models achieved 100% in History and Social Sciences, a result that had been exceptional in 2025. OpenAI's GPT-5.2 Extended Reasoning performed well in Language and Sciences, enough to access programs such as Journalism or Psychology, but lagged in Mathematical Competency 2 (M2), required for the most demanding engineering programs. GPT-5.2 Instant proved best suited to social sciences and education.
The Chinese model DeepSeek stood out for cost-efficiency: up to 14 times cheaper than rivals in its fast version and up to 30 times cheaper in reasoning mode. Its 880-point average would open programs such as Pedagogy or Nursing, but not a spot in the most selective Medicine programs.
Cisterna observed that 'more reasoning' modes did not always outperform faster ones, challenging expectations. The authors stress that these AIs optimize over prior data rather than 'learning' as humans do, and question whether such tests can measure human skills in an era of automation: 'The question is no longer just what career an AI could study, but how well current selection metrics reflect expected human competencies'.