A Guardian report has revealed that OpenAI's latest AI model, GPT-5.2, draws from Grokipedia, an xAI-powered online encyclopedia, when addressing sensitive issues such as the Holocaust and Iranian politics. While the model is touted for professional tasks, the tests raise questions about the reliability of its sources. OpenAI defends its approach, emphasizing broad web searches combined with safety measures.
OpenAI launched GPT-5.2 in December as its "most advanced frontier model for professional work," designed to handle tasks such as creating spreadsheets and performing other complex operations. However, an investigation by the Guardian has highlighted potential flaws in how the model sources its information. The report details how the model, accessed via ChatGPT, cited Grokipedia in responses on contentious subjects, including ties between the Iranian government and the telecommunications firm MTN-Irancell, as well as queries involving British historian Richard Evans, who testified as an expert witness in the libel case brought by Holocaust denier David Irving.
Notably, Grokipedia did not appear as a source when the model was prompted about media bias against Donald Trump or similar politically charged topics. Grokipedia, developed by xAI and released before GPT-5.2, has faced its own scrutiny. It has been criticized for including citations from neo-Nazi forums, and a study by US researchers identified citations to "questionable" and "problematic" sources in the AI-generated encyclopedia.
In response to the Guardian's findings, OpenAI stated that GPT-5.2 "searches the web for a broad range of publicly available sources and viewpoints," while applying "safety filters to reduce the risk of surfacing links associated with high-severity harms." The incident underscores ongoing challenges in ensuring the accuracy and neutrality of AI outputs, particularly on historical and geopolitical matters. The Guardian's tests, conducted shortly after the model's release, suggest that even as the model's capabilities advance, source vetting remains a critical area for refinement.