In 2025, a New Scientist journalist's freedom of information request revealed UK Technology Secretary Peter Kyle's official ChatGPT conversations, setting a precedent for public access to government AI interactions. The world-first disclosure sparked international interest and highlighted the need for transparency in public-sector AI adoption. However, subsequent requests faced growing resistance from authorities.
The story began in January 2025, when New Scientist journalist Jack Marley read an interview in PoliticsHome with Peter Kyle, the UK's technology secretary at the time. Kyle mentioned that he frequently conversed with ChatGPT, the AI chatbot his own department regulated. Intrigued by whether such interactions fell under freedom of information (FOI) laws, Marley submitted a request for Kyle's chat history.
FOI legislation typically covers public-body records such as emails, while more personal material like search histories has often been treated as exempt. In March 2025, however, the Department for Science, Innovation and Technology (DSIT) released a selection of Kyle's official-capacity conversations with ChatGPT, which formed the basis of an exclusive New Scientist article exposing the exchanges.
The disclosure surprised experts. Tim Turner, a Manchester-based data protection specialist, remarked, “I’m surprised that you got them.” The release was a global first, drawing inquiries from researchers in Canada and Australia about how to file similar FOI requests.
By April 2025, a further request revealed that Feryal Clark, then the UK's minister for artificial intelligence, had not used ChatGPT in her official role, despite publicly advocating its benefits. Governments, meanwhile, grew more cautious: Marley's follow-up FOI request for DSIT's internal responses to the story, including emails and Microsoft Teams messages, was rejected as vexatious on the grounds that it would take excessive time to process.
The precedent comes as the UK civil service increasingly adopts ChatGPT-like tools, reportedly saving up to two weeks annually per user through efficiency gains. Yet AI's propensity for generating inaccuracies, known as hallucinations, underscores the value of oversight. Transparency over how governments deploy such technologies helps ensure accountability, balancing innovation with public scrutiny.