New Scientist sets precedent for UK FOI on AI chatbot use

In 2025, a New Scientist journalist's freedom of information request revealed UK Technology Secretary Peter Kyle's official ChatGPT conversations, establishing a legal precedent for accessing government AI interactions. This world-first disclosure sparked international interest and highlighted the need for transparency in public sector AI adoption. However, subsequent requests faced increasing resistance from authorities.

The story began in January 2025, when New Scientist journalist Jack Marley read an interview with Peter Kyle, the UK's technology secretary at the time, in PoliticsHome. Kyle mentioned that he frequently held conversations with ChatGPT, the AI chatbot his own department regulated. Intrigued by whether such interactions fell under freedom of information (FOI) law, Marley submitted a request for Kyle's chat history.

FOI legislation typically covers public body records such as emails, while private data such as search queries has often been treated as exempt. Nevertheless, in March 2025 the Department for Science, Innovation and Technology (DSIT) released a selection of Kyle's official-capacity chats with ChatGPT. These formed the basis of an exclusive New Scientist article exposing the exchanges.

The disclosure surprised experts. Tim Turner, a Manchester-based data protection specialist, remarked, “I’m surprised that you got them.” The release marked a global first, drawing inquiries from researchers in Canada and Australia on replicating similar FOI requests.

By April 2025, another request revealed that Feryal Clark, the UK minister for artificial intelligence, had not used ChatGPT in her official role, despite advocating its benefits. The government, however, grew more cautious. Marley's follow-up FOI bid for DSIT's internal responses to the story—including emails and Microsoft Teams messages—was rejected as vexatious, on the grounds that processing it would take excessive time.

This precedent arrives as the UK civil service increasingly integrates ChatGPT-like tools, reportedly saving up to two weeks annually per user through efficiency gains. However, AI's potential for inaccuracies, known as hallucinations, underscores the value of oversight. Transparency ensures accountability in how governments deploy such technologies, balancing innovation with public scrutiny.

