New Scientist sets precedent for UK FOI on AI chatbot use

In 2025, a New Scientist journalist's freedom of information request revealed UK Technology Secretary Peter Kyle's official ChatGPT conversations, establishing a legal precedent for accessing government AI interactions. This world-first disclosure sparked international interest and highlighted the need for transparency in public sector AI adoption. However, subsequent requests faced increasing resistance from authorities.

The story began in January 2025, when New Scientist journalist Jack Marley read an interview in Politics Home with Peter Kyle, the UK's technology secretary at the time. Kyle mentioned frequently engaging in conversations with ChatGPT, the AI chatbot his department regulated. Intrigued by whether such interactions fell under freedom of information (FOI) laws, Marley submitted a request for Kyle's chat history.

FOI legislation typically covers public body documents like emails, but private data such as search queries has often been exempt. In this case, in March 2025 the Department for Science, Innovation and Technology (DSIT) released a selection of Kyle's official-capacity chats with ChatGPT. These formed the basis of an exclusive New Scientist article exposing the exchanges.

The disclosure surprised experts. Tim Turner, a Manchester-based data protection specialist, remarked, “I’m surprised that you got them.” The release marked a global first, drawing inquiries from researchers in Canada and Australia on replicating similar FOI requests.

By April 2025, another request revealed that Feryal Clark, the UK minister for artificial intelligence, had not used ChatGPT in her official role, despite advocating its benefits. Yet, governments grew more cautious. Marley's follow-up FOI bid for DSIT's internal responses to the story—including emails and Microsoft Teams messages—was rejected as vexatious, citing excessive time to process.

This precedent arrives as the UK civil service increasingly integrates ChatGPT-like tools, reportedly saving up to two weeks annually per user through efficiency gains. However, AI's potential for inaccuracies, known as hallucinations, underscores the value of oversight. Transparency ensures accountability in how governments deploy such technologies, balancing innovation with public scrutiny.

Related articles


OpenAI releases ChatGPT-5.2 to boost work productivity


OpenAI has launched ChatGPT-5.2, a new family of AI models designed to enhance reasoning and productivity, particularly for professional tasks. The release follows an internal alert from CEO Sam Altman about competition from Google's Gemini 3. The update includes three variants aimed at different user needs, starting with paid subscribers.

OpenAI has enhanced ChatGPT's memory capabilities, allowing it to remember conversations from up to a year ago. The update also enables direct links to these past interactions. This improvement aims to make the AI assistant more contextual and user-friendly.


Indonesia has ended its ban on the Grok AI chatbot, allowing the service to resume after concerns over deepfake generation. The decision comes with strict ongoing oversight by the government. This follows similar actions in neighboring countries earlier in the year.

In 2025, AI agents became central to progress in artificial intelligence, enabling systems to use tools and act autonomously. From theory to everyday applications, they transformed how people interact with large language models. However, they also brought challenges, including security risks and regulatory gaps.


OpenAI reported a dramatic increase in child exploitation incidents to the National Center for Missing & Exploited Children during the first half of 2025, sending 80 times more reports than in the same period of 2024. The company attributed the rise to expanded moderation capabilities, new features allowing image uploads, and rapid user growth. This spike reflects broader concerns about child safety in generative AI platforms.

As the AI boom continues, chatbots like GPT-5 are seeing their prominence fade quickly. Industry observers predict that 2026 will belong to Qwen. This shift is highlighted by innovations at Chinese startup Rokid.


California Attorney General Rob Bonta has issued a cease-and-desist letter to xAI, following an investigation into its AI chatbot Grok generating nonconsensual explicit images. The action targets the creation of deepfakes depicting real people, including minors, in sexualized scenarios without permission. Bonta's office requires xAI to respond within five days on corrective measures.
