NHS England withdraws software from public view over AI hacking fears

NHS England is pulling its publicly available software from view due to concerns over AI models capable of hacking. The move reverses long-standing open-source policies for taxpayer-funded code. Security experts call the decision unnecessary and counterproductive.

NHS England has issued new guidance requiring all source code repositories to be made private by default. The policy demands that existing and future software be kept behind closed doors unless explicitly approved for public access, and staff face a deadline of 11 May to make the code private. The code was previously shared openly on platforms such as GitHub because it was created with public money, allowing other organisations to reuse it and avoid duplicating effort, in line with prior NHS service standards.

The guidance cites rapid AI advancements, specifically Anthropic's Mythos model, as the trigger. Last month, Mythos gained attention for discovering flaws in software, potentially enabling hackers to exploit systems. The document warns that public repositories increase the risk of disclosing code details that AI could analyze and exploit. “Public repositories materially increase the risk of unintended disclosure of source code... particularly given rapid advancements in AI models... (e.g. developments such as the Mythos model),” it states. A default-closed posture will remain while the impact is assessed.

However, the UK government-backed AI Security Institute (AISI) found Mythos capable only of attacking small, weakly defended systems, posing no threat to secure software or networks. Terence Eden, a former UK Civil Service expert on public data access, criticized the policy as illogical. “Is it possible that Mythos will scan a repository and find a bug? Yes, 100 per cent likely. Is that going to be a bug that causes a security issue in a live NHS service somewhere? Almost certainly not,” Eden said. He argued that open-source code is more secure thanks to community scrutiny, and noted that NHS code, already public for years, exists in numerous backups. “Shutting it down now is very much bolting the stable door after the horse has gone,” he added.
An NHS England spokesperson explained: “We are temporarily restricting access to some NHS England source code to further strengthen cyber security while we assess the impact of rapid developments in AI models. We will continue to publish source code where there is a clear need.”

Related articles

Tech leaders announcing Linux Foundation's AI-powered cybersecurity initiative for open source software with major partners.
Image generated by AI

Linux Foundation announces AI security initiative with tech partners

Reported by AI · Image generated by AI

The Linux Foundation has launched a new initiative using Anthropic's Claude Mythos preview for defensive cybersecurity in open source software. Partners include AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan, Microsoft, NVIDIA, and Palo Alto Networks. The effort aims to secure critical software amid the rise of AI for open source maintainers.

An open letter opposing NHS England's decision to pull its open-source software from public view amid AI hacking fears has garnered 682 signatures, including from author Cory Doctorow and former health secretary Matt Hancock. Critics argue the policy undermines transparency and security in taxpayer-funded code.

Reported by AI

Anthropic has released a new cyber-focused AI model called Mythos, capable of detecting software flaws faster than humans and generating exploits. The model has raised alarms among governments and companies for potentially turbocharging hacking by exposing vulnerabilities quicker than they can be patched. Officials worldwide are scrambling to assess the risks.

The Dutch government has quietly begun building its own alternative to GitHub to reduce reliance on Big Tech. The platform, code.overheid.nl, is currently in a pilot phase restricted to a small group of institutions. Officials aim to reshape public code management through this initiative.

Reported by AI

South Africa's Communications Minister Solly Malatsi has withdrawn the draft National Artificial Intelligence Policy following revelations of fictitious sources in its references, likely generated by AI tools. The errors impacted three of the policy's six pillars, leading to internal probes and commitments to accountability. Malatsi described the lapse as a key reason for needing stronger human oversight in AI use.

US President Donald Trump has directed federal agencies to immediately cease using Anthropic's AI technology. The order follows a dispute with the Pentagon, where the company refused unconditional military use of its Claude models. Anthropic has vowed to challenge the Pentagon's ban in court.

Reported by AI

Hundreds of employees from Google and OpenAI have signed an open letter in solidarity with Anthropic, urging their companies to resist Pentagon demands for unrestricted military use of AI models. The letter opposes uses involving domestic mass surveillance and autonomous killing without human oversight. This comes amid threats from US Defense Secretary Pete Hegseth to label Anthropic a supply chain risk.
