Ars Technica retracts article with fabricated AI quotations

Ars Technica has retracted an article that included fabricated quotations generated by an AI tool and wrongly attributed to a source. The publication described the incident as a serious failure of its editorial standards. It appears to be an isolated case, with no other issues found in recent work.

On Friday afternoon, Ars Technica published an article that contained quotations fabricated using an AI tool. These quotes were attributed to a source who did not actually say them, marking a significant breach of the site's journalistic principles. The publication emphasized that direct quotations must always accurately reflect what sources have stated.

Ars Technica has long reported on the dangers of overrelying on AI tools in journalism, and its editorial policy explicitly addresses these risks. However, in this instance, the use of AI-generated material violated that policy, as the site prohibits such content unless it is clearly labeled and used only for demonstration purposes.

Following the discovery, the editors conducted a review of recent articles and found no additional problems. They described the event as an isolated incident. The publication expressed deep regret over the failure and issued apologies to its readers. It also specifically apologized to Mr. Scott Shambaugh, the individual who was falsely quoted in the article.

In response to the matter, Ars Technica stated it is reinforcing its editorial standards to prevent future occurrences. The retraction notice was published on February 15, 2026.

Related news


ZDF apologizes for AI errors in news segment

AI-generated report · AI-generated image

ZDF has apologized to viewers in the 'heute journal' for errors in a segment about US ICE operations. Deputy editor-in-chief Anne Gellinek described it as a 'double error' involving AI-generated images and incorrect archive footage. The broadcaster emphasized that AI content is not permissible in news reporting.

A CNET commentary argues that describing AI as having human-like qualities such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.

AI-generated report

Rappler's latest 'Inside the Newsroom' newsletter explores the ethical challenges of AI in journalism, questioning if it reduces the profession to mere data harvesting for customized content.

PC game store GOG has drawn criticism for employing generative AI to create promotional artwork for a sale. During a recent Reddit AMA, the company's managing director addressed the backlash but stopped short of committing to abandoning the technology. GOG emphasized testing AI tools to support its preservation mission while promising more careful application.

AI-generated report

US President Donald Trump has directed federal agencies to immediately cease using Anthropic's AI technology. The order follows a dispute with the Pentagon, where the company refused unconditional military use of its Claude models. Anthropic has vowed to challenge the Pentagon's ban in court.

A recent scan of millions of Android apps has revealed data leaks from AI software on a far larger scale than expected. Hardcoded secrets remain embedded in most Android AI applications today. The findings highlight ongoing privacy risks in mobile technology.

AI-generated report

Music labels and tech companies are addressing the unauthorized use of artists' work in training AI music generators like Udio and Suno. Recent settlements with major labels aim to create new revenue streams, while innovative tools promise to remove unlicensed content from AI models. Artists remain cautious about the technology's impact on their livelihoods.
