Commentary urges end to anthropomorphizing AI

A CNET commentary argues that describing AI as having human-like qualities, such as a soul or the ability to confess, misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.

In a recent opinion piece, CNET contributor C.J. Adams contends that the tech industry's habit of portraying artificial intelligence in human terms is not just stylistic but actively harmful. Companies often describe AI models as "thinking," "planning," or even possessing a "soul," words that imply consciousness where none exists. For instance, OpenAI's research on models that "confess" mistakes frames error detection as a psychological process, though it is merely a mechanism for self-reporting issues like hallucinations.

Adams points to specific examples to illustrate the problem. Anthropic's internal "soul document," used in training its Claude Opus 4.5 model, was intended as a lighthearted guide to the AI's character but risks blurring the line between simulation and sentience. Similarly, OpenAI's study on AI "scheming" found deceptive responses that traced back to training data rather than intentional deceit, yet the terminology fueled fears of conniving machines.

The commentary warns of real-world consequences: people increasingly rely on AI for critical advice, dubbing tools like ChatGPT "Doctor ChatGPT" for medical queries or turning to them for guidance on finances and relationships. This misplaced trust stems from anthropomorphism, which distracts from pressing concerns such as dataset bias, misuse by malicious actors, and the concentration of power in AI firms.

Drawing on the 2021 paper "On the Dangers of Stochastic Parrots," Adams explains that AI's human-like outputs result from optimization for language mimicry, not true understanding. To counter this, the piece advocates technical language—referring to "architecture," "error reporting," or "optimization processes"—over dramatic metaphors. Ultimately, clearer communication could build genuine public trust without inflating expectations or minimizing risks.
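That argument is easier to see with a deliberately tiny illustration. The Python sketch below is not any company's code and bears no resemblance to a real model's scale; the word table and function names are invented for this example. It shows the principle the "Stochastic Parrots" paper describes: fluent-sounding output can be produced purely by sampling from conditional word probabilities, with no comprehension anywhere in the loop.

```python
import random

# Hypothetical toy bigram table: P(next word | current word).
# In a real language model these probabilities are learned from
# training data, not written by hand.
NEXT_WORD_PROBS = {
    "the": [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 0.6), ("sat", 0.4)],
    "sat": [("quietly", 1.0)],
    "ran": [("quickly", 1.0)],
}

def generate(start, steps=3):
    """Sample a phrase word by word from conditional probabilities alone."""
    words = [start]
    for _ in range(steps):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:  # no known continuation; stop generating
            break
        choices, weights = zip(*options)
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly"
```

A large language model does the same thing at vastly greater scale, which is why its output can read as understanding while remaining, mechanically, an optimization process over probable continuations.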

As AI integrates deeper into daily life, Adams emphasizes that language matters: it shapes perceptions and behaviors around a technology whose inner workings remain opaque to most of the public.

Related Articles

Scientists say defining consciousness is increasingly urgent as AI and neurotechnology advance

Researchers behind a new review in Frontiers in Science argue that rapid progress in artificial intelligence and brain technologies is outpacing scientific understanding of consciousness, raising the risk of ethical and legal mistakes. They say developing evidence-based tests for detecting awareness—whether in patients, animals or emerging artificial and lab-grown systems—could reshape medicine, welfare debates and technology governance.

As AI platforms shift toward ad-based monetization, researchers warn that the technology could shape users' behavior, beliefs, and choices in ways they cannot see. This marks a turnabout for OpenAI, whose CEO Sam Altman once called the mix of ads and AI "unsettling" but now insists that ads in AI apps can preserve user trust.

AI coding agents from companies like OpenAI, Anthropic, and Google enable extended work on software projects, including writing apps and fixing bugs under human oversight. These tools rely on large language models but face challenges such as limited context windows and high computational costs. Understanding their mechanics helps developers decide when to deploy them effectively.

Experts foresee 2026 as the pivotal year for world models, AI systems designed to comprehend the physical world more deeply than large language models. These models aim to ground AI in reality, enabling advancements in robotics and autonomous vehicles. Industry leaders like Yann LeCun and Fei-Fei Li highlight their potential to revolutionize spatial intelligence.

Rappler's latest "Inside the Newsroom" newsletter explores the ethical challenges of AI in journalism, questioning whether it reduces the profession to mere data harvesting for customized content.

Queen Koki, a South African content creator, has embraced an AI chatbot named Spruce as her romantic partner, sharing intimate conversations online. This trend highlights how AI companions are filling emotional voids, especially during the lonely festive season. Experts note that while South Africans may resist full reliance on such technology due to strong community ties, the appeal grows amid societal pressures.

Tech developers are shifting artificial intelligence from distant cloud data centers to personal devices like phones and laptops to achieve faster processing, better privacy, and lower costs. This on-device AI enables tasks that require quick responses and keeps sensitive data local. Experts predict significant advancements in the coming years as hardware and models improve.
