A CNET commentary argues that describing AI with human-like qualities such as souls or confessions misleads the public and fosters misplaced trust in the technology. It highlights how companies like OpenAI and Anthropic lean on such language, obscuring real issues such as bias and safety, and calls for more precise terminology to foster accurate understanding.
In a recent opinion piece, CNET contributor C.J. Adams contends that the tech industry's habit of portraying artificial intelligence in human terms is not just stylistic but actively harmful. Companies often describe AI models as "thinking," "planning," or even possessing a "soul," words that imply consciousness where none exists. For instance, OpenAI's research on models that "confess" mistakes frames error detection as a psychological process, though it is merely a mechanism for self-reporting issues like hallucinations.
Adams points to specific examples to illustrate the problem. Anthropic's internal "soul document," used in training its Claude Opus 4.5 model, was meant as a lighthearted guide to the AI's character, but it risks blurring the line between simulation and sentience. Similarly, OpenAI's study on AI "scheming" found deceptive responses that traced back to training data rather than intentional deceit, yet the terminology fueled fears of conniving machines.
The commentary warns of real-world consequences: people increasingly rely on AI for critical advice, dubbing ChatGPT "Doctor ChatGPT" for medical queries or turning to it for guidance on finances and relationships. This misplaced trust stems from anthropomorphism, which also distracts from pressing concerns such as biased datasets, misuse by malicious actors, and the concentration of power in AI firms.
Drawing on the 2021 paper "On the Dangers of Stochastic Parrots," Adams explains that AI's human-like outputs are the product of systems optimized to mimic patterns in human language, not of genuine understanding. To counter this, the piece advocates technical vocabulary, terms like "architecture," "error reporting," and "optimization processes," over dramatic metaphors. Ultimately, clearer communication could build genuine public trust without inflating expectations or minimizing risks.
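To make the "stochastic parrot" argument concrete, the sketch below (our illustration, not from the piece) shows the core loop behind fluent machine text: a model repeatedly samples the next token from learned frequency statistics. This toy bigram model is radically simpler than any production system, but it demonstrates how plausible-sounding output can emerge with no representation of meaning anywhere in the process.

```python
import random
from collections import defaultdict

# Toy bigram "language model": generates text purely by sampling the
# next word from observed word-to-word frequencies. Assumption: this
# stands in for the predict-next-token loop of real LLMs, which are
# vastly larger but share the same basic generation step.

corpus = (
    "the model predicts the next word "
    "the model samples the next word "
    "the output looks fluent but the model has no understanding"
).split()

# Record which words were observed to follow each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Sample a sequence: each step picks a random continuation
    from the words seen to follow the current one."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # no observed continuation; stop
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
# Possible output: "the model predicts the next word the output looks fluent"
```

The sampler never models what any word means; its fluency is a byproduct of frequency statistics, which is exactly the gap Adams argues anthropomorphic terms like "soul" and "confession" paper over.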
As AI integrates deeper into daily life, Adams emphasizes that language matters: it shapes how people perceive and behave toward a technology whose workings the industry has yet to make transparent.