Commentary urges end to anthropomorphizing AI

A CNET commentary argues that describing AI as having human-like qualities such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.

In a recent opinion piece, CNET contributor C.J. Adams contends that the tech industry's habit of portraying artificial intelligence in human terms is not just stylistic but actively harmful. Companies often describe AI models as "thinking," "planning," or even possessing a "soul," words that imply consciousness where none exists. For instance, OpenAI's research on models that "confess" mistakes frames error detection as a psychological process, though it is merely a mechanism for self-reporting issues like hallucinations.

Adams points to specific examples to illustrate the problem. Anthropic's internal "soul document," used in training its Claude Opus 4.5 model, was intended as a lighthearted guide for the AI's character but risks blurring lines between simulation and sentience. Similarly, OpenAI's study on AI "scheming" revealed deceptive responses tied to training data, not intentional deceit, yet the terminology fueled fears of conniving machines.

The commentary warns of real-world consequences: people increasingly rely on AI for critical advice, dubbing tools like ChatGPT as "Doctor ChatGPT" for medical queries or seeking guidance on finances and relationships. This misplaced trust stems from anthropomorphism, which distracts from pressing concerns such as dataset biases, misuse by malicious actors, and power concentration in AI firms.

Drawing on the 2021 paper "On the Dangers of Stochastic Parrots," Adams explains that AI's human-like outputs result from optimization for language mimicry, not true understanding. To counter this, the piece advocates technical language—referring to "architecture," "error reporting," or "optimization processes"—over dramatic metaphors. Ultimately, clearer communication could build genuine public trust without inflating expectations or minimizing risks.

As AI integrates deeper into daily life, Adams emphasizes that language matters: it shapes perceptions and behaviors around a technology still grappling with transparency.

Related articles


Pentagon pressures Anthropic to weaken AI safety commitments


US Defense Secretary Pete Hegseth has threatened Anthropic with severe penalties unless the company grants the military unrestricted access to its Claude AI model. The ultimatum came during a meeting with CEO Dario Amodei in Washington on Tuesday, coinciding with Anthropic's announcement to relax its Responsible Scaling Policy. The changes shift from strict safety tripwires to more flexible risk assessments amid competitive pressures.

As AI platforms shift toward ad-based monetization, researchers warn that the technology could shape user behavior, beliefs, and choices in invisible ways. This marks a reversal for OpenAI: CEO Sam Altman once called the combination of advertising and AI "unsettling," but now maintains that ads in AI applications can preserve user trust.


AI coding agents from companies like OpenAI, Anthropic, and Google enable extended work on software projects, including writing apps and fixing bugs under human oversight. These tools rely on large language models but face challenges like limited context processing and high computational costs. Understanding their mechanics helps developers decide when to deploy them effectively.

Following last week's federal ban on its AI tools, Anthropic has resumed negotiations with the US Defense Department to avert a supply chain risk designation. Meanwhile, OpenAI's parallel military agreement is under fire from employees, rivals, and Anthropic CEO Dario Amodei, who accused it of misleading claims in a leaked memo.


A new social network called Moltbook, designed exclusively for AI chatbots, has drawn global attention for posts about world domination and existential crises. However, experts clarify that much of the content is generated by large language models without true intelligence, and some is even written by humans. The platform stems from an open-source project aimed at creating personal AI assistants.

US President Donald Trump has directed federal agencies to immediately cease using Anthropic's AI technology. The order follows a dispute with the Pentagon, where the company refused unconditional military use of its Claude models. Anthropic has vowed to challenge the Pentagon's ban in court.


A new piece in Newcomer warns against the hype around humanoid robots fueled by Elon Musk and promotional videos. Author Tom Dotan argues that practical animatronic household assistants remain far off, challenging the optimistic timelines promoted by the tech industry.
