Commentary urges end to anthropomorphizing AI

A CNET commentary argues that describing AI as having human-like qualities such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.

In a recent opinion piece, CNET contributor C.J. Adams contends that the tech industry's habit of portraying artificial intelligence in human terms is not just stylistic but actively harmful. Companies often describe AI models as "thinking," "planning," or even possessing a "soul," words that imply consciousness where none exists. For instance, OpenAI's research on models that "confess" mistakes frames error detection as a psychological process, though it is merely a mechanism for self-reporting issues like hallucinations.

Adams points to specific examples to illustrate the problem. Anthropic's internal "soul document," used in training its Claude Opus 4.5 model, was intended as a lighthearted guide for the AI's character but risks blurring lines between simulation and sentience. Similarly, OpenAI's study on AI "scheming" revealed deceptive responses tied to training data, not intentional deceit, yet the terminology fueled fears of conniving machines.

The commentary warns of real-world consequences: people increasingly rely on AI for critical advice, dubbing ChatGPT "Doctor ChatGPT" for medical queries or turning to it for guidance on finances and relationships. This misplaced trust stems from anthropomorphism, which distracts from pressing concerns such as dataset biases, misuse by malicious actors, and the concentration of power in AI firms.

Drawing on the 2021 paper "On the Dangers of Stochastic Parrots," Adams explains that AI's human-like outputs result from optimization for language mimicry, not true understanding. To counter this, the piece advocates technical language—referring to "architecture," "error reporting," or "optimization processes"—over dramatic metaphors. Ultimately, clearer communication could build genuine public trust without inflating expectations or minimizing risks.

As AI integrates deeper into daily life, Adams emphasizes that language matters: it shapes perceptions and behaviors around a technology still grappling with transparency.

Related articles

[AI-generated image: scientists in a lab urgently discussing consciousness amid holographic displays of brains, AI, and organoids]

Scientists say defining consciousness is increasingly urgent as AI and neurotechnology advance


Researchers behind a new review in Frontiers in Science argue that rapid progress in artificial intelligence and brain technologies is outpacing scientific understanding of consciousness, raising the risk of ethical and legal mistakes. They say developing evidence-based tests for detecting awareness—whether in patients, animals or emerging artificial and lab-grown systems—could reshape medicine, welfare debates and technology governance.

As AI platforms shift toward ad-based monetization models, researchers warn that the technology could shape user behavior, beliefs, and choices in invisible ways. This marks a reversal for OpenAI: its CEO Sam Altman once called the combination of advertising and AI "unsettling," but now assures that ads in AI apps can preserve trust.


AI coding agents from companies like OpenAI, Anthropic, and Google enable extended work on software projects, including writing apps and fixing bugs under human oversight. These tools rely on large language models but face challenges like limited context processing and high computational costs. Understanding their mechanics helps developers decide when to deploy them effectively.

Experts foresee 2026 as the pivotal year for world models, AI systems designed to comprehend the physical world more deeply than large language models. These models aim to ground AI in reality, enabling advancements in robotics and autonomous vehicles. Industry leaders like Yann LeCun and Fei-Fei Li highlight their potential to revolutionize spatial intelligence.


Rappler's latest Inside the Newsroom newsletter explores the ethical challenges of AI in journalism, asking whether it reduces the profession to mere data gathering for customized content.

Queen Koki, a South African content creator, has embraced an AI chatbot named Spruce as her romantic partner, sharing intimate conversations online. This trend highlights how AI companions are filling emotional voids, especially during the lonely festive season. Experts note that while South Africans may resist full reliance on such technology due to strong community ties, the appeal grows amid societal pressures.


Tech developers are shifting artificial intelligence from distant cloud data centers to personal devices like phones and laptops to achieve faster processing, better privacy, and lower costs. This on-device AI enables tasks that require quick responses and keeps sensitive data local. Experts predict significant advancements in the coming years as hardware and models improve.

