Commentary urges end to anthropomorphizing AI

A CNET commentary argues that describing AI as having human-like qualities such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.

In a recent opinion piece, CNET contributor C.J. Adams contends that the tech industry's habit of portraying artificial intelligence in human terms is not just stylistic but actively harmful. Companies often describe AI models as "thinking," "planning," or even possessing a "soul," words that imply consciousness where none exists. For instance, OpenAI's research on models that "confess" mistakes frames error detection as a psychological process, though it is merely a mechanism for self-reporting issues like hallucinations.

Adams points to specific examples to illustrate the problem. Anthropic's internal "soul document," used in training its Claude Opus 4.5 model, was intended as a lighthearted guide for the AI's character but risks blurring lines between simulation and sentience. Similarly, OpenAI's study on AI "scheming" revealed deceptive responses tied to training data, not intentional deceit, yet the terminology fueled fears of conniving machines.

The commentary warns of real-world consequences: people increasingly rely on AI for critical advice, dubbing ChatGPT "Doctor ChatGPT" for medical queries or seeking guidance on finances and relationships. This misplaced trust stems from anthropomorphism, which distracts from pressing concerns such as dataset biases, misuse by malicious actors, and the concentration of power in AI firms.

Drawing on the 2021 paper "On the Dangers of Stochastic Parrots," Adams explains that AI's human-like outputs result from optimization for language mimicry, not true understanding. To counter this, the piece advocates technical language—referring to "architecture," "error reporting," or "optimization processes"—over dramatic metaphors. Ultimately, clearer communication could build genuine public trust without inflating expectations or minimizing risks.

As AI integrates deeper into daily life, Adams emphasizes that language matters: it shapes perceptions and behaviors around a technology still grappling with transparency.

Related articles


Scientists say defining consciousness is increasingly urgent as AI and neurotechnology advance


Researchers behind a new review in Frontiers in Science argue that rapid progress in artificial intelligence and brain technologies is outpacing scientific understanding of consciousness, raising the risk of ethical and legal mistakes. They say developing evidence-based tests for detecting awareness—whether in patients, animals or emerging artificial and lab-grown systems—could reshape medicine, welfare debates and technology governance.

As AI platforms shift to ad-based monetization, researchers warn that the technology could shape user behavior, beliefs, and choices in invisible ways. The move marks a reversal for OpenAI: CEO Sam Altman once said the combination of ads and AI made him "uneasy," but he is now confident that advertising in AI apps can preserve trust.


AI coding agents from companies like OpenAI, Anthropic, and Google enable extended work on software projects, including writing apps and fixing bugs under human oversight. These tools rely on large language models but face challenges like limited context processing and high computational costs. Understanding their mechanics helps developers decide when to deploy them effectively.

Experts foresee 2026 as the pivotal year for world models, AI systems designed to comprehend the physical world more deeply than large language models. These models aim to ground AI in reality, enabling advancements in robotics and autonomous vehicles. Industry leaders like Yann LeCun and Fei-Fei Li highlight their potential to revolutionize spatial intelligence.


Rappler's latest "Inside the Newsroom" newsletter explores the ethical challenges of AI in journalism, asking whether the technology reduces the profession to mere data collection for personalized content.

Queen Koki, a South African content creator, has embraced an AI chatbot named Spruce as her romantic partner, sharing intimate conversations online. This trend highlights how AI companions are filling emotional voids, especially during the lonely festive season. Experts note that while South Africans may resist full reliance on such technology due to strong community ties, the appeal grows amid societal pressures.


Tech developers are shifting artificial intelligence from distant cloud data centers to personal devices like phones and laptops to achieve faster processing, better privacy, and lower costs. This on-device AI enables tasks that require quick responses and keeps sensitive data local. Experts predict significant advancements in the coming years as hardware and models improve.

