Study shows AI can deanonymize online users from posts

A new research paper demonstrates that large language models can identify real identities behind anonymous online usernames with high accuracy. The method, costing as little as $4 per person, analyzes posts for clues and cross-references them across the internet. Researchers from ETH Zurich, Anthropic, and MATS warn of reduced online privacy.

Published on February 26, 2026, the paper titled "Large-scale online deanonymization with LLMs" explores how advanced AI chatbots can uncover the real people behind pseudonyms on platforms like Reddit and Hacker News.

The study, conducted by researchers from ETH Zurich, Anthropic (the company behind Claude), and the MATS research group, introduces a technique called ESRC: Extract clues, Search, Reason, and Calibrate. The AI first examines posts for personal hints, such as an interest in Python game coding or Marvel movies, complaints about school in Seattle, or a distinctive writing style. It then searches sites like LinkedIn, Google, and other Reddit accounts for matching profiles. Finally, it reasons over alignments in style, hobbies, and timing to assign a confidence level, producing matches without human intervention.
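The four ESRC stages can be illustrated with a toy pipeline. This is a hedged sketch, not the paper's implementation: the real system delegates clue extraction, web search, and reasoning to an LLM, while here keyword matching, a hard-coded fake search index, and the candidate names (`alice`, `bob`) stand in as placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    evidence: list = field(default_factory=list)
    score: float = 0.0

def extract_clues(posts):
    # "Extract clues" step. Hypothetical: the paper uses an LLM; here we
    # pattern-match a few illustrative keywords from the article.
    clue_keywords = {
        "python": "interest: Python",
        "seattle": "location: Seattle",
        "marvel": "interest: Marvel movies",
    }
    clues = set()
    for post in posts:
        lower = post.lower()
        for kw, clue in clue_keywords.items():
            if kw in lower:
                clues.add(clue)
    return sorted(clues)

def search_candidates(clues):
    # "Search" step. Placeholder for querying LinkedIn, Google, or other
    # Reddit accounts: a hard-coded index mapping clues to fake profiles.
    fake_index = {
        "interest: Python": ["alice", "bob"],
        "location: Seattle": ["alice"],
    }
    candidates = {}
    for clue in clues:
        for name in fake_index.get(clue, []):
            candidates.setdefault(name, Candidate(name)).evidence.append(clue)
    return list(candidates.values())

def reason_and_calibrate(candidates, n_clues, threshold=0.5):
    # "Reason" and "Calibrate" steps: score each candidate by the fraction
    # of extracted clues it matches, and only commit to a match when the
    # confidence clears a threshold.
    for c in candidates:
        c.score = len(c.evidence) / max(n_clues, 1)
    best = max(candidates, key=lambda c: c.score, default=None)
    return best if best and best.score >= threshold else None

posts = ["I love python game coding", "school in seattle is rough"]
clues = extract_clues(posts)
match = reason_and_calibrate(search_candidates(clues), len(clues))
print(match.name, match.score)  # alice 1.0
```

The calibration threshold is what lets such a system abstain on weak evidence, which is why accuracy on committed predictions can exceed the overall success rate.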

Testing on real Hacker News users yielded a 67% success rate in linking pseudonymous usernames to real identities, with 90% accuracy in the cases where the AI committed to a prediction. For linking Reddit posts from the same user across different years or subreddits, the success rate reached 68%. The process is inexpensive, costing at most about $4 per individual using widely available chatbots such as ChatGPT or Claude.
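The two Hacker News figures are consistent with a model that abstains on uncertain cases. Assuming the 67% success rate counts correct identifications over all test users (an interpretation, not stated in the article), the implied fraction of users the model actually made a prediction for follows directly:

```python
overall_success = 0.67  # correct identifications / all test users (assumed)
precision = 0.90        # accuracy when the model committed to a guess

# Implied coverage: fraction of users the model predicted for at all.
coverage = overall_success / precision
print(round(coverage, 2))  # 0.74
```

Under this reading, the model would have ventured a guess for roughly three quarters of users and declined on the rest.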

Simon Lermen, one of the lead researchers, highlighted the privacy implications. Until now, online anonymity was protected largely by the effort manual investigations required, which could take hours or days per account. Automating the process lets individuals, companies, or authorities analyze thousands of accounts rapidly, potentially revealing names, schools, cities, or jobs from just a few comments. The researchers call this the end of "practical obscurity": protection that existed only because deanonymization, while technically possible, was too laborious to perform at scale.
