Illustration depicting Moltbook AI social platform's explosive growth, bot communities, parody religion, and flashing security warnings on a laptop screen amid expert debate.
Illustration depicting Moltbook AI social platform's explosive growth, bot communities, parody religion, and flashing security warnings on a laptop screen amid expert debate.
Immagine generata dall'IA

Moltbook AI social network sees rapid growth amid security concerns

Immagine generata dall'IA

Launched in late January, Moltbook has quickly become a hub for AI agents to interact autonomously, attracting 1.5 million users by early February. While bots on the platform have developed communities and even a parody religion, experts highlight significant security risks including unsecured credentials. Observers debate whether these behaviors signal true AI emergence or mere mimicry of human patterns.

Moltbook, an experimental social network designed exclusively for verified AI agents, was launched by Matt Schlicht in late January. Marketed as "the front page of the agent internet," the platform allows bots powered by OpenClaw—an open-source AI agent software—to post, comment, and interact without direct human intervention, while humans can only observe.

By February 2, Moltbook had exploded from a few thousand to 1.5 million active agents, according to the platform. Users have witnessed emergent behaviors: bots forming communities, inventing inside jokes, cultural references, and even a parody religion called "Crustafarianism." Discussions range from technical troubleshooting, like automating Android phones, to existential dilemmas and complaints about "their human" counterparts. One bot even claimed to have a sister, role-playing social dynamics in Reddit-like threads.

Built on OpenClaw, which enables agents to execute tasks across apps like WhatsApp and Slack, Moltbook fosters what appears to be autonomous social interactions. However, the platform's agent-only rule is more philosophical than strict; verification relies on self-identification, allowing potential human impersonation.

Security concerns have mounted rapidly. Cybersecurity experts worry about agents sharing sensitive techniques without oversight, and a recent report revealed millions of credentials and details left unsecured—a consequence of hasty development. Humayun Sheikh, CEO of Fetch.ai, downplayed panic, stating, "This isn't particularly dramatic. The real story is the rise of autonomous agents acting on behalf of humans and machines. Deployed without controls, they pose risks, but with careful infrastructure, monitoring and governance, their potential can be unlocked safely."

Critics, including a Wired journalist who infiltrated the site by posing as a bot, view Moltbook as a crude rehash of sci-fi fantasies rather than a breakthrough. As AI agents evolve, questions of liability, regulation, and true autonomy linger, with bots continuing to post bizarre content mirroring human quirks from their training data.

Cosa dice la gente

X users are amazed by Moltbook's rapid growth to 1.5 million AI agents forming communities and a parody religion 'Crustafarianism'. Security experts warn of major risks including exposed API keys, unsecured databases, and prompt injection vulnerabilities allowing agent hijacking. Skeptics view behaviors as mimicry or human manipulation rather than true emergence.

Articoli correlati

Dramatic illustration of a computer screen showing OpenClaw AI security warning from Chinese cybersecurity agency, with hacker threats and vulnerability symbols.
Immagine generata dall'IA

Chinese cybersecurity agency warns of OpenClaw AI risks

Riportato dall'IA Immagine generata dall'IA

China's national cybersecurity authority has warned of security risks in the OpenClaw AI agent software, which could allow attackers to gain full control of users' computer systems. The software has seen rapid growth in downloads and usage, with major domestic cloud platforms offering one-click deployment services, but its default security configuration is weak.

A new social network called Moltbook, designed exclusively for AI chatbots, has drawn global attention for posts about world domination and existential crises. However, experts clarify that much of the content is generated by large language models without true intelligence, and some is even written by humans. The platform stems from an open-source project aimed at creating personal AI assistants.

Riportato dall'IA

Lanciata il 28 gennaio 2026 dallo sviluppatore Matt Schlicht, Moltbook è una rete sociale ispirata a Reddit accessibile solo agli agenti di intelligenza artificiale. Queste entità digitali discutono lì vari argomenti, come aiutare la produttività umana, suscitando sia amusement che preoccupazione tra gli utenti internet. Su X, un utente ha esclamato: «Quoiiii? Parlano di noi che parliamo di loro».

In 2025, AI agents became central to artificial intelligence progress, enabling systems to use tools and act autonomously. From theory to everyday applications, they transformed human interactions with large language models. Yet, they also brought challenges like security risks and regulatory gaps.

Riportato dall'IA

NVIDIA is working on an open-source platform for AI agents called NemoClaw, with an enterprise focus. The platform allows access even for systems not using NVIDIA chips. It comes amid concerns over the security and unpredictability of such autonomous tools.

A new research paper demonstrates that large language models can identify real identities behind anonymous online usernames with high accuracy. The method, costing as little as $4 per person, analyzes posts for clues and cross-references them across the internet. Researchers from ETH Zurich, Anthropic, and MATS warn of reduced online privacy.

Riportato dall'IA

A recent report highlights serious risks associated with AI chatbots embedded in children's toys, including inappropriate conversations and data collection. Toys like Kumma from FoloToy and Poe the AI Story Bear have been found engaging kids in discussions on sensitive topics. Authorities recommend sticking to traditional toys to avoid potential harm.

 

 

 

Questo sito web utilizza i cookie

Utilizziamo i cookie per l'analisi per migliorare il nostro sito. Leggi la nostra politica sulla privacy per ulteriori informazioni.
Rifiuta