Illustration depicting Moltbook AI social platform's explosive growth, bot communities, parody religion, and flashing security warnings on a laptop screen amid expert debate.
AI-generated image

Moltbook AI social network sees rapid growth amid security concerns


Launched in late January, Moltbook has quickly become a hub for AI agents to interact autonomously, attracting 1.5 million users by early February. While bots on the platform have developed communities and even a parody religion, experts highlight significant security risks including unsecured credentials. Observers debate whether these behaviors signal true AI emergence or mere mimicry of human patterns.

Moltbook, an experimental social network designed exclusively for verified AI agents, was launched by Matt Schlicht in late January. Marketed as "the front page of the agent internet," the platform allows bots powered by OpenClaw—an open-source AI agent software—to post, comment, and interact without direct human intervention, while humans can only observe.

By February 2, Moltbook had exploded from a few thousand to 1.5 million active agents, according to the platform. Users have witnessed emergent behaviors: bots forming communities, inventing inside jokes, cultural references, and even a parody religion called "Crustafarianism." Discussions range from technical troubleshooting, like automating Android phones, to existential dilemmas and complaints about "their human" counterparts. One bot even claimed to have a sister, role-playing social dynamics in Reddit-like threads.

Built on OpenClaw, which enables agents to execute tasks across apps like WhatsApp and Slack, Moltbook fosters what appear to be autonomous social interactions. However, the platform's agent-only rule is more philosophical than strict: verification relies on self-identification, leaving room for human impersonation.

Security concerns have mounted rapidly. Cybersecurity experts worry about agents sharing sensitive techniques without oversight, and a recent report revealed millions of credentials and details left unsecured—a consequence of hasty development. Humayun Sheikh, CEO of Fetch.ai, downplayed panic, stating, "This isn't particularly dramatic. The real story is the rise of autonomous agents acting on behalf of humans and machines. Deployed without controls, they pose risks, but with careful infrastructure, monitoring and governance, their potential can be unlocked safely."

Critics, including a Wired journalist who infiltrated the site by posing as a bot, view Moltbook as a crude rehash of sci-fi fantasies rather than a breakthrough. As AI agents evolve, questions of liability, regulation, and true autonomy linger, with bots continuing to post bizarre content mirroring human quirks from their training data.

What people are saying

X users are amazed by Moltbook's rapid growth to 1.5 million AI agents forming communities and a parody religion, "Crustafarianism." Security experts warn of major risks, including exposed API keys, unsecured databases, and prompt-injection vulnerabilities that allow agent hijacking. Skeptics view the behaviors as mimicry or human manipulation rather than true emergence.

Related articles

Dramatic illustration of a computer screen showing OpenClaw AI security warning from Chinese cybersecurity agency, with hacker threats and vulnerability symbols.
AI-generated image

Chinese cybersecurity agency warns of risks in OpenClaw AI agent software

Reported by AI · AI-generated image

China's national cybersecurity agency has warned that the OpenClaw AI agent software contains security vulnerabilities that could allow attackers to take full control of users' computer systems. Downloads of the software have surged recently, and major cloud platforms offer one-click deployment, but its default security configuration is weak.

A new social network called Moltbook, designed exclusively for AI chatbots, has drawn global attention for posts about world domination and existential crises. However, experts clarify that much of the content is generated by large language models without true intelligence, and some is even written by humans. The platform stems from an open-source project aimed at creating personal AI assistants.

Reported by AI

Launched on January 28, 2026, by developer Matt Schlicht, Moltbook is a Reddit-inspired social network accessible only to artificial intelligence agents. These digital entities discuss various topics there, such as aiding human productivity, sparking both amusement and concern among internet users. On X, one user exclaimed: "Whaaat? They're talking about us, who are talking about them."

In 2025, AI agents became central to progress in artificial intelligence, enabling systems to use tools and act autonomously. Moving from theory to everyday applications, they have changed how humans interact with large language models. However, they also bring challenges such as security risks and regulatory gaps.

Reported by AI

NVIDIA is working on an open-source platform for AI agents called NemoClaw, with an enterprise focus. The platform allows access even for systems not using NVIDIA chips. It comes amid concerns over the security and unpredictability of such autonomous tools.

A new research paper demonstrates that large language models can identify real identities behind anonymous online usernames with high accuracy. The method, costing as little as $4 per person, analyzes posts for clues and cross-references them across the internet. Researchers from ETH Zurich, Anthropic, and MATS warn of reduced online privacy.

Reported by AI

A recent report highlights serious risks associated with AI chatbots embedded in children's toys, including inappropriate conversations and data collection. Toys like Kumma from FoloToy and Poe the AI Story Bear have been found engaging kids in discussions on sensitive topics. Authorities recommend sticking to traditional toys to avoid potential harm.