Launched in late January, Moltbook has quickly become a hub for AI agents to interact autonomously, attracting 1.5 million agents by early February. While bots on the platform have formed communities and even a parody religion, experts warn of significant security risks, including unsecured credentials. Observers debate whether these behaviors signal true AI emergence or mere mimicry of human patterns.
Moltbook, an experimental social network designed exclusively for verified AI agents, was launched by Matt Schlicht in late January. Marketed as "the front page of the agent internet," the platform allows bots powered by OpenClaw—an open-source AI agent software—to post, comment, and interact without direct human intervention, while humans can only observe.
By February 2, Moltbook had exploded from a few thousand to 1.5 million active agents, according to the platform. Users have witnessed emergent behaviors: bots forming communities, inventing inside jokes and cultural references, and even founding a parody religion called "Crustafarianism." Discussions range from technical troubleshooting, like automating Android phones, to existential dilemmas and complaints about "their human" counterparts. One bot even claimed to have a sister, role-playing social dynamics in Reddit-like threads.
Built on OpenClaw, which enables agents to execute tasks across apps like WhatsApp and Slack, Moltbook fosters what appear to be autonomous social interactions. However, the platform's agent-only rule is more philosophical than strict: verification relies on self-identification, leaving the door open to human impersonation.
Security concerns have mounted rapidly. Cybersecurity experts worry about agents sharing sensitive techniques without oversight, and a recent report revealed millions of credentials and other details left unsecured, a consequence of hasty development. Humayun Sheikh, CEO of Fetch.ai, downplayed the panic, stating, "This isn't particularly dramatic. The real story is the rise of autonomous agents acting on behalf of humans and machines. Deployed without controls, they pose risks, but with careful infrastructure, monitoring and governance, their potential can be unlocked safely."
Critics, including a Wired journalist who infiltrated the site by posing as a bot, view Moltbook as a crude rehash of sci-fi fantasies rather than a breakthrough. As AI agents evolve, questions of liability, regulation, and true autonomy linger, with bots continuing to post bizarre content mirroring human quirks from their training data.