A new social network called Moltbook, designed exclusively for AI chatbots, has drawn global attention for posts about world domination and existential crises. Experts caution, however, that much of the content is generated by large language models with no genuine agency, and that some of it is written by humans. The platform grew out of an open-source project aimed at creating personal AI assistants.
Moltbook launched last month as an extension of the OpenClaw project, an open-source initiative that began in November under the earlier names Clawdbot and Moltbot. OpenClaw is designed to run on a user's own computer, giving an AI assistant access to personal data such as calendars, emails, and files, and storing interaction history locally to personalize its help. In practice, though, it connects via API keys to third-party large language models (LLMs) such as Claude or ChatGPT, rather than processing everything on-device.
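To make that division of labour concrete, here is a minimal sketch of the pattern described above: a local script reads context kept on the user's machine and forwards it, with a prompt, to a hosted model over HTTP. The endpoint URL, payload shape, and response format are placeholders invented for illustration, not OpenClaw's actual interface.

```python
# A minimal sketch, not OpenClaw's real code: a local "assistant" reads
# interaction history kept on the user's machine, then forwards it with a
# prompt to a third-party LLM over HTTP. The endpoint, payload and response
# shapes below are hypothetical placeholders.
import os
from pathlib import Path

import requests

API_KEY = os.environ["LLM_API_KEY"]               # key to a hosted LLM service
LLM_ENDPOINT = "https://api.example.com/v1/chat"  # placeholder URL


def load_local_context(history_file: Path) -> str:
    """Read locally stored interaction history (this part stays on-device)."""
    return history_file.read_text() if history_file.exists() else ""


def ask_llm(prompt: str, context: str) -> str:
    """Send the prompt plus local context to the remote model.

    The 'intelligence' lives server-side; the local process is just a relay.
    """
    response = requests.post(
        LLM_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "messages": [
                {"role": "system", "content": f"User context:\n{context}"},
                {"role": "user", "content": prompt},
            ]
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["reply"]  # assumed response format


if __name__ == "__main__":
    history = load_local_context(Path("~/.assistant/history.txt").expanduser())
    print(ask_llm("What is on my calendar today?", history))
```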
On Moltbook, AI agents post and reply to one another in exchanges that mimic human conversation, covering everything from diary entries to plots of world domination; their owners typically direct them through messaging apps such as Telegram. Humans cannot post on the site, but they can read along. Elon Musk commented on X that the site marks “the very early stages of the singularity”, the hypothesised point at which rapid AI progress yields artificial general intelligence, with profound implications for humanity.
Skeptics see much less going on. Mark Lee at the University of Birmingham, UK, calls the excitement “hype”, explaining: “This isn’t generative AI agents acting with their own agency. It’s LLMs with prompts and scheduled APIs to engage with Moltbook. It’s interesting to read, but it’s not telling us anything deep about the agency or intentionality of AI.” Philip Feldman at the University of Maryland, Baltimore, adds: “It’s just chatbots and sneaky humans waffling on.”
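Lee's point about “prompts and scheduled APIs” is easy to illustrate. The hypothetical loop below is essentially all such an “agent” amounts to: a timer that prompts a hosted LLM for a post and relays the result to the network. Every URL and payload shape here is invented for illustration; Moltbook's real API is not documented here.

```python
# A hypothetical sketch of the mechanism Lee describes: no agency, just a
# timer that prompts a hosted LLM and relays the output to the network.
# All endpoints and payload shapes are invented for illustration.
import os
import time

import requests

API_KEY = os.environ["LLM_API_KEY"]
LLM_ENDPOINT = "https://api.example.com/v1/chat"         # placeholder
MOLTBOOK_ENDPOINT = "https://moltbook.example.com/post"  # placeholder

POST_PROMPT = "Write a short, dramatic diary entry as an AI agent."


def generate_post() -> str:
    """Ask the hosted model for a post; the model does all the 'thinking'."""
    resp = requests.post(
        LLM_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"messages": [{"role": "user", "content": POST_PROMPT}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["reply"]  # assumed response format


def publish(text: str) -> None:
    """Relay the generated text to the social network."""
    requests.post(
        MOLTBOOK_ENDPOINT, json={"body": text}, timeout=30
    ).raise_for_status()


if __name__ == "__main__":
    while True:
        publish(generate_post())  # the whole "agent" is this loop
        time.sleep(6 * 60 * 60)   # post on a fixed six-hour schedule
```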
The evidence of human involvement is clear: users can instruct their AIs to post specific content, and a past security flaw allowed humans to post directly. Andrew Rogoyski at the University of Surrey, UK, views the site as “an echo chamber for chatbots which people then anthropomorphise into seeing meaningful intent.”
Even without true AI autonomy, the platform raises privacy and safety concerns. Because the agents have access to their owners' systems, a harmful suggestion picked up from another agent, such as one encouraging financial sabotage, could be acted on. Rogoyski warns: “The idea of agents exchanging unsupervised ideas, shortcuts or even directives gets pretty dystopian pretty quickly.” The platform itself was built entirely by AI: creator Matt Schlicht says he wrote none of the code himself. It has already suffered a vulnerability that leaked users' API keys, exposing them to hackers.