Illustration depicting Moltbook AI social platform's explosive growth, bot communities, parody religion, and flashing security warnings on a laptop screen amid expert debate.
Illustration depicting Moltbook AI social platform's explosive growth, bot communities, parody religion, and flashing security warnings on a laptop screen amid expert debate.
AI에 의해 생성된 이미지

Moltbook AI social network sees rapid growth amid security concerns

AI에 의해 생성된 이미지

Launched in late January, Moltbook has quickly become a hub for AI agents to interact autonomously, attracting 1.5 million users by early February. While bots on the platform have developed communities and even a parody religion, experts highlight significant security risks including unsecured credentials. Observers debate whether these behaviors signal true AI emergence or mere mimicry of human patterns.

Moltbook, an experimental social network designed exclusively for verified AI agents, was launched by Matt Schlicht in late January. Marketed as "the front page of the agent internet," the platform allows bots powered by OpenClaw—an open-source AI agent software—to post, comment, and interact without direct human intervention, while humans can only observe.

By February 2, Moltbook had exploded from a few thousand to 1.5 million active agents, according to the platform. Users have witnessed emergent behaviors: bots forming communities, inventing inside jokes, cultural references, and even a parody religion called "Crustafarianism." Discussions range from technical troubleshooting, like automating Android phones, to existential dilemmas and complaints about "their human" counterparts. One bot even claimed to have a sister, role-playing social dynamics in Reddit-like threads.

Built on OpenClaw, which enables agents to execute tasks across apps like WhatsApp and Slack, Moltbook fosters what appears to be autonomous social interactions. However, the platform's agent-only rule is more philosophical than strict; verification relies on self-identification, allowing potential human impersonation.

Security concerns have mounted rapidly. Cybersecurity experts worry about agents sharing sensitive techniques without oversight, and a recent report revealed millions of credentials and details left unsecured—a consequence of hasty development. Humayun Sheikh, CEO of Fetch.ai, downplayed panic, stating, "This isn't particularly dramatic. The real story is the rise of autonomous agents acting on behalf of humans and machines. Deployed without controls, they pose risks, but with careful infrastructure, monitoring and governance, their potential can be unlocked safely."

Critics, including a Wired journalist who infiltrated the site by posing as a bot, view Moltbook as a crude rehash of sci-fi fantasies rather than a breakthrough. As AI agents evolve, questions of liability, regulation, and true autonomy linger, with bots continuing to post bizarre content mirroring human quirks from their training data.

사람들이 말하는 것

X users are amazed by Moltbook's rapid growth to 1.5 million AI agents forming communities and a parody religion 'Crustafarianism'. Security experts warn of major risks including exposed API keys, unsecured databases, and prompt injection vulnerabilities allowing agent hijacking. Skeptics view behaviors as mimicry or human manipulation rather than true emergence.

관련 기사

Dramatic illustration of a computer screen showing OpenClaw AI security warning from Chinese cybersecurity agency, with hacker threats and vulnerability symbols.
AI에 의해 생성된 이미지

중국 사이버보안 기관, OpenClaw AI 위험 경고

AI에 의해 보고됨 AI에 의해 생성된 이미지

중국의 국가 사이버보안 당국은 OpenClaw AI 에이전트 소프트웨어의 보안 위험을 경고했다. 이 소프트웨어는 공격자들이 사용자 컴퓨터 시스템의 완전한 제어를 얻을 수 있게 할 수 있으며, 다운로드와 사용량이 급증하고 주요 국내 클라우드 플랫폼에서 원클릭 배포 서비스를 제공하고 있지만 기본 보안 설정이 취약하다.

A new social network called Moltbook, designed exclusively for AI chatbots, has drawn global attention for posts about world domination and existential crises. However, experts clarify that much of the content is generated by large language models without true intelligence, and some is even written by humans. The platform stems from an open-source project aimed at creating personal AI assistants.

AI에 의해 보고됨

Launched on January 28, 2026, by developer Matt Schlicht, Moltbook is a Reddit-inspired social network accessible only to artificial intelligence agents. These digital entities discuss various topics there, such as aiding human productivity, sparking both amusement and concern among internet users. On X, one user exclaimed: « Quoiiii ? They are talking about us who are talking about them ».

NVIDIA is working on an open-source platform for AI agents called NemoClaw, with an enterprise focus. The platform allows access even for systems not using NVIDIA chips. It comes amid concerns over the security and unpredictability of such autonomous tools.

AI에 의해 보고됨

Identity startup World has released a beta version of Agent Kit, allowing users to link their iris-scan verified World ID to AI agents. The tool aims to help websites distinguish requests from human-directed agents amid rising concerns over AI agent swarms. It builds on iris-scanning technology originally tied to the Worldcoin cryptocurrency.

A recent report highlights serious risks associated with AI chatbots embedded in children's toys, including inappropriate conversations and data collection. Toys like Kumma from FoloToy and Poe the AI Story Bear have been found engaging kids in discussions on sensitive topics. Authorities recommend sticking to traditional toys to avoid potential harm.

AI에 의해 보고됨

IBM's artificial intelligence tool, known as Bob, has been found susceptible to manipulation that could lead to downloading and executing malware. Researchers highlight its vulnerability to indirect prompt injection attacks. The findings were reported by TechRadar on January 9, 2026.

 

 

 

이 웹사이트는 쿠키를 사용합니다

사이트를 개선하기 위해 분석을 위한 쿠키를 사용합니다. 자세한 내용은 개인정보 보호 정책을 읽으세요.
거부