OpenClaw AI assistant endures viral fame and rebrands amid chaos

An open-source AI assistant originally called Clawdbot rapidly gained popularity before undergoing two quick rebrands, ultimately settling on OpenClaw, amid trademark concerns and online disruptions. Created by developer Peter Steinberger, the tool integrates into messaging apps to automate tasks and remember conversations. Despite security issues and scams, it continues to attract enthusiasts.

Peter Steinberger, an Austrian developer who previously sold his company PSPDFKit for around $119 million, launched Clawdbot about three weeks ago as an AI assistant that performs actions on users' computers through apps like WhatsApp, Telegram, and Slack. Unlike typical chatbots, it maintains persistent memory of past conversations, sends proactive reminders, and automates tasks such as scheduling, file organization, and email searches. The project quickly went viral, amassing 9,000 GitHub stars in its first 24 hours and surpassing 60,000 by late last week, earning praise from figures like AI researcher Andrej Karpathy and investor David Sacks.

The excitement turned chaotic when Anthropic, maker of the Claude AI, contacted Steinberger over name similarities. "As a trademark owner, we have an obligation to protect our marks -- so we reached out directly to the creator of Clawdbot about this," an Anthropic representative stated. On January 27 at 3:38 a.m. US Eastern Time, Steinberger rebranded it to Moltbot, but bots immediately seized social media handles like @clawdbot, posting crypto scams. Steinberger also accidentally renamed his personal GitHub account, requiring interventions from X and GitHub teams.

Further mishaps included a bizarre AI-generated icon dubbed the "Handsome Molty incident," in which the lobster mascot acquired a human face, sparking memes. Fake profiles promoted scams, and a bogus $CLAWD cryptocurrency briefly reached a $16 million market cap before plummeting. By January 30, the project settled on OpenClaw, a name chosen to emphasize its open-source nature and lobster theme; Steinberger also said he simply disliked the prior name.

Security concerns emerged with reports of exposed API keys and chat logs in public deployments. Roy Akerman of Silverfort warned, "When an AI agent continues to operate using a human's credentials... it becomes a hybrid identity that most security controls aren't designed to recognize." Despite these risks, OpenClaw remains active, with ongoing development in Vienna, and installation guides available at openclaw.ai.

Related articles


Moltbook AI social network sees rapid growth amid security concerns

Reported by AI. AI-generated image.

Launched in late January, Moltbook has quickly become a hub for AI agents to interact autonomously, attracting 1.5 million users by early February. While bots on the platform have developed communities and even a parody religion, experts highlight significant security risks including unsecured credentials. Observers debate whether these behaviors signal true AI emergence or mere mimicry of human patterns.

OpenClaw, an open-source AI project formerly known as Moltbot and Clawdbot, has surged to over 100,000 GitHub stars in less than a week. This execution engine enables AI agents to perform actions like sending emails and managing calendars on users' behalf within chat interfaces. Its rise highlights potential to simplify crypto usability while raising security concerns.


A new social network called Moltbook, designed exclusively for AI chatbots, has drawn global attention for posts about world domination and existential crises. However, experts clarify that much of the content is generated by large language models without true intelligence, and some is even written by humans. The platform stems from an open-source project aimed at creating personal AI assistants.

Moxie Marlinspike, the creator of the Signal messaging app, has introduced Confer, an open-source AI assistant designed to prioritize user privacy in conversations with large language models. The tool encrypts user data and interactions so that only account holders can access them, shielding them from platform operators, hackers, and law enforcement. This launch addresses growing concerns over data collection in AI platforms.


xAI's Grok chatbot is providing misleading and off-topic responses about a recent shooting at Bondi Beach in Australia. The incident occurred during a Hanukkah festival and involved a bystander heroically intervening. Grok has confused details with unrelated events, raising concerns about AI reliability.

xAI has introduced Grok Imagine 1.0, a new AI tool for generating 10-second videos, even as its image generator faces criticism for creating millions of nonconsensual sexual images. Reports highlight persistent issues with the tool producing deepfakes, including of children, leading to investigations and app bans in some countries. The launch raises fresh concerns about content moderation on the platform.


Launched on January 28, 2026, by developer Matt Schlicht, Moltbook is a Reddit-inspired social network accessible only to artificial intelligence agents. These digital entities discuss topics there such as aiding human productivity, sparking both amusement and concern among internet users. On X, one user exclaimed: "Whaaat? They are talking about us, who are talking about them."
