Dramatic illustration of a computer screen showing OpenClaw AI security warning from Chinese cybersecurity agency, with hacker threats and vulnerability symbols.
Dramatic illustration of a computer screen showing OpenClaw AI security warning from Chinese cybersecurity agency, with hacker threats and vulnerability symbols.
Изображение, созданное ИИ

Chinese cybersecurity agency warns of OpenClaw AI risks

Изображение, созданное ИИ

China's national cybersecurity authority has warned of security risks in the OpenClaw AI agent software, which could allow attackers to gain full control of users' computer systems. The software has seen rapid growth in downloads and usage, with major domestic cloud platforms offering one-click deployment services, but its default security configuration is weak.

OpenClaw is an AI agent software designed to execute computer tasks directly through natural language instructions, also known as Clawdbot or Moltbot. Developed by Austrian programmer Peter Steinberger, the software has quickly gained popularity on GitHub, with users nicknaming it 'lobster'. It is built to perform real-world operations, such as organizing desktops and processing data, but requires high system permissions, including access to local files, environment variables, and external APIs.

China's National Computer Network Emergency Response Technical Team/Coordination Center (CNCERT) posted a notice on its official social media account, highlighting that OpenClaw's default security configuration is weak, making affected systems vulnerable to exploitation. Key risks include attackers embedding hidden malicious instructions in web pages to trick the AI agent into revealing sensitive information, such as system keys; the software potentially misinterpreting user commands and accidentally deleting important data, including emails or core operational information; and some plugins identified as malicious, which could steal encryption keys, install malware, or turn compromised devices into cyberattack tools.

The Ministry of Industry and Information Technology (MIIT)-run National Vulnerability Database (NVDB) issued six 'dos' and six 'don'ts' for OpenClaw users. Developed in collaboration with AI agent providers, vulnerability platform operators, and cybersecurity firms, the guidelines address risks in typical use cases. Dos include using the official latest version, minimizing internet exposure, granting only minimum necessary permissions, exercising caution with the third-party skill market, guarding against browser hijacking, and regularly checking for patch vulnerabilities. Don'ts include using outdated or third-party mirror versions, exposing AI agent instances to the internet, enabling administrator accounts during deployment, installing skill packs that require entering passwords, browsing unverified websites, and disabling detailed log auditing functions.

The NVDB also provided instructions on restricting internet access, scanning files, and uninstalling the software. Several medium- and high-severity vulnerabilities have been publicly disclosed in OpenClaw, which, if exploited, could lead to system compromise and theft of sensitive data, including personal files, payment information, and API keys. The software's rapid adoption signals AI's shift from conversation to action, but experts stress starting with limited permissions and gradually expanding access to balance convenience with security.

(Word count: 248)

Что говорят люди

X discussions focus on China's national cybersecurity agency's warnings about OpenClaw AI agent's weak default security, enabling attackers to gain full system control through prompt injection, malicious plugins, and vulnerabilities. Reactions highlight the irony of explosive adoption by governments and firms alongside bans in banks and state agencies, with users offering hardening tips, expressing privacy fears, and noting rapid AI experimentation despite risks.

Связанные статьи

Illustration depicting Moltbook AI social platform's explosive growth, bot communities, parody religion, and flashing security warnings on a laptop screen amid expert debate.
Изображение, созданное ИИ

Moltbook AI social network sees rapid growth amid security concerns

Сообщено ИИ Изображение, созданное ИИ

Launched in late January, Moltbook has quickly become a hub for AI agents to interact autonomously, attracting 1.5 million users by early February. While bots on the platform have developed communities and even a parody religion, experts highlight significant security risks including unsecured credentials. Observers debate whether these behaviors signal true AI emergence or mere mimicry of human patterns.

OpenClaw, an open-source AI project formerly known as Moltbot and Clawdbot, has surged to over 100,000 GitHub stars in less than a week. This execution engine enables AI agents to perform actions like sending emails and managing calendars on users' behalf within chat interfaces. Its rise highlights potential to simplify crypto usability while raising security concerns.

Сообщено ИИ

An open-source AI assistant originally called Clawdbot has rapidly gained popularity before undergoing two quick rebrands to OpenClaw due to trademark concerns and online disruptions. Created by developer Peter Steinberger, the tool integrates into messaging apps to automate tasks and remember conversations. Despite security issues and scams, it continues to attract enthusiasts.

Criminals have distributed fake AI extensions in the Google Chrome Web Store to target more than 300,000 users. These tools aim to steal emails, personal data, and other information. The issue highlights ongoing efforts to push surveillance software through legitimate channels.

Сообщено ИИ

Cybersecurity experts are increasingly alarmed by how artificial intelligence is reshaping cybercrime, with tools like deepfakes, AI phishing, and dark large language models enabling even novices to execute advanced scams. These developments pose significant risks to businesses in the coming year. Published insights from TechRadar underscore the scale and sophistication of these emerging threats.

Following IBM's recent findings on AI accelerating vulnerability exploits, a TechRadar report warns that hackers are turning to accessible AI solutions for faster attacks, often trading off quality or cost. Businesses must adapt defenses to these evolving threats.

Сообщено ИИ

The cURL project, a key open-source networking tool, is ending its vulnerability reward program after a flood of low-quality, AI-generated reports overwhelmed its small team. Founder Daniel Stenberg cited the need to protect maintainers' mental health amid the onslaught. The decision takes effect at the end of January 2026.

 

 

 

Этот сайт использует куки

Мы используем куки для анализа, чтобы улучшить наш сайт. Прочитайте нашу политику конфиденциальности для дополнительной информации.
Отклонить