New AI Security Threat: OpenClaw Vulnerabilities Raise Concerns for Businesses

China Warns About Security Risks in OpenClaw Autonomous AI Agent

China’s national cybersecurity authority has raised concerns about potential security issues linked to OpenClaw, an open-source autonomous AI agent previously known as Clawdbot and Moltbot.

The alert came from the China National Computer Network Emergency Response Technical Team (CNCERT), which warned that the software’s weak default security settings could allow attackers to compromise systems running the AI tool.

According to CNCERT, OpenClaw’s design gives it significant system privileges so it can perform automated tasks for users. While that capability makes the tool powerful, it also creates opportunities for cybercriminals if the system is not properly secured.

Prompt Injection Attacks Pose a Major Risk

One of the biggest threats highlighted by researchers involves prompt injection attacks. In these attacks, malicious instructions hidden within web pages or online content trick an AI agent into performing unintended actions.

If OpenClaw accesses such content, it may unintentionally reveal sensitive data or execute harmful instructions.

Security experts describe this technique as indirect prompt injection (IDPI) or cross-domain prompt injection (XPIA). Instead of attacking the AI system directly, attackers embed instructions in external content that the AI later reads.

This type of manipulation can be used for several purposes, including:

  • Extracting confidential information
  • Manipulating AI-driven decisions such as hiring recommendations
  • Bypassing automated moderation or ad review systems
  • Spreading misleading information or suppressing negative reviews online
  • Conducting SEO manipulation campaigns

AI Agents Create New Cybersecurity Challenges

AI systems capable of browsing the internet and taking actions on behalf of users introduce new security risks.

In a recent security discussion, OpenAI noted that as AI agents gain the ability to interact with websites, retrieve information, and perform automated tasks, attackers will increasingly try to exploit those capabilities through social engineering and prompt manipulation.

Messaging App Feature Could Enable Data Exfiltration

Researchers from AI security firm PromptArmor recently demonstrated how OpenClaw could be abused through the link preview feature found in messaging platforms such as Telegram and Discord.

In this scenario, attackers manipulate the AI agent into generating a specially crafted link. When the link appears as a preview in a messaging app, the preview system may automatically request the link’s metadata.

That automatic request can silently send sensitive information to an attacker-controlled server without the user even clicking the link.

This means confidential data could be exposed the moment the AI agent generates the message.

Additional Security Concerns Identified

CNCERT also highlighted several other potential dangers associated with OpenClaw:

1. Accidental Data Loss

Because OpenClaw executes tasks autonomously, it may misinterpret instructions and delete important files or system data permanently.

2. Malicious AI Skills

Attackers could upload harmful plugins or “skills” to repositories such as ClawHub. If users install them, these components could execute arbitrary commands or install malware.

3. Exploitable Software Vulnerabilities

Recently discovered flaws in the OpenClaw platform could allow attackers to gain system access and extract sensitive information.

CNCERT warned that the consequences could be severe for industries that rely heavily on digital infrastructure.

Critical sectors such as finance, energy, and telecommunications could face major operational disruptions if attackers exploit these weaknesses.

Government Restrictions on OpenClaw Use

Due to the security concerns, Chinese authorities have reportedly instructed government agencies and state-owned companies to avoid installing OpenClaw AI tools on office computers.

Reports also indicate that restrictions may extend to computers used by families of military personnel, reflecting concerns about potential information leaks.

Malware Campaigns Target OpenClaw Users

The growing popularity of OpenClaw has also attracted cybercriminals who are attempting to distribute malware disguised as legitimate installers.

Security researchers found malicious GitHub repositories pretending to provide OpenClaw installation files. These fake packages were designed to install well-known information-stealing malware such as:

  • Atomic Stealer
  • Vidar Stealer
  • GhostSocks, a proxy malware written in Go

Some of these malicious repositories appeared prominently in search results, making it easier for unsuspecting users to download infected files.

Recommended Security Measures

Cybersecurity experts recommend several steps for organizations using OpenClaw or similar AI agents:

  • Restrict external access to management ports
  • Run AI agents inside isolated containers or sandboxed environments
  • Avoid storing credentials in plaintext files
  • Install plugins or skills only from trusted sources
  • Disable automatic updates for third-party skills
  • Regularly update the platform to patch vulnerabilities

The Growing Need for AI Security

As autonomous AI tools become more common in business environments, the need for strong security controls is increasing.

Without proper safeguards, AI agents designed to automate work could also become powerful tools for attackers seeking to steal data, manipulate systems, or disrupt operations.


Leave a Reply

Your email address will not be published. Required fields are marked *