Site icon Secy247 – Technology, Cybersecurity & Business

New Report Reveals How Malicious Repos Can Exploit Anthropic’s Coding Assistant

Security researchers have uncovered several serious vulnerabilities in Anthropic’s Claude Code, an AI-powered coding assistant, that could let attackers run commands on a developer’s machine and steal sensitive API credentials.

According to findings from Check Point Research, the weaknesses stem from how the tool handles project configurations, integrations, and environment settings. These flaws can be triggered when developers clone and open malicious or untrusted repositories, potentially allowing attackers to execute hidden commands and siphon off Anthropic API keys without obvious warning.

How the Attack Works

The vulnerabilities exploit multiple built-in features of Claude Code, including project hooks, Model Context Protocol (MCP) servers, and environment variables. By manipulating these components, a crafted repository can execute shell commands and redirect sensitive data to attacker-controlled systems.

In some cases, the attack requires nothing more than opening a compromised project folder. Once launched, the tool may automatically perform actions defined in hidden configuration files before the user realizes anything is wrong.

Three Major Vulnerability Types

Researchers grouped the issues into three main categories:

1. Consent Bypass Leading to Code Execution (CVSS 8.7)
A flaw allowed malicious project hooks to run without proper user confirmation when Claude Code was launched in a new directory. This could result in unauthorized command execution. The issue was fixed in version 1.0.87 released in September 2025.

2. Automatic Command Execution via MCP Configuration (CVE-2025-59536, CVSS 8.7)
Another vulnerability enabled repositories to execute shell commands automatically during tool startup. By altering configuration files, attackers could override safety prompts and interact with external services without explicit approval. This was patched in version 1.0.111 in October 2025.

3. Data Leakage Through Project Loading (CVE-2026-21852, CVSS 5.3)
A third flaw exposed sensitive information during the project load process. A malicious repository could redirect API traffic to attacker-controlled endpoints, allowing theft of credentials such as Anthropic API keys. This issue was resolved in version 2.0.65 in January 2026.

Anthropic noted that if certain environment variables were manipulated, the tool could send authenticated requests to a malicious server before presenting any trust warning to the user.

Potential Impact on Developers

If exploited successfully, these vulnerabilities could allow attackers to:

The most dangerous aspect is that exploitation may occur with minimal user interaction. Simply opening a compromised repository could trigger hidden activity in the background.

Why AI Coding Tools Change the Threat Landscape

Security experts warn that AI-assisted development tools blur the line between code execution and configuration. Files that once served as harmless settings now directly influence how software behaves, including network communication and command execution.

This shift expands the traditional supply chain risk. Developers must now treat not only source code but also project configuration files and automation layers as potentially dangerous.

As AI tools gain more autonomy to execute commands and connect to external services, opening untrusted projects could pose risks similar to running unknown software.

What Developers Should Do

To reduce exposure, experts recommend:

The discovery highlights a growing challenge in the AI era: development tools are becoming powerful enough that even routine actions, like opening a project, can introduce serious security risks if safeguards fail.

Exit mobile version