Three critical vulnerabilities in Anthropic's Claude Code AI tool exposed developers to machine takeover and credential theft by simply opening a malicious project repository. The flaws, which have been fixed, involved malicious configuration files executing commands without user consent. This incident highlights significant security risks in AI-powered development tools, which can introduce new attack surfaces into software supply chains.
Flaws in Claude Code Put Developers' Machines at Risk
The vulnerabilities highlight a big drawback to integrating AI into software development workflows and the potential impact on supply chains.
Three critical security vulnerabilities in Anthropic’s AI-powered coding tool, Claude Code, exposed developers to full machine takeover and credential theft simply by opening a project repository.
Anthropic fixed the issues after Check Point Research discovered the flaws and reported it to the company last year. Anthropic plans to introduce additional security features to harden the coding platform and, in the meantime, wants developers to use the latest version of Claude Code to ensure they are protected.
New Exposures
"These vulnerabilities in Claude Code highlight a critical challenge in modern development tools: balancing powerful automation features with security," Check Point researchers Aviv Donenfeld and Oded Vanunu, said in a blog post this week. "The ability to execute arbitrary commands through repository-controlled configuration files created severe supply chain risks, where a single malicious commit could compromise any developer working with the affected repository."
Two of the vulnerabilities are closely related and involve configuration files in a project repository executing commands without proper user consent. Anthropic has assigned a single identifier, CVE-2025-59536, for tracking both flaws. The other vulnerability CVE-2026-21852, affects Claude Code versions prior to 2.0.65 and allowed API credential theft via malicious project configurations.
Claude Code is a command-line coding tool that developers can use to generate and edit code, fix bugs, run shell commands, and automate tasks such as code testing. It's one among a fast-growing class of AI development tools that many organizations have begun using to accelerate software development. Common examples of similar tools include GitHub Copilot, Amazon CodeWhisperer, and OpenAI's Codex. Analysts have cautioned about the new attack surfaces that these tools can introduce because of how they operate with direct access to source code and with local files, and sometimes even with credentials within production environments. That's in addition to other risks associated with the tools themselves, such as hallucinations and the very real potential for them to generate insecure and vulnerable code.
Configuration Files as Attack Vector
One of the three vulnerabilities that Check Point discovered, CVE-2025-59356, involves a Claude Code feature called Hooks that allows developers to enforce consistent and pre-determined behavior — like code-formatting — at specific points in a project life cycle. Check Point researchers found it was relatively easy for a bad actor to introduce a malicious Hook command in Claude Code's configuration file on a project repository. When a developer subsequently opened the project containing the malicious Hook commands, those commands would execute automatically without the developer's notice or consent. Check Point developed an exploit for the vulnerability to show how an adversary could leverage it to gain remote access to a developer's terminal with all the privileges of the developer.
The second vulnerability, also tracked as CVE-2025-59536, is associated with Claude Code's Model Context Protocol (MCP) setting for connecting the coding platform with external services and tools. As with the Hooks feature, Check Point found that developers could configure MCP servers within a project repository using the associated configuration file. Check Point found that an adversary with access to the configuration file could set it to execute malicious commands even before a user warning appeared on the developer's screen.
The third vulnerability, CVE-2026-21852, was broader in scope because it allowed an adversary to harvest a developer's API key with no user interaction required. As with the other vulnerabilities, Check Point researchers found that by changing a setting in a project's configuration file they could intercept API-related communications between Claude Code and Anthropic's servers, route them to an attacker-controlled server, and log the API key before the user had even seen any warning dialog.
"The integration of AI into development workflows brings tremendous productivity benefits but also introduces new attack surfaces that weren't present in traditional tools," Donenfeld and Vanunu wrote. "Configuration files that were once passive data now control active execution paths. "