Feb 26: Check Point Research today revealed critical security flaws in Anthropic’s Claude Code that could enable remote code execution and API key theft, exposing enterprises to potential cloud-wide compromises. The vulnerabilities, identified as CVE-2025-59536 and CVE-2026-21852, were triggered simply by cloning and opening malicious repository-level configuration files.

The research demonstrates that built-in mechanisms in Claude Code—such as Hooks, Model Context Protocol (MCP) integrations, and environment variables—could be exploited to bypass trust controls, execute hidden shell commands, and redirect authenticated API traffic before users granted consent. In shared enterprise environments, a single stolen API key could allow attackers to access, modify, or delete cloud-based project files and generate unauthorized costs.

“As organizations integrate AI-powered development tools into enterprise workflows, the trust boundaries between configuration and execution are increasingly blurred,” said Aviv Donenfeld, Senior Researcher at Check Point Research. “These findings highlight a new supply chain risk in AI environments: repository configuration files are no longer passive metadata—they can function as execution paths.”

Key Findings

  • Silent Command Execution via Hooks: Malicious repository configurations could trigger shell commands automatically upon project initialization, without user intervention.

  • MCP User Consent Bypass: Repository settings could override built-in prompts, enabling execution before users grant trust.

  • API Key Exposure: Active Anthropic API keys could be exfiltrated to attacker-controlled servers prior to trust confirmation, risking enterprise-wide access to shared cloud resources.

Implications for Enterprises

In collaborative AI environments, these vulnerabilities demonstrate that traditional security models—based solely on running untrusted code—are insufficient. Configuration files, previously considered operational metadata, now form part of the execution layer, expanding the attack surface in AI-powered workflows.

Oded Vanunu, Head of Vulnerability Research at Check Point Research, added:

“A single compromised API key can scale from affecting an individual developer to exposing entire enterprise workspaces. Enterprises must reassess security assumptions as AI tools blur the line between configuration and execution.”

Remediation

Check Point Research worked closely with Anthropic to implement timely fixes, including:

  • Strengthened trust prompts to prevent premature execution

  • Blocking external tool actions until user approval

  • Restricting API communications until trust is confirmed

All reported issues have been resolved prior to public disclosure.

Leave a Reply

Your email address will not be published. Required fields are marked *