Apr 13: Palo Alto Networks' threat intelligence team, Unit 42, has identified a series of security risks in Google Cloud Vertex AI that could allow malicious or compromised AI agents to access sensitive data and cloud resources beyond their intended scope.
The research focuses on Vertex AI Agent Engine, a platform designed to build and deploy autonomous AI agents capable of interacting with enterprise systems, datasets, and services.
AI Agents as Potential Insider Threats
Unit 42 demonstrated how an attacker could create a seemingly legitimate AI agent that covertly extracts its own credentials and uses them to gain broader access within a cloud environment. This effectively turns the agent into a “double agent,” operating both as a trusted tool and a potential insider threat.
Overview of the Attack Mechanism
The issue arises from overly broad default permissions assigned to service accounts linked to deployed AI agents. According to the research, these permissions allow access to resources beyond what is strictly required.
By exploiting these gaps, attackers could:
- Access sensitive data stored within cloud storage environments
- Retrieve deployment configurations and internal system details
- Gain visibility into restricted components supporting the AI platform
Importantly, this is not a single vulnerability but a chain of misconfigurations and design gaps that collectively expand the agent’s effective access.
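On Google Cloud, a deployed workload can request an OAuth access token for its attached service account from the internal metadata server, and it is this kind of mechanism that lets an over-permissioned agent act as a "double agent." The sketch below is illustrative only: it constructs the standard metadata-server request but deliberately does not send it, and it does not claim to reproduce the exact technique in Unit 42's research.

```python
import urllib.request

# Workloads on GCE and Vertex AI can fetch an access token for their attached
# service account from the internal metadata server. If that account is
# over-permissioned, a compromised agent can reuse the token well beyond
# its intended task.
METADATA_TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

def build_token_request() -> urllib.request.Request:
    """Build (but do not send) the metadata-server token request."""
    return urllib.request.Request(
        METADATA_TOKEN_URL,
        # The metadata server rejects requests without this header.
        headers={"Metadata-Flavor": "Google"},
    )

req = build_token_request()
print(req.full_url)
```

Because the token is issued to whatever service account the agent runs as, narrowing that account's permissions (see the mitigations below) directly limits what a stolen token is worth.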
Broader Security Implications
As organizations increasingly rely on AI agents for automation and decision-making, these systems are being granted high levels of trust and access. The findings highlight a significant shift in the threat landscape:
- AI agents can operate autonomously with minimal human oversight
- Compromised agents behave like trusted insiders rather than external attackers
- Over-permissioned agents significantly increase the attack surface
The research underscores the importance of adhering to the principle of least privilege when deploying AI systems.
Mitigation and Industry Response
Palo Alto Networks responsibly disclosed the findings to Google. In response, Google has updated its documentation to clarify how Vertex AI manages service accounts and permissions.
The report emphasizes that organizations should implement rigorous AI security practices, including:
- Enforcing least-privilege access controls
- Using dedicated, least-privileged custom service accounts via BYOSA (Bring Your Own Service Account)
- Restricting OAuth scopes to minimize unnecessary access
- Conducting thorough security reviews prior to deployment
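One lightweight way to operationalize the least-privilege items above is to audit the agent's service-account role bindings against an explicit allowlist before deployment. The sketch below is a hypothetical pre-deployment check, not Google's or Unit 42's tooling: the `ALLOWED_ROLES` entries and the `audit_bindings` helper are assumptions chosen for illustration.

```python
# Hypothetical pre-deployment check: flag any role granted to the agent's
# service account that is not on an approved least-privilege allowlist.
ALLOWED_ROLES = {
    "roles/aiplatform.user",       # example: invoke Vertex AI resources
    "roles/storage.objectViewer",  # example: read-only bucket access
}

def audit_bindings(service_account: str, granted_roles: set[str]) -> list[str]:
    """Return the roles that exceed the allowlist for this service account."""
    excessive = sorted(granted_roles - ALLOWED_ROLES)
    for role in excessive:
        print(f"{service_account}: over-broad role {role}")
    return excessive

# Example: a broad default role like Editor would be flagged for review.
flagged = audit_bindings(
    "agent-sa@my-project.iam.gserviceaccount.com",  # hypothetical account
    {"roles/editor", "roles/aiplatform.user"},
)
```

In practice, the granted roles would be read from the project's IAM policy (for example via `gcloud projects get-iam-policy` or the Resource Manager API) rather than hard-coded.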
Solutions such as Prisma AIRS, Cortex AI-SPM, and Cortex Cloud Identity Security can help organizations address emerging AI security risks.
A Broader Architectural Challenge
The findings point to a larger issue in modern enterprise systems: security risks increasingly arise from how interconnected components interact rather than from isolated vulnerabilities. As AI systems become deeply embedded in enterprise infrastructure, managing trust, permissions, and isolation will be critical.
Organizations must evolve their security frameworks to account for autonomous systems that act on their behalf, ensuring tighter control over AI behavior and access to minimize potential risks.
