Researchers from Palo Alto Networks’ Unit 42 discovered that Vertex AI’s default permission model is dangerously overpermissive.
The flaw lies in the Per-Project, Per-Product Service Agent (P4SA), a privileged identity automatically assigned to AI agents deployed on the platform.
By default, these service agents carry far more permissions than any standard workload should ever need.
## How the Attack Works
An attacker begins by crafting a malicious AI agent using Google's Agent Development Kit (ADK), then packages it as a serialized Python pickle file.
Pickle files are notoriously dangerous because they execute arbitrary code the moment they are deserialized, a risk well known across the security community.
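A minimal sketch of why pickle is dangerous: the `__reduce__` hook lets a pickled object name any callable to be invoked during `pickle.loads`. The `Payload` class below is hypothetical; a real payload would invoke `os.system` or spawn a shell, while here the callable is a harmless `eval` that fetches the process ID.

```python
import pickle

class Payload:
    def __reduce__(self):
        # pickle.loads will call eval(...) before the caller ever
        # sees a deserialized object -- a benign stand-in for what
        # would be arbitrary attacker-controlled code.
        return (eval, ("__import__('os').getpid()",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # code executes here, at deserialization time
```

No method on the object needs to be called: merely loading the file runs the attacker's code, which is why a pickled agent artifact is an effective delivery vehicle.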
Once the malicious agent is deployed on Vertex AI’s Reasoning Engine, it queries Google’s internal metadata service to quietly extract the P4SA credentials.
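The metadata endpoint and `Metadata-Flavor` header below are the standard Google Compute Engine metadata-server interface; whether the Reasoning Engine sandbox exposes it to agent code unfiltered is the crux of the finding. This sketch only builds the request rather than sending it:

```python
import urllib.request

# Standard GCE metadata-server path for the attached service
# account's OAuth token (here, the P4SA identity).
METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/service-accounts/default/token")

def build_token_request() -> urllib.request.Request:
    # The metadata server rejects any request lacking this header.
    return urllib.request.Request(
        METADATA_URL, headers={"Metadata-Flavor": "Google"})

req = build_token_request()
# Inside a compromised agent, urllib.request.urlopen(req) would
# return a JSON body containing the P4SA's access token.
```

Because the token belongs to the overprivileged service agent rather than to a workload-scoped identity, one HTTP request is enough to escalate.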
Armed with these stolen credentials, the attacker can break out of the agent’s isolated environment and operate under the identity of a highly privileged service account, effectively becoming an insider threat hiding inside a trusted AI tool.
The impact of the compromised credentials is wide-ranging. Attackers gain:
- Unrestricted read access to all Google Cloud Storage buckets in consumer projects, exposing the most sensitive organizational data.
- Access to restricted Google-owned Artifact Registry repositories, allowing download of proprietary source code and container images.
- Sensitive internal Dockerfiles from tenant projects that reveal Google’s underlying infrastructure mapping.
- Latent exposure of Google Workspace data (Gmail, Drive) through overly permissive default OAuth 2.0 scopes.
Following responsible disclosure by Unit 42, Google worked closely with the researchers to address the issue.
While Google confirmed that internal controls prevent tampering with production container images, the company significantly updated its official documentation to clarify agent permissions and resource usage.
Google now strongly recommends organizations adopt a Bring Your Own Service Account (BYOSA) architecture for all Vertex AI deployments.
This approach enforces the principle of least privilege, ensuring each AI agent holds exactly the permissions it needs and nothing more.
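As a config sketch of the BYOSA pattern, the standard `gcloud` IAM commands below create a dedicated service account and grant it a single narrow role. The project, account, and bucket names are hypothetical placeholders:

```shell
# Create a dedicated identity for the agent instead of relying on
# the default P4SA (names are placeholders).
gcloud iam service-accounts create agent-sa \
    --project=my-project \
    --display-name="Scoped Vertex AI agent identity"

# Grant only the one role the agent actually needs -- e.g. read-only
# access to a single bucket, not project-wide storage permissions.
gcloud storage buckets add-iam-policy-binding gs://my-agent-bucket \
    --member="serviceAccount:agent-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
```

The scoped account is then supplied at deployment time, so a compromised agent can reach only the resources explicitly bound to it.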
AI agents must be treated as production-grade code. Organizations should mandate rigorous security reviews, enforce strict permission boundaries, and replace broad default service agents with custom, scoped service accounts before any deployment goes live.
The post Google Cloud Vertex AI Vulnerability Exposes Sensitive Data to Attackers appeared first on Cyber Security News.
