These 4 AI governance tips help counter shadow agents

Anton Chuvakin
Security Advisor, Office of the CISO
Marina Kaganovich
Executive Trust Lead, Office of the CISO
The real threat actor in 2026 isn't so much a shadowy figure in a hoodie as the supposedly helpful AI agent your lead developer just gave administrator access to automate what they called "some boring stuff." We are leaping from the era of shadow IT, through shadow AI, into the wild west of shadow agents: autonomous systems that do more than suggest text. They act, and not always as intended.
Where shadow IT was about data leakage (someone putting a spreadsheet where it shouldn't be), shadow AI added intellectual property exposure and hallucinations to the mix. Shadow agents go even further by introducing autonomous action, goal hijacking, and remote code execution (RCE) risks.
Historically, banning new technology rarely succeeds, and banning AI is even riskier. Prohibiting the use of AI agents can backfire, as employees bypass blockades to find alternative and often less secure ways to automate tasks, such as by running shadow agents. These workarounds can create significant blind spots and make data access harder to monitor.
Part of mitigating shadow agent risk is embracing strong AI governance protocols. For agent security to be effective, security and business leaders should govern and deploy their agentic tools with the same rigor applied to human-managed accounts, cloud infrastructure, and enterprise software.
Figuring out how to empower teams with AI while maintaining security and compliance is a vital part of this equation, so it shouldn't come as a surprise that we recommend starting agent governance with identity and access management (IAM). IAM was already critically important before generative AI took off in 2023, and the rise of agents has only raised the stakes.
The catch is that agentic IAM is split into two primary models that sometimes mix with each other: Workload agents automate core enterprise business processes, while workforce agents improve employee productivity for individual tasks.
Despite the complications, there is a clear path forward for AI governance to strengthen agentic security. At Google Cloud’s Office of the CISO, we recommend the following four tips for updating, planning, and implementing governance for AI agents in the enterprise, drawing on core security principles and programmatic best practices.
1. Define the AI agent’s sphere of influence
Before an agent does anything, rigorously define its operational scope using agent permission controls, outlining what it can do (the APIs it can call, the systems it can touch, the data it can modify) and where it can take those actions (dev, test, and production environments).
This approach requires extending the concept of least privilege by dynamically aligning the agent’s permissions with its specific purpose and current user intent. Alignment measures should help ensure agents behave consistently with the principal’s values, intentions, and interests.
All agents also should have defined, unambiguous goals and measurable success criteria that are tailored to their specific roles to help ensure they operate within your defined ethical and compliance boundaries. Complex, multi-agent orchestration may introduce compounded risks stemming from interactions between probabilistic components.
To design multi-agent systems that can help prevent potential cascading failures, we recommend identifying integration points and implementing runtime policy enforcement and mechanisms, such as rollback infrastructure, that can automatically halt AI operations across systems when unexpected behavior is detected.
Manage and regulate component interactions through policy-as-code. On Google Cloud, you can use Policy Controller and Config Connector to help enforce these policies as code and govern component interactions deterministically. Agent autonomy risks may cause unforeseen harm, so agent powers should be limited.
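The automatic-halt idea above can be sketched as a simple circuit breaker: when any step in a multi-agent pipeline reports unexpected behavior, downstream actions stop and completed steps are rolled back in reverse order. Class and method names are illustrative, not a real framework.

```python
# Illustrative circuit breaker for a multi-agent pipeline. Each completed
# step registers an undo action; an anomaly halts the pipeline and unwinds
# those actions, newest first.
class AgentCircuitBreaker:
    def __init__(self):
        self.halted = False
        self._rollbacks = []  # (description, undo_fn), in completion order

    def record_step(self, description, rollback_fn):
        """Register the undo action for a step that just completed."""
        self._rollbacks.append((description, rollback_fn))

    def check(self, anomaly_detected: bool) -> bool:
        """Returns True if the pipeline may continue. On the first anomaly,
        halt and roll back all completed steps in reverse order."""
        if anomaly_detected and not self.halted:
            self.halted = True
            for _description, undo in reversed(self._rollbacks):
                undo()
        return not self.halted
```

A production version would also emit audit events for each rollback and pause peer agents, but the control flow is the same: one anomaly signal stops every dependent action.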
Misconfigurations in access provisioning continue to be a major pain point, one that will only be amplified when organizations must manage both human and AI agent access rights. Our minimum viable secure platform checklist offers specific guidance on how to reduce and eliminate misconfiguration vulnerabilities.
2. Establish agent ID with clear attribution
Think of your AI agent as a new employee who learns really fast but has zero inherent loyalty and even less common sense. You wouldn't give a new hire the keys to the kingdom, so agents need tightly scoped identities.
Grant only the permissions that agents absolutely need for their tasks, and for the shortest required time. Every action performed by an agent must be unequivocally attributable to that agent instance. You want to avoid anonymous agents and abandoned agents running amok.
Note that this does not mean that an AI agent is treated like a human employee in IAM; an agent identity carries both human user and workload characteristics. In Google Cloud environments, use Workload Identity Federation and granular IAM roles to achieve this scoped, non-human identity for your agents.
When multiple agents (and humans) interact with systems, you should know who did what for accountability and incident response. We encourage granular IAM controls for agentic AI.
We also recommend assigning unique identifiers to agents, also known as agent IDs, to help trace all actions, decisions, and outcomes back to the responsible entity — human or agent — for auditability. Identity policies should clearly attribute agent actions through composite identities, linking the agent to the human user directing it. Resource access should be confirmed and attributable back to the human user.
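A composite identity can be as simple as an audit log entry that binds the agent instance to the human principal directing it, so every action is attributable to both. This is a minimal sketch; the field names (`agent_id`, `on_behalf_of`) are illustrative, not a defined log schema.

```python
import json
from datetime import datetime, timezone

def audit_record(agent_id: str, human_principal: str,
                 action: str, resource: str) -> str:
    """Emit one structured log line linking an agent action to both the
    unique agent instance and the human it acts on behalf of.
    Field names are illustrative, not a standard schema."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,             # unique per agent instance
        "on_behalf_of": human_principal,  # the directing human principal
        "action": action,
        "resource": resource,
    })
```

With both identifiers on every record, incident responders can pivot either way: find everything one agent instance did, or everything done in one person's name.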
3. Control resources, use rate limiting, and prevent excessive consumption
Implementing effective controls requires enforcing strict limitations on agent powers, aligning them dynamically with the agent's purpose and acceptable risk tolerance. These constraints should impose definable and enforceable maximum permission levels that govern access to compute resources, tool usage, and the frequency of external interaction.
Strict agent limitations are essential to prevent excessive financial costs and system disruption. To keep agentic AI systems from unexpectedly driving up energy and server costs, we recommend implementing comprehensive governance structures to track cost drivers, including token consumption and resource use.
Without guardrails, an agent can inadvertently mount a denial-of-service (DoS) attack against your own resources, either by inflating cloud expenses through infinite loops or by overwhelming downstream enterprise systems.
AI application developers should establish clear bounds on how often agents act and manage agent spend by tracking message consumption. We also suggest establishing rate-limiting alerts that require human authorization beyond certain spend thresholds to better manage agentic financial risk.
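The combination of rate limiting and a human-authorization spend threshold can be sketched in a few lines. The `SpendGovernor` class, its limits, and the approval flag are all hypothetical, standing in for whatever quota and approval mechanisms your platform provides.

```python
# Illustrative spend governor: caps call frequency and blocks spend beyond
# a threshold until a human has explicitly approved it. Names and numbers
# are assumptions for the example, not a real product's API.
class SpendGovernor:
    def __init__(self, max_calls_per_minute: int, spend_threshold_usd: float):
        self.max_calls_per_minute = max_calls_per_minute
        self.spend_threshold_usd = spend_threshold_usd
        self.calls_this_minute = 0   # reset by a scheduler in real use
        self.spend_usd = 0.0
        self.human_approved = False  # set by a human reviewer, not the agent

    def authorize(self, estimated_cost_usd: float) -> bool:
        """Allow an agent action only if it stays within the rate limit and
        either under the spend threshold or explicitly human-approved."""
        if self.calls_this_minute >= self.max_calls_per_minute:
            return False  # rate limit exceeded
        projected = self.spend_usd + estimated_cost_usd
        if projected > self.spend_threshold_usd and not self.human_approved:
            return False  # crossing the threshold needs human sign-off
        self.calls_this_minute += 1
        self.spend_usd = projected
        return True
```

The key design choice is that the agent can never grant itself headroom: only a human flipping `human_approved` (in practice, an out-of-band approval workflow) unlocks spend past the threshold.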
4. Shift detection to infer and interrupt
Strong AI governance creates an opportunity for security teams to move away from a reactive "detect and respond" posture, traditionally measured by mean time to detect and mean time to respond, to a proactive "infer and interrupt" model. Because agents can handle alert triage in seconds, the health of a security program no longer relies solely on speed metrics.
Instead, AI encourages the human defender’s role to evolve into that of a strategic validator, a human over the loop who has been freed from triage toil to focus on critical tasks that require a human analyst’s insight, such as forensics and high-stakes decisions.
This new detection strategy should be behavioral and risk-based, using hypothesis-driven cognitive agents to build risk profiles through layers of contextual stacking. These agents can use composite detection concepts to significantly reduce SOAR volume by associating detections within time windows, and combining them with correlation logic before escalating to human responders.
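The time-window correlation idea can be illustrated with a small grouping function: raw detections are bucketed per entity into fixed windows, and only a window that accumulates several distinct signals escalates to a human. This is a sketch of the composite-detection concept under assumed inputs, not any specific SOAR product's API.

```python
from collections import defaultdict

def correlate_detections(events, window_seconds=300, min_signals=3):
    """Group raw detections into per-entity time windows and escalate only
    windows with several distinct signals, reducing the volume that reaches
    human responders. `events` is an iterable of
    (timestamp_seconds, entity, signal_name) tuples (an assumed shape)."""
    windows = defaultdict(set)  # (entity, window_index) -> distinct signals
    for ts, entity, signal in events:
        windows[(entity, int(ts // window_seconds))].add(signal)
    return [
        {"entity": entity, "window": idx, "signals": sorted(signals)}
        for (entity, idx), signals in windows.items()
        if len(signals) >= min_signals
    ]
```

Single, isolated signals stay below the escalation bar; a burst of distinct anomalies from one agent within one window surfaces as a single composite alert instead of many raw ones.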
Get started with governance
Navigating the complexities of agentic AI requires security and governance teams to take a proactive, strategic approach based in education and empowerment. By developing clear governance policies, establishing a culture of accountability, and providing robust training on the specific risks of agentic AI — like unintended data exposure, compliance violations, and process vulnerabilities — organizations can transform a potential threat into a powerful asset.
To learn more about implementing strong AI governance, check out our in-depth report on delivering secure and trusted AI.