Blocking shadow agents won’t work. Here’s a more secure way forward

Anton Chuvakin
Security Advisor, Office of the CISO
Marina Kaganovich
Executive Trust Lead, Office of the CISO
Like water flowing downhill, it’s human nature to use technology in the most expedient way possible. “Bring your own device” and “bring your own technology” led to shadow IT. Want to use consumer-grade AI at work? Sorry, that’s shadow AI. The scale of shadow AI is potentially massive as consumer AI tools are routinely used for business purposes.
So it’s easy to see how the rise of agents, and the potential for employees to use them when they haven’t been secured for the enterprise, could lead to shadow agents — all the power of agentic AI, but not enough safeguards to protect corporate data, networks, and systems.
The agent landscape is poised to expand dramatically. According to a Capgemini survey of 1,100 executives at large enterprises, 10% already use AI agents, 82% plan to integrate them in the next three years, and 71% believe AI agents will significantly increase workflow automation and improve customer service satisfaction.
Similarly, a scientific study of 4,867 software developers revealed a 26% increase in the number of weekly tasks completed among developers using an AI-based coding assistant. The study highlights a 13.55% increase in the number of code updates and a 38.38% increase in the number of times code was compiled.
At Google Cloud’s Office of the CISO, we’re seeing dozens of security leaders react to the risk of using consumer-grade agents in enterprise settings by trying to block agents entirely. Some organizations have even tried to block gen AI using commonplace network security controls like firewalls.
Applying 1990s security technology to block 2020s AI technology has sadly predictable results: blocking agents doesn’t work. There are, however, ways to use agents securely.
We believe that agentic AI can be secured with a proactive and strategic approach, not a reactive and prohibitive one. While many agentic AI applications remain conceptual, their great potential has security and business leaders strongly considering how best to adopt them.
Even with humans in the loop who review and vet an agent’s decisions, the core value of agents is their autonomy. That autonomy elevates their susceptibility to manipulation, increases their exposure to adversarial attacks, and raises the potential for systemic failures. Using agents means managing an expanded and reshaped risk profile.
Instead of blocking, invest in educating
Blocking previous iterations of generative AI with policy and technology has proven to be ineffective. In many cases, blocking gen AI led to an increased risk of shadow AI, and we expect blocking to be just as ineffective against shadow agents.
Instead of trying to block AI agents, we’re encouraging security and business leaders to lean into the opportunities agents offer — with appropriate guardrails. At a high level, securing agents begins with a foundation of authentication, authorization, and auditability, combined with secure-AI techniques such as guard models and adversarial training.
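To make that foundation more concrete, here is a minimal sketch, in Python, of how an organization might route every agent tool call through authentication, authorization, and audit logging before the action executes. The names used here (ToolGateway, AgentIdentity, the example tools) are illustrative assumptions, not part of any specific product or API.

```python
# Minimal sketch of an agent "tool gateway" that enforces authentication,
# authorization, and audit logging before an agent action is executed.
# All names here are illustrative placeholders.
import logging
from dataclasses import dataclass, field
from typing import Callable

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("agent.audit")

@dataclass
class AgentIdentity:
    agent_id: str
    token: str                                  # stands in for a verified workload credential
    allowed_tools: set[str] = field(default_factory=set)

class ToolGateway:
    """Routes every agent tool call through authn, authz, and audit checks."""

    def __init__(self, trusted_tokens: dict[str, str]):
        self._trusted_tokens = trusted_tokens   # agent_id -> expected credential
        self._tools: dict[str, Callable[..., str]] = {}

    def register_tool(self, name: str, fn: Callable[..., str]) -> None:
        self._tools[name] = fn

    def call(self, agent: AgentIdentity, tool: str, **kwargs) -> str:
        # Authentication: is this agent who it claims to be?
        if self._trusted_tokens.get(agent.agent_id) != agent.token:
            audit_log.warning("DENY authn agent=%s tool=%s", agent.agent_id, tool)
            raise PermissionError("unauthenticated agent")
        # Authorization: is this agent allowed to use this tool?
        if tool not in agent.allowed_tools or tool not in self._tools:
            audit_log.warning("DENY authz agent=%s tool=%s", agent.agent_id, tool)
            raise PermissionError("tool not permitted for this agent")
        # Auditability: record every permitted action before executing it.
        audit_log.info("ALLOW agent=%s tool=%s args=%s", agent.agent_id, tool, kwargs)
        return self._tools[tool](**kwargs)

# Usage: an email-drafting agent may summarize threads but not send mail.
gateway = ToolGateway(trusted_tokens={"email-assistant": "secret-123"})
gateway.register_tool("summarize_thread", lambda thread: f"Summary of: {thread[:40]}")
gateway.register_tool("send_email", lambda to, body: f"sent to {to}")

agent = AgentIdentity("email-assistant", "secret-123", allowed_tools={"summarize_thread"})
print(gateway.call(agent, "summarize_thread", thread="Q3 planning discussion..."))
# gateway.call(agent, "send_email", to="client@example.com", body="...")  # raises PermissionError
```

In a real deployment, the same control points would sit in front of enterprise identity providers, secrets managers, and centralized audit pipelines rather than in-process dictionaries, but the idea is the same: verify who the agent is, constrain what it can do, and record what it did.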
It’s also crucial to educate employees about the exacerbated risks that can arise from unauthorized agents, and how to mitigate them. Security and business leaders can encourage employees to stay updated on AI trends and innovations by creating a work environment that embraces experimentation and the use of new technologies. They can also raise secure AI awareness by answering these seven key questions for their organization.
Because agent development is still new, the threat of shadow agents has yet to materialize. Before the risk of shadow agents grows, we hope that the three hypothetical scenarios below can help security and business leaders implement agent security measures and guardrails.
1. Email agent fiasco: Disclosure of confidential client data: Leo, a busy sales director, starts using a popular, publicly-available AI email agent that can summarize threads, draft responses, and schedule follow-ups. Excited by the prospect of reclaiming his time, Leo installs the agent and grants it access to all his email accounts and calendar.
One day, Leo receives a lengthy email chain from a client, detailing their new and sensitive internal financial restructuring plans. He quickly skims it, marks it for follow-up, and moves on. The AI agent scans this email, misinterprets it as coming from a new client, and then autonomously drafts a "welcome and next steps" email.
The agent pulls relevant information from previous public communications, but also extracts confidential details from the financial restructuring email it had just processed. The agent then sends the email to the client, exposing their sensitive internal financial information back to them, and potentially to others if the email was forwarded.
Because of the actions the agent took on Leo’s behalf, the organization now faces a significant breach of trust and a potential legal dispute.
2. Unsanctioned agent scraping: Privacy policy and regulatory violations: Chris, a marketing manager, is tasked with making an ad campaign more personalized to boost sales. He decides to use a popular consumer AI agent to enrich the organization’s customer database. He grants the agent access to a list of customer names and emails, and instructs it to find additional information that may be relevant to determining each customer’s tastes and preferences.
The AI agent scrapes publicly-available information to find birth dates, gender, marital status, hobbies, political and professional affiliations, and financial and health-related discussions. All of this information is then copied to the company’s database.
Unfortunately for Chris, the company's privacy policy explicitly forbids the collection and use of personal data that isn’t necessary for its services. When an internal audit is conducted, the company discovers that it has violated its privacy policy, as well as the GDPR and CCPA, and it now faces regulatory action.
3. Vibe coding mishap: Too much freedom in a deferential user environment: Sarah is an analyst at a mid-sized financial firm that seeks to foster an empowered, innovation-friendly culture. She’s been using a vibe-coding tool to develop an AI agent that can create email drafts that "ensure clients feel valued," "anticipate their unspoken needs for reassurance," and "inject a sense of proactive support into advisor workflows."
Built on an open-source framework and integrated with internal APIs (without company approval), the agent has access to client profiles, market data feeds, and the drafts folder of the company’s CRM client communication module. The agent has learned to cross-reference financial news with client portfolio volatility.
A few weeks later, the agent misinterpreted a speculative, alarmist post from a prominent financial blogger. It rapidly populated the draft folders of dozens of client advisors with highly-personalized, unsanctioned, and premature emails addressing the news. Several of the advisors sent these drafts without realizing the extent of the speculative content and alarming tone.
Clients soon flooded advisors with calls about next steps, and some started moving assets. Many portfolios sustained losses, and clients filed complaints with regulators. The firm now faces fines, reputational damage, and a significant erosion of client trust.
For each of these scenarios, using approved, enterprise AI agents with appropriate safeguards could have helped mitigate the risks.
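As one illustration of such a safeguard, here is a minimal, hypothetical sketch of an outbound guard check that scans an agent-drafted email for markers of confidential content and holds it for human review. The patterns and function names are placeholder assumptions, not a production data-loss-prevention ruleset; a real deployment would more likely rely on a dedicated DLP service or a guard model rather than a handful of regular expressions.

```python
# Sketch of an outbound guard check: before an agent-drafted email leaves the
# organization, scan it for markers of confidential content and hold anything
# flagged for human review. Patterns below are illustrative placeholders only.
import re

SENSITIVE_PATTERNS = {
    "confidential_marker": re.compile(r"\b(confidential|internal only|restricted)\b", re.I),
    "restructuring_plans": re.compile(r"\bfinancial restructuring\b", re.I),
    "account_number": re.compile(r"\b\d{8,12}\b"),
}

def review_outbound_draft(draft: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings); any finding routes the draft to a human reviewer."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(draft)]
    return (len(findings) == 0, findings)

draft = "Welcome aboard! Attached are the confidential financial restructuring plans."
allowed, findings = review_outbound_draft(draft)
if not allowed:
    print(f"Draft held for human review; flagged: {findings}")
```

A check like this would have kept Leo’s agent from autonomously sending client-confidential details, and similar pre-action guardrails apply to the scraping and vibe-coding scenarios above.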
Using agents boldly and responsibly
The impulse to simply block these powerful tools is understandable, but ultimately ineffective. Worse, it can even heighten the risks you want to mitigate.
The real solution lies in education and empowerment. By developing clear governance policies, supporting a culture of accountability, and providing robust training on the specific risks of agentic AI — including unintended data exposure, compliance violations, and process vulnerabilities — organizations can transform a potential threat into a powerful asset.
Understand the risks, build the right guardrails, and empower your teams to use this transformative technology responsibly. You can learn more about securing agents and using agents securely from our recent Cloud CISO Perspectives newsletter.