
Ask OCTO: New insights for managing and scaling enterprise agents

May 13, 2026
Will Grannis

VP and CTO, Google Cloud

Following Google Cloud Next '26, the focus has shifted from adopting AI agents to the operational work of governing, securing, and scaling them in production.


In our Ask OCTO column, experts from Google Cloud's Office of the CTO answer your questions about the business and IT challenges facing you and your organization now.

This month, we're looking back at Google Cloud Next '26 with Will Grannis and his team at the Office of the CTO. Fresh off three days in Las Vegas with 32,000 attendees, hundreds of announcements, and more customer conversations than anyone could count, we asked the team to name the takeaway that stuck with them: the thing they couldn't stop thinking about on the flight home.

Four themes emerged.

Agents are out. Now we have to govern them.

Agents have hit the mainstream. So has the anxiety.

Ben McCormack

Agents are everywhere. Every customer I spoke with at Next is running them in some form to drive business processes, and the excitement is real.

So is the FOMO. Many customers think they're behind. The reality is that almost everyone is in the same position. Adoption is broad, but plenty of questions remain unanswered. The questions on customers' minds were remarkably consistent across geographies and across the spectrum from digital natives to traditional enterprise. Everyone is looking for better ways to secure agents and know what they're actually doing. Customers expect some of this to come from the platform itself, and some of it to come from how the agent approaches a task.

European customers have one more layer to think through. Sovereignty came up in nearly every conversation. Customers want to understand the technology supply chain underneath their AI strategy.

The new insider risk

Troy Trimble

There's a lot of excitement and plenty of anecdotes about the benefits of agentic AI for products and processes. There's also palpable anxiety among business and technology leaders about adoption.

Here's why: agents are a new kind of insider risk. They need to be understood and controlled to meet expectations similar to those we have for human employees. The security models we have today were built around humans, not autonomous software making decisions on their behalf.

Customers know this. Many have already set up elaborate data firewalls to control the flow of information between the internet and their internal agents and head off any possibility of data exfiltration. They're improvising governance because the tooling has been catching up to the use case, and there's a clear expectation that Google Cloud will keep delivering the products and features needed to close that gap.
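The improvised "data firewall" pattern can be sketched in a few lines: an egress gate that screens what an internal agent is allowed to send out. This is an illustrative sketch only, not any Google Cloud product; the domain allowlist, the regex patterns, and the `egress_allowed` function are all hypothetical names invented for this example.

```python
import re

# Hypothetical egress policy: destinations must be allowlisted, and
# payloads are screened for obviously sensitive content before an
# agent's outbound call is permitted. Patterns are illustrative.
ALLOWED_DOMAINS = {"api.example-partner.com"}
SECRET_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # SSN-like identifiers
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # embedded credentials
]

def egress_allowed(domain: str, payload: str) -> bool:
    """Return True only if the destination is allowlisted and the
    payload matches none of the sensitive-data patterns."""
    if domain not in ALLOWED_DOMAINS:
        return False
    return not any(p.search(payload) for p in SECRET_PATTERNS)

# An agent's outbound tool call would be wrapped by this check:
egress_allowed("api.example-partner.com", "quarterly summary")  # permitted
egress_allowed("api.example-partner.com", "api_key=abc123")     # blocked
```

A real deployment would pair a gate like this with audit logging, so blocked attempts are visible as well as prevented.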

That's why two announcements from Next landed so hard. Agent Identity addresses a gap in how to reason about agents inside human-centric security models. Knowledge Catalog drew similar energy for a related reason. It ingests more data sources and powers business-specific context for agentic solutions, which is half the trust equation. The other half is knowing what the agent is actually doing.

Agents talking to agents

Patricia Florissi

Last month in this column, we said agents had left the building. They have. What's clear after Next is that once an agent is out, it interacts with other agents to accomplish tasks. Some of those interactions happen between agents inside the same enterprise. Some happen across organizational boundaries.

For that interaction to work, autonomously or not, three things have to be in place: governance, security, and observability. Customers were drawn to the Gemini Enterprise Agent Platform because it gives them a way to add those properties to agents at scale.
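Those three properties can be layered onto any agent action without touching its core logic. Here's a minimal sketch, assuming nothing about any real platform: the decorator, `agent_id`, and `allowed_actions` are invented names for illustration, with governance as a policy check and observability as an audit log.

```python
import functools
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

def governed(agent_id: str, allowed_actions: set):
    """Wrap an agent action with a policy check (governance) and an
    audit log tied to the agent's identity (security, observability)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(action: str, *args, **kwargs):
            if action not in allowed_actions:           # governance
                raise PermissionError(f"{agent_id} may not '{action}'")
            log.info("%s agent=%s action=%s",           # observability
                     datetime.now(timezone.utc).isoformat(),
                     agent_id, action)
            return fn(action, *args, **kwargs)
        return wrapper
    return decorator

@governed(agent_id="billing-agent", allowed_actions={"summarize"})
def run(action: str, doc: str) -> str:
    # Stand-in for the agent's real work.
    return f"{action}: {doc[:20]}"

run("summarize", "Q3 invoice batch")   # permitted and audited
```

The point of the pattern is that the same wrapper applies whether the caller is a human, another agent in the same enterprise, or an agent across an organizational boundary.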

Underneath the excitement is a real tension. Customers want to enable everyone in their organization to build agents and capitalize on generative AI. They're also genuinely concerned about agent sprawl and the vulnerability surface that sprawl creates. They want both democratization and control. Solving for one without the other isn't really solving the problem.

There's a cost dimension too. As the workforce becomes more hybrid, with humans and agents working alongside each other, customers are watching their generative AI spend grow in ways that are hard to predict. They're asking for architectural and problem-solving patterns that bring more determinism, especially in regulated environments where outputs need to be defensible.

It was striking to see customers pursuing massive migrations to Google Cloud and re-architecting their infrastructure to make their data Gen AI-ready. That investment comes from the realization that generative AI matters to the future of the business, and they need a foundation that can support it. They're also curious, and sometimes even anxious, to know whether they're behind. After keynotes and panels, customers would ask: are people already working like that?

Agents that keep learning on the job

Continuous learning is the real work

Michael Zimmermann

The need for agents to keep learning and improving after deployment is relentless. Most agents pushed to production aren't flawless, fully autonomous, or 100% accurate. They need human supervision, and that supervision is a compelling opportunity for agents to evolve and learn from human experts. Even a fully autonomous, flawless agent faces an environment that changes, so it needs to adapt, much like human employees do.

Human-in-the-loop can evolve into a pattern of human-agent collaboration, where the shape of human supervision changes. The agent shifts from simple HiTL toward extracting refined judgment from human experts and distilling their knowledge into context it carries forward. When a subject-matter expert (SME) teaches an agent, that teaching is data, and it is never forgotten. The agent learns from an example of one, and the immense public knowledge of the LLM fuses with the bespoke, unique knowledge of the enterprise, allowing agents to reach a high level of autonomy. The result of this never-ending loop is progressively less human toil, higher impact, and visible ROI.
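The SME-in-the-loop pattern reduces to a simple loop: expert corrections become persistent context the agent carries into future tasks. The sketch below is illustrative only; the `LearningAgent` class is a hypothetical name, the in-memory list stands in for a real context store, and in a real system the accumulated context would be fused with an LLM's general knowledge rather than string-formatted.

```python
class LearningAgent:
    """Toy agent that accumulates expert lessons as persistent context."""

    def __init__(self):
        self.context: list[str] = []   # distilled SME knowledge

    def answer(self, question: str) -> str:
        # Stand-in for an LLM call; accumulated expert context would be
        # supplied alongside the model's public knowledge.
        notes = " | ".join(self.context) or "no expert notes yet"
        return f"Q: {question} [context: {notes}]"

    def teach(self, lesson: str) -> None:
        # The SME's refined judgment is captured as data, never forgotten.
        self.context.append(lesson)

agent = LearningAgent()
agent.answer("How do we classify this invoice?")      # first pass, supervised
agent.teach("Invoices over $10k require category B")  # SME correction
agent.answer("How do we classify this invoice?")      # lesson carried forward
```

Each supervised correction shrinks the need for the next one, which is where the "progressively less human toil" in the loop comes from.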

Several Google Cloud announcements from Next start to compose this architecture pattern. The Gemini Enterprise Agent Platform provides building blocks for safe, secure agent learning in production with SME-in-the-loop. Underneath it, components like A2UI, context management, agent identity, the evaluation harness, ADK, and the agent runtime connect into a continuous learning fabric. Agents keep getting better while still doing their jobs.

From telegraphs to phones

Antonio Gulli

There's a lot happening right now around AI writing code and reshaping software engineering. Someone recently made a comparison that stuck with me: this shift is like going from telegraphy to modern phones. In the telegraph era, sending a message meant routing it through specialized operators. Now everyone communicates directly with everyone. We're seeing the same opening-up with code. Tools like Antigravity are part of that shift, and the question I keep hearing from customers is: what's the next domain where AI changes the work this dramatically?

Agents are no longer a sci-fi concept. While standard models keep getting smarter, agents are a separate category because they actually interact with the world. They have memory. They need personalization. They're constantly facing new situations. Their work is learning on the job: knowing when to use an existing skill versus when to pick up a new one on the fly. Customers are paying close attention to agentic simulation and to self-improving agents that get better just by doing the work.

There's also a clear movement toward the open agentic cloud. Open-source momentum is building, with Gemma at the center of it, and the infrastructure is already there: ADK for building and governing agents, and A2A so agents can communicate with each other. Customers get state-of-the-art technology to build with, plus the openness to choose how and where they run it. They get the strengths of managed services alongside the flexibility of open standards, and that combination is what customers tell us they want.

If telegraphs-to-direct-communication was step one, self-improving agents add the ability to think and adapt in real time. Building all of this on the open agentic cloud rounds out the picture. As these agents learn and grow more capable, the innovation stays with the community rather than behind a closed door.

AI moves into the physical world

Physical AI leaves the lab

Massimo Mascaro

Physical AI is leaving the lab. Companies are deploying robotics and generative AI together to handle reasoning and generalization in real-world environments. This kind of work wasn't viable in production a year ago.

The clearest proof point at Next came from WPP. The creative agency used a Boston Dynamics Spot robot for advanced filming, with the robot's complex movements trained entirely through reinforcement learning and NVIDIA Omniverse simulations on Google Cloud. On the new G4 VM instances, training that previously took 10 hours dropped to under one. Those same robots filmed the announcement video for the TPU 8 keynote.

The infrastructure picture underneath is worth understanding. G4 instances run on NVIDIA RTX PRO 6000 Blackwell GPUs, and we introduced fractional configurations so teams can right-size for lighter robotics or simulation workloads instead of paying for full Blackwell capacity they don't need. NVIDIA Isaac Sim and OpenUSD libraries are now available on Google Cloud Marketplace, which means developers can build physically accurate digital twins and validate robotic policies before anything moves in the real world.

The GDM Robotics showcase added another layer through partnerships with Boston Dynamics, Franka Robotics, and Enchanted Tools. Together, they bring foundational intelligence that lets robots perceive, reason, and interact with humans and tools more naturally. We also launched a dedicated hub for developers building robotics and digital twin applications. Physical AI has been an emerging category for a while. After Next, it looks more like a production one.

The context gap is closing

Data and intelligence, finally in the same room

Sarah Gerweck

We're hitting a milestone where agents and cloud capabilities unite data with intelligence. The gap between what agents can do and what businesses can actually use them for is shrinking.

A few developments at Next made this concrete.

Personalized environments. Gemini Enterprise and Agent Memory Bank give your agents dynamic context that travels with them across tasks, instead of starting from zero each time.

Unified data access. Cross-cloud lakehouses, agent-to-agent connections, and a growing set of connectors mean your data doesn't have to live in one place to be useful.

Enhanced understanding. Knowledge Catalog and Conversational Analytics make that data interpretable for both humans and AI, which matters because the gap between "we have the data" and "the agent can use it" has been a real bottleneck.

What comes next is software with intuitive interfaces that combine deep personal context, sharp data access, and AI-powered insights in ways that feel like working with a knowledgeable colleague rather than querying a system.

The through-line

Read the themes together and the same question keeps surfacing in different forms: how do you take agents seriously as a production technology? Governance, security, continuous learning, agent-to-agent communication within and across organizational boundaries, open source, physical embodiment, and data foundations are all answers to that question. Each one is a piece of the work that comes after the demo.

That's the shift Next '26 made tangible. The agentic enterprise stopped being a roadmap. The questions on the table now are the operational ones.
