Announcing Model Context Protocol (MCP) support for Google services
Michael Bachman
VP/GM, Google Cloud
Anna Berenberg
Engineering Fellow, Google Cloud
With the recent launch of Gemini 3, we have state-of-the-art reasoning to help you learn, build, and plan anything. But for AI to truly be an “agent” that pursues goals and solves real-world problems on behalf of users, it needs more than just intelligence; it needs to work reliably with tools and data.
Anthropic’s Model Context Protocol (MCP), often likened to a “USB-C for AI”, has quickly become a common standard for connecting AI models with data and tools. MCP enables AI applications to execute the complex, multi-step tasks it takes to solve real-world problems. Until now, however, using Google services over MCP has typically meant identifying, installing, and managing individual, local MCP servers or deploying community-built open-source implementations, placing that burden on developers and often leading to fragile setups.
Today we’re announcing the release of fully managed, remote MCP servers. Google’s existing API infrastructure has been enhanced to support MCP, providing a unified layer across all Google and Google Cloud services. Developers can now simply point their AI agents or standard MCP clients, such as the Gemini CLI, to a globally consistent, enterprise-ready endpoint for Google and Google Cloud services.
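To make that concrete, here is a minimal sketch of a standard MCP client connecting to one of these remote servers over MCP’s streamable HTTP transport, using the official MCP Python SDK. The endpoint URL is a placeholder, not the actual address of any Google service.

```python
# Minimal sketch: connect a standard MCP client to a remote, managed MCP server.
# The endpoint URL below is a placeholder; substitute the address published for
# the Google service you want to use.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

REMOTE_MCP_URL = "https://example.googleapis.com/mcp"  # placeholder endpoint


async def main() -> None:
    # Open the streamable HTTP transport and an MCP session on top of it.
    async with streamablehttp_client(REMOTE_MCP_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the tools the managed server exposes.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")


asyncio.run(main())
```

The same discovery step works against any of the managed servers described below; only the endpoint changes.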
Crucially, we are extending this capability to your broader enterprise stack through Apigee, allowing you to leverage the purpose-built APIs your organization uses for specific data flows and business logic. Customers can now expose and govern their own developer-built APIs, as well as external third-party APIs, as discoverable tools for agents. Read more about Apigee’s announcement here.
We are incrementally releasing MCP support for all our services, starting with:
1. Google Maps: Grounding AI in the real world
Maps Grounding Lite, available through Google Maps Platform, connects AI agents to trusted geospatial data, offering access to fresh information on places, weather forecasts, and routing details such as distance and travel time. This allows developers to build agents that can accurately answer real-world location and travel queries without hallucinating. For example, an AI assistant can use Grounding Lite to respond to queries such as, “How far is the nearest park from this rental?”, “What should I pack for the weather in Los Angeles this weekend?”, or “Could you recommend kid-friendly restaurants near our hotel?”
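As an illustration, here is how an agent framework might turn one of those questions into a tool call against the Maps MCP server, assuming a session opened as in the connection sketch above. The tool name and argument names are hypothetical; the real names and schemas should be discovered with list_tools.

```python
# Hypothetical sketch: answer a places question via the Maps MCP server.
# "search_places" and its arguments are illustrative stand-ins; use
# session.list_tools() to discover the actual tool names and input schemas.
from mcp import ClientSession


async def kid_friendly_restaurants(session: ClientSession, hotel_address: str) -> str:
    result = await session.call_tool(
        "search_places",                       # hypothetical tool name
        arguments={
            "query": "kid-friendly restaurants",
            "near": hotel_address,             # hypothetical parameter
        },
    )
    # Tool results arrive as MCP content blocks; keep the text parts.
    return "\n".join(block.text for block in result.content if hasattr(block, "text"))
```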
2. BigQuery: Reasoning over enterprise data
The BigQuery MCP server enables agents to natively interpret schemas and execute queries against enterprise data without the security risks or latency of moving data into context windows. It provides direct access to BigQuery features like forecasting while ensuring data remains in place and governed.
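For instance, an agent could run an aggregate query and receive only the small result set back, never pulling the underlying table into its context window. The tool name, argument names, and table below are illustrative assumptions, and the session is opened as in the earlier connection sketch.

```python
# Hypothetical sketch: run an aggregate query through the BigQuery MCP server.
# "execute_sql" and the table name are illustrative; discover the real tool
# names and schemas with session.list_tools().
from mcp import ClientSession


async def monthly_revenue(session: ClientSession) -> str:
    result = await session.call_tool(
        "execute_sql",  # hypothetical tool name
        arguments={
            "query": """
                SELECT FORMAT_DATE('%Y-%m', order_date) AS month,
                       SUM(order_total) AS revenue
                FROM `my-project.sales.orders`  -- hypothetical table
                GROUP BY month
                ORDER BY month
            """
        },
    )
    # Only the aggregated rows come back; the data itself stays in BigQuery.
    return "\n".join(block.text for block in result.content if hasattr(block, "text"))
```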
3. Google Compute Engine (GCE): Autonomous infrastructure management
By exposing capabilities like provisioning and resizing as discoverable tools, this server empowers agents to autonomously manage infrastructure workflows. Agents can handle everything from initial builds to day-2 operations, such as dynamically adapting to workload demands.
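What “discoverable” buys an agent is that each tool publishes a JSON Schema for its arguments at runtime, so the agent can plan a provisioning or resize call without hand-written API bindings. A sketch of that inspection step, again assuming a session opened as in the earlier example:

```python
# Sketch: inspect the tools a Compute Engine MCP server advertises.
# Assumes a ClientSession opened as in the earlier connection example; the
# specific tools returned (e.g. anything resembling "resize") are unknown here.
from mcp import ClientSession


async def show_compute_tools(session: ClientSession) -> None:
    tools = await session.list_tools()
    for tool in tools.tools:
        print(tool.name)
        print(tool.description)
        print(tool.inputSchema)  # JSON Schema describing the tool's arguments
```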
4. Google Kubernetes Engine (GKE): Autonomous container operations
The GKE MCP server exposes a structured, discoverable interface that allows agents to interact reliably with both GKE and Kubernetes APIs, eliminating the need to parse brittle text output or string together complex CLI commands. This unified surface allows agents, operating autonomously or with human-in-the-loop guardrails, to diagnose issues, remediate failures, and optimize costs.
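A simple way to implement that human-in-the-loop guardrail is to let the agent call read-only diagnostic tools freely while requiring explicit approval before any mutating call. The two tool names below are hypothetical placeholders for whatever the GKE server actually exposes.

```python
# Hypothetical sketch: structured diagnosis first, human approval before mutation.
# "diagnose_workload" and "remediate_workload" are illustrative tool names;
# assumes a ClientSession opened as in the earlier connection example.
from mcp import ClientSession


async def diagnose_then_remediate(session: ClientSession, cluster: str, workload: str) -> None:
    # Read-only diagnosis: structured content, not CLI text the agent must parse.
    findings = await session.call_tool(
        "diagnose_workload",                   # hypothetical tool name
        arguments={"cluster": cluster, "workload": workload},
    )
    for block in findings.content:
        if hasattr(block, "text"):
            print(block.text)

    # Human-in-the-loop guardrail: mutate only with explicit approval.
    if input("Apply the suggested remediation? [y/N] ").strip().lower() == "y":
        await session.call_tool(
            "remediate_workload",              # hypothetical tool name
            arguments={"cluster": cluster, "workload": workload},
        )
```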
Built-in security and observability
We are bringing order to this ecosystem with a unified approach to discovery and governance. With the new Cloud API Registry and Apigee API Hub, developers can find trusted MCP tools from Google and their own organizations, respectively. We pair this ease of discovery with rigorous control: administrators can manage access via Google Cloud IAM, rely on audit logging for observability, and utilize Google Cloud Model Armor to defend against advanced agentic threats such as indirect prompt injection.
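Because access is governed through IAM, clients authenticate with ordinary Google Cloud credentials. One way to wire that up, assuming the managed endpoints accept standard OAuth 2.0 bearer tokens, is to attach an Application Default Credentials token to the MCP transport; the URL below is a placeholder.

```python
# Sketch: authenticate an MCP client with Application Default Credentials.
# Assumes the endpoint accepts a standard OAuth 2.0 bearer token; the URL is
# a placeholder, and IAM still decides which tools the caller may use.
import asyncio

import google.auth
from google.auth.transport.requests import Request
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

MCP_URL = "https://example.googleapis.com/mcp"  # placeholder endpoint


async def main() -> None:
    # Uses the environment's default credentials (e.g. gcloud ADC or a service account).
    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    credentials.refresh(Request())

    headers = {"Authorization": f"Bearer {credentials.token}"}
    async with streamablehttp_client(MCP_URL, headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            print([tool.name for tool in (await session.list_tools()).tools])


asyncio.run(main())
```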
"Google's support for MCP across such a diverse range of products, combined with their close collaboration on the specification, will help more developers build agentic AI applications. As adoption grows among leading platforms, it brings us closer to agentic AI that works seamlessly across the tools and services people already use."- David Soria Parra, Co-creator of MCP & Member of Technical Staff, Anthropic
Let’s see an example of these new MCP servers in action:
Imagine an agent that helps identify an ideal location for a retail store. Using the Agent Development Kit (ADK), you can build a natural-language agent backed by Gemini 3 Pro that connects to BigQuery to forecast revenue based on sales data while simultaneously cross-referencing Google Maps to scout for complementary businesses and validate delivery routes, all via standard, managed MCP servers.
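A skeleton of that agent’s tool layer might look like the sketch below, using the MCP Python SDK directly; ADK’s MCP tool integration wraps the same protocol, so an ADK agent would typically manage these connections for you. The endpoint URLs, tool names, and table name are illustrative assumptions, not published values.

```python
# Hypothetical sketch of the retail-site agent's tool layer: one session to a
# managed BigQuery MCP server and one to a managed Maps MCP server.
# URLs, tool names, and the table are placeholders.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

BIGQUERY_MCP_URL = "https://example.googleapis.com/bigquery/mcp"  # placeholder
MAPS_MCP_URL = "https://example.googleapis.com/maps/mcp"          # placeholder


def text_of(result) -> str:
    # Collect the text content blocks from an MCP tool result.
    return "\n".join(b.text for b in result.content if hasattr(b, "text"))


async def evaluate_site(candidate_address: str) -> None:
    async with streamablehttp_client(BIGQUERY_MCP_URL) as (bq_read, bq_write, _), \
               streamablehttp_client(MAPS_MCP_URL) as (maps_read, maps_write, _):
        async with ClientSession(bq_read, bq_write) as bigquery, \
                   ClientSession(maps_read, maps_write) as maps:
            await bigquery.initialize()
            await maps.initialize()

            # Forecast revenue from historical sales data (hypothetical tool and table).
            forecast = await bigquery.call_tool(
                "forecast",
                arguments={"table": "my-project.sales.orders", "horizon_months": 12},
            )
            # Scout complementary businesses near the candidate site (hypothetical tool).
            neighbors = await maps.call_tool(
                "search_places",
                arguments={"query": "complementary retail", "near": candidate_address},
            )

            print("Revenue forecast:\n" + text_of(forecast))
            print("Nearby businesses:\n" + text_of(neighbors))


asyncio.run(evaluate_site("123 Example St, Springfield"))
```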


To truly unlock the potential of agentic AI, your agents need access to your entire application stack, from containers to relational databases. In the next few months, we will be rolling out MCP support for additional services, including:
- Projects, Compute, and Storage: Cloud Run, Cloud Storage, Cloud Resource Manager
- Databases and Analytics: AlloyDB, Cloud SQL, Spanner, Looker, Pub/Sub, Dataplex Universal Catalog
- Security: Google Security Operations (SecOps)
- Cloud operations: Cloud Logging, Cloud Monitoring
- Google services: Developer Knowledge API, Android Management API
- And many more
The key to the agentic future
With these new and extended MCP capabilities, we are ensuring that developers and agents can easily interact with data and take action. Google is committed to leading the AI revolution not just by building the best models, but also by building the best ecosystem for those models and agents to thrive. As a founding member of the Agentic AI Foundation, we will continue to contribute to the evolution of MCP through the open source community. By giving agents the best way to connect to the world, we are freeing developers to focus on what’s next.



