Production-Ready AI with Google Cloud Learning Path

Mollie Pettit
Developer Relations Engineer
We're excited to launch the Production-Ready AI with Google Cloud Learning Path, a free series designed to take your AI projects from prototype to production.
This page is the central hub for the curriculum. We'll be updating it weekly with new modules from now through mid-December.
Why We Built This: Bridging the Prototype-to-Production Gap
Generative AI makes it easy to build an impressive prototype. But moving from that proof-of-concept to a secure, scalable, and observable production system is where many projects stall. This is the prototype-to-production gap. It's the challenge of answering hard questions about security, infrastructure, and monitoring for a system that now includes a probabilistic model.
It’s a journey we’ve been on with our own teams at Google Cloud. To address this challenge, we built a comprehensive internal playbook of production-grade best practices. After seeing the playbook's success, we knew we had to share it.
This learning path is that playbook, adapted for all developers. The path's curriculum combines the power of Gemini models with production-grade tools like Vertex AI, Google Kubernetes Engine (GKE), and Cloud Run.
We're excited to share this curriculum with the developer community. Share your progress and connect with others on the journey using the hashtag #ProductionReadyAI. Happy learning!
The Curriculum
Module 1: Developing Apps That Use LLMs
Start with the fundamentals of building applications and interacting with models using the Vertex AI SDK.
Module 2: Deploying Open Models
Learn to serve and scale open models efficiently by deploying them on production-grade platforms like Google Kubernetes Engine (GKE), Cloud Run, and Vertex AI endpoints.
Module 3: Developing Agents
Learn to build AI agents that can reason, plan, and use tools to accomplish complex tasks with the Agent Development Kit (ADK).
Summary: Build Your First ADK Agent Workforce
Module 4: Securing AI Applications
Master the essential practices for securing your infrastructure, data, and AI-powered endpoints in a production environment.
- Content coming soon!
Module 5: Deploying Agents
Take your agents to production by deploying them on scalable, managed platforms like Google Kubernetes Engine (GKE) and Cloud Run.
- Content coming soon!
Module 6: Introduction to Databases
- Content coming soon!
Module 7: Evaluation
Discover how to rigorously evaluate the performance of your LLM outputs, agents, and RAG systems to ensure quality and reliability.
- Content coming soon!
Module 8: Advanced Agent Capabilities
Learn how to enhance your agent's capabilities with agentic RAG, Model Context Protocol (MCP) tools, and the Agent2Agent (A2A) protocol.
- Content coming soon!
Module 9: Advanced RAG Methods
Optimize your RAG systems by connecting to databases and using advanced techniques like sophisticated chunking, re-ranking, and query transformations.
- Content coming soon!
Module 10: Fine-Tuning
Go beyond prompting and learn how to fine-tune both open and proprietary models to improve performance on specific tasks.
- Content coming soon!
We're committed to making this a living, evolving resource and will be adding to it over time.
Do you feel something is missing? Tell us here!
