How Google Does It: Threat modeling, from basics to AI

Meador Inge
Staff Security Engineer
Anton Chuvakin
Security Advisor, Office of the CISO
Ever wondered how Google does security? As part of our “How Google Does It” series, we share insights, observations, and top tips about how we approach some of today's most pressing security topics, challenges, and concerns — straight from Google experts. In this edition, Meador Inge, staff security engineer at Google Cloud, shares insights into Google’s threat modeling process.
At Google, threat modeling — the process of identifying potential threats to a system and their associated mitigations — plays a critical role in how we detect and respond to threats and secure our use of the public cloud. Given the amount of automation we rely on in our workflows to analyze and investigate events or potential incidents, threat models are crucial for establishing baseline controls that can prevent attacks and keep our systems, users, and customers safe.
Our approach loosely aligns with the four key questions from the Threat Modeling Manifesto:
1) What are we working on?
2) What could go wrong?
3) What are we going to do about it?
4) Did we do a good job?
This framework provides a practical foundation for building strong processes that help us assess use cases, identify specific threats, and propose mitigations. The basic steps of our threat modeling process include defining the scope of work, getting an accurate description of a system’s architecture, and then enumerating potential threats all the way through the system.
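To make that concrete, here is a minimal, hypothetical sketch (in Python, not Google's internal tooling) of how a single exercise could capture answers to the four questions as one reviewable record; the scope, threat, and mitigation shown are illustrative only.

```python
# Hypothetical sketch: one record per threat modeling exercise, organized
# around the four questions. Names and contents are illustrative only.
from dataclasses import dataclass, field


@dataclass
class ThreatModel:
    scope: str                                                  # 1) What are we working on?
    threats: list[str] = field(default_factory=list)            # 2) What could go wrong?
    mitigations: dict[str, str] = field(default_factory=dict)   # 3) What are we going to do about it?
    review_notes: list[str] = field(default_factory=list)       # 4) Did we do a good job?


tm = ThreatModel(scope="Example payments service: client -> frontend -> payments API -> ledger DB")
threat = "Spoofing: an unauthenticated caller reaches the payments API directly"
tm.threats.append(threat)
tm.mitigations[threat] = "Require mutual TLS and service-level authorization in front of the payments API"
tm.review_notes.append("Re-review after the ledger migration; verify the authorization control shipped")
```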
Here’s how we take this simple approach and apply it across our vast ecosystem of products, services, and partnerships, with a closer look at how we put our process into practice. We also share some important elements that enable us to apply threat modeling at Google scale.
1. What are we working on?
The first thing to decide is which system we would like to threat model. This breaks into two cases:
- Retrospective: Threat modeling systems that have been in existence for a while to discover weaknesses.
- Proactive: Threat modeling systems that are actively being designed. We believe that this is the ideal time to model threats.
Once we’ve identified the product or area that we want to model, we typically start by setting a scope of work to help focus our activities. Too large a scope, and exercises can drag on without yielding any results; too small, and the exercise won’t cover enough ground to make an impact (and we would need to repeat the process for too many other small areas, which doesn’t work at our scale).
This initial step can reduce complexity and deliver threat models that are both comprehensive and manageable in size. With extremely complex systems, we have found it easier to take a top-down or bottom-up approach.
For example, you could start with creating a high-level threat model and then drill down into different groups of specific components to create more fine-tuned threat models throughout the system.
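As an illustration of that drill-down, here is a small, hypothetical sketch (the system and subsystem names are invented) that starts from one coarse scope for the whole system and then derives a finer-grained scope per subsystem for follow-up models.

```python
# Hypothetical sketch: a top-down decomposition of a system into scopes.
# The system, subsystems, and components are illustrative, not real services.
system = {
    "name": "photo-sharing-service",
    "subsystems": [
        {"name": "web-frontend", "components": ["load balancer", "session handler"]},
        {"name": "media-pipeline", "components": ["upload API", "thumbnailer", "blob store"]},
        {"name": "sharing", "components": ["ACL service", "notification sender"]},
    ],
}

# One coarse, high-level scope first ...
scopes = [f"{system['name']} (high-level)"]
# ... then a finer-grained scope per subsystem for follow-up threat models.
scopes += [f"{system['name']} / {sub['name']}" for sub in system["subsystems"]]

for scope in scopes:
    print(scope)
```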
Scoping also informs us about what subject matter experts we will need to craft a description of the product or area we want to model. Cloud threat models must accurately reflect multiple layers of abstraction, including interactions and dependencies between components and interconnected services.
We collaborate closely with representatives who can help us create a solid description of the architecture — how it works, what the important components are, the data flows between them, what kind of data it handles, and the points where data syncs.
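One lightweight way to record that kind of description is as a list of components and data flows, each annotated with the data it carries. Here is a hypothetical sketch; the component names and data classifications are illustrative assumptions, not a prescribed format.

```python
# Hypothetical sketch: recording an architecture description as components
# and data flows, including what kind of data each flow carries.
from dataclasses import dataclass


@dataclass
class DataFlow:
    source: str
    destination: str
    data: str            # what kind of data moves across this flow
    classification: str  # e.g. "public", "confidential", "user data"


components = ["mobile client", "API gateway", "profile service", "user DB"]
flows = [
    DataFlow("mobile client", "API gateway", "login credentials", "user data"),
    DataFlow("API gateway", "profile service", "profile update request", "user data"),
    DataFlow("profile service", "user DB", "profile record", "confidential"),
]

for f in flows:
    print(f"{f.source} -> {f.destination}: {f.data} ({f.classification})")
```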
2. What could go wrong?
Once we have an accurate description of the architecture, the next step is figuring out what could go wrong. Together with experts from across security and development backgrounds, we work to identify potential security or privacy risks, building a comprehensive list of threats.
When working through a system, we use methodologies like STRIDE to help enumerate threats for each component and over critical data flows. We also lean heavily on internal threat libraries that we have collected through previous threat models and vulnerability research.
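As a simplified illustration of applying STRIDE over data flows, the sketch below pairs each flow with the standard STRIDE categories as brainstorming prompts. The flows are hypothetical, and in practice the questions are answered by experts and checked against internal threat libraries rather than treated as an automatic checklist.

```python
# Hypothetical sketch: generating STRIDE brainstorming prompts per data flow.
STRIDE = {
    "Spoofing": "Could either end of this flow be impersonated?",
    "Tampering": "Could the data be modified in transit or at rest?",
    "Repudiation": "Could an action on this flow be denied later without evidence?",
    "Information disclosure": "Could the data leak to an unintended party?",
    "Denial of service": "Could this flow be degraded or blocked?",
    "Elevation of privilege": "Could this flow be abused to gain extra privileges?",
}

data_flows = [
    ("mobile client", "API gateway"),
    ("API gateway", "profile service"),
    ("profile service", "user DB"),
]

for source, destination in data_flows:
    for category, question in STRIDE.items():
        print(f"[{source} -> {destination}] {category}: {question}")
```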
To further inform this process, we find it helpful to actually use the software or system to get a real sense of all of its properties and how they work.
We also leverage threat intelligence to learn from past incidents, examining how similar software has been compromised and analyzing real exploits. At Google, we benefit from our extensive threat intelligence visibility and resources, including Google Threat Intelligence.
In addition, as an AI-first company, Google is focused on both securing AI and using AI for security. We’re actively pursuing how to use generative AI to support threat modeling, experimenting with our multimodal Gemini models to collect information about a system, generate architecture descriptions, and enumerate threats.
One particularly promising use is combining Gemini with computer vision to analyze architecture diagrams and automatically provide a list of components, data flows, and potential threats.
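We aren't describing our internal tooling here, but as a rough sketch of the idea, the publicly available Gemini API can be asked to read an architecture diagram. The model name, file name, and prompt below are illustrative assumptions, using the public google-genai SDK.

```python
# Hypothetical sketch using the public google-genai SDK; not Google's
# internal threat modeling tooling. Assumes GOOGLE_API_KEY is set.
from google import genai
from google.genai import types

client = genai.Client()

with open("architecture_diagram.png", "rb") as f:  # illustrative file name
    diagram = types.Part.from_bytes(data=f.read(), mime_type="image/png")

prompt = (
    "List the components, data flows, and trust boundaries in this "
    "architecture diagram, then suggest candidate STRIDE threats for "
    "each data flow."
)

response = client.models.generate_content(
    model="gemini-2.0-flash",  # illustrative model choice
    contents=[diagram, prompt],
)
print(response.text)
```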
3. What are we going to do about it?
Operationalizing threat modeling can be tricky; doubly tricky at Google’s vast scale. Many organizations underutilize their threat models when it comes to actually reducing risk, despite investing substantial amounts of time and resources in developing them.
At Google, we have found several different ways to put threat models into action and improve our overall security posture. On the defensive side, we take a more classic approach. We use our threat models to help us produce mitigations, such as introducing a set of controls or making adjustments to the system itself to limit the impact of threats or the likelihood that a threat materializes.
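A toy sketch of how modeled threats can be turned into prioritized mitigations, using a simple likelihood-times-impact score; the threats, scores, and controls here are all illustrative, not a prescribed scoring method.

```python
# Hypothetical sketch: ranking modeled threats to decide which mitigations
# to prioritize. All entries and scores are illustrative.
threats = [
    {"threat": "Stolen service credentials reused from another environment",
     "likelihood": 3, "impact": 4,
     "mitigation": "Short-lived, workload-bound credentials"},
    {"threat": "Unvalidated upload parsed by a native image library",
     "likelihood": 4, "impact": 5,
     "mitigation": "Parse untrusted media inside a sandbox"},
    {"threat": "Verbose error messages leak internal hostnames",
     "likelihood": 2, "impact": 2,
     "mitigation": "Return generic errors to external callers"},
]

for t in sorted(threats, key=lambda t: t["likelihood"] * t["impact"], reverse=True):
    score = t["likelihood"] * t["impact"]
    print(f"risk={score:2d}  {t['threat']}  ->  {t['mitigation']}")
```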
A key benefit of partnering with subject-matter experts is that it gives us access to individuals with a deep understanding of a product’s architecture. We have found that threat models can act as a security roadmap, enabling teams to address issues proactively and guide improvements as they iterate.
Threat models also contribute to driving our threat detection and response efforts, enabling us to understand the threats we are hunting for and deliver more effective detections.
At the same time, we use threat models for offensive red team activities to explore more theoretical scenarios. For example, we might take a threat from the model that we consider higher impact and use it to try to simulate a vulnerability. If this exercise is successful, we can then take that vulnerability and explore actual exploitation paths to improve our awareness, our incident response, and our resilience. We also use threat model outputs to plan detection and response activities.
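For instance, a sketch like the following (with invented threats) shows how higher-impact entries from a threat model could seed both a red team backlog and the detections we want to validate.

```python
# Hypothetical sketch: turning higher-impact threats from a model into a
# red team exercise backlog with matching detection ideas. All entries
# are illustrative.
model_threats = [
    {"threat": "Compromised build worker injects code into release artifacts",
     "impact": "high",
     "detection_idea": "Alert on release artifacts not traceable to a verified build"},
    {"threat": "Insider exports the full customer table",
     "impact": "high",
     "detection_idea": "Alert on unusually large reads of the customer table"},
    {"threat": "Stale debug endpoint exposed on an internal port",
     "impact": "medium",
     "detection_idea": "Inventory scan for unexpected listening ports"},
]

red_team_backlog = [t for t in model_threats if t["impact"] == "high"]
for t in red_team_backlog:
    print(f"Simulate: {t['threat']}")
    print(f"  Detection to validate: {t['detection_idea']}")
```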
4. Did we do a good job?
It’s often tempting to simply identify threats, deliver recommendations, and consider threat modeling exercises complete. However, threat models are wasted if they don’t actually help security and engineering teams make systems more secure and resilient.
An important aspect of our process is keeping our threat models as fresh and accurate as possible. While it’s ideal to incorporate threat modeling during design to find problems before development begins, we also frequently model existing applications to pinpoint areas for improvement.
Threat modeling is a valuable tool for understanding architecture and revealing its current weaknesses and pain points. To that end, we strive to create baseline threat models, and then incrementally update them based on feedback loops in our security and design review processes.
Even at Google, our team is only capable of doing a finite number of threat models every year, so we are always looking for ways to simplify scaling and maintaining models. One approach we have found useful for helping us scale our models and keep them up to date is to have partner engineering teams build threat models as they design and develop, with the security team coming in afterwards to review them.
Bonus: What could go really wrong?
Despite any careful planning and meticulous threat modeling throughout the software development lifecycle, there will always be cases we can’t anticipate.
For example, the rule of two has long been a rubric for when to introduce sandboxing for workloads that accept untrusted inputs and are written in memory-unsafe languages. This rule provided a critical input that was applied pervasively in threat modeling to introduce sandboxes into system designs.
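In short, the rule (often associated with the Chromium project's guidance) says a component should combine no more than two of: untrustworthy inputs, a memory-unsafe implementation language, and high privilege with no sandbox. A minimal sketch of that check, with illustrative example components:

```python
# Hypothetical sketch of a rule-of-two check; the example components are
# illustrative, not real Google systems.
def violates_rule_of_two(untrusted_input: bool,
                         memory_unsafe_language: bool,
                         high_privilege_no_sandbox: bool) -> bool:
    """True if a component combines all three risky properties."""
    return sum([untrusted_input, memory_unsafe_language, high_privilege_no_sandbox]) > 2


# A C++ parser handling user uploads with no sandbox trips the rule:
print(violates_rule_of_two(True, True, True))   # True -> add a sandbox or redesign
# The same parser running inside a sandbox stays within it:
print(violates_rule_of_two(True, True, False))  # False
```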
Our teams doing threat modeling felt good about that.
Then CPU vulnerabilities happened, and we learned that they could be used to break sandbox boundaries. This isn’t a knock against threat modeling: it isn’t realistic to expect even proper threat modeling to predict entirely new vulnerability classes like these.
Instead, what this shows us is that as useful as threat modeling is, it will inevitably miss things sometimes — and that’s OK.
AI and the future of threat modeling
Ultimately, a successful threat modeling program can help build strong relationships and trust with other teams, so you can all work together to achieve a common goal.
We also believe that gen AI offers enormous opportunities for threat modeling, especially with agentic AI emerging to help connect to data sources, perform complex tasks, and act on insights directly in workflows.
In the future, AI agents could assist with writing design documents or even identifying threats in real time as diagrams are drawn and data flows are sketched out. Importantly, AI can help automate the four steps (partially or wholly), with humans and AI following roughly the same process in collaboration with each other, reinforcing the human-in-the-loop approach.
This article includes insights from the Cloud Security Podcast episode, “Threat Modeling at Google: From Basics to AI-powered Magic.”



