
How Google Does It: Red teaming at scale

March 11, 2025
Stefan Friedli

Staff Security Engineer, Google Red Team Co-lead

Anton Chuvakin

Security Advisor, Office of the CISO


Ever wondered how Google does security? As part of our “How Google Does It” series, we’ll share insights, observations, and top tips about how Google approaches some of today's most pressing security topics, challenges, and concerns — straight from Google experts. In this edition, Stefan Friedli, one of Google's staff security engineers and a global lead for the Red Team, dives into the critical role the Google Red Team plays in helping to defend Google, and shares some insights into what makes our approach unique.

The Google Red Team is a team of hackers that simulates real cyberattacks against Google’s networks, systems, and devices. The attacks are based on the threat behaviors of known adversaries, including nation-states, Advanced Persistent Threat (APT) groups, hacktivists, cybercriminals, and malicious insiders. Think of us as a sparring partner for Google’s defense teams.

In a nutshell, our mission is to role-play attackers targeting Google in order to improve Google’s overall security.

Video: “Hacking Google: Red Team”

We help test how well our signals and security controls work in the real world, and identify areas where we can improve. We mimic the strategies, motives, goals, and even the tools of the threat actor we’re simulating in an effort to get into the minds of the hackers that target Google — and ultimately help protect our employees, users, and customers by making our defenses faster and stronger.

Getting to this point is the result of a lot of organic growth. The Red Team started in 2016 as a “20% project” — an internal initiative where Googlers can pursue interesting projects outside the scope of their day-to-day responsibilities. The insights it generated were so strong that we quickly realized the value of applying a hacker mindset to our security problems.

Since then, we’ve become an integral part of Google’s security approach, providing insights and context about vulnerabilities that significantly strengthen our ability to keep people safe.

Here’s how we hack our way to better security.

Break things to understand them

Google’s Red Team has a strong track record of operating safely and responsibly. As a result, we face few constraints on what we target, and can more freely shine bright lights in dark corners.


One real test we conducted more than a decade ago involved sending USB-powered plasma globes to Google engineers. Once connected to a laptop or workstation, the globes would emulate a keyboard and rapidly inject a sequence of keystrokes that downloaded and ran an implant, giving the team remote access whenever the device was in use.

This access was used to run an exercise to find out if an adversary interested in Google’s intellectual property could feasibly gain access to design documents and other sensitive information. Unfortunately for the Red Team, the engineer tasked with addressing the request became suspicious and alerted the security team.

Consequently, the Red Team filed a number of bugs for the tactics, techniques, and procedures (TTPs) used to gain initial access, establish persistence, and ultimately reach internal resources. Since no good solutions existed at the time to reliably prevent a USB device such as a plasma globe from dropping a payload, we implemented our own in the form of a Python-based daemon and open-sourced it as USB Keystroke Injection Protection, in an effort to make this type of attack harder to pull off beyond Google, too.
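
To illustrate the idea behind that daemon: scripted keystroke injection arrives far faster than any human can type, so a run of implausibly short inter-keystroke gaps is a strong signal. The sketch below is a minimal, standalone version of that timing heuristic; the threshold and window values are assumptions for this example, not UKIP’s actual parameters (the real daemon monitors Linux input events and can block the offending device).

```python
from collections import deque

# Both values are illustrative assumptions, not UKIP's configuration.
ATTACK_THRESHOLD_MS = 30   # a gap below this is implausibly fast for a human
WINDOW_SIZE = 5            # consecutive fast gaps required before we flag

def looks_like_injection(timestamps_ms, threshold_ms=ATTACK_THRESHOLD_MS,
                         window=WINDOW_SIZE):
    """Return True if `window` consecutive inter-keystroke gaps are all
    shorter than `threshold_ms`, which suggests a scripted payload
    rather than human typing."""
    gaps = deque(maxlen=window)
    previous = None
    for t in timestamps_ms:
        if previous is not None:
            gaps.append(t - previous)
            if len(gaps) == window and all(g < threshold_ms for g in gaps):
                return True
        previous = t
    return False

# A human typing in bursts vs. a payload injected at machine speed.
human = [0, 180, 390, 560, 720, 950, 1130]
payload = [0, 12, 23, 35, 46, 58, 70, 81]
print(looks_like_injection(human))    # False
print(looks_like_injection(payload))  # True
```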


The main point is for the Red Team to emulate the same behaviors a real threat actor would use in a real-world scenario to gain a foothold and remain undetected. Identifying vulnerabilities and other limitations can help us zero in on the specific technical elements that make attacks successful, so we can extract them, study them, and implement solutions that make them less effective.

Always practice open communication

Our philosophy is to share what we know. We collaborate closely with defenders across Google, such as the Google Threat Intelligence Group (GTIG) and our threat detection and response team. We have cultivated a high level of trust with the groups we work with, and we do our best to make it as easy and efficient as possible for defenders to identify Red Team exercises, so valuable response resources aren’t wasted.

Any action we undertake is tracked in a log shared with defense teams. We also regularly report our findings in detail and share anything new we may have discovered.

We often have multiple exercises happening at once, so logging our activities provides a clear record that defenders can check. This helps ensure that they can quickly rule out external actors acting maliciously. It also creates accountability later, in case there are follow-up questions.
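
To make the deconfliction idea concrete, here is a hypothetical sketch of what one entry in such a shared activity log might capture. The field names, identifiers, and schema are invented for this example, not Google’s internal format.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class RedTeamAction:
    """One logged action from a sanctioned exercise (illustrative schema)."""
    exercise_id: str   # which sanctioned exercise the action belongs to
    operator: str      # who performed it
    timestamp: str     # UTC time, so defenders can correlate with alerts
    target: str        # host, service, or identity acted against
    ttp: str           # e.g., a MITRE ATT&CK technique ID
    notes: str         # enough detail for a responder to deconflict quickly

def log_action(action: RedTeamAction) -> None:
    # A real pipeline would append to a store defenders can query;
    # a JSON line on stdout stands in for that here.
    print(json.dumps(asdict(action)))

log_action(RedTeamAction(
    exercise_id="EX-2025-017",                 # hypothetical identifier
    operator="redteam-operator@example",
    timestamp=datetime.now(timezone.utc).isoformat(),
    target="build-server-42.example.internal",
    ttp="T1078",                               # Valid Accounts
    notes="Authenticated over SSH with credentials captured earlier",
))
```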

To maintain good working relationships, we invest heavily in stakeholder management. We bring stakeholders into the loop as early as possible in the development of an exercise, informing them at a high level about what we’re doing, such as our targets and the basic premise of our activities.


Once an exercise wraps up, we keep these lines of communication open during remediation to make sure things get resolved.

Use adversarial research

As part of testing, we pay close attention to what’s happening in the world of threat intelligence and regularly research new attack vectors. Our team tends to simulate specific threats, focusing on those we actually encounter or those that have been documented by GTIG.

We often use threat research to inspire our exercises and inform critical aspects of our approach, including how we simulate a threat actor, the targets we choose, and the tools we use.

Curated threat intelligence can play a crucial role in red team engagements, allowing organizations to understand what threats they are actually facing and the form these attacks will take.

For example, you might worry that a nation-state actor such as the Russian hacking group APT29 could use a zero-day attack to target your CEO’s devices, but GTIG research shows that these groups are more likely to launch supply chain attacks targeting third-party vendors to find a way into organizations. Therefore, we (and you) should focus on scenarios that are as realistic as possible.
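
As a rough illustration of that threat-informed mindset, the sketch below ranks candidate exercise scenarios by how often the underlying technique is actually observed. The scenario names and frequencies are invented for illustration, not GTIG data.

```python
# Candidate exercises ranked by observed prevalence of the technique;
# all names and frequencies below are made up for this example.
candidate_scenarios = [
    ("Zero-day exploit against executive devices", 0.02),
    ("Supply chain compromise via third-party vendor", 0.35),
    ("Credential phishing of engineers", 0.55),
    ("Malicious insider data exfiltration", 0.08),
]

# Spend exercise time on the threats you are most likely to face.
for name, freq in sorted(candidate_scenarios, key=lambda s: s[1], reverse=True):
    print(f"{freq:5.0%}  {name}")
```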

Build strong remediation

Many red team engagements and penetration tests are driven by compliance. In that world, it’s unfortunately common to test an environment one year, list all your findings, and come back the next year to find the situation virtually unchanged.

At Google, it’s rare we can pull the same trick twice. We are extremely privileged to have a mature remediation and threat management program, which we have built and strengthened over time.

Most issues revealed during our exercises get resolved quickly after we share our findings with affected teams, and the likelihood of us being able to use the same vulnerability or tactic again later is extremely low. We focus on different areas of the company over a given period of time — and any time we restart a cycle, we’re typically starting from scratch.

The Google Red Team also has dedicated resources for tracking remediation: The people who run attacks are not the same ones following up after an exercise. Instead, we task specific individuals, who have the expertise to advise on remediation, with making sure that issues get resolved within a reasonable amount of time.

The overall result is that our response capabilities and processes have become so powerful and effective that we can keep everyone informed about our activities and run multiple exercises whenever we want — even Fridays — without ruining someone’s weekend.

Learn from your failures

If you’re familiar with Google Site Reliability Engineering (SRE), you’ll know that the notion of a blameless post-mortem is a key tenet of SRE philosophy and an essential part of engineering at Google. We adopt a similar approach on the Google Red Team when running exercises.

We strongly believe that the burden of security should not fall on the user; instead, we focus on the contributing causes that allowed us to successfully compromise a target. We regularly ask our users, customers, partners, and employees to interact with the products and services we provide, so our focus is always on making it easier to spot and report suspicious activity rather than reprimanding people.

Ultimately, we are still Googlers. We are trying to make Google and the people who use our services more secure, and so the concept of “blamelessness” is extremely important for building relationships with other teams across Google.

This article includes insights from the Cloud Security Podcast episode, “Attacking Google to Defend Google: How Google Does Red Team.” You can learn more about becoming a red team engineer at the new Hack the Box and Google course for AI Red Teamers, an innovative upskilling program designed to equip cybersecurity practitioners with the necessary skills to evaluate, test, and defend AI systems.
