Jump to Content
Security & Identity

Cloud CISO Perspectives: Practical guidance on building with SAIF

January 16, 2026
https://storage.googleapis.com/gweb-cloudblog-publish/images/Cloud_CISO_Perspectives_header_4_Blue.max-2500x2500.png
Tom Curry

Senior Security Consultant, Office of the CISO

Anton Chuvakin

Security Advisor, Office of the CISO

Get original CISO insights in your inbox

The latest on security from Google Cloud's Office of the CISO, twice a month.

Subscribe

Welcome to the first Cloud CISO Perspectives for January 2026. Today, Tom Curry and Anton Chuvakin, from Google Cloud’s Office of the CISO, share our new report on using Google’s Secure AI Framework with Google Cloud capabilities and services to build boldly and responsibly with AI.

As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.

Practical guidance on building with SAIF

By Tom Curry, senior security consultant, and Anton Chuvakin, security advisor, Office of the CISO

https://storage.googleapis.com/gweb-cloudblog-publish/images/Tom_Curry.max-600x600.png

Ensuring that AI can be used safely and securely to achieve bold and responsible business goals is a critical matter for today’s organizations.

At Google, we know that security and data privacy are the top concern for executives when evaluating AI providers, and security is the top use case for AI agents in a majority of the industries surveyed, according to our recent report on the Return on Investment of AI in security. To help security and business leaders secure AI and mitigate AI risk, we developed the Secure AI Framework in 2023. As AI has evolved, so has SAIF, and we now offer practical guidance on how to implement technology-agnostic SAIF controls on Google Cloud.

Google’s Secure AI Framework (SAIF) is a framework for securing AI systems throughout their lifecycles. While SAIF is designed for security professionals, developers, and data scientists on the frontlines to ensure AI models and applications are secure by design, security and business leaders — and especially CISOs — play a crucial role in helping their organizations incorporate SAIF as part of their secure-by-design strategy.

https://storage.googleapis.com/gweb-cloudblog-publish/images/Anton_Chuvakin_Headshot_18L8044_SQ1_Hi_Res.max-2200x2200.jpg

The SAIF Risk Map details a conceptual system architecture for AI based on four components: Data, infrastructure, model, and application. SAIF also identifies 15 common AI risks, highlights where these risks occur, and maps each one against AI controls — including guidance on agentic AI risks and controls.

We strongly believe that securing AI should be an industry-wide, community effort, and as part of that commitment we’ve contributed SAIF components to the Coalition for Secure AI (CoSAI). As we explain in our newest paper on how to implement SAIF controls in Google Cloud, there are three key approaches that can help successfully apply SAIF to AI development: Data should be treated as the new perimeter, prompts should be treated as code, and secure agentic AI requires identity propagation.

The model is only as secure as the data that feeds it. We recommend organizations shift focus from protecting the model to sanitizing the supply chain.

1. Data is the new perimeter

The model is only as secure as the data that feeds it. We recommend organizations shift focus from protecting the model to sanitizing the supply chain. This is where automated discovery and differential privacy can be used to ensure personally identifiable information never becomes part of the model's memory.

The SAIF Data Controls address risks in data sourcing, management, and use for model training and user interaction, ensuring privacy, integrity, and authorized use throughout the AI lifecycle. Key data security tools on Google Cloud include: Identity and Access Management, access controls for Cloud Storage and BigQuery, Dataplex for data governance, and Vertex AI managed datasets.

2. Treat prompts like code

Once data and infrastructure have been secured, we want to secure the model itself. Models can be attacked directly through malicious inputs (prompts), and their outputs can be manipulated to cause harm. In terms of ease-of-use for threat actors, prompt injection is the new SQL injection.

The SAIF Model Controls are designed to build resilience into the model and sanitize its inputs and outputs to protect against these emerging threats. We recommend that you deploy a dedicated AI firewall (such as Model Armor) to inspect every input for malicious intent and every output for sensitive data leaks before they reach the user. Additional key tools from Google Cloud include: Using Gemini as a guard model and using Apigee as a sophisticated API gateway.

3. Agentic AI requires identity propagation

Moving from chatbots to autonomous or semi-autonomous agents increases the blast radius of a security compromise. To help mitigate the risks of rogue actions and sensitive data disclosure, we strongly advise against using service accounts that have broad access: Any actions taken by AI agents on a user’s behalf should be properly controlled and permissioned, and agents should be instructed to propagate the actual user’s identity and permissions to every backend tool they touch.

SAIF recommends application controls to secure the interface between the end user and the AI model. As described in Google’s Agent Development Kit safety and security guidelines, AI agent developers should carefully consider whether interactions with backend tools should be authorized with the agent’s own identity or with the identity of the controlling user. As we explain in the new SAIF report, it takes several steps to implement user authorization for agents: Front-end authentication, identity propagation, authorization for model context protocol (MCP) and agent-to-agent (A2A) protocol, and IAM for Google Cloud services.

Bold and responsible: Building with SAIF

The Secure AI Framework provides a roadmap for navigating the complex security landscape of artificial intelligence. These three key approaches are crucial to SAIF, but there’s more to the framework. Governance controls, assurance controls (including red teaming and vulnerability management,) and application controls are critical SAIF components — and a key part of our alignment with Google Cloud global-scale security principles and capabilities.

For more information on how your organization can operationalize SAIF, you can read the full report here.

In case you missed it

Here are the latest updates, products, services, and resources from our security teams so far this month:

  • How Google Does It: Collecting and analyzing cloud forensics: Here’s how Google’s Incident Management and Digital Forensics team gathers and analyzes digital evidence. Read more.
  • Auto-ISAC, Google partner to boost automotive sector cybersecurity: Google Cloud is proud to join Auto-ISAC as an Innovator Partner to significantly deepen our commitment to the automotive and transportation sectors. Read more.
  • When securing Web3, remember your Web2 fundamentals: As Web3 matures, the stakes continue to rise. For Web3 to thrive, security should expand beyond the blockchain to protect operational infrastructure. Here’s how. Read more.
  • How Mandiant can help test and strengthen your cyber resilience: To help teams better prepare for actual incidents, we developed ThreatSpace, a cyber proving ground with all the digital noise of real employee activities. Read more.

Please visit the Google Cloud blog for more security stories published this month.

Threat Intelligence news

  • Auditing Salesforce Aura for data exposure: Mandiant released AuraInspector, a new open-source tool designed to help defenders identify and audit access control misconfigurations in the Salesforce Aura framework. Read more.
  • How threat actors are exploiting React2Shell: Shortly after CVE-2025-55182 was disclosed, Google Threat Intelligence Group (GTIG) began observing widespread exploitation across many threat clusters, from opportunistic cybercrime actors to suspected espionage groups. Here’s what GTIG has observed so far. Read more.
  • Intellexa’s prolific zero-day exploits continue: Despite extensive scrutiny and public reporting, commercial surveillance vendors such as Intellexa continue to operate unimpeded. Known for its “Predator” spyware, new GTIG analysis shows that Intellexa is evading restrictions and thriving. Read more.

Please visit the Google Cloud blog for more threat intelligence stories published this month.

Now hear this: Podcasts from Google Cloud

  • Why your security strategy needs an immune system, not a fortress: Google’s Royal Hansen, vice-president, Engineering, chats with hosts Anton Chuvakin and Tim Peacock on how cybersecurity needs to move from an engineering mindset to a biological model — and how CISOs can start that journey. Listen here.
  • What actually breaks when OT meets the cloud: Chris Sistrunk, technical leader, OT Consulting, Mandiant, joins Anton and Tim to discuss the real security risks that come with connecting industrial and operational technology to the cloud. Listen here.
  • Behind the Binary: Windows under the hood: Host Josh Stroschein is joined by leading Windows expert, Pavel Yosifovich, who describes himself as a developer who hates security. Listen here.

To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in a few weeks with more security-related updates from Google Cloud.

Posted in