Delivering trusted and secure AI using Google Cloud

Aug 2025

DOME Test Author

Introduction

Enterprises today face a critical challenge: delivering AI to production while ensuring accuracy, safety, and data security. Google Cloud’s approach to generative AI prioritizes enterprise readiness with built-in mechanisms for robust data governance, privacy controls, IP indemnification, and responsible AI practices. We provide the tools and services necessary to secure AI and offer data sovereignty options, giving customers the confidence to deploy models at scale within enterprises.

In this paper, we explore how Google Cloud helps enterprises realize these benefits. It unpacks how we build enterprise-grade gen AI responsibly, and how we approach AI data governance, privacy, security, and compliance when developing gen AI through the Vertex AI platform. Throughout this paper, gen AI refers to the use of AI to create new content such as text, images, music, audio, and video, or some combination of these, as enabled by multimodal gen AI.

Chapter 01

Responsible Innovation

With AI moving so fast, how can we strike the right balance between innovation, reliability, and risk mitigation? In this chapter, learn how Google Cloud embeds a responsible approach to building AI technologies, carefully considering everything from risk assessments and data governance to privacy, security, and compliance, as well as portability and emissions reduction.


As with any new, transformational technology, gen AI comes with complexities and risks that must be managed as part of a comprehensive risk management framework and governance structure. AI raises critical questions, and we are working to build it responsibly to benefit both our customers and the wider societies in which we operate. The challenge is to do so in a way that is proportionately tailored to mitigate risks and promote reliable, robust, and trustworthy AI applications, while still enabling innovation and the promise of AI for societal benefit.

Responsible AI is woven into the fabric of our work. As part of our principled approach to building AI technologies, we commit to developing and applying strong safety and security practices, and we incorporate our privacy principles in the development and use of AI. Recognizing that rigorous evaluations are critical to building successful AI, we engage specialized teams to conduct analyses and risk assessments for the AI products we build and for early-stage customer co-development opportunities, so that everyone can reap AI's benefits.


Responsible product development practices span multiple dimensions. Some are technical, involving evaluations of data sets and models for bias; some pertain to product experiences; and some are matters of policy, informing what we will and won't offer from a product perspective. We have developed a four-phase process (Research, Design, Govern, and Share) to review projects against the AI Principles, working with subject matter experts in areas such as privacy, security, and compliance. The initial Research and Design phases foster innovation, while the Govern and Share phases focus on risk assessment, testing, monitoring, and transparency.

Our research draws on in-house expertise, including computer scientists, social scientists, and user experience researchers. We also regularly publish on the progress we’re making to enable transparency into our work, support safer and more accountable products, earn and keep our customers’ trust, and foster a culture of responsible innovation. 

Our approach to building AI responsibly is guided by our AI Principles and builds upon Google’s previous experience with keeping users safe on our platforms. As we build gen AI services, our technical approaches to enforce policies at scale include techniques like fine-tuning and reinforcement learning from human feedback (RLHF). Other layered protections are invoked both when a person inputs a prompt and again when the model provides the output. Policy improvements are informed by ongoing user feedback and monitoring. Responsibility by design also involves building security into our products from the very beginning. We’ve codified this approach in our Secure AI Framework (SAIF). Applying SAIF, we build on our existing security knowledge and adjust mitigations to these new threats, as further discussed below. 
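To make the layered-protection idea concrete, here is a minimal conceptual sketch in Python of screening both the incoming prompt and the model's output before anything reaches the user. The helper names (input_filter, output_filter, and the model callable) are illustrative assumptions, not a Google Cloud API.

```python
# Minimal sketch of layered protections: policy checks run once on the
# user's prompt and again on the model's output. All names here are
# illustrative assumptions, not a Google Cloud API.

BLOCKED_MESSAGE = "This request can't be completed. Please try rephrasing."

def guarded_generate(prompt, model, input_filter, output_filter):
    # Layer 1: screen the incoming prompt before it reaches the model.
    if input_filter(prompt):
        return BLOCKED_MESSAGE

    response = model(prompt)

    # Layer 2: screen the generated text before it reaches the user.
    if output_filter(response):
        return BLOCKED_MESSAGE

    return response
```

In production, each filter would typically be a policy classifier whose behavior is refined through the user feedback and monitoring loop described above.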



Risk Assessments

For every organization, the decision to leverage the power of gen AI hinges on a myriad of questions, one of the most salient being: How can I help my organization harness the power of AI while minimizing risk? Google Cloud helps customers answer this question in a number of ways.


01. Comprehensive reviews during AI product development


We identify and assess potential risks at both the model level and the point of integration into a product or service. Our socio-technical approach considers how AI will interact with the world and existing social systems, and assesses the potential impacts and risks that may be posed both at the initial release and as time goes on. Reviewers understand that potential risks and impacts might differ at the model level and at the application level, and consider mitigations accordingly. We draw from various sources, including academic literature, external and internal expertise, and our in-house ethics and safety research.


02. Private releases of models


Private releases allow our product teams to gather valuable feedback before we make models generally available. Once feedback is incorporated, we update our product documentation to account for any changes. This documentation generally describes known limitations of the model and may include service-specific terms to further advise customers on proper use of our products. Google Cloud continues to invest in tools to support our customers, including Vertex AI's Explainable AI, Model Fairness, Model Evaluation, Model Monitoring, and Model Registry, which support data and model governance.


03. Mitigation strategies to address potential risks


These strategies apply to any risks identified prior to releasing a product for general availability (GA) and can take various forms. For gen AI products, for instance, mitigations may draw on technical approaches to evaluate and improve models during development, establish policy-driven safety guardrails, or be enabled by tooling customers can leverage in their own projects for further safety efforts. Policy restrictions are typically guided by the relevant Acceptable Use Policy, Terms of Service, and privacy restrictions, as discussed further in the AI Data Governance and Privacy section below.

Enterprises can further mitigate the risk of AI adoption using:

Customizable technical controls such as safety filters, which can block model responses that violate policy guidelines, for instance, around child safety. A customer could create safety filters leveraging safety attributes, which include “harmful categories” and topics that may be considered sensitive, such as “drugs” or “derogatory.” 

Google’s Responsible Generative AI Toolkit, which provides guidance and tools to create safer AI applications with these new open models, including how to set safety policies and methodologies for building robust safety classifiers. 

Explainable AI tools and frameworks to help understand and interpret predictions made by machine learning models, natively integrated with a number of Google Cloud products and services. 

Documentation on our API models, open models, and large language foundation models, available on our model card hub, which articulates each model's strengths and limitations.

Adaptable thresholds for blocked responses to help enterprises control content based on their own business needs and policies. For instance, safety settings can be configured based on both probability and severity scores (a configuration sketch follows this list). 

Tools to better understand and control AI models. For instance, using models similar to those in our safety filters, customers can use our text moderation service to scan an entire training corpus for terms that fall within predefined "harmful categories" and sensitive topics, supporting ongoing compliance (a moderation sketch follows this list). 

Model evaluation on Vertex AI, which includes metrics to understand model performance and evaluate potential bias using common data and model bias metrics. These tools can promote fairness by evaluating data and model outputs during training and over time, highlighting areas of concern and providing suggestions for remediation. 
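As referenced in the list above, the sketch below shows how adaptable safety thresholds might be configured with the Vertex AI Python SDK (google-cloud-aiplatform). The project ID, model name, and threshold choices are illustrative assumptions; the set of available categories and the severity-based blocking method can vary by model version.

```python
# Sketch: configuring per-category safety thresholds on Vertex AI.
# Assumes `pip install google-cloud-aiplatform` and application default
# credentials; the project ID and model name below are placeholders.
import vertexai
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
    SafetySetting,
)

vertexai.init(project="your-project-id", location="us-central1")

safety_settings = [
    # Block dangerous content aggressively (low probability and above).
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
    # Block harassment only at high confidence, judged by severity score
    # (severity-based blocking may not be available on every model).
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HARASSMENT,
        threshold=HarmBlockThreshold.BLOCK_ONLY_HIGH,
        method=SafetySetting.HarmBlockMethod.SEVERITY,
    ),
]

model = GenerativeModel("gemini-1.5-pro")  # placeholder model name
response = model.generate_content(
    "Summarize our incident-response policy.",
    safety_settings=safety_settings,
)
print(response.text)
```

Tuning thresholds per category lets an enterprise match blocking behavior to its own policies rather than relying on a single global setting.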
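Similarly, the text moderation capability mentioned above is exposed through the Cloud Natural Language API. This sketch, using the google-cloud-language client, flags records whose moderation categories exceed a confidence cutoff; the 0.5 cutoff is an assumption for illustration, not an official default.

```python
# Sketch: screening text against moderation categories such as
# "Derogatory" or "Drugs" with the Cloud Natural Language API.
# Assumes `pip install google-cloud-language` and default credentials.
from google.cloud import language_v2

client = language_v2.LanguageServiceClient()

def flag_sensitive(text: str, min_confidence: float = 0.5):
    """Return (category, confidence) pairs at or above the cutoff."""
    document = language_v2.Document(
        content=text,
        type_=language_v2.Document.Type.PLAIN_TEXT,
    )
    response = client.moderate_text(document=document)
    return [
        (category.name, category.confidence)
        for category in response.moderation_categories
        if category.confidence >= min_confidence
    ]

# Example: screen one training record before it enters the corpus.
for name, confidence in flag_sensitive("Example training record to screen."):
    print(f"{name}: {confidence:.2f}")
```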
