
Google Cloud’s approach to trust and transparency in AI

November 9, 2023
Marina Kaganovich

Executive Trust Lead, Office of the CISO, Google Cloud

Anton Chuvakin

Security Advisor, Office of the CISO, Google Cloud

Generative artificial intelligence (gen AI) has emerged as a disruptive technology with tremendous potential to transform the way we do business. It has the power to unlock opportunities for communities, companies, and countries around the world, bringing meaningful change that could improve billions of lives.

The challenge is to realize that potential in a way that proportionately mitigates risks and promotes reliable, robust, and trustworthy gen AI applications, while still enabling innovation and the promise of AI for societal benefit.

At Google Cloud, we believe that the only way to be truly bold in the long term is to be responsible from the start.

“We are convinced that the AI-enabled innovations we are focused on developing and delivering boldly and responsibly are useful, compelling, and have the potential to assist and improve lives of people everywhere — this is what compels us,” said James Manyika, Google’s senior vice president for research, technology and society.

We put that philosophy to work with a holistic approach to building enterprise-grade AI responsibly, taking into consideration a wide range of disciplines including data governance, privacy, security, and compliance. We detail how we’re using this foundation in our AI development in a new paper, “Google Cloud’s Approach to Trust in AI.”

As we discuss in the paper, our approach is fundamentally informed by our AI principles. We were one of the first to introduce and advance responsible AI practices, and these principles serve as an ongoing commitment to our customers worldwide who rely on our products to build and grow their businesses safely.

Our AI products are built atop a scalable technical infrastructure underpinned by a secure-by-design foundation and supported by robust logical, operational, and physical controls to achieve defense in depth, at scale, and by default. We’ve taken a three-pronged approach to secure, scale, and evolve the security ecosystem: helping organizations deploy AI systems securely on Google Cloud, continuing to launch cutting-edge, AI-powered products and services that help organizations achieve better security outcomes at scale, and continuously evolving our defenses to stay ahead of threats.

In addition to our focus on security, our approach incorporates privacy design principles, architectures built with privacy safeguards, and appropriate transparency and control over the use of data. When bringing new offerings to market, we apply these principles throughout the product lifecycle, with comprehensive safeguards such as data encryption and the ability to turn relevant features on or off.
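To make these safeguards concrete, here is a minimal sketch, assuming the Vertex AI Python SDK, of how a customer might set a customer-managed encryption key (CMEK) as the default for resources created through the client; the project, region, and key names are illustrative placeholders, not values from this post.

```python
# Minimal sketch (assumed setup): configuring a customer-managed encryption
# key (CMEK) as the default for Vertex AI resources created via this client.
# The project ID, region, and Cloud KMS key path below are placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="example-project",          # placeholder project ID
    location="us-central1",             # placeholder region
    # Cloud KMS key used to encrypt Vertex AI resources created afterwards.
    encryption_spec_key_name=(
        "projects/example-project/locations/us-central1/"
        "keyRings/example-keyring/cryptoKeys/example-key"
    ),
)
```

Controls like this, where encryption keys and optional features stay under the customer’s control, are one example of the transparency and control described above.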

One question frequently posed to us is whether our foundation models are trained on customer data and, by extension, whether that data could be exposed to Google, to other Google customers, or to the public. The answer is that, by default, Google Cloud does not use Customer Data to train its foundation models. In the paper, we outline key aspects of our model tuning, deployment, and data governance practices in our Vertex AI offerings.
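As an illustration of that default, the sketch below calls a Vertex AI foundation model from a customer’s own project using the Python SDK; the model name, prompt, and parameters are assumptions chosen for the example, and the no-training-on-Customer-Data default described above is platform behavior, not something configured in this code.

```python
# Minimal sketch (assumed setup): calling a Vertex AI foundation model from a
# customer's own project. By default, Google Cloud does not use the prompt or
# response below to train its foundation models. Names and parameters are
# illustrative placeholders.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="example-project", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison")
response = model.predict(
    "List three controls we should review before launching a gen AI feature.",
    temperature=0.2,
    max_output_tokens=256,
)
print(response.text)
```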

Lastly, AI regulation is a dynamic, rapidly-evolving space. We believe that AI is too important not to regulate, and too important not to regulate well, so we advocate for risk-based frameworks that reflect the complexity of the AI ecosystem by building on existing general concepts. Our teams closely monitor and analyze new and updated regulations, and we regularly engage with regulators. Google Cloud also makes compliance documentation, certifications, control attestations, and independent audit reports readily available to satisfy regional and industry-specific requirements. These resources support customers both in validating the compliance of Google Cloud’s platform and in assessing Vertex AI’s compliance and security controls.

For further reading, refer to the Secure AI Framework (SAIF), a conceptual framework for securing AI systems, and our overview of what changes and what stays the same when it comes to securing AI.
