
The Prompt: Bringing risk management and data governance to your gen AI models

August 11, 2023
Philip Moyer

Global VP, AI & Business Solutions at Google Cloud

Business leaders are buzzing about generative AI. To help you keep up with this fast-moving, transformative topic, each week in “The Prompt,” we’ll bring you observations from our work with customers and partners, as well as the newest AI happenings at Google. In this edition, Phil Moyer, global vice-president for AI & Business Solutions at Google Cloud, shares his observations on managing risk when adopting and implementing generative AI technologies.

Data governance is crucial in the age of generative AI. In recent months, we’ve seen some well-publicized missteps involving fabricated content and mishandled proprietary materials, even as business leaders and public officials work to ensure the valuable data they feed into their models remains their own.

Striking the right balance grows more urgent with each passing day as gen AI adoption continues to grow, along with the new and novel applications organizations keep discovering. It’s just as important to be able to trust and rely on those data assets, though, as it is to secure them.

Today’s leaders are eager to adopt generative AI technologies and tools. Yet once they’ve decided what to do with it, the next question remains: how do you ensure risk management and governance with your AI models?

In an earlier edition of The Prompt, I discussed why generative AI needs to be manageable and include safeguards. Investing in generative AI offers incredible potential for creating value, but it also comes with a unique set of challenges. In particular, using generative AI in a business setting can pose risks around accuracy, privacy and security, regulatory compliance, and intellectual property infringement.

Innovation shouldn’t come at the expense of trust, privacy, or security, and vice versa. Even as businesses look to harness the power of generative AI models, they need to do so safely and responsibly.

Here we’ll explore the importance of each of these priorities and how they are reflected in Google Cloud’s strategy.

Safe and trustworthy generative AI models

While the list of potential generative AI use cases continues to grow, many leaders remain concerned about the reliability and accuracy of generative AI foundation models. They want to get going fast, but they also want to make sure they’re protecting — not endangering — their most critical assets. 

At Google Cloud, we believe that adopting generative AI should be as safe as it is easy. We operate in a shared-fate model, taking an active stake in helping business leaders manage the risks they face. A big part of that is ensuring that you can fully trust and understand the generative AI tools and models you use, whether you’re building your own custom models or leveraging integrated, out-of-the-box solutions and capabilities.


Leading companies build with generative AI on Google Cloud.

While our secure-by-default data protections and many of our security controls extend to generative AI support on Vertex AI and Gen App Builder, we’ve taken additional steps to ensure that organizations can deploy their AI models with confidence.

For example, we have built a comprehensive data governance program for Cloud AI Large Models and Generative AI. The program is aligned with the AI Risk Management Framework (AI RMF) from the National Institute of Standards and Technology (NIST) and underpinned by Google’s industry-leading research and growing library of resources, tools, and recommended best practices. 

As part of this program, we’ve developed several mitigations to support the needs of enterprise generative AI development, including technical guardrails for evaluating and monitoring models against specific generative AI risks, policies around customer data and sensitive use cases, and clear documentation and implementation guides. Additionally, customer training data and inference data are not used by Google to train or enhance our foundation models, nor are they used to benefit other customers.
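
To make the guardrail idea concrete, here’s a minimal, illustrative sketch (not a description of Google’s internal tooling) of an application-level check built on the safety signals the Vertex AI SDK returns with text generations. The project, model version, prompt, and 0.5 threshold are all assumptions for the example.

import vertexai
from vertexai.language_models import TextGenerationModel

# Assumed setup: substitute your own project and region.
vertexai.init(project="your-project", location="us-central1")

# text-bison@001 is an assumed model version for this sketch.
model = TextGenerationModel.from_pretrained("text-bison@001")

def generate_with_guardrail(prompt: str, threshold: float = 0.5) -> str:
    """Return model output only if it clears the returned safety signals."""
    response = model.predict(prompt, temperature=0.2)
    # The service can block a response outright for policy violations.
    if response.is_blocked:
        return "[response withheld: blocked by service safety filters]"
    # safety_attributes maps category names to scores between 0 and 1;
    # the 0.5 threshold here is an assumption, not a recommended value.
    flagged = sorted(
        category
        for category, score in (response.safety_attributes or {}).items()
        if score >= threshold
    )
    if flagged:
        return f"[response withheld: flagged for {', '.join(flagged)}]"
    return response.text

print(generate_with_guardrail("Summarize our data retention policy."))

A check like this complements, rather than replaces, the service-side filters and policies described above.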

Prioritizing responsible AI 

New tools and systems are helping to put generative AI into the hands of many different users with varying levels of expertise, bringing new opportunities to increase productivity and creativity. Still, it’s important to keep in mind that if not designed and deployed correctly, generative AI can result in unintended consequences and be harmful to organizations, individuals, and society as a whole. 

For years, Google has talked about being an AI-first company, and throughout that time we have firmly held that all AI, including generative AI, must be developed responsibly (it’s one of the reasons we were a founding sponsor of ISO’s work to develop standards for AI management). It’s imperative that organizations work with clear, ethical guidelines for how to develop and use generative AI in order to reduce risks while maximizing its social benefits.

Beyond our core AI principles, we have built a scalable, repeatable ethics review process. We also share what we’re learning through our Responsible AI practices, fairness best practices, and other tools and education materials that help teams inspect, understand, and effectively apply their AI models. Last year, we also expanded Explainable AI on Vertex AI to make it easier to know how models make decisions, and we added new features to further support data and model governance, such as iterative model evaluations to assess performance and quality audits to ensure only the best models get deployed.
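
As a hedged illustration of what inspecting a model’s decisions can look like in practice, the sketch below requests feature attributions from a deployed Vertex AI endpoint using the google-cloud-aiplatform SDK. The project, region, endpoint ID, and instance fields are placeholders, and the endpoint is assumed to have been deployed with an explanation spec.

from google.cloud import aiplatform

# Assumed setup: project, region, and endpoint ID are placeholders, and
# the endpoint is assumed to have been deployed with an explanation spec.
aiplatform.init(project="your-project", location="us-central1")
endpoint = aiplatform.Endpoint("1234567890")

# A hypothetical tabular instance; field names depend on your own model.
instance = {"tenure_months": 18, "plan_type": "premium"}

response = endpoint.explain(instances=[instance])

print("Prediction:", response.predictions[0])
for explanation in response.explanations:
    for attribution in explanation.attributions:
        # feature_attributions holds each input feature's contribution
        # toward the prediction for this instance.
        print(attribution.feature_attributions)

Surfacing per-feature contributions this way is one practical route to the kind of model understanding described above.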

Throughout our work, we have discovered that there are clear ways to promote safer and more socially responsible practices to address generative AI risks. Integrating ethical considerations at the earliest stages of generative AI design and development helps organizations identify and proactively mitigate risks before they have a chance to do real harm.

