Transform with Google Cloud

Enterprise readiness in the age of generative AI: What organizations need to get AI right

November 3, 2023
June Yang

VP, Cloud AI & Industry Solutions

The revolutionary nature of generative AI requires a platform that offers privacy, security, control, and compliance capabilities organizations can rely on.

We are at the dawn of the generative AI era, a technological revolution with the potential to redefine how we create, engage, and collaborate across our personal and professional lives. In short, gen AI opens once unimaginable opportunities by tapping into our collective knowledge, creativity, and insight, upending what we once thought possible. Yet, standing at this threshold of innovation, many organizations have questions about integrating and deploying these powerful tools.

Google Cloud is committed to helping our customers leverage the full potential of generative AI with privacy, security, and compliance capabilities. Our goal is to build trust by protecting systems, enabling transparency, and offering flexible, always-available infrastructure, all while grounding efforts in our AI principles.

This post details why customers are selecting Google Cloud as an enterprise-ready platform for building their gen AI projects, and how we build our AI and data expertise into leading products for small and large organizations alike.

Data governance, privacy, and indemnification with Google Cloud

In the world of gen AI, data governance and privacy are paramount. That's why, at Google Cloud, we've made it our mission to help keep you in control of your data and protect your organization’s information from unauthorized access.

First and foremost, your data is your data and is never used by Google without your permission. When we say “your data,” that includes any customer data you store on Google Cloud, the input prompt, the model output, and tuning data; all of these are part of your data and your IP. We never utilize your data for training our models without your permission or instruction. Rather, it is used by you and for the explicit purposes you dictate.

We employ specific techniques like training data deduplication and safety filters to help ensure responsible data use. Deduplication of our internal training data can eliminate redundancy, making the system more efficient. Meanwhile, safety filters help prevent models from reproducing training data or other problematic content.
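Conceptually, exact-duplicate removal can be as simple as hashing a normalized form of each training passage and keeping only the first occurrence. The short sketch below illustrates that idea in plain Python; the function names and normalization rule are ours for illustration, not Google’s internal pipeline:

```python
import hashlib

def normalize(text: str) -> str:
    # Collapse whitespace and lowercase so trivially different
    # copies of the same passage hash to the same value.
    return " ".join(text.lower().split())

def deduplicate(documents: list[str]) -> list[str]:
    seen: set[str] = set()
    unique: list[str] = []
    for doc in documents:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

corpus = [
    "Generative AI creates new content.",
    "Generative  AI creates new content.",  # duplicate up to whitespace
    "Safety filters screen model output.",
]
print(deduplicate(corpus))  # keeps only the first copy of each passage
```

Production-scale deduplication additionally handles near-duplicates (for example, via MinHash or suffix-array matching), but the efficiency benefit is the same: redundant passages are dropped before training.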


The output you generate with Google’s foundation models is also your data. To stand behind this commitment, we recently announced an expanded intellectual property indemnity for generative AI.

This two-pronged indemnity applies to customers who use our services responsibly and covers (1) Google’s use of training data to create the Google foundation models that power all our generative AI services, and (2) the generated output customers create with select services. The training data indemnity has always been in our terms, but we wanted to provide our customers with more clarity.

Our safeguards allow you to confidently embrace generative AI's promise without compromising data governance and privacy because we believe in solving real problems with clarity, simplicity, and authenticity.

Security and compliance support with Google Cloud

Vertex AI already provides robust data commitments to ensure your data and models are not used to train our foundation models or leaked to other customers. However, organizations still have two concerns: (1) protecting their data, and (2) reducing the risk that customizing and training models with their own data exposes sensitive elements such as personal information (PI) or personally identifiable information (PII). Often, this personal data is surrounded by context the model needs in order to function properly. To effectively segment, anonymize, and help protect data, we offer a comprehensive set of tools and services that we are continuously optimizing, including:

  • Our Sensitive Data Protection service, which provides sensitive data detection and transformation options such as masking or tokenization to add additional layers of data protection throughout a generative AI model’s lifecycle, from training to tuning to inference.
  • VPC Service Controls, which allows for secure deployment within defined data perimeters. With VPC SC, you can define perimeters to isolate resources, control and limit access, and reduce the risk of data exfiltration or leakage.
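To make the masking and tokenization transformations mentioned above concrete, here is a minimal, library-free sketch. It is not the Sensitive Data Protection API itself; the email pattern, helper names, and keyed-hash token scheme are illustrative assumptions:

```python
import hashlib
import re

# Illustrative detector for one kind of sensitive element: email addresses.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(text: str) -> str:
    # Masking: irreversibly replace each detected value with a fixed marker.
    return EMAIL.sub("[EMAIL]", text)

def tokenize(text: str, key: bytes) -> str:
    # Tokenization: replace each detected value with a keyed, deterministic
    # surrogate, so the same value always maps to the same token downstream
    # and records remain joinable without exposing the raw value.
    def to_token(m: re.Match) -> str:
        digest = hashlib.blake2b(m.group().encode(), key=key, digest_size=6)
        return f"TOK_{digest.hexdigest()}"
    return EMAIL.sub(to_token, text)

record = "Contact alice@example.com or bob@example.com for access."
print(mask(record))                          # fixed markers, no join possible
print(tokenize(record, key=b"demo-secret"))  # stable surrogates per value
```

Masking suits data the model never needs to distinguish, while tokenization preserves referential structure, which is why both options appear across a model’s lifecycle from training to tuning to inference.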

We continue investing in and expanding our digital sovereignty offerings like Assured Workloads and administrative Access Transparency and Access Approval. We also continue to grow our Regulated and Sovereignty solutions partner initiative to bring innovative third-party solutions to customers’ regulated cloud environments.

In addition to providing a high level of security and control for customers, we are committed to customers on their compliance journey. We engage in comprehensive GDPR and HIPAA privacy support efforts, including our transparency commitments for customer data usage and the support for our customer’s Data Protection Impact Assessments (DPIAs). Our teams closely monitor and analyze new and updated regulations, and we regularly engage regulators through roundtables and other forums and respond to regulators’ information requests.

Infrastructure reliability and sustainability with Google Cloud

Building and productionizing large generative AI models requires enormous amounts of dedicated computation, data processing, and workload parallelization, which is why enterprises turn to infrastructure purpose-built for AI workloads. To match performance, cost, and reliability to the needs of your AI project, we offer a wide range of options from Google Cloud and our partners, and this flexibility is why over 70% of AI unicorns rely on Google Cloud for their AI infrastructure.

To begin, Tensor Processing Units (TPUs) are designed to meet the intensive workloads required for training and serving large language models like PaLM 2. Because different generative AI workloads have different requirements, Google Cloud and our partner ecosystem provide a wide range of compute options, including NVIDIA’s latest accelerators optimized for large model training and inference. Additionally, our Deep Learning VMs come pre-configured with the most popular AI frameworks, making it fast and easy to start deep learning projects without worrying about software compatibility.

With these capabilities, organizations can focus on their customers instead of worrying about their infrastructure. When you work with Google Cloud, we handle the heavy lifting so you can deliver uninterrupted service to your users. Our resilient, scalable, and energy-efficient global network provides the high-availability performance you need, at a fraction of the carbon cost.

Our customers are continuously looking for ways to exceed their sustainability goals, and our commitment to sustainability and achieving carbon neutrality means you can get rock-solid reliability with a light environmental footprint. Google Cloud’s infrastructure isn't just about providing cutting-edge technology; it's about doing so responsibly with sustainability at the core of every layer of our technology and operations.

Responsible AI: a Google commitment

At Google, our AI Principles dictate that we maximize the positive benefits of AI while proactively mitigating risks, and we want to make it easy for organizations to do the same. Google Cloud takes a multi-pronged approach to ensuring the responsible development and use of AI. We build our AI products with responsible practices across the ML lifecycle, from development to deployment, so you know our products are designed for safe use. We responsibly curate datasets for large models to address downstream concerns like hate speech, racial bias, and toxicity.

During training, we stress-test models on scenarios that have the potential to produce harmful results and adjust the model to mitigate these risks. In deployment, we give customers the tools they need to practice AI responsibly, including working with customers to create and scale responsible AI frameworks and assessments. For example, we provide safety filters for customers to address concerns, including bias, toxicity, and other harmful content.
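At its core, an output-side safety filter is a classifier plus a threshold: score each candidate response for harm and block anything that scores too high. The sketch below shows only that shape; `toy_harm_score` is a stand-in for a real trained safety model, and all names here are our own illustration, not a Google Cloud API:

```python
def filter_outputs(candidates, harm_score, threshold=0.5):
    """Block any candidate whose harm score meets or exceeds the threshold."""
    return [c for c in candidates if harm_score(c) < threshold]

# Stand-in classifier: a production system would use a trained safety model
# that returns calibrated scores per category (bias, toxicity, and so on).
BLOCKLIST = {"toxic", "hateful"}

def toy_harm_score(text: str) -> float:
    words = set(text.lower().split())
    return 1.0 if words & BLOCKLIST else 0.0

outputs = ["a helpful answer", "a toxic rant"]
print(filter_outputs(outputs, toy_harm_score))  # → ['a helpful answer']
```

Exposing the threshold as a parameter mirrors how configurable safety filters let each customer choose how conservative blocking should be for their use case.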

At Google Cloud, we keep things simple, transparent, and authentic, focusing on solving real-world problems. With a comprehensive approach to data governance, privacy, security, compliance, infrastructure reliability, and responsible AI practices, our products are built for enterprise use and scale. We’re committed to this approach to help ensure our customers leverage the full potential of generative AI in a secure, sustainable, and responsible manner.
