Navigating the EU AI Act: Google Cloud's proactive approach
Jeanette Manfra
Senior Director, Global Risk & Compliance, Google Cloud
Governance of AI has reached a critical milestone in the European Union: The AI Act has been published in the EU’s Official Journal and will enter into force on August 1, 2024. The AI Act is a legal framework that establishes obligations for AI systems based on their potential risks and levels of impact. It will phase in over the next 36 months and will include bans on certain practices, rules for general-purpose AI, and obligations for high-risk systems. Critically, the Act’s yet-to-be-developed AI Code of Practice will establish compliance requirements for a subset of general-purpose AI models.
At Google, we believe in the opportunity AI can bring to society and we recognize the importance of mitigating risks. Today we’re summarizing:
- How we currently support our AI customers
- How we’re preparing for compliance with this new law
- What customers can do to prepare
How Google Cloud currently supports our AI customers
We provide data protection.
We have committed to building privacy protections into our Cloud architecture. We provide meaningful transparency into the use of data, including clear disclosures and commitments regarding access to a customer’s data, and we maintain a long-standing commitment to GDPR compliance. We have carried this commitment into generative AI: at Google Cloud, we do not use data provided to us by our customers to train our models without the customer’s permission.
In our standard generative AI implementation for enterprise customers, the organization’s data at rest remains in the customer’s cloud environment.
Importantly, organizations control how their data is accessed, used, and processed, as articulated in the Google Cloud Platform Terms of Service and Cloud Data Processing Addendum. We also give customers visibility into who can access their data and why.
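As one illustration of that visibility, Access Transparency produces an audit log entry whenever Google personnel access customer content. The sketch below is a minimal example, assuming Access Transparency is enabled and using a placeholder project ID, that reads those entries with the Cloud Logging client library.

```python
# Minimal sketch: read recent Access Transparency log entries, which record
# when and why Google personnel accessed customer content.
# Assumes Access Transparency is enabled; "my-project-id" is a placeholder.
from google.cloud import logging

client = logging.Client(project="my-project-id")

# Access Transparency entries are written to a dedicated audit log.
log_filter = (
    'logName="projects/my-project-id/logs/'
    'cloudaudit.googleapis.com%2Faccess_transparency"'
)

for entry in client.list_entries(filter_=log_filter, page_size=10):
    # Each payload includes the reason for access (for example, a reference
    # to a customer-initiated support case) and the resource involved.
    print(entry.timestamp, entry.payload)
```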
Additionally, customers have control when tuning foundation models. We allow customers to tune certain foundation models for specific tasks, without having to rebuild the entire foundation model. Each tuning job results in the creation of additional “adapter weights,” which are learned parameters. The adapter weights are specific to the customer, and only available to the customer who tuned those weights.
During inference, the foundation model receives the customer’s adapter weights, processes the request, and returns the results. Customers can manage the encryption of adapter weights stored during training by using customer-managed encryption keys (CMEK), and they can choose to delete adapter weights at any time.
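To make this concrete, here is a minimal sketch of that flow using the Vertex AI SDK’s supervised tuning interface. The project, location, dataset path, base model, and key name are placeholders, and exact call signatures may vary by SDK version.

```python
# Minimal sketch: run a supervised tuning job whose stored adapter weights
# are encrypted with a customer-managed key (CMEK), then delete the tuned
# artifact when it is no longer needed. All resource names are placeholders.
import time

import vertexai
from google.cloud import aiplatform
from vertexai.tuning import sft

vertexai.init(
    project="my-project-id",
    location="europe-west4",
    # CMEK: artifacts created in this context, including adapter weights,
    # are encrypted with the named customer-managed key.
    encryption_spec_key_name=(
        "projects/my-project-id/locations/europe-west4/"
        "keyRings/my-ring/cryptoKeys/my-key"
    ),
)

# Supervised fine-tuning learns customer-specific adapter weights;
# the base foundation model itself is not rebuilt.
tuning_job = sft.train(
    source_model="gemini-1.0-pro-002",                 # example base model
    train_dataset="gs://my-bucket/tuning_data.jsonl",  # placeholder dataset
)

# Wait for the tuning job to complete (simplified polling loop).
while not tuning_job.has_ended:
    time.sleep(60)
    tuning_job.refresh()

# Adapter weights can be deleted at any time via the tuned model resource.
aiplatform.Model(tuning_job.tuned_model_name).delete()
```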
We invest in comprehensive risk management.
Rigorous evaluations are crucial for building successful AI that adheres to security, privacy, and safety standards. Google Cloud’s commitment to risk management was demonstrated in Coalfire’s recent AI readiness assessment of our work.
Internally, we continue to invest in comprehensive AI risk assessments, and we regularly refine our risk processes and taxonomy, informed by ongoing research on emerging and evolving AI risks, user feedback, and red team test results. Because the technical details and context of AI use are unique to each product, each product requires its own evaluation. A vital component of these analyses is examining the roles Google Cloud and our customers play in ensuring secure and safe deployment.
Google leads in AI safety and responsibility.
We've made a resolute commitment to lead the charge in responsible AI development. We continue to oversee our governance processes according to our long-standing AI Principles. We have identified specific areas we will not pursue, such as technologies that cause harm, contravene international law and human rights, or enable surveillance that violates accepted norms.
We’ve also infused these values into the Google Cloud Platform Acceptable Use Policy and the Generative AI Prohibited Use Policy, so that they are transparent and communicated to our customers.
We support transparency.
We firmly believe that building trust is essential for the long-term success of AI, and that starts with a dedication to transparency. Google pioneered the concept of model cards, which offer a shared understanding of model capabilities and limitations, and can help customers and researchers understand how our generative models are trained and tested.
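To illustrate the idea, the sketch below shows a minimal, hypothetical model-card structure in the spirit of the original “Model Cards for Model Reporting” proposal; the field names and example values are illustrative assumptions, not Google’s published schema.

```python
# Minimal sketch: a hypothetical model-card structure capturing the kinds of
# information a model card typically documents. Fields and values are
# illustrative only, not Google's published model card schema.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_uses: list[str]
    limitations: list[str]                 # known failure modes and caveats
    training_data_summary: str             # high-level description of data
    evaluation_results: dict[str, float] = field(default_factory=dict)

card = ModelCard(
    model_name="example-text-model",       # hypothetical model
    version="1.0",
    intended_uses=["summarization", "drafting assistance"],
    limitations=["may produce inaccurate statements", "English-centric"],
    training_data_summary="Publicly available text; details in the full card.",
    evaluation_results={"toxicity_rate": 0.01},
)
```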
In addition to these model-specific details, our paper on the Approach to Trust in Artificial Intelligence outlines how we identify, assess, and mitigate potential harmful impacts as part of the end-to-end process. We remain committed to sharing our latest research, covering topics such as responsible AI, security, privacy, and abuse prevention.
We support our customers on issues of security, copyright, and portability.
We believe in a shared fate with our customers and recognize that responsible AI requires an ecosystem approach. We offer enterprises protection through generative AI copyright indemnification, and we don’t charge egress fees, so customers are free to pick the most responsible provider without fear of being locked in to particular models. We have also developed Responsible AI tooling, enablement, and support so customers can tailor their own risk and safety posture for each use case and each deployment.
Our Secure AI Framework (SAIF) can help Google Cloud customers evaluate the relevance of traditional security risks and controls, and how those controls may need to be adapted or expanded to cover AI systems. We also support customers who are looking to establish their AI strategy by sharing guidance and best practices on topics such as AI Governance and AI Security.
How Google Cloud is preparing for AI Act compliance
Internally, our AI Act readiness program is focused on ensuring our products and services align with the Act's requirements while continuing to deliver the innovative solutions our customers expect. This is a company-wide initiative involving collaboration across many teams, including:
- Legal and Policy: thoroughly analyzing the AI Act's requirements and working to integrate them into our existing policies, practices, and contracts.
- Risk and Compliance: assessing and mitigating potential risks associated with AI Act compliance, and ensuring robust processes are in place.
- Product and Engineering: ensuring our AI systems continue to be designed and built with the AI Act's principles of transparency, accountability, and fairness in mind, and continually improving the user experience while incorporating the AI Act’s requirements for testing, monitoring, and documentation.
- Customer Engagement: working closely with our customers to understand their needs and concerns regarding the AI Act, and providing guidance and support as needed.
What Google Cloud customers can do to prepare for the AI Act
The AI Act is a complex piece of legislation, and details on how it will be implemented are still being discussed within the European Commission and the AI Office. As the AI Office moves forward and implementation guidance continues to evolve, it is important to familiarize yourself with the requirements of the AI Act and how they may apply to your current or future use of AI.
We have several recommendations for Google Cloud customers interested in preparing for the AI Act:
- Follow the development of Codes of Practice and AI Office forums: Over the coming months, there will be continued discussions to determine the basis of compliance for general-purpose AI (GPAI) models. Understand how your organization is using GPAI and where your compliance obligations may lie.
- Engage with European regulators and your industry associations: The AI Act can help Europe boost its competitiveness, productivity, and innovation opportunities, but only if it is implemented in accordance with international best practices and with real-world use cases in mind. Connect with your industry associations and European regulators to share how your company is using AI, and the value and benefits you envision AI bringing to your business in the future.
- Review your AI governance practices: The AI Act sets out a number of requirements for AI oversight. You should review your governance practices to ensure that they meet the Act’s requirements. It may be beneficial to assess the level of risk across your AI systems and your data governance program, as this will support explainability and transparency efforts; a simple illustrative sketch of such an assessment follows this list.
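To make that review concrete, here is a minimal, entirely hypothetical sketch of an AI-system inventory keyed to the Act’s risk tiers; the schema, tier mapping, and example entry are illustrative assumptions, not a compliance tool.

```python
# Minimal sketch: inventory AI systems against the AI Act's risk tiers and
# track the governance evidence behind each classification. The schema and
# example entry are hypothetical, for illustration only.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # Mirrors the Act's structure: prohibited practices, high-risk systems,
    # limited-risk systems (transparency obligations), and minimal risk.
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    use_case: str
    tier: RiskTier
    uses_gpai_model: bool                  # flags Code of Practice relevance
    evidence: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="support-ticket-triage",
        use_case="routing customer support tickets",
        tier=RiskTier.MINIMAL,
        uses_gpai_model=True,
        evidence=["model card", "data governance review"],
    ),
]

# Surface the systems that need the deepest review under the Act.
needs_review = [r for r in inventory
                if r.tier in (RiskTier.PROHIBITED, RiskTier.HIGH)]
for record in needs_review:
    print(f"{record.name}: tier={record.tier.value}, evidence={record.evidence}")
```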
While the EU AI Act provides a framework for AI regulation, there are still areas that require ongoing attention and clarification. We are committed to open dialogue and collaboration to help address concerns and ensure that the benefits of AI are accessible to all — while mitigating potential risks.
As we continue to prepare, we remain committed to providing our enterprise customers with cutting-edge AI solutions that are both innovative and compliant. We have the capabilities and experience, and we will continue to partner with policymakers and customers as new regulations, frameworks, and standards are developed.