Security & Identity

Google Cloud's commitment to responsible AI is now ISO/IEC certified

December 18, 2024
Jeanette Manfra

Senior Director, Global Risk & Compliance

With the rapid advancement and adoption of AI, organizations face increasing pressure to ensure their AI systems are developed and used responsibly. This includes considerations around bias, fairness, transparency, privacy, and security. 

A comprehensive framework for managing the risks and opportunities associated with AI can help by offering a structured approach to building trust and mitigating potential harm. The ISO/IEC 42001:2023 standard provides a framework for addressing the unique challenges AI poses, and we're excited to announce that Google Cloud has achieved an accredited ISO/IEC 42001:2023 certification for our AI management system.

This certification helps demonstrate our commitment to developing and deploying AI responsibly. It underscores our dedication to building trust and transparency in the AI ecosystem, and provides our customers with further assurance that our AI services meet the industry standards for quality, safety, and ethical considerations. 

In a landscape increasingly shaped by the advent of AI regulations, such as the EU AI Act, this certification is a foundation upon which we continue to build and expand our responsible AI efforts. As AI continues to transform industries, it reinforces our position as a leader in providing responsible, compliant, and innovative AI solutions.

Our journey to certification

Achieving ISO/IEC 42001:2023 certification was a significant undertaking, reflecting our long-standing commitment to responsible AI as we continue to align our processes with industry standards. This independent validation reinforces our focus on AI risk management and continuous improvement across our AI lifecycle.

This certification offers our customers several key benefits:

  • Enhanced trust and transparency: Independent validation of our AI management system provides increased confidence in the responsible development and operation of our AI products and services.

  • Compliance support: This certification enables our customers to use services backed by a certified AI management system, which supports their own compliance efforts and demonstrates a commitment to using responsibly built AI technology.

  • Risk management: The certification demonstrates our dedication to managing the risks inherent in AI development and deployment, such as bias, fairness, security, and privacy.

  • Access to innovative and responsible AI: Customers can use our certified AI services to build and deploy their own AI solutions with confidence, knowing they are built on a foundation of responsible AI principles.

What's next

We are committed to maintaining high standards and continually improving our AI management system. We will continue to work closely with standards organizations, regulators, and our customers to shape the future of responsible AI.  

To help customers get started, last year we introduced the Secure AI Framework (SAIF). We also recently published the SAIF Risk Assessment tool, an interactive way for AI developers and organizations to take stock of their security posture, assess risks, and implement stronger security practices.

Of course, operationalizing an industry framework requires close partnership and collaboration, and above all, a forum to make that happen. This is why we introduced the Coalition for Secure AI (CoSAI), a forum of industry peers working to advance comprehensive security measures that address the risks that come with AI.

Google Cloud is committed to sharing our learnings, strategies, and guidance so we can collectively build and deliver responsible, secure, compliant, and trustworthy AI systems.
