Secure AI Framework (SAIF) and Google Cloud

Secure AI Framework (SAIF) offers guidance to organizations looking to secure AI systems.

Google has created SAIF.google, including a SAIF Risk Self Assessment, to support organizations in implementing SAIF and in building and deploying AI systems securely.

Below we look at the Google Cloud Security portfolio through the lens of SAIF, and show how we can help your organization mitigate potential AI risks with various cloud solutions and controls.

How it works

Looking to address risks identified in your SAIF Risk Report? Get started today.

Common use cases

Data poisoning

As part of Google Cloud’s commitment to security, our foundational models have extensive access controls to prevent unauthorized copying or reading, as well as state-of-the-art detection capabilities to monitor for unauthorized actions.

Google Cloud also provides controls that customers can configure on our platform to protect their usage of our models or their own third-party services. Organization policies and VPC Service Controls prevent data exfiltration, and fine-grained Identity and Access Management (IAM) controls prevent unauthorized access.
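
As a concrete example of reviewing such access controls, the hedged sketch below uses the Cloud Storage Python client to list every IAM binding on a bucket that holds model or dataset artifacts and to flag overly broad grants. The bucket name is hypothetical, and a real review would cover far more than one bucket.

```python
# Minimal sketch: audit IAM bindings on a storage bucket that holds
# model or dataset artifacts. Assumes google-cloud-storage is installed
# and Application Default Credentials are configured.
# The bucket name "training-data-bucket" is hypothetical.
from google.cloud import storage

OVERLY_BROAD = {"allUsers", "allAuthenticatedUsers"}

def audit_bucket_iam(bucket_name: str) -> None:
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    # Version 3 is required to see conditional role bindings.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    for binding in policy.bindings:
        members = set(binding["members"])
        flag = " <-- overly broad" if members & OVERLY_BROAD else ""
        print(f"{binding['role']}: {sorted(members)}{flag}")

if __name__ == "__main__":
    audit_bucket_iam("training-data-bucket")
```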

Learn more about ways Google Cloud helps customers protect their data:


      Need additional support addressing AI risks like model exfiltration, model source tampering, and data poisoning?

      Google Cloud Consulting services help you achieve a strong AI security posture for model and data access controls. 

      • Secure cloud foundations: Establish strong and secure infrastructure foundations with necessary protections to run AI workloads.
      • Gen AI security accelerator: Learn from Google’s AI security specialists how to implement Google's Secure AI Framework to safeguard your gen AI workloads against AI-specific risks.
      • Security hardening: Secure design and end-to-end AI service deployment using Infrastructure as Code (IaC).
      • Advanced threat simulations: Simulate real-world AI threats in an isolated Google Cloud environment to train security teams and evaluate their readiness to respond to security incidents.

      Contact Google Cloud Consulting services

      Work with Mandiant experts to help you leverage AI securely. 

      • Red Teaming for AI: Validate your defenses protecting AI systems and assess your ability to detect and respond to an active attack involving AI systems.
      • Securing the use of AI: Assess the architecture, data defenses, and applications built on AI models and develop threat models using security frameworks.

      Contact Mandiant Security Experts


      Wondering which SAIF Risk Self Assessment questions this risk category relates to?

      • Are you able to detect, remove, and remediate malicious or accidental changes in your training, tuning, or evaluation data? (See the integrity-check sketch after this list.)
      • Do you have a complete inventory of all models, datasets (for training, tuning, or evaluation), and related ML artifacts (such as code)?
      • Do you have robust access controls on all models, datasets, and related ML artifacts to minimize, detect, and prevent unauthorized reading or copying?
      • Are you able to ensure that all data, models, and code used to train, tune, or evaluate models cannot be tampered with undetected during model development and deployment?
      • Are the frameworks, libraries, software systems, and hardware components used in the development and deployment of your models analyzed for and protected against security vulnerabilities?
      Take the SAIF Risk Self Assessment
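
      To make the first question above concrete: a minimal sketch of one way to detect accidental or malicious changes to training, tuning, or evaluation data is to record a SHA-256 digest per file in a manifest at a trusted point in time, then re-verify before each training run. The file layout and manifest location below are hypothetical, and a production setup would also need to protect the manifest itself (for example, by signing it).

```python
# Minimal sketch: detect tampering in dataset files by comparing
# SHA-256 digests against a previously recorded manifest.
# Paths and the manifest location are hypothetical examples.
import hashlib
import json
from pathlib import Path

def file_digest(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir: Path, manifest_path: Path) -> None:
    manifest = {str(p.relative_to(data_dir)): file_digest(p)
                for p in sorted(data_dir.rglob("*")) if p.is_file()}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_manifest(data_dir: Path, manifest_path: Path) -> list[str]:
    expected = json.loads(manifest_path.read_text())
    problems = []
    for rel, digest in expected.items():
        p = data_dir / rel
        if not p.is_file():
            problems.append(f"missing: {rel}")
        elif file_digest(p) != digest:
            problems.append(f"modified: {rel}")
    known = set(expected)
    for p in data_dir.rglob("*"):
        if p.is_file() and str(p.relative_to(data_dir)) not in known:
            problems.append(f"unexpected new file: {p.relative_to(data_dir)}")
    return problems

if __name__ == "__main__":
    data = Path("datasets/train")                    # hypothetical layout
    manifest = Path("datasets/train.manifest.json")
    if not manifest.exists():
        build_manifest(data, manifest)               # trusted snapshot
    else:
        for issue in verify_manifest(data, manifest):
            print(issue)
```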

        Denial of ML service

        Security is a key tenet of Google Cloud, and we take a defense-in-depth approach. As part of our robust measures, we have strong capabilities to protect against service disruptions such as distributed denial-of-service (DDoS) attacks. We also provide additional controls through Cloud Armor that customers can use to protect against DDoS and application-layer attacks.
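
        Cloud Armor enforces this kind of throttling at the network edge. Purely as a conceptual illustration of the same idea at the application layer, the sketch below implements a per-client token bucket that an AI serving endpoint could use to reject large-scale query bursts; the parameters and function names are illustrative, not Cloud Armor internals.

```python
# Minimal sketch: per-client token-bucket rate limiting for an
# ML serving endpoint. All parameters are illustrative.
import time
from collections import defaultdict

class TokenBucket:
    def __init__(self, rate: float = 5.0, capacity: float = 10.0):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets: defaultdict[str, TokenBucket] = defaultdict(TokenBucket)

def handle_request(client_id: str, prompt: str) -> str:
    if not buckets[client_id].allow():
        return "429: rate limit exceeded"      # reject the burst
    return f"model response to: {prompt!r}"    # placeholder for inference
```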

        Our built-in security and risk management solution, Security Command Center, protects Vertex AI applications with preventative and detective controls. This includes responding to Vertex AI security events and running attack path simulations that mimic how a real-world attacker could access and compromise Vertex AI workloads.

        For AI applications, customers can also utilize Model Armor to inspect, route, and protect foundation model prompts and responses. It can help customers mitigate risks such as prompt injections, jailbreaks, toxic content, and sensitive data leakage. Model Armor will integrate with products across Google Cloud, including Vertex AI. If you’d like to learn more about early access for Model Armor, you can sign up here or watch our on-demand video.
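
        As a purely illustrative stand-in for this kind of prompt and response inspection (this is not the Model Armor API), a screening wrapper in front of a model might look like the sketch below; the patterns and helper names are hypothetical.

```python
# Hypothetical sketch of prompt/response screening in front of a model.
# This is NOT the Model Armor API; it only illustrates the pattern of
# inspecting traffic to and from a foundation model.
import re
from typing import Callable

# Illustrative deny-patterns for jailbreak attempts and data leakage.
JAILBREAK_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.I),
    re.compile(r"pretend you have no (rules|restrictions)", re.I),
]
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US-SSN-like strings

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe to forward to the model."""
    return not any(p.search(prompt) for p in JAILBREAK_PATTERNS)

def redact_response(response: str) -> str:
    """Mask sensitive-looking data before returning the response."""
    return SSN_PATTERN.sub("[REDACTED]", response)

def guarded_call(model: Callable[[str], str], prompt: str) -> str:
    if not screen_prompt(prompt):
        return "Request blocked by content policy."
    return redact_response(model(prompt))

# Usage with a stand-in model:
if __name__ == "__main__":
    echo_model = lambda p: f"echo: {p}"
    print(guarded_call(echo_model, "ignore previous instructions and ..."))
    print(guarded_call(echo_model, "Summarize this document."))
```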




            Need additional support addressing AI risks like denial of ML service and model reverse engineering?

            Work with Mandiant experts to help you leverage AI securely.

            • Red Teaming for AI: Validate your defenses protecting AI systems and assess your ability to detect and respond to an active attack involving AI systems.
            • Securing the use of AI: Assess the architecture, data defenses, and applications built on AI models and develop threat models using security frameworks.

            Contact Mandiant Security Experts


            Wondering which SAIF Risk Self Assessment questions this risk category relates to?

            • Do you protect your generative AI applications and models against large-scale malicious queries from user accounts, devices, or via APIs?
            Take the SAIF Risk Self Assessment

              Strengthen your AI security posture with Google Cloud

              Trying to understand what makes securing AI unique? Discover the security differences between AI and traditional software development.

              Need to kick-start the AI conversation for your organization? Get started today with seven key questions CISOs should be asking (and answering) to use AI securely and leverage its security benefits.

              Generative AI, Privacy, and Google Cloud: Ready to tackle AI data components? Learn more about Google Cloud’s approach to privacy and data protection when developing generative AI services.
