Secure AI Framework (SAIF) and Google Cloud
Secure AI Framework (SAIF) offers guidance to organizations looking to secure AI systems.
Google has created SAIF.google, including a SAIF Risk Self Assessment, to support the implementation of SAIF in organizations and help them build and deploy AI systems securely.
Below we look at the Google Cloud Security portfolio through the lens of SAIF, and show how we can help your organization mitigate potential AI risks with various cloud solutions and controls.
How It Works
Looking to address risks identified in your SAIF Risk Report? Get started today.
Common Uses
As part of Google Cloud’s commitment to security, our foundational models have extensive access controls to prevent unauthorized copying or reading, as well as state-of-the-art detection capabilities to monitor for unauthorized actions.
Google Cloud also provides controls that our customers can configure on our platform to protect their usage of our models or their own third-party services. Organizational policies and VPC Service Controls prevent data exfiltration, while fine-grained identity and access management (IAM) controls prevent unauthorized access.
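As a rough illustration of those controls, the sketch below grants a service account only the Vertex AI User role and places the Vertex AI API inside a VPC Service Controls perimeter. The project ID, service account, perimeter name, and access policy ID are all hypothetical placeholders; consult the VPC Service Controls and IAM documentation for your own setup.

```shell
# Hypothetical project and service account names, for illustration only.
# Grant least-privilege access to Vertex AI (no admin rights):
gcloud projects add-iam-policy-binding my-ai-project \
    --member="serviceAccount:app@my-ai-project.iam.gserviceaccount.com" \
    --role="roles/aiplatform.user"

# Create a VPC Service Controls perimeter that restricts the Vertex AI API,
# so model and data access cannot cross the perimeter boundary:
gcloud access-context-manager perimeters create ai_perimeter \
    --title="AI workload perimeter" \
    --resources="projects/123456789" \
    --restricted-services="aiplatform.googleapis.com" \
    --policy=POLICY_ID
```

Combining a perimeter with narrow IAM bindings means that even a leaked credential cannot move data outside the protected project boundary.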
Learn more about ways Google Cloud helps customers protect their data:
Google Cloud Consulting services help you achieve a strong AI security posture for model and data access controls.
Contact Google Cloud Consulting services
Work with Mandiant experts to help you leverage AI securely.
Contact Mandiant Security Experts
Security is a key tenet of Google Cloud, and we take a defense-in-depth approach. As part of our robust measures, we have strong capabilities to protect against service disruptions such as distributed denial-of-service (DDoS) attacks. Through Cloud Armor, we also provide additional controls that customers can use to protect against DDoS and application-layer attacks.
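For a sense of how Cloud Armor is applied in practice, the sketch below creates a security policy and attaches a rule that uses one of Cloud Armor's preconfigured WAF expressions to block cross-site-scripting attempts at the edge. The policy name and rule priority are hypothetical; see the Cloud Armor documentation for the full rules language.

```shell
# Hypothetical policy name, for illustration only.
# Create a Cloud Armor security policy for an external load balancer:
gcloud compute security-policies create ai-app-policy \
    --description="Edge protection for the AI application"

# Add a rule that denies requests matching the preconfigured XSS signatures:
gcloud compute security-policies rules create 1000 \
    --security-policy=ai-app-policy \
    --expression="evaluatePreconfiguredExpr('xss-stable')" \
    --action=deny-403
```

The policy is then attached to the load balancer's backend service, so malicious traffic is filtered before it reaches the AI application.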
Our built-in security and risk management solution, Security Command Center, protects Vertex AI applications with preventative and detective controls. This includes responding to Vertex AI security events and attack path simulation to mimic how a real-world attacker could access and compromise Vertex AI workloads.
For AI applications, customers can also utilize Model Armor to inspect, route, and protect foundation model prompts and responses. It can help customers mitigate risks such as prompt injections, jailbreaks, toxic content, and sensitive data leakage. Model Armor will integrate with products across Google Cloud, including Vertex AI. If you’d like to learn more about early access for Model Armor, you can sign up here or watch our on-demand video.
Work with Mandiant experts to help you leverage AI securely.
Contact Mandiant Security Experts