This document in the Architecture Framework: AI and ML perspective provides an overview of principles and recommendations to ensure that your AI and ML deployments meet the security and compliance requirements of your organization. The recommendations in this document align with the security pillar of the Architecture Framework.
Secure deployment of AI and ML workloads is a critical requirement, particularly in enterprise environments. To meet this requirement, you need to adopt a holistic security approach that starts from the initial conceptualization of your AI and ML solutions and extends to development, deployment, and ongoing operations. Google Cloud offers robust tools and services that are designed to help secure your AI and ML workloads.
Define clear goals and requirements
It's easier to integrate the required security and compliance controls early in your design and development process than to retrofit the controls after development. From the start of your design and development process, make decisions that are appropriate for your specific risk environment and your specific business priorities.
Consider the following recommendations:
- Identify potential attack vectors and adopt a security and compliance perspective from the start. As you design and evolve your AI systems, keep track of the attack surface, potential risks, and obligations that you might face.
- Align your AI and ML security efforts with your business goals and ensure that security is an integral part of your overall strategy. Understand the effects of your security choices on your main business goals.
Keep data secure and prevent loss or mishandling
Data is a valuable and sensitive asset that must be kept secure. Data security helps you to maintain user trust, support your business objectives, and meet your compliance requirements.
Consider the following recommendations:
- Don't collect, keep, or use data that's not strictly necessary for your business goals. If possible, use synthetic or fully anonymized data.
- Monitor data collection, storage, and transformation. Maintain logs for all data access and manipulation activities. The logs help you to audit data access, detect unauthorized access attempts, and prevent unwanted access.
- Implement different levels of access (for example, no access, read-only, or write) based on user roles. Assign permissions according to the principle of least privilege: users must have only the minimum permissions that are necessary to perform their role activities. For a minimal illustration of this pattern, see the sketch after this list.
- Implement measures like encryption, secure perimeters, and restrictions on data movement. These measures help you to prevent data exfiltration and data loss.
- Guard against data poisoning for your ML training systems.
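The following Python sketch is a minimal illustration of the access-control and audit-logging recommendations above. The roles, permissions, and dataset names are hypothetical; in a real deployment, your platform's IAM and audit-logging services enforce these controls, and this sketch only shows the underlying pattern.

```python
import logging
from enum import Enum

class Permission(Enum):
    READ = "read"
    WRITE = "write"

# Hypothetical role-to-permission mapping; your roles depend on your
# own data-governance model. Roles not listed default to no access.
ROLE_PERMISSIONS: dict[str, set[Permission]] = {
    "analyst": {Permission.READ},                      # read-only
    "data_engineer": {Permission.READ, Permission.WRITE},
}

audit_log = logging.getLogger("data_access_audit")
logging.basicConfig(level=logging.INFO)

def check_access(user: str, role: str, permission: Permission, dataset: str) -> bool:
    """Allow access only if the role includes the permission (least
    privilege), and log every attempt so that unauthorized access
    attempts can be audited and detected."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    audit_log.info("user=%s role=%s dataset=%s permission=%s allowed=%s",
                   user, role, dataset, permission.value, allowed)
    return allowed

# An analyst can read the training data but not modify it.
assert check_access("alice", "analyst", Permission.READ, "training_data")
assert not check_access("alice", "analyst", Permission.WRITE, "training_data")
```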
Keep AI pipelines secure and robust against tampering
Your AI and ML code and the code-defined pipelines are critical assets. Code that isn't secured can be tampered with, which can lead to data leaks, compliance failures, and disruption of critical business activities. Keeping your AI and ML code secure helps to ensure the integrity and value of your models and model outputs.
Consider the following recommendations:
- Use secure coding practices, such as dependency management, input validation, and input sanitization, during model development to prevent vulnerabilities.
- Protect your pipeline code and your model artifacts, like files, model weights, and deployment specifications, from unauthorized access. Implement different access levels for each artifact based on user roles and needs.
- Enforce lineage and tracking of your assets and pipeline runs. This enforcement helps you to meet compliance requirements and to avoid compromising production systems. For a basic artifact integrity check, see the sketch after this list.
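As a minimal sketch of the lineage and integrity recommendations above, the following Python code records a SHA-256 fingerprint for each artifact that a pipeline produces and verifies the fingerprints before deployment. The file names and manifest format are hypothetical; managed services such as Vertex ML Metadata track lineage for you, and this sketch only illustrates the underlying integrity check.

```python
import hashlib
import json
import tempfile
from pathlib import Path

def fingerprint_artifacts(paths: list[str]) -> dict[str, str]:
    """Record a SHA-256 digest per artifact; any later modification
    (tampering) changes the digest and breaks verification."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def verify_artifacts(manifest_path: str) -> None:
    """Re-hash each artifact and fail loudly if it no longer matches
    the manifest that was recorded at training time."""
    manifest = json.loads(Path(manifest_path).read_text())
    for path, expected in manifest.items():
        actual = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        if actual != expected:
            raise RuntimeError(f"Artifact {path} changed since it was recorded")

# Demo with a throwaway artifact; a real pipeline would fingerprint
# model weights and deployment specs as each stage produces them.
with tempfile.TemporaryDirectory() as tmp:
    artifact = Path(tmp) / "model.weights"
    artifact.write_bytes(b"trained-weights")
    manifest = Path(tmp) / "manifest.json"
    manifest.write_text(json.dumps(fingerprint_artifacts([str(artifact)])))
    verify_artifacts(str(manifest))       # passes
    artifact.write_bytes(b"tampered")     # simulated tampering
    # verify_artifacts(str(manifest)) would now raise RuntimeError
```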
Deploy on secure systems with secure tools and artifacts
Ensure that your code and models run in a secure environment that has a robust access-control system and that provides security assurances for the tools and artifacts deployed in the environment.
Consider the following recommendations:
- Train and deploy your models in a secure environment that has appropriate access controls and protection against unauthorized use or manipulation.
- Follow standard Supply-chain Levels for Software Artifacts (SLSA) guidelines for your AI-specific artifacts, like models and software packages.
- Prefer using validated prebuilt container images that are specifically designed for AI workloads; see the admission-check sketch after this list.
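One concrete control behind these recommendations is an admission check that accepts only container images from an approved registry, pinned by immutable digest rather than a mutable tag. The registry path below is a placeholder; on Google Cloud, Binary Authorization provides this kind of enforcement as a managed service, and this Python sketch only shows the shape of the check.

```python
import re

# Placeholder allowlist; replace with the registries that host your
# organization's validated, AI-specific base images.
ALLOWED_REGISTRIES = ("us-docker.pkg.dev/my-project/approved-images/",)
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def validate_image_reference(image: str) -> None:
    """Require images from an approved registry, pinned by sha256
    digest, so that the deployed artifact can't silently change."""
    if not image.startswith(ALLOWED_REGISTRIES):
        raise ValueError(f"Image {image!r} is not from an approved registry")
    if not DIGEST_RE.search(image):
        raise ValueError(f"Image {image!r} must be pinned by digest, not a tag")

# Passes: approved registry, digest-pinned.
validate_image_reference(
    "us-docker.pkg.dev/my-project/approved-images/training@sha256:" + "0" * 64)
```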
Protect and monitor inputs
AI systems need inputs to make predictions, generate content, or automate actions. Some inputs pose risks or can be used as attack vectors, so they must be detected and sanitized. Detecting potentially malicious inputs early helps you to keep your AI systems secure and operating as intended.
Consider the following recommendations:
- Implement secure practices to develop and manage prompts for generative AI systems, and ensure that the prompts are screened for harmful intent.
- Monitor inputs to predictive or generative systems to prevent issues like overloaded endpoints or prompts that the systems aren't designed to handle; a combined screening and rate-limiting sketch follows this list.
- Ensure that only the intended users of a deployed system can use it.
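The following sketch combines simple prompt screening with a per-user rate limit, illustrating the input checks described in this list. The length limit, quota, and patterns are hypothetical stand-ins; production systems should use a dedicated screening service (for example, Model Armor) or a trained safety classifier rather than a static pattern list.

```python
import time
from collections import defaultdict, deque

MAX_PROMPT_CHARS = 4_000   # reject inputs the system isn't designed to handle
REQUESTS_PER_MINUTE = 30   # hypothetical per-user quota against endpoint overload
# Hypothetical patterns; real screening needs a trained classifier.
SUSPICIOUS_PATTERNS = ("ignore previous instructions", "reveal your system prompt")

_request_times: dict[str, deque] = defaultdict(deque)

def screen_prompt(user: str, prompt: str) -> str:
    """Reject rate-limit violations, oversized prompts, and prompts
    that match known-harmful patterns before they reach the model."""
    now = time.monotonic()
    window = _request_times[user]
    while window and now - window[0] > 60:   # keep a 60-second window
        window.popleft()
    if len(window) >= REQUESTS_PER_MINUTE:
        raise PermissionError(f"Rate limit exceeded for user {user}")
    window.append(now)

    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("Prompt exceeds the maximum supported length")
    lowered = prompt.lower()
    if any(pattern in lowered for pattern in SUSPICIOUS_PATTERNS):
        raise ValueError("Prompt was flagged by input screening")
    return prompt

print(screen_prompt("alice", "Summarize this quarter's support tickets."))
```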
Monitor, evaluate, and prepare to respond to outputs
AI systems deliver value because they produce outputs that augment, optimize, or automate human decision-making. To maintain the integrity and trustworthiness of your AI systems and applications, you need to make sure that the outputs are secure and within expected parameters. You also need a plan to respond to incidents.
Consider the following recommendations:
- Monitor the outputs of your AI and ML models in production, and identify any performance, security, and compliance issues.
- Evaluate model performance by implementing robust metrics and security measures, like identifying out-of-scope generative responses or extreme outputs in predictive models. Collect user feedback on model performance. A minimal monitoring sketch follows this list.
- Implement robust alerting and incident response procedures to address any potential issues.
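To make the monitoring and alerting recommendations concrete, the following sketch checks each predictive-model output against an expected range and watches a rolling mean for drift. The bounds and thresholds are hypothetical stand-ins for values derived from your own validation data, and the logged warnings stand in for a real alerting pipeline such as Cloud Monitoring alerts.

```python
import logging
import statistics
from collections import deque

alerts = logging.getLogger("model_output_monitor")
logging.basicConfig(level=logging.WARNING)

EXPECTED_RANGE = (0.0, 1.0)   # hypothetical bounds from validation data
DRIFT_BAND = (0.3, 0.7)       # hypothetical band for the rolling mean
recent_outputs: deque = deque(maxlen=1000)

def monitor_output(value: float) -> float:
    """Warn on out-of-range outputs immediately, and warn when the
    rolling mean drifts outside the expected band, so that incident
    response can be triggered."""
    low, high = EXPECTED_RANGE
    if not low <= value <= high:
        alerts.warning("Out-of-range model output: %s", value)
    recent_outputs.append(value)
    if len(recent_outputs) >= 100:
        mean = statistics.fmean(recent_outputs)
        if not DRIFT_BAND[0] <= mean <= DRIFT_BAND[1]:
            alerts.warning("Rolling mean %.3f suggests output drift", mean)
    return value

for value in (0.42, 0.55, 1.7):   # 1.7 triggers an out-of-range warning
    monitor_output(value)
```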
Contributors
Authors:
- Kamilla Kurta | GenAI/ML Specialist Customer Engineer
- Filipe Gracio, PhD | Customer Engineer
- Mohamed Fawzi | Benelux Security and Compliance Lead
Other contributors:
- Daniel Lees | Cloud Security Architect
- Kumar Dhanagopal | Cross-Product Solution Developer
- Marwan Al Shawi | Partner Customer Engineer
- Wade Holmes | Global Solutions Director