This section of the architecture framework discusses how to plan your security controls, approach privacy, and work with Google Cloud compliance requirements.
The framework consists of the following series of docs:
- Google Cloud system design considerations
- Operational excellence
- Security, privacy, and compliance (this article)
- Performance and cost optimization
Use these strategies to help achieve security, privacy, and compliance.
Implement least privilege with identity and authorization controls. Use centralized identity management to implement the principle of least privilege and to set appropriate authorization levels and access policies.
Build a layered security approach. Implement security at each level of your application and infrastructure by applying a defense-in-depth approach. Use the features in each product to limit access, and use encryption.
Automate deployment of sensitive tasks. Take humans out of the workflow by automating deployment and other administrative tasks.
Implement security monitoring. Use automated tools to monitor your application and infrastructure. Use automated scanning in your continuous integration and continuous deployment (CI/CD) pipelines to check your infrastructure for vulnerabilities and to detect security incidents.
Follow these practices to help achieve security, privacy, and compliance:
- Manage risk with controls.
- Manage authentication and authorization.
- Implement compute security controls.
- Secure the network.
- Implement data security controls.
- Build with application supply chain controls.
- Audit your infrastructure with audit logs.
Manage risk with controls
Prior to creating and deploying resources on Google Cloud, assess the security features you need to meet your internal security requirements and external regulatory requirements.
Three control areas focus on mitigating risk:
- Technical controls refer to the features and technologies that you use to protect your environment. These include native cloud security controls, such as firewalls and logging, and can also encompass third-party tools and vendors that reinforce or support your security strategy.
- Contractual protections refer to the legal commitments made by the cloud vendor around Google Cloud services.
- Third-party verifications or attestations refer to having a third party audit the cloud provider to ensure that the provider meets compliance requirements. For example, Google was audited by a third party for ISO 27017 compliance.
You need to assess all three control areas to mitigate risk when you adopt new public cloud services.
We start from the fundamental premise that Google Cloud customers own their data and control how it is used. The data a customer stores and manages on Google Cloud systems is only used to provide that customer with Google Cloud services and to make those services work better for them, and for no other purpose. We have robust internal controls and auditing to protect against insider access to customer data. This includes providing our customers with near real-time logs of Google administrator access on Google Cloud.
Google Cloud is committed to maintaining and expanding our compliance portfolio. The Data Processing and Security Terms (DPST) document defines our commitment to maintaining our ISO 27001, 27017, and 27018 certifications, as well as updating our SOC 2 and SOC 3 reports every 12 months.
The DPST also outlines the access controls in place to limit Google support engineers' access to customers' environments, including the rigorous logging and approval process involved. The DPST has more information about these access controls and contractual commitments.
Customers can view Google Cloud's current certifications and letters of attestation at the Compliance Resource Center.
Manage authentication and authorization
Within Google Cloud, customers use Identity and Access Management (IAM) to control resource access. Administrators can restrict access all the way down to the resource level. IAM policies dictate "who can do what on which resources." Correctly configured IAM policies help secure your environment by preventing unauthorized access to resources.
Security administrators want to ensure that the right individuals get access only to the resources and services that they need to perform their job (principle of least privilege).
Users should have no more and no less than the permissions required for them to accomplish their job. Over-provisioning permissions increases the risk of insider threats, misconfigured resources, and difficult audit trails. Under-provisioning permissions can prevent users from accessing the resources they need to complete their tasks.
A variety of technical controls help you shape your access management policies. Here are the primary features and services that you can use to control access:
Grant appropriate roles
A role is a collection of permissions applied to a user. Permissions determine what actions are allowed on a resource and usually correspond with REST methods. For example, a user or a group of users can be granted the Compute Admin role (roles/compute.admin), which allows them to view and edit Compute Engine instances.
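As a sketch of how role granting looks in practice (the project ID and group address are placeholders, not values from this document), you can grant a predefined role with the gcloud CLI:

```shell
# Grant the predefined Compute Admin role to a group
# (replace the project ID and group address with your own values).
gcloud projects add-iam-policy-binding my-project \
    --member="group:compute-admins@example.com" \
    --role="roles/compute.admin"

# Review the resulting policy to verify the binding.
gcloud projects get-iam-policy my-project
```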
Understand when to use service accounts
A service account is a special Google account that belongs to your application or a virtual machine (VM), instead of to an individual end user. Your application uses the service account to call the Google API of a service, so that the users aren't directly involved.
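For example, assuming a hypothetical application called payments-backend in a project named my-project, you might create a dedicated service account and grant it only the role it needs:

```shell
# Create a service account for the application.
gcloud iam service-accounts create payments-backend \
    --display-name="Payments backend" \
    --project=my-project

# Grant only the permission the application needs (here, publishing
# to Pub/Sub), following the principle of least privilege.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:payments-backend@my-project.iam.gserviceaccount.com" \
    --role="roles/pubsub.publisher"
```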
IAM focuses on who, and lets the administrator authorize who can take action on specific resources based on permissions. Organization Policy Service focuses on what, and lets the administrator set restrictions on specific resources to determine how they can be configured.
Cloud Asset Inventory provides an organization-wide snapshot of your inventory for a wide variety of Google Cloud resources and policies with a single API call. Automation tools can then use the snapshot for monitoring or policy enforcement, or archive it for compliance auditing. If you want to analyze changes to your assets, asset inventory also supports exporting metadata history.
The recommender, troubleshooter, and validator tools provide recommendations for IAM role assignment, monitor and help prevent overly permissive IAM policies, and assist with troubleshooting access-control issues.
- How do you manage your identity?
- Can you federate identities with Google Cloud?
- What access permissions are required by your user types? What programmatic access is required?
- Do you have a process to grant and audit access control roles and permissions for users across your deployment environments (test, dev, prod, and so on)?
- Avoid basic roles such as roles/owner, roles/editor, and roles/viewer. Instead, grant predefined roles or custom roles whenever possible.
Grant basic roles in the following cases:
- When the Google Cloud service does not provide a predefined role.
- When you want to grant broader permissions for a project. This often happens when you're granting permissions in development or test environments.
- When you work in a small team where the team members don't need granular permissions.
Treat each component of your application as a separate trust boundary. If you have multiple services that require different permissions, create a separate service account for each of the services, then grant only the required permissions to each service account.
Restrict who can act as service accounts. Users who are granted the Service Account Actor role for a service account can access all the resources that the service account can access. Be cautious when granting the Service Account Actor role to a user.
Check the policy granted on every resource and make sure you understand the hierarchy of inheritance. A policy set on a child resource does not restrict access granted on its parent.
Grant roles at the smallest scope needed. For example, if a user only needs access to publish a Pub/Sub topic, grant the Publisher role to the user for that topic.
Granting the Owner role to a member allows them to access and modify almost all resources, including modifying IAM policies. This amount of privilege is potentially risky. Grant the Owner role only when (nearly) universal access is required.
When an organization is created, all users in your domain are granted the Billing Account Creator and Project Creator roles by default. Identify the users who should perform these duties, and revoke the roles from everyone else.
Grant roles to a Google group instead of individual users. It's easier to add members and remove members from a Google group instead of updating an IAM policy to add or remove users.
If you need to grant multiple roles to allow a particular task, create a Google group, grant the roles to that group, and then add users to that group.
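A minimal sketch of this pattern, assuming a hypothetical group data-engineers@example.com that needs two roles:

```shell
# Grant roles to a Google group rather than to individual users
# (group address, project ID, and roles are illustrative).
gcloud projects add-iam-policy-binding my-project \
    --member="group:data-engineers@example.com" \
    --role="roles/bigquery.dataEditor"

gcloud projects add-iam-policy-binding my-project \
    --member="group:data-engineers@example.com" \
    --role="roles/storage.objectViewer"
```

From then on, onboarding or offboarding a user means changing group membership, not editing IAM policies.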
Track all resources in your organization that are using Cloud Asset Inventory.
Automate policy controls and validate or troubleshoot them using Policy Intelligence.
Use IAM Conditions for conditional, attribute-based control of access to resources.
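As an illustration of an IAM Condition (the member, role, and expiry timestamp are all hypothetical), you can grant a role that expires automatically:

```shell
# Grant a role that stops working after a fixed date, using an
# IAM Condition on the binding (identifiers are placeholders).
gcloud projects add-iam-policy-binding my-project \
    --member="user:contractor@example.com" \
    --role="roles/storage.objectViewer" \
    --condition='expression=request.time < timestamp("2026-01-01T00:00:00Z"),title=temporary-access'
```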
Implement defense-in-depth with VPC Service Controls to further restrict access to resources.
- Use Cloud Audit Logs to regularly audit changes to your IAM policy.
- Export audit logs to Cloud Storage to store your logs for long periods of time.
- Audit who has the ability to change your IAM policies on your projects.
- Restrict access to logs using logging roles.
- Apply the same access policies to the Google Cloud resource that you use to export logs as you apply to the Logs Viewer.
- Use Cloud Audit Logs to regularly audit access to service account keys.
IAM authorizes who can access and take action on specific Google Cloud resources, and gives you full control and visibility to centrally manage Google Cloud resources.
BeyondCorp Enterprise provides a zero-trust solution that enables an organization's workforce to access web applications securely from anywhere and without the need for VPN, while reducing the threats of malware, phishing, and data loss.
Cloud Asset Inventory provides inventory services based on a time series database. This database keeps a five-week history of Google Cloud asset metadata. The service allows you to export all asset metadata at a certain timestamp or export event change history during a timeframe.
Cloud Audit Logs answers the questions of "who did what, where, and when?" within your Google Cloud resources.
Implement compute security controls
You can disable external IP access to your production VMs by using organization policies. You can deploy private GKE clusters with private IPs to limit possible network attacks. You can also define network policies to manage pod-to-pod communication in the cluster.
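A sketch of the organization policy approach, assuming a placeholder organization ID, using the compute.vmExternalIpAccess list constraint:

```shell
# Deny external IPs for all VMs in the organization by setting the
# compute.vmExternalIpAccess list constraint to deny all values
# (the organization ID is a placeholder).
cat > no-external-ip.yaml <<'EOF'
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allValues: DENY
EOF

gcloud resource-manager org-policies set-policy no-external-ip.yaml \
    --organization=123456789
```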
Compute instance usage
It's also important to control who can create instances, and to manage that access with IAM, because a compromised account can incur significant cost. Google Cloud lets you define custom quotas on projects to limit such activity. VPC Service Controls can also help; for details, see the section on network security.
Compute OS images
Google provides curated OS images that are maintained and patched regularly. Although you can bring your own custom images and run them on Compute Engine, you have to patch, update, and maintain them yourself. Google Cloud regularly publishes security bulletins for newly discovered vulnerabilities and provides remediation steps for existing deployments.
GKE and Docker
App Engine flexible environment runs application instances within Docker containers, letting you run any runtime. You can also enable SSH access to the underlying instances, but we do not recommend this unless you have a valid business use case.
Cloud Audit Logs is enabled for GKE, letting you automatically capture all activities with your cluster and monitor for any suspicious activity.
To provide infrastructure security for your cluster, GKE provides the ability to use IAM with role-based access control (RBAC) to manage access to your cluster and namespaces.
We recommend that you enable node auto-upgrade to have Google update your cluster nodes with the latest patch. Google manages GKE masters, and they are automatically updated and patched regularly. In addition, use Google-curated container-optimized images for your deployment. These are also regularly patched and updated by Google.
GKE Sandbox is a good candidate for deploying multi-tenant applications that need an extra layer of security and isolation from their host kernel.
GKE integrates with various partner solutions for runtime security to provide you with robust solutions to monitor and manage your deployment. All these solutions can be built to integrate with Security Command Center, providing you with a single pane of glass.
Partner solutions for host-protection
In addition to using curated hardened OS images provided by Google, you can use various Google Cloud partner solutions for host protection. Most partner solutions offered on Google Cloud integrate with Security Command Center, from where you can go to the partner portal for advanced threat analysis or extra runtime security. Forseti is another security solution that integrates with Security Command Center and helps you monitor and alert your deployment for any violations in configurations.
- How do you manage security of your computing nodes?
- Do you need host-based protection?
- Do you maintain curated hardened images?
- How do you control who can create or delete compute nodes in your production environment?
- How frequently do you audit compute creation and deletion?
- Do you perform security testing on your deployments in a sandboxed environment? How frequently do you perform and monitor these and update compatibility with the current version of deployment nodes?
- Isolate VM communication using service accounts when possible.
- Disable external IP addresses at organization level, unless explicitly required.
- Use Google-curated images.
- Track security bulletins for new vulnerabilities and remediate your instances.
- Use private master deployment when using GKE.
- Use Workload Identity to control access to Google Cloud APIs from your GKE clusters.
- Enable GKE node auto upgrade.
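The GKE recommendations above can be sketched as a single cluster-creation command (cluster name, zone, CIDR range, and project ID are all illustrative placeholders):

```shell
# Create a private cluster with Workload Identity and node
# auto-upgrade enabled.
gcloud container clusters create prod-cluster \
    --zone=us-central1-a \
    --enable-private-nodes \
    --enable-ip-alias \
    --master-ipv4-cidr=172.16.0.0/28 \
    --enable-master-authorized-networks \
    --workload-pool=my-project.svc.id.goog \
    --enable-autoupgrade
```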
Shielded VMs are virtual machines (VMs) on Google Cloud that are hardened by a set of security controls that help defend against rootkits and bootkits. Using Shielded VMs helps protect enterprise workloads from threats like remote attacks, privilege escalation, and malicious insiders. Shielded VMs use advanced platform security capabilities such as secure and measured boot, a virtual trusted platform module (vTPM), UEFI firmware, and integrity monitoring.
Workload Identity helps reduce the potential "blast radius" of a breach or compromise and management overhead, while helping you enforce the principle of least privilege across your GKE environment.
GKE Sandbox is a container isolation solution that provides a second layer of defense between containerized workloads on GKE. GKE Sandbox was built for applications that are low I/O but highly scaled. These containerized workloads need to maintain their speed and performance, but might also involve untrusted code that demands added security. gVisor, a container runtime sandbox, provides stronger security isolation between application and host kernel. gVisor helps you provide additional integrity checks, or limit the scope of access for a service. It is not a container hardening service to protect against external threats.
- Launch Checklist for Google Cloud | Documentation
- Security bulletins | Compute Engine Documentation
- Verifying the Identity of Instances | Compute Engine Documentation
- Workload Identity | Google Kubernetes Engine Documentation
- Shielded VM | Documentation
Secure your network
When you create a new project, Google Cloud automatically provisions a default Virtual Private Cloud (VPC) with RFC 1918 IP address ranges. For production deployments, we recommend that you avoid this default VPC. Instead, delete it and provision a new VPC with the subnets you want. Because a VPC lets you use any private IP addresses, carefully design your network and IP address allocation across your connected deployments and across your projects to avoid conflicts. A project allows multiple VPC networks, but it is a best practice to limit these networks to one per project to enforce IAM effectively.
Shared VPC provides centralized deployment, management, and control. You can use it to build a robust production deployment that gives you a single network and isolates workloads into individual projects managed by different teams. Shared VPC consists of host and service projects. A host project is the main project with a well-thought-out VPC and subnets. A service project attaches to the host project's subnets and lets you isolate users at the project level by using IAM.
Cloud Logging and Cloud Monitoring let you ingest logs at scale from various services and visualize them on the fly. You can integrate this capability with external security information and event management (SIEM) tools to analyze for advanced threats.
Firewalls in Google Cloud scale well: you define rules at the project level, and they are evaluated at the attached instance level. Firewall rules let you target both ingress and egress network traffic. The scalable approach is to target rules at network tags or service accounts.
The more secure way to deploy firewall rules is to use rules that target service accounts. If you're using Shared VPC deployments, always define firewalls in your host project for centralized management.
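A sketch of a service-account-targeted firewall rule (the network name, port, and service account addresses are placeholders):

```shell
# Allow traffic to backend VMs only from frontend VMs, identified by
# their service accounts rather than by IP ranges or network tags.
gcloud compute firewall-rules create allow-frontend-to-backend \
    --network=prod-vpc \
    --allow=tcp:8080 \
    --source-service-accounts=frontend@my-project.iam.gserviceaccount.com \
    --target-service-accounts=backend@my-project.iam.gserviceaccount.com
```

Because the rule follows the service account rather than an IP address, it keeps working as instances are created and destroyed.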
Network intrusion detection
Many customers use advanced security and traffic inspection tools on-premises, and need the same tools to be available in the cloud for certain applications. VPC packet mirroring lets you troubleshoot your existing Virtual Private Clouds (VPCs). With Google Cloud packet mirroring, you can use third-party tools to collect and inspect network traffic at scale, provide intrusion detection, application performance monitoring, and better security controls, helping you ensure the security and compliance of workloads running in Compute Engine and Google Kubernetes Engine (GKE).
For production deployments, review configured routes under each VPC; strict and limited scoped rules are recommended. For GKE deployments, use Traffic Director to scale envoy management, or Istio for traffic flow management.
Within Google Cloud, choose Shared VPC or VPC Network Peering. Network tags and service accounts don't translate across peered projects, but Shared VPC can centralize them in the host project. Shared VPC makes it easier to centralize service accounts and network tags, but we recommend that you carefully plan how to manage quotas and limitations. VPC peering might introduce duplicated effort to manage these controls, but it gives you more flexibility with quotas and limitations.
For external access, evaluate your bandwidth needs and choose between Cloud VPN, Cloud Interconnect, or Partner Interconnect. It's possible to centralize jump points through a single VPC or project to minimize network management.
VPC Service Controls provides an additional layer of security defense for Google Cloud services that is independent of IAM. While IAM enables granular identity-based access control, VPC Service Controls enables broader context-based perimeter security, including controlling data egress across the perimeter. Use VPC Service Controls and IAM for defense in depth.
Security Command Center provides multiple detectors that help you analyze the security of your infrastructure, for example, Event Threat Detection, Google Cloud Armor logs, and Security Health Analytics (SHA). Enable the services you need for your workloads, and monitor and analyze only the required data.
Network Intelligence Center gives you visibility into how your network topology and architecture are performing. You can get detailed insights into network performance and can optimize your deployment to eliminate any bottlenecks on your service. Network reachability provides you with insights into the firewall rules and policies that are applied to the network path.
- How do you manage network security today?
- Do you have perimeter-based firewall filtering?
- Can you move toward application-level firewall filtering?
- How do you manage networking access controls?
- Do you depend heavily on static IP addresses?
- Can you move towards services-based access control?
- How do you monitor network security?
- Do you have IDS/IPS to monitor network traffic?
- How do you manage networking for admin access to your production environment?
- Do you have frequent audits to monitor admin network access activity?
- Define service perimeters using VPC Service Controls for sensitive data.
- Manage traffic with Google Cloud native firewall rules whenever possible.
- Use fewer, broader firewall rule sets when possible.
- Use service accounts with firewall rules for additional security.
- Use automation to monitor security policies when using tags.
- Use third-party tools in addition to Security Command Center to help secure and protect your network.
- Limit external access.
- Add explicit routes for Google APIs if you need to modify the VPC default route.
- Deploy instances that use Google APIs on the same subnet.
VPC Service Controls helps improve your ability to mitigate the risk of data exfiltration from Google-managed services like Cloud Storage and BigQuery. With VPC Service Controls, you can configure security perimeters around the resources of your Google-managed services and control the movement of data across the perimeter boundary.
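As an illustrative sketch (the perimeter name, project number, and access policy ID are placeholders), a service perimeter can be created with the gcloud CLI:

```shell
# Create a service perimeter that restricts Cloud Storage and
# BigQuery to the listed projects (identifiers are placeholders).
gcloud access-context-manager perimeters create sensitive_data \
    --title="Sensitive data perimeter" \
    --resources=projects/123456789 \
    --restricted-services=storage.googleapis.com,bigquery.googleapis.com \
    --policy=987654321
```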
Traffic Director is Google Cloud's fully managed traffic control plane for service meshes. Using Traffic Director, you can deploy global load balancing across clusters and VM instances in multiple regions, offload health checking from the service proxies, and configure sophisticated traffic control policies. Traffic Director uses open standard APIs (xDS v2) to communicate with the service proxies in the data plane, ensuring that you are not locked in to a proprietary solution and allowing you to use the service mesh control plane of your choice.
Security Command Center provides visibility into what resources are in Google Cloud and their security state. Security Command Center helps make it easier for you to prevent, detect, and respond to threats. It helps you identify security misconfigurations in virtual machines, networks, applications, and storage buckets from a centralized dashboard and take action on them before they can potentially result in business damage or loss. Its built-in capabilities can quickly surface suspicious activity in your Cloud Logging security logs or indicate compromised virtual machines. You can respond to threats by following actionable recommendations or exporting logs to your SIEM for further investigation.
Event Threat Detection automatically scans various types of logs for suspicious activity in your Google Cloud environment. Using industry-leading threat intelligence, you can quickly detect high-risk and costly threats such as malware, cryptomining, unauthorized access to Google Cloud resources, DDoS attacks, and brute-force SSH. By distilling volumes of log data, security teams can quickly identify high-risk incidents and focus on remediation.
Istio is an open service mesh that provides a uniform way to connect, manage, and secure microservices. It supports managing traffic flows between services, enforcing access policies, and aggregating telemetry data, all without requiring changes to the microservice code. Istio on Google Kubernetes Engine is an add-on for GKE that lets you quickly create a cluster with all the components you need to create and run as an Istio service mesh.
Packet Mirroring lets you mirror your network traffic and send it to a third-party security solution, such as an intrusion detection system (IDS), to proactively detect threats and respond to intrusions. It supports regulatory and compliance requirements such as PCI DSS 11.4, which requires an intrusion detection solution.
- Best practices and reference architectures for VPC design | Solutions
- IAM Roles for Administering VPC Service Controls | VPC Service Controls
- Onboarding as a Security Command Center partner
- Viewing vulnerabilities and threats in Security Command Center
- VPC packet mirroring: Visualize and protect your cloud network
- Using packet mirroring for intrusion detection
- Using packet mirroring with a partner IDS
Implement data security controls
You can implement data security controls in relation to three areas: encryption, storage, and databases.
Google Cloud offers a continuum of encryption key management options to meet your needs. Identify the solutions that best fit your requirements for key generation, storage, and rotation, whether you are choosing for your storage, compute, or big data workloads. Use encryption as one piece of a broader data security strategy.
Default Encryption: Google Cloud encrypts customer data stored at rest by default, with no additional action required from you. For details on how envelope encryption works, see Encryption at Rest in Google Cloud.
Custom Encryption: Google Cloud lets you use envelope encryption to encrypt your data while storing the key encryption key in Cloud Key Management Service (Cloud KMS). Google Cloud also provides Cloud Hardware Security Modules (HSMs) if you need them. Using IAM permissions with Cloud KMS/HSM at the user-level on individual keys helps you manage access and the encryption process. To view admin activity and key use logs, use Cloud Audit Logs. To secure your data, monitor logs using Monitoring to ensure proper use of your keys.
Cloud Storage offers Object Versioning, which we recommend turning on for objects that need to maintain state. Versioning introduces additional storage cost, so weigh the tradeoff carefully for sensitive objects.
Object Lifecycle Management helps you archive older objects and downgrade storage classes to save cost. Plan these operations carefully, because there might be charges for changing storage class and accessing data.
Retention policies using Bucket Lock allow you to govern how long objects in the bucket must be retained for compliance and legal holds. Note that once you lock the bucket with a certain retention policy, you cannot remove or delete the bucket before the expiration date.
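A sketch of these storage controls using gsutil (the bucket name and retention period are illustrative):

```shell
# Enable object versioning on a bucket.
gsutil versioning set on gs://my-sensitive-bucket

# Set a retention policy of two years, then lock it. Locking is
# permanent: the policy can no longer be reduced or removed, and the
# bucket cannot be deleted until every object's retention expires.
gsutil retention set 2y gs://my-sensitive-bucket
gsutil retention lock gs://my-sensitive-bucket
```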
IAM permissions grant access to buckets as well as bulk access to a bucket's objects. IAM permissions give you broad control over your projects and buckets, but not fine-grained control over individual objects.
Access Control Lists (ACLs) grant read or write access to users for individual buckets or objects. In most cases, we recommend that you use IAM permissions instead of ACLs. Use ACLs only when you need fine-grained control over individual objects.
Signed URLs (query string authentication) give time-limited read or write access to an object through a URL that you generate. Anyone with whom you share the URL can access the object for the duration that you specify, regardless of whether they have a Google account.
Signed URLs also come in handy when you want to delegate access to private objects for a limited period of time.
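For example (key file path, bucket, and object name are placeholders), you can generate a time-limited signed URL with gsutil:

```shell
# Generate a URL that grants read access to one object for 10
# minutes, signed with a service account key file.
gsutil signurl -d 10m /path/to/sa-key.json gs://my-bucket/report.pdf
```

Anyone holding the printed URL can download the object until it expires, with no Google account required.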
Signed policy documents specify what can be uploaded to a bucket. Policy documents allow greater control over size, content type, and other upload characteristics than signed URLs, and can be used by website owners to allow visitors to upload files to Cloud Storage.
Cloud Storage encryption allows you to control encryption mechanisms. Cloud Storage always encrypts your data on the server side, before it is written to the disk. You can control server-side encryption using either of the following:
Customer-managed encryption keys (CMEK). You can generate and manage your encryption keys using Cloud KMS; they act as an additional encryption layer on top of the standard Cloud Storage encryption.
Customer-supplied encryption keys (CSEK). You can create and manage your own encryption keys. This is an additional layer of encryption to standard Cloud Storage encryption.
Note that keys in Cloud KMS are replicated and made available by Google; the security and availability of CSEK keys is your responsibility. Weigh this trade-off carefully. You can also perform client-side encryption and store the encrypted data in Cloud Storage, where it is further protected by server-side encryption.
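A minimal CMEK sketch (the key resource name and bucket are placeholders, and the key is assumed to already exist in Cloud KMS):

```shell
# Set a Cloud KMS key as the default encryption key for a bucket,
# so new objects are encrypted with it automatically.
gsutil kms encryption \
    -k projects/my-project/locations/us-central1/keyRings/storage-ring/cryptoKeys/bucket-key \
    gs://my-sensitive-bucket
```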
Persistent disks are automatically encrypted, but you can choose to supply or manage your own keys. You can store these keys in Cloud KMS or Cloud HSM, or you can supply them from your on-premises devices. Customer-supplied keys are not stored in instance templates nor in any Google infrastructure; therefore, the keys cannot be recovered if you lose them.
Persistent disk snapshots by default are stored in the multi-region that is closest to the location of your persistent disk, or you can choose your region location. You can easily share snapshots to restore new machines within the project in any new region. To share them with other projects, you need to create a custom image.
Cost control. Use cache-control metadata to analyze frequently accessed objects, and Cloud CDN for caching static public content.
- Do you have a process set up for using encryption keys?
- How do you manage the lifecycle of keys?
- How do you govern access control to encryption keys?
- How frequently do you audit access to keys?
- How do you plan to use storage? What's most important to you: latency, IOPS, or availability?
- How do you plan to store and control access to sensitive data?
- How frequently do you audit your data and the access controls on it?
- Do you have a system in place to monitor data exfiltration? How do you process those alerts?
- Use Google-managed keys to simplify the key management lifecycle.
- Use IAM to govern who can access Cloud KMS.
- Frequently audit access control to encryption keys and revoke unnecessary access.
Access control and storage
- Use object-level access control to limit granting bucket-wide access.
- Enable object versioning for sensitive data to quickly restore data to a known state.
- Limit who can delete objects or buckets in Cloud Storage.
- Use the principle of least privilege when granting IAM access.
- Avoid granting roles with the setIamPolicy permission to people you do not know.
- Be careful how you grant permissions for anonymous users.
- Understand various roles in Cloud Storage, and curate restricted roles using custom roles and permissions.
- Avoid setting permissions that result in inaccessible buckets and objects.
- Delegate administrative control of your buckets.
- Ensure that Cloud Storage buckets have Bucket Policy Only enabled.
Cloud Key Management Service lets you keep millions of cryptographic keys, allowing you to determine the level of granularity at which to encrypt your data. Set keys to automatically rotate regularly, using a new primary version to encrypt data and limit the scope of data accessible with any single key version. Keep as many active key versions as you want.
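A sketch of automatic rotation in Cloud KMS (key ring, key name, location, and timestamps are illustrative):

```shell
# Create a key ring, then a symmetric encryption key that rotates
# automatically every 90 days starting at the given time.
gcloud kms keyrings create app-keys --location=us-central1

gcloud kms keys create data-key \
    --location=us-central1 \
    --keyring=app-keys \
    --purpose=encryption \
    --rotation-period=90d \
    --next-rotation-time=2026-01-01T00:00:00Z
```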
- Image management best practices | Solutions
- Best practices for persistent disk snapshots | Compute Engine Documentation
- Access control options
- Bucket locations | Cloud Storage
- Retention policies using Bucket Lock
- Cloud Audit Logs with Cloud Storage
- Cloud Storage IAM best practices
- Best practices for Cloud Storage
Use database access controls
Before you deploy Cloud SQL, follow security best practices to help secure access control. Access control occurs at the instance and database layers.
Instance-level access authorizes access to your Cloud SQL instance from an application or client (running on App Engine or externally) or another Google Cloud service, such as Compute Engine.
Database access uses the MySQL Access Privilege System to control which MySQL users have access to the data in your instance.
Whenever possible, use Cloud SQL Proxy to manage communication between your application and your database. Cloud SQL Proxy is available for Compute Engine and GKE deployments.
Cloud Bigtable provides similar controls, using IAM at the project and instance levels.
Cloud Spanner lets you control access for users and groups at the project, instance, and database levels. In all cases, we recommend using IAM with least privilege.
- Do you have a process to grant access to databases, and how frequently do you audit it?
- Do you scan databases for any sensitive data?
- How frequently do you take snapshots of or back up your databases?
- How do you manage and secure encryption keys for those backups and snapshots?
- Limit access to the database.
- Use the principle of least privilege when granting IAM access.
- Avoid granting roles with the setIamPolicy permission to people you do not know.
- Be careful how you grant permissions for anonymous users.
- Ensure that BigQuery datasets are not anonymously or publicly accessible.
- Best practices for SQL Server instances | Compute Engine Documentation
- Best Practices | Datastore Documentation
- Data Catalog
Implement data security
Cloud Data Loss Prevention (DLP) provides tools to classify, mask, tokenize, and transform sensitive elements to help you better manage the data that you collect, store, or use for business or analytics. For example, features like format-preserving encryption or tokenization allow you to preserve the utility of your data for joining or analytics while obfuscating the raw sensitive identifiers.
Cloud DLP architecture includes several features to make it easy to use in small or large operations. Templates for inspection and de-identification allow you to define configurations once and use them across requests. Cloud DLP job triggers and actions allow you to kick off inspection jobs periodically and generate Pub/Sub notifications when jobs are complete.
Quasi-identifiers are partially identifying elements, or combinations of data, that may link to a single person or a very small group. Cloud DLP allows you to measure statistical properties such as k-anonymity and l-diversity, expanding your ability to understand and protect data privacy.
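For intuition, k-anonymity can be computed by grouping records on their quasi-identifier values and taking the size of the smallest group. The sketch below does this over a toy dataset; Cloud DLP performs the equivalent analysis at scale, and the column names here are made up.

```python
# Sketch: compute k-anonymity over quasi-identifier columns.
# k is the size of the smallest group of records that share the same
# quasi-identifier values; a small k means those records are easier
# to re-identify.
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

records = [
    {"zip": "94103", "age": 34, "diagnosis": "A"},
    {"zip": "94103", "age": 34, "diagnosis": "B"},
    {"zip": "94105", "age": 51, "diagnosis": "A"},
]
```

Here the (zip, age) pair (94105, 51) is unique, so the dataset is only 1-anonymous on those quasi-identifiers; generalizing or bucketing values raises k.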
Cloud DLP helps you better understand and manage sensitive data. It provides fast, scalable classification and redaction for sensitive data elements like credit card numbers, names, Social Security numbers, US and selected international identifier numbers, phone numbers, and Google Cloud credentials. Cloud DLP classifies this data using more than 120 predefined detectors to identify patterns, formats, and checksums, and even understands contextual clues. You can optionally redact data using techniques like masking, secure hashing, tokenization, bucketing, and format-preserving encryption.
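Masking of the kind described above can be illustrated locally: find a sensitive pattern and replace all but the last few characters with a masking character. The regex below is a deliberately naive 16-digit card matcher for illustration only; Cloud DLP's real detectors also validate checksums and use contextual clues.

```python
# Simplified illustration of character masking. The pattern is a naive
# 16-digit card matcher; production detection (as in Cloud DLP) also
# validates checksums and considers surrounding context.
import re

CARD_RE = re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b")

def mask_cards(text, mask_char="*", keep_last=4):
    def _mask(match):
        digits = re.sub(r"\D", "", match.group())  # strip separators
        return mask_char * (len(digits) - keep_last) + digits[-keep_last:]
    return CARD_RE.sub(_mask, text)
```

Keeping the last four digits preserves enough utility for support workflows while removing the full identifier from stored text.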
Build apps with supply chain security controls
Increasingly complex application environments that are deployed, updated, or patched without automated tools make it hard to meet consistent security requirements. Building a CI/CD pipeline solves many of these issues. For details, see Operational excellence.
Automated pipelines reduce manual errors, provide standardized development feedback loops, and enable fast product iterations, so it is important to secure them. If an attacker can compromise your pipeline, your entire stack could be affected. We recommend that you secure access to these pipelines with least privilege and require a chain of approval before code is pushed to deployment.
Container Analysis helps you scan for vulnerabilities and fix issues before container deployment. Container Analysis stores metadata for scanned images that can help you identify the latest vulnerabilities and patch or update affected images.
Binary Authorization helps you sign containers with one or more unique attestations. These attestations, along with policy definitions, help you identify, control, and deploy only approved containers at runtime. It's a best practice to set up a strict policy model with at least one signer to approve and sign off on container deployment.
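The attestation model can be sketched as a simple admission decision: an image deploys only if every attestor the policy requires has signed it. This is a conceptual sketch, not the Binary Authorization API, and the attestor names are illustrative.

```python
# Conceptual sketch of an admission decision in the style of Binary
# Authorization: deploy only if every required attestor has signed the
# image. Attestor names are hypothetical.

REQUIRED_ATTESTORS = {"build-signer", "security-review"}

def admit(image_attestations):
    """image_attestations: set of attestor names that signed the image.

    Returns (admitted, missing_attestors)."""
    missing = REQUIRED_ATTESTORS - set(image_attestations)
    return (len(missing) == 0, sorted(missing))
```

Requiring multiple attestors lets separate stages (build, security review) each gate deployment independently.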
Web Security Scanner scans your deployed application for vulnerabilities during runtime. You can configure Web Security Scanner to interact with your application as a signed-in user that navigates and crawls through various pages scanning for vulnerabilities. We recommend running scans on a test environment that reflects production to avoid unintended behavior.
For a complex deployment spanning across multiple teams and functions, deploy changes through automation that is audited and verified on a gradual rolling basis.
- Do you have a well-defined process (for example, canary testing, vulnerability scanning, or approval chain) to secure how you deploy and monitor new changes to your production environment?
- Do you perform vulnerability scanning on your application code to mitigate the latest vulnerabilities?
- Do you have a sandboxed environment to test and monitor such scans?
- How do you manage secrets for your deployments?
- How do you audit and rotate your secrets?
- Use approved base images with the minimum set of components needed.
- Use image scanning and analysis as part of your CI/CD process.
- Track notifications and issues, and automate patching.
- Use Secrets Management.
- Regularly audit how your applications are deployed.
Container Registry is a single place for your team to manage Docker images, perform vulnerability analysis, and decide who can access what with fine-grained access control. Existing CI/CD integrations let you set up fully automated Docker pipelines to get fast feedback. Container Registry is a private container image registry that runs on Google Cloud. It supports Docker Image Manifest V2 and OCI image formats.
Container Analysis provides vulnerability information and other types of metadata for the container images in Container Registry. The metadata is stored as notes. An occurrence is created for each instance of a note associated with an image.
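The note/occurrence relationship described above can be modeled simply: a note describes a vulnerability once, and each occurrence ties that note to a specific image. The sketch below indexes occurrences by image; the CVE identifiers and image names are made up for illustration.

```python
# Sketch of the Container Analysis note/occurrence data model.
# A note describes one vulnerability; an occurrence records that a
# specific image exhibits it. All identifiers are hypothetical.

notes = {
    "CVE-2024-0001": {"severity": "HIGH"},
    "CVE-2024-0002": {"severity": "LOW"},
}

occurrences = [
    {"note": "CVE-2024-0001", "image": "gcr.io/my-project/app:1.2"},
    {"note": "CVE-2024-0002", "image": "gcr.io/my-project/app:1.2"},
    {"note": "CVE-2024-0001", "image": "gcr.io/my-project/worker:0.9"},
]

def vulnerabilities_for(image, severity="HIGH"):
    """List notes of the given severity that occur in an image."""
    return sorted(
        o["note"] for o in occurrences
        if o["image"] == image and notes[o["note"]]["severity"] == severity
    )
```

Because one note can have many occurrences, fixing the vulnerability description or severity in the note updates the assessment for every affected image at once.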
Binary Authorization service provides software supply-chain security for applications that run in the Cloud. Binary Authorization works with images that you deploy to GKE from Container Registry or another container image registry. With Binary Authorization, you can help ensure that internal processes that safeguard the quality and integrity of your software are successfully completed before an application is deployed to your production environment.
Security Command Center identifies security vulnerabilities in your App Engine, Compute Engine, and Google Kubernetes Engine web applications. It crawls your application, following all links within the scope of your starting URLs, and attempts to exercise as many user inputs and event handlers as possible. It can automatically scan for and detect four common vulnerability classes: cross-site scripting (XSS), Flash injection, mixed content (HTTP in HTTPS), and outdated or insecure libraries. It enables early identification and delivers low false positive rates. You can easily set up, run, schedule, and manage security scans, and it is available at no additional charge for Google Cloud users.
Audit your infrastructure
Cloud Logging provides audit logging for your Google Cloud services. Cloud Audit Logs helps security teams maintain audit trails in Google Cloud. Because Cloud Logging is integrated into all Google Cloud services, you can log details for long-term archival and compliance requirements. Logging data-access logs can get costly, so be sure to plan carefully before enabling this feature.
You can export audit logs through real-time streaming and store them to help meet your compliance requirements. You can export to Cloud Storage, BigQuery, or Pub/Sub. By moving your logs to either BigQuery or Cloud Storage, you benefit from reduced cost and long-term storage.
Access Transparency logs include data about the support team actions that you might have requested by phone, lower-level engineering investigations into your support requests, or other investigations made for valid business purposes, such as recovering from an outage.
- How long do you need to retain audit logs?
- Which audit logs do you need to retain?
- Do you need to export audit logs?
- How will you use any exported audit logs?
- Export audit logs to BigQuery for reporting purposes.
- Export audit logs to Cloud Storage for any logs that you need to keep, but that you don't expect to have to use or process.
- Use organization-level sinks to capture all logs for an organization.
- Use logging sink filters to exclude logs that you don't need to export.
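The export recommendations above can be sketched as a routing decision: apply an exclusion filter first, then send logs needed for reporting to BigQuery and archive-only logs to Cloud Storage. The log type names and exclusion rule below are illustrative; real sinks express this with the Cloud Logging filter language rather than Python.

```python
# Sketch of routing audit log entries to export destinations, following
# the recommendations above. Log type names and the exclusion rule are
# illustrative; real sinks use the Cloud Logging filter language.

EXCLUDED_SEVERITIES = {"DEBUG"}          # logs not worth exporting
ANALYSIS_LOG_TYPES = {"data_access"}     # logs queried for reporting

def route(entry):
    if entry["severity"] in EXCLUDED_SEVERITIES:
        return None                      # dropped by the sink filter
    if entry["type"] in ANALYSIS_LOG_TYPES:
        return "bigquery"                # needed for reporting
    return "cloud-storage"               # kept for retention only
```

Filtering before export keeps BigQuery costs proportional to the logs you actually query, while Cloud Storage holds the rest cheaply for retention.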