This document describes how to manage resources so that they are available only in specific regions. It includes an overview of how and why you might need to meet location restrictions, and it describes the Google Cloud controls that you use for this task.
The document is part of a series of security blueprints that provide prescriptive guidance for working with Anthos. For more information about these blueprints, see Anthos Security Blueprints: Frequently asked questions.
Many enterprises must meet location-specific residency requirements. That is, they require their services—and in many cases the clusters the services are running on—to be deployed or accessible only from specific locations. There can be various reasons: regulatory requirements, latency requirements, or a business requirement to offer the service only in specific countries.
To meet location-specific residency requirements, you need to consider the following:
- Where the services need to be located.
- Whether you need to restrict access to your services from specific regions.
- What Google Cloud services your services depend on.
- Whether you're using GKE on Google Cloud or Anthos clusters on VMware.
- What the permitted flows and operations are within, to, and from your app.
When you understand these issues, you can determine how to configure each of the applicable security controls to meet your location-specific residency requirements.
The content in the enforcing-locality directory in the GitHub repository that's associated with this blueprint provides instructions on how to configure the security controls that you need in order to meet your locality restriction requirements.
Understanding the security controls you need
This section discusses the controls that you can use to help you to achieve the level of locality restriction that meets your requirements.
Labeling resources that should use the same policies
Namespaces let you provide a scope for related resources within a cluster—for example, Pods, Services, and replication controllers. By using namespaces, you can delegate administration responsibility for the related resources as a unit. Therefore, namespaces are integral to most security patterns.
Namespaces are an important feature for control plane isolation. However, they don't provide node isolation, data plane isolation, or network isolation.
A common approach is to create namespaces for individual applications. For example, you might create the namespace myapp-frontend for the UI component of an application.
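As a minimal sketch, the myapp-frontend namespace from the example above could be declared as a manifest and applied with kubectl apply (the labels shown are illustrative, not required):

```yaml
# Sketch: a namespace for the UI component of an app. The labels are
# hypothetical and exist so that policies can select related resources.
apiVersion: v1
kind: Namespace
metadata:
  name: myapp-frontend
  labels:
    app: myapp
    tier: frontend
```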
Restricting the locations where you can deploy your clusters
When you restrict services to a locality, you use the Resource Location Restriction constraint. This constraint defines the set of locations where location-based Google Cloud resources can be created. Policies for this constraint can specify different allowed or denied locations. For example, the policies can specify multi-regions such as Asia and Europe, regions such as europe-west1, or individual zones such as europe-west1-b. Every location to be allowed or denied must be listed explicitly.
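As one possible sketch, the constraint can be set at the organization level with the gcloud CLI. The organization ID below is a placeholder, and the example uses the predefined eu-locations value group to allow only EU locations:

```shell
# Sketch, assuming you have the Organization Policy Administrator role.
# 123456789012 is a placeholder for your numeric organization ID.
# This allows resource creation only in EU locations via a value group.
gcloud resource-manager org-policies allow \
    constraints/gcloud.resourceLocations \
    in:eu-locations \
    --organization=123456789012
```

The same policy can instead list individual regions or zones (for example, in:europe-west1-locations) when a value group is broader than your requirements allow.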
Anthos Config Management
Applying configurations to your Anthos clusters
A best practice when you manage Anthos clusters is to use Anthos Config Management, which keeps your enrolled clusters in sync with configs. A config is a YAML or JSON file that's stored in your repo and that contains the same types of configuration details that you can manually apply to a cluster by using the kubectl apply command. Anthos Config Management lets you manage your policies and infrastructure deployments like you do your apps—by adopting a declarative approach.
You use Anthos Config Management in conjunction with a Git repository that acts as the single source of truth for your declared policies. Anthos Config Management can manage access-control policies like RBAC, resource quotas, namespaces, and platform-level infrastructure deployments. Anthos Config Management is declarative; it continuously checks cluster state and applies the state declared in the config in order to enforce policies.
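To make the repo-as-source-of-truth idea concrete, here is a hypothetical layout that follows Anthos Config Management's hierarchical repo format; the myapp-frontend directory and file names are illustrative:

```
config-root/
├── system/              # Anthos Config Management operator configs
├── clusterregistry/     # cluster and cluster selector definitions
├── cluster/             # cluster-scoped resources (e.g., ClusterRoles)
└── namespaces/
    └── myapp-frontend/
        ├── namespace.yaml   # declares the namespace
        └── rbac.yaml        # RBAC policies synced to that namespace
```

Configs committed under this tree are continuously reconciled into every enrolled cluster, so a drifted or manually deleted resource is restored to its declared state.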
Authorized networks for cluster master access
Restricting the locations that can access the Kubernetes API server
Authorized networks let you specify CIDR ranges and allow only IP addresses in those ranges to access your cluster master endpoint over HTTPS. Private clusters don't expose external IP addresses, and they can optionally run their cluster master without a publicly reachable endpoint.
A best practice is to follow the guidance in the GKE hardening guide, which recommends using private clusters in combination with authorized networks.
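A sketch of that combination follows. The cluster name, region, and CIDR ranges are placeholders; the 203.0.113.0/24 range stands in for a corporate network that should be the only source of control-plane API access:

```shell
# Sketch: a private cluster whose master endpoint accepts requests
# only from an authorized corporate range. All values are placeholders.
gcloud container clusters create restricted-cluster \
    --region europe-west1 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/24
```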
Selecting a cluster type
To understand which cluster type is appropriate for your use case, you need to understand the regions and zones where resources can be deployed. Regions are independent geographic areas that consist of zones. For example, London (europe-west2) is a region within Europe, and Oregon (us-west1) is a region within the Americas.
A zone is a deployment area for Google Cloud resources within a region. Zones can be considered a single failure domain within a region. In order to deploy fault-tolerant apps with high availability, you can deploy your apps across multiple zones in a region.
Locations within regions tend to have round-trip network latencies of under 1 millisecond at the 95th percentile. An example of a location is Tokyo, Japan, which has three zones and is in the asia-northeast1 region.
GKE has three types of clusters:
- Single-zone cluster. A single-zone cluster has a single control plane (master) running in one zone. This control plane manages workloads on nodes running in the same zone.
- Multi-zonal cluster. A multi-zonal cluster has a single replica of the control plane running in a single zone, and has nodes running in multiple zones.
- Regional cluster. A regional cluster replicates the cluster master and nodes across multiple zones within a single region. For example, a regional cluster in the us-east1 region creates replicas of the control plane and nodes in three us-east1 zones.
You should select the cluster that meets your needs for availability. For example, if you need your workloads to be highly available, use a regional cluster.
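A regional cluster can be created in one line; this sketch uses a placeholder cluster name and the us-east1 region from the example above:

```shell
# Sketch: a regional cluster whose control plane and nodes are
# replicated across the region's zones. Name is a placeholder.
gcloud container clusters create ha-cluster \
    --region us-east1 \
    --num-nodes 1
```

Note that --num-nodes is per zone, so with three zones this cluster starts with three nodes in total.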
Bringing it all together
To integrate the controls, determine your locality-restriction requirements. Then map out the scope of the controls discussed in this guide and the stage at which they need to be configured, as follows:
- Define your regional requirements.
- Identify the appropriate location for your cluster.
- Create a resource location organizational policy to restrict creation of your GKE clusters to only the locations that meet your requirements.
- Create your private cluster by following the guidance in the cluster hardening guide. When you create your cluster, be sure you follow the hardening guide and use the --enable-network-policy flag. Network policies are required, and this step lets you implement firewall rules later that restrict the traffic that flows between Pods in a cluster.
- Add authorized networks to your private clusters.
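Once network policy enforcement is enabled as described in the steps above, Pod-to-Pod traffic can be restricted declaratively. This is a sketch only; the policy name, namespace, and labels are hypothetical and reuse the myapp-frontend example:

```yaml
# Sketch: allow Pods in the myapp-frontend namespace to receive
# ingress traffic only from Pods labeled app: myapp. All names
# here are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-myapp-only
  namespace: myapp-frontend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: myapp
```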
If you need to restrict where customers can access your services from, do the following:
- For GKE on Google Cloud clusters, define a Google Cloud Armor security policy to enforce access control based on the geographic location of incoming traffic. Attach the security policy to each of the backends that's associated with the ingress resource.
- Use Anthos Config Management to define policies that restrict who can access your clusters.
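A Google Cloud Armor geography-based policy like the one described above might be sketched as follows; the policy name and country code are placeholders:

```shell
# Sketch: deny traffic whose origin is outside Germany by matching
# on the request's region code. Policy name is a placeholder.
gcloud compute security-policies create geo-restrict-policy \
    --description "Allow traffic from DE only"
gcloud compute security-policies rules create 1000 \
    --security-policy geo-restrict-policy \
    --expression "origin.region_code != 'DE'" \
    --action deny-403
```

After the policy exists, attach it to each backend service behind the ingress resource so that the geographic restriction is enforced at the load balancer.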