This page explains how to configure the enforcement of network policies in Google Kubernetes Engine (GKE). For general information about GKE networking, visit the Network overview.
Overview
You can use GKE's network policy enforcement to control the communication between your cluster's Pods and Services. You define a network policy by using the Kubernetes Network Policy API to create Pod-level firewall rules. These firewall rules determine which Pods and Services can access one another inside your cluster.
Defining network policy helps you enable defense in depth when your cluster serves a multi-level application. For example, you can create a network policy to ensure that a compromised front-end service in your application cannot communicate directly with a billing or accounting service several levels down.
Network policy can also make it easier for your application to host data from multiple users simultaneously. For example, you can provide secure multi-tenancy by defining a tenant-per-namespace model. In such a model, network policy rules can ensure that Pods and Services in a given namespace cannot access other Pods or Services in a different namespace.
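For example, the following minimal sketch of such a rule (the tenant namespace tenant-a is a hypothetical placeholder) allows Pods in that namespace to receive traffic only from other Pods in the same namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only   # hypothetical name
  namespace: tenant-a               # hypothetical tenant namespace
spec:
  podSelector: {}                   # applies to every Pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}               # matches Pods in this namespace only

Because the from clause uses a podSelector with no namespaceSelector, it matches only Pods in the policy's own namespace, so ingress from any other namespace is denied.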
Before you begin
Before you start, make sure you have performed the following tasks:
- Ensure that you have enabled the Google Kubernetes Engine API.
- Ensure that you have installed the Google Cloud CLI.
- Set up default Google Cloud CLI settings for your project by using one of the following methods:
  - Use gcloud init if you want to be walked through setting project defaults.
  - Use gcloud config to individually set your project ID, zone, and region.
gcloud init
- Run gcloud init and follow the directions:
gcloud init
If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:
gcloud init --console-only
- Follow the instructions to authorize the gcloud CLI to use your Google Cloud account.
- Create a new configuration or select an existing one.
- Choose a Google Cloud project.
- Choose a default Compute Engine zone.
- Choose a default Compute Engine region.
gcloud config
- Set your default project ID:
gcloud config set project PROJECT_ID
- Set your default Compute Engine region (for example, us-central1):
gcloud config set compute/region COMPUTE_REGION
- Set your default Compute Engine zone (for example, us-central1-c):
gcloud config set compute/zone COMPUTE_ZONE
- Update gcloud to the latest version:
gcloud components update
By setting default locations, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location.
Using network policy enforcement
You can enable network policy enforcement when you create a GKE cluster or enable it for an existing cluster. You can also disable network policy for an existing cluster.
Once you have enabled network policy in your cluster, you can create a network policy by using the Kubernetes Network Policy API.
Enabling network policy enforcement
Network policy enforcement is built into GKE Dataplane V2. You do not need to enable network policy enforcement in clusters that use GKE Dataplane V2.
When you enable network policy enforcement in a GKE cluster that doesn't use GKE Dataplane V2, GKE manages and enforces network policies within that cluster.
You can enable network policy enforcement in GKE by using the gcloud CLI, the Google Cloud console, or the GKE API.
gcloud
To enable network policy enforcement when creating a new cluster, run the following command:
gcloud container clusters create CLUSTER_NAME --enable-network-policy
Replace CLUSTER_NAME with the name of the new cluster.
To enable network policy enforcement for an existing cluster, perform the following tasks:
Run the following command to enable the add-on:
gcloud container clusters update CLUSTER_NAME --update-addons=NetworkPolicy=ENABLED
Replace CLUSTER_NAME with the name of the cluster.
Run the following command to enable network policy enforcement on your cluster, which in turn recreates your cluster's node pools with network policy enforcement enabled:
gcloud container clusters update CLUSTER_NAME --enable-network-policy
Console
To enable network policy enforcement when creating a new cluster:
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create.
Configure your cluster as desired.
From the navigation pane, under Cluster, click Networking.
Select the Enable network policy checkbox.
Click Create.
To enable network policy enforcement for an existing cluster:
Go to the Google Kubernetes Engine page in Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Networking, in the Network policy field, click Edit network policy.
Select the Enable network policy for master checkbox and click Save Changes.
Wait for your changes to apply, and then click Edit network policy again.
Select the Enable network policy for nodes checkbox.
Click Save Changes.
API
To enable network policy enforcement, perform the following:
Specify the networkPolicy object inside the cluster object that you provide to projects.zones.clusters.create or projects.zones.clusters.update.
The networkPolicy object requires an enum that specifies which network policy provider to use, and a boolean value that specifies whether to enable network policy. If you enable network policy but do not set the provider, the create and update commands return an error.
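For example, the relevant portion of a create request body might look like the following sketch, where CALICO is the provider value used for clusters that don't run GKE Dataplane V2:

{
  "cluster": {
    "name": "CLUSTER_NAME",
    "networkPolicy": {
      "provider": "CALICO",
      "enabled": true
    }
  }
}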
Disabling network policy enforcement
You can disable network policy enforcement by using the gcloud CLI, the Google Cloud console, or the GKE API. Network policy enforcement can't be disabled in clusters that use GKE Dataplane V2.
gcloud
To disable network policy enforcement for an existing cluster, run the following command:
gcloud container clusters update CLUSTER_NAME --no-enable-network-policy
Replace CLUSTER_NAME with the name of the cluster.
Console
To disable network policy enforcement for an existing cluster, perform the following:
Go to the Google Kubernetes Engine page in Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Networking, in the Network policy field, click Edit network policy.
Clear the Enable network policy for nodes checkbox and click Save Changes.
Wait for your changes to apply, and then click Edit network policy again.
Clear the Enable network policy for master checkbox.
Click Save Changes.
API
To disable network policy enforcement for an existing cluster, perform the following:
- Specify the networkPolicy object inside the cluster object that you provide to projects.zones.clusters.update.
- Inside the networkPolicy object, set the boolean enabled value to false.
If you disable network policy enforcement, make sure to also update any add-ons (for example, Calico DaemonSet) to indicate the network policy is disabled for the add-ons:
gcloud container clusters update CLUSTER_NAME --update-addons=NetworkPolicy=DISABLED
Replace CLUSTER_NAME with the name of the cluster.
Creating a network policy
Once you have enabled network policy enforcement for your cluster, you'll need to define the actual network policy. You define the network policy using the Kubernetes Network Policy API.
For further details on creating a network policy, see Network Policies in the Kubernetes documentation.
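As a starting point, the following is a minimal sketch of a NetworkPolicy manifest; the labels app: db and app: web and port 5432 are hypothetical placeholders for your own workloads:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db     # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: db               # hypothetical label on the protected Pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web          # hypothetical label on the allowed clients
    ports:
    - protocol: TCP
      port: 5432            # hypothetical serving port

Save the manifest to a file and apply it with kubectl apply -f FILE_NAME.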
Working with PodSecurityPolicy
If you are using a NetworkPolicy, and you have a Pod that is subject to a PodSecurityPolicy, create an RBAC Role or ClusterRole that has permission to use the PodSecurityPolicy. Then bind the Role or ClusterRole to the Pod's service account. When using NetworkPolicy and PodSecurityPolicy together, granting permissions to user accounts is not sufficient. You must bind the role to the service account. For more information, see Authorizing policies.
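A minimal sketch of such a binding, assuming a hypothetical PodSecurityPolicy named my-psp and a Pod service account named my-sa in the default namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-user                # hypothetical name
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - my-psp                      # hypothetical PodSecurityPolicy name
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-user-binding        # hypothetical name
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-user
subjects:
- kind: ServiceAccount
  name: my-sa                   # the Pod's service account
  namespace: default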
Overhead, limitations, and caveats
Enabling network policy enforcement consumes additional resources in nodes. Specifically, it increases the memory footprint of the kube-system process by approximately 128 MB, and requires approximately 300 millicores of CPU.
Enabling network policy enforcement requires that your nodes be recreated. If your cluster has an active maintenance window, your nodes are not automatically recreated until the next maintenance window. If you prefer, you can manually upgrade your cluster at any time.
Limitations and requirements
- The recommended minimum cluster size to run network policy enforcement is three e2-medium instances.
- Network policy is not supported for clusters whose nodes are f1-micro or g1-small instances, as the resource requirements are too high for instances of that size.
- If you use network policy with GKE Workload Identity, you must allow egress to the following IP addresses and port numbers so your Pods can communicate with the GKE metadata server. For clusters running GKE version 1.21.0-gke.1000 and later, allow egress to 169.254.169.252/32 on port 988. For clusters running GKE versions prior to 1.21.0-gke.1000, allow egress to 127.0.0.1/32 on port 988. To avoid disruptions during auto-upgrades, allow egress to both of these IP addresses and ports, as shown in the sketch after this list.
- If you specify an endPort field in a Network Policy on a cluster that has GKE Dataplane V2 enabled, it might not take effect starting in GKE version 1.22. For more information, see Network Policy port ranges do not take effect.
For more information about node machine types and allocatable resources, see Standard cluster architecture - Nodes.
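The following is a sketch of such an egress rule covering both metadata server addresses; the policy name is hypothetical, and the empty podSelector applies the rule to every Pod in the namespace where the policy is created:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-gke-metadata-server   # hypothetical name
spec:
  podSelector: {}                   # applies to all Pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 169.254.169.252/32    # GKE 1.21.0-gke.1000 and later
    - ipBlock:
        cidr: 127.0.0.1/32          # GKE versions prior to 1.21.0-gke.1000
    ports:
    - protocol: TCP
      port: 988

Because policyTypes includes Egress, the selected Pods are limited to the egress listed here, so in practice you combine this rule with the other egress rules your workloads need.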
Migrating from Calico to GKE Dataplane V2
If you migrate your network policies from Calico to GKE Dataplane V2, consider the following limitations:
You cannot use a Pod or Service IP address in the ipBlock.cidr field of a NetworkPolicy manifest. You must reference workloads using labels. For example, the following configuration is invalid:
- ipBlock:
    cidr: 10.8.0.6/32
You cannot specify an empty ports.port field in a NetworkPolicy manifest. If you specify a protocol, you must also specify a port. For example, the following configuration is invalid:
ingress:
- ports:
  - protocol: TCP
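For the first example, a label-based from or to entry does the same job; the label app: db is a hypothetical stand-in for whatever label the Pod at 10.8.0.6 carries:

- podSelector:
    matchLabels:
      app: db   # hypothetical label replacing the Pod IP 10.8.0.6/32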
Working with HTTP(S) Load Balancing
When an Ingress is applied to a Service to build an HTTP(S) load balancer, you must configure the network policy applied to Pods behind that Service to allow the appropriate HTTP(S) load balancer health check IP ranges. If you are using an internal HTTP(S) load balancer, you must also configure the network policy to allow the proxy-only subnet.
If you are not using container-native load balancing with network endpoint groups, node ports for a Service might forward connections to Pods on other nodes unless they are prevented from doing so by setting externalTrafficPolicy to Local in the Service definition. If externalTrafficPolicy is not set to Local, the network policy must also allow connections from other node IPs in the cluster.
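As a sketch, the following ingress rule admits the documented Google Cloud health check source ranges, 130.211.0.0/22 and 35.191.0.0/16; the label app: web and port 8080 are hypothetical placeholders, and for an internal HTTP(S) load balancer you would add an ipBlock entry for your proxy-only subnet's CIDR:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-lb-health-checks   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: web                   # hypothetical label on the backend Pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 130.211.0.0/22     # Google Cloud health check range
    - ipBlock:
        cidr: 35.191.0.0/16      # Google Cloud health check range
    ports:
    - protocol: TCP
      port: 8080                 # hypothetical serving port of the Pods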
Known issues
StatefulSet pod termination with Calico
GKE clusters with Calico network policy enabled might experience an issue where a StatefulSet pod drops existing connections when the pod is deleted. After a pod enters the Terminating state, the terminationGracePeriodSeconds configuration in the pod spec is not honored and causes disruptions for other applications that have an existing connection with the StatefulSet pod. For more information about this issue, see Calico issue #4710.
This issue affects the following GKE versions:
- 1.18
- 1.19 to 1.19.16-gke.99
- 1.20 to 1.20.11-gke.1299
- 1.21 to 1.21.4-gke.1499
To mitigate this issue, upgrade your GKE control plane to one of the following versions:
- 1.19.16-gke.100 or later
- 1.20.11-gke.1300 or later
- 1.21.4-gke.1500 or later
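For example, a manual control plane upgrade with the gcloud CLI might look like the following sketch, where VERSION is one of the fixed versions listed above:
gcloud container clusters upgrade CLUSTER_NAME --master --cluster-version VERSION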
What's next
- Follow the Network Policies tutorial.
- Read the Kubernetes documentation about network policies.
- Implement common approaches to restrict traffic using network policies.
- Use network policy logging.
- Use security insights to explore other ways to harden your infrastructure.