Creating a cluster network policy

This page explains how to configure the enforcement of network policies in Google Kubernetes Engine (GKE). For general information about GKE networking, visit the Network overview.

Overview

You can use GKE's network policy enforcement to control the communication between your cluster's Pods and Services. You define a network policy by using the Kubernetes Network Policy API to create Pod-level firewall rules. These firewall rules determine which Pods and Services can access one another inside your cluster.

Defining network policy helps you enable things like defense in depth when your cluster is serving a multi-level application. For example, you can create a network policy to ensure that a compromised front-end service in your application cannot communicate directly with a billing or accounting service several levels down.

Network policy can also make it easier for your application to host data from multiple users simultaneously. For example, you can provide secure multi-tenancy by defining a tenant-per-namespace model. In such a model, network policy rules can ensure that Pods and Services in a given namespace cannot access other Pods or Services in a different namespace.
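
For example, the following manifest is a minimal sketch of such a per-namespace rule, assuming a hypothetical tenant namespace named tenant-a. The policy selects every Pod in tenant-a and allows ingress only from Pods in the same namespace, so traffic from any other namespace is denied:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: tenant-a          # hypothetical tenant namespace
spec:
  podSelector: {}              # an empty selector matches every Pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}          # with no namespaceSelector, this matches only Pods in tenant-a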

Before you begin

Before you start, make sure you have performed the following tasks:

Set up default gcloud settings using one of the following methods:

  • Using gcloud init, if you want to be walked through setting defaults.
  • Using gcloud config, to individually set your project ID, zone, and region.

Using gcloud init

  1. Run gcloud init and follow the directions:

    gcloud init

    If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:

    gcloud init --console-only
  2. Follow the instructions to authorize gcloud to use your Google Cloud account.
  3. Create a new configuration or select an existing one.
  4. Choose a Google Cloud project.
  5. Choose a default Compute Engine zone.

Using gcloud config

  • Set your default project ID:
    gcloud config set project project-id
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone compute-zone
  • If you are working with regional clusters, set your default compute region:
    gcloud config set compute/region compute-region
  • Update gcloud to the latest version:
    gcloud components update

Using network policy enforcement

You can enable network policy enforcement when you create a GKE cluster or enable it for an existing cluster. You can also disable network policy for an existing cluster.

Once you have enabled network policy in your cluster, you can create a network policy by using the Kubernetes Network Policy API.

Enabling network policy enforcement

When you enable network policy enforcement in a GKE cluster, GKE manages and enforces network policies within that cluster.

You can enable network policy enforcement in GKE by using the gcloud tool, the Google Cloud Console, or the GKE REST API.

gcloud

To enable network policy enforcement when creating a new cluster, run the following command:

gcloud container clusters create cluster-name --enable-network-policy

To enable network policy enforcement for an existing cluster, perform the following tasks:

  1. Run the following command to enable the add-on:

    gcloud container clusters update cluster-name --update-addons=NetworkPolicy=ENABLED
  2. Run the following command to enable network policy enforcement on your cluster, which in turn recreates your cluster's node pools with network policy enforcement enabled:

    gcloud container clusters update cluster-name --enable-network-policy

Console

To enable network policy enforcement when creating a new cluster:

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

  2. Click the Create cluster button.

  3. Configure your cluster as desired.

  4. From the navigation pane, under Cluster, click Networking.

  5. Select the Enable network policy checkbox.

  6. Click Create.

To enable network policy enforcement for an existing cluster:

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

  2. Click the name of the cluster for which you want to enforce network policy.

  3. Click the cluster's Edit button, which looks like a pencil.

  4. From the Network policy for master drop-down menu, select Enabled.

  5. Click Save, and then click Edit again once the cluster is updated.

  6. From the Network policy for nodes drop-down menu, select Enabled.

  7. Click Save.

API

To enable network policy enforcement, perform the following:

  1. Specify the networkPolicy object inside the cluster object that you provide to projects.zones.clusters.create or projects.zones.clusters.update.

  2. The networkPolicy object requires an enum that specifies which network policy provider to use, and a boolean value that specifies whether to enable network policy. If you enable network policy but do not set the provider, the create and update calls return an error. A fragment of a valid request body is sketched below.
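
For example, the following is a minimal sketch of the relevant portion of a create request body, with all other cluster fields omitted. CALICO is the provider value; enabled turns enforcement on:

{
  "cluster": {
    "networkPolicy": {
      "provider": "CALICO",
      "enabled": true
    }
  }
}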

Disabling network policy enforcement

You can disable network policy enforcement by using the gcloud tool, the Google Cloud Console, or the GKE API.

gcloud

To disable network policy enforcement for an existing cluster, run the following command:

gcloud container clusters update cluster-name --no-enable-network-policy

Console

To disable network policy enforcement for an existing cluster, perform the following:

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

  2. Click the cluster's Edit button, which looks like a pencil.

  3. From the Network policy for nodes drop-down menu, select Disabled.

  4. Click Save. Then, click Edit again.

  5. From the Network policy for master drop-down menu, select Disabled.

  6. Click Save.

API

To disable network policy enforcement for an existing cluster, perform the following:

  1. Specify the networkPolicy object inside the cluster object that you provide to projects.zones.clusters.update.
  2. Inside the networkPolicy object, set the boolean enabled value to false, as in the fragment sketched below.
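
For example, the networkPolicy object in the request body would contain only the boolean, as in this minimal sketch:

"networkPolicy": {
  "enabled": false
}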

Creating a network policy

After you enable network policy enforcement for your cluster, you need to define the network policies themselves. You do this by using the Kubernetes Network Policy API.

For further details on creating network policies, see the Network Policies concept and the Declare Network Policy walkthrough in the Kubernetes documentation.
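
As a starting point, the following manifest is a minimal sketch of a policy for the multi-level application described earlier. All names and labels are hypothetical: it selects Pods labeled app: billing and allows ingress only from Pods labeled app: accounts-backend, so a compromised front end cannot reach the billing Pods directly:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: billing-allow-backend-only
spec:
  podSelector:
    matchLabels:
      app: billing             # hypothetical label on the billing Pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: accounts-backend   # hypothetical label on the only allowed client Pods

You apply a policy like any other Kubernetes object, for example with kubectl apply -f policy.yaml.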

Working with PodSecurityPolicy

If you are using a NetworkPolicy, and you have a Pod that is subject to a PodSecurityPolicy, create an RBAC Role or ClusterRole that has permission to use the PodSecurityPolicy. Then bind the Role or ClusterRole to the Pod's service account. When using NetworkPolicy and PodSecurityPolicy together, granting permissions to user accounts is not sufficient. You must bind the role to the service account. For more information, see Authorizing policies.
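
The following pair of manifests is a minimal sketch of that binding, assuming a hypothetical PodSecurityPolicy named my-psp, a service account named my-sa, and a namespace named my-namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: psp-user
  namespace: my-namespace
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - my-psp                     # hypothetical PodSecurityPolicy name
  verbs:
  - use                        # "use" is the verb the PodSecurityPolicy admission check requires
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-user-binding
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: my-sa                  # the Pod's service account, not a user account
  namespace: my-namespace
roleRef:
  kind: Role
  name: psp-user
  apiGroup: rbac.authorization.k8s.io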

Temporarily overriding network policy

In case of issues or other extraordinary circumstances, you can temporarily disable network policy enforcement on your cluster by following the steps in Disabling network policy enforcement, and then re-enable it when the situation is resolved.

Overhead, limitations, and caveats

  • Enabling network policy enforcement consumes additional resources in nodes. Specifically, it increases the memory footprint of the kube-system process by approximately 128 MB, and requires approximately 300 millicores of CPU.

  • Enabling network policy enforcement requires that your nodes be recreated. If your cluster has an active maintenance window, your nodes are not automatically recreated until the next maintenance window. If you prefer, you can manually upgrade your cluster at any time.

Limitations and requirements

  • Your cluster must have at least two nodes of type n1-standard-1 or higher. The recommended minimum cluster size for running network policy enforcement is three n1-standard-1 instances.
  • Network policy is not supported for clusters whose nodes are f1-micro or g1-small instances, as the resource requirements are too high for instances of that size.

For more information about node machine types and allocatable resources, refer to Cluster Architecture - Nodes.

Working with HTTP(S) load balancer health checks

When an Ingress is applied to a Service to build an HTTP(S) load balancer, the network policy applied to Pods behind that Service must also allow the appropriate HTTP(S) load balancer probe IP ranges.

Additionally, node ports for a Service may forward connections to Pods on other nodes unless they are prevented from doing so by setting externalTrafficPolicy to "Local" in the Service definition. If externalTrafficPolicy is not set to Local, the network policy must also allow connections from other node IPs in the cluster.
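
For example, a policy protecting the Pods behind such a Service might include ingress rules like the following minimal sketch. 130.211.0.0/22 and 35.191.0.0/16 are the documented Google Cloud load balancer and health check source ranges; the app: web label is a hypothetical selector for the Pods behind the Service:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-lb-health-checks
spec:
  podSelector:
    matchLabels:
      app: web                 # hypothetical label on the Pods behind the Service
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 130.211.0.0/22   # HTTP(S) load balancer and health check range
    - ipBlock:
        cidr: 35.191.0.0/16    # HTTP(S) load balancer and health check range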

Note that this discussion does not apply when using Container-native Load Balancing with Network Endpoint Groups.

What's next