Creating a Cluster Network Policy

This page explains how to configure network policies in Google Kubernetes Engine.

Overview

You can use GKE's network policy enforcement to control the communication between your cluster's Pods and Services. To define a network policy on GKE, you can use the Kubernetes Network Policy API to create Pod-level firewall rules. These firewall rules determine which Pods and Services can access one another inside your cluster.

Defining network policy helps you enable things like defense in depth when your cluster is serving a multi-level application. For example, you can create a network policy to ensure that a compromised front-end service in your application cannot communicate directly with a billing or accounting service several levels down.
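As a minimal sketch of this pattern, the following manifest (assuming the hypothetical labels app: billing for the billing Pods and app: api for the middle tier; your own labels will differ) allows ingress to the billing Pods only from the middle tier, so a compromised front end cannot reach them directly:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: billing-allow-api-only
spec:
  # Apply this policy to the billing Pods.
  podSelector:
    matchLabels:
      app: billing
  policyTypes:
  - Ingress
  # Admit ingress traffic only from Pods labeled app: api.
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api
```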

Network policy can also make it easier for your application to host data from multiple users simultaneously. For example, you can provide secure multi-tenancy by defining a tenant-per-namespace model. In such a model, network policy rules can ensure that Pods and Services in a given namespace cannot access other Pods or Services in a different namespace.
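A common sketch of the tenant-per-namespace model is a policy that selects every Pod in a tenant's namespace and admits ingress only from Pods in that same namespace (the namespace name tenant-a is a hypothetical example):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: tenant-a  # hypothetical tenant namespace
spec:
  # An empty podSelector selects all Pods in this namespace.
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    # An empty podSelector in a "from" clause matches all Pods
    # in the policy's own namespace, and nothing outside it.
    - podSelector: {}
```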

Before you begin

To prepare for this task, perform the following steps:

  • Ensure that you have enabled the Google Kubernetes Engine API.
  • Ensure that you have installed the Cloud SDK.
  • Set your default project ID:
    gcloud config set project [PROJECT_ID]
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone [COMPUTE_ZONE]
  • If you are working with regional clusters, set your default compute region:
    gcloud config set compute/region [COMPUTE_REGION]
  • Update gcloud to the latest version:
    gcloud components update

Using network policy enforcement

To enable or disable network policy enforcement, you can use the gcloud command-line tool, the GKE REST API, or the Google Cloud Platform Console.

You can enable network policy enforcement when you create a GKE cluster, or enable it for an existing cluster. You can also disable network policy enforcement for an existing cluster.

After you have enabled network policy enforcement in your cluster, you can create a network policy by using the Kubernetes Network Policy API.

Enabling network policy enforcement

gcloud

To enable network policy enforcement when creating a new cluster using the gcloud command-line tool, run the gcloud container clusters create command with the --enable-network-policy flag:

gcloud container clusters create [CLUSTER_NAME] --enable-network-policy

Enabling network policy enforcement for an existing cluster with the gcloud command-line tool is a two-step process. First, run the gcloud container clusters update command with the --update-addons flag:

gcloud container clusters update [CLUSTER_NAME] --update-addons=NetworkPolicy=ENABLED

Then, run the gcloud container clusters update command with the --enable-network-policy flag. This command causes your cluster's node pools to be recreated with network policy enabled:

gcloud container clusters update [CLUSTER_NAME] --enable-network-policy

Console

  1. Visit the Google Kubernetes Engine menu in GCP Console.

  2. Click Create cluster.

  3. Configure your cluster as desired.
  4. Click Advanced options. In the Networking section, select Enable network policy.
  5. Click Create.

API

To enable network policy using the GKE API, specify the networkPolicy object inside the cluster object that you provide to projects.zones.clusters.create or projects.zones.clusters.update.

The networkPolicy object requires an enum that specifies which network policy provider to use, and a boolean that specifies whether to enable network policy. If you enable network policy but do not set the provider, the create and update commands return an error. Currently, the only valid provider value is CALICO.
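For example, a projects.zones.clusters.create request body might include a fragment like the following ([CLUSTER_NAME] is a placeholder, as in the gcloud examples above):

```json
{
  "cluster": {
    "name": "[CLUSTER_NAME]",
    "networkPolicy": {
      "provider": "CALICO",
      "enabled": true
    }
  }
}
```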

Disabling network policy enforcement

gcloud

To disable network policy enforcement for an existing cluster using the gcloud command-line tool, run the gcloud container clusters update command with the --no-enable-network-policy flag.

gcloud container clusters update [CLUSTER_NAME] --no-enable-network-policy

Console

  1. Visit the Google Kubernetes Engine menu in GCP Console.

  2. Click the cluster's Edit button, which looks like a pencil.

  3. From the Network policy for nodes drop-down menu, select Disabled.
  4. Click Save. Then, click Edit again.
  5. From the Network policy for master drop-down menu, select Disabled.
  6. Click Save.

API

To disable network policy enforcement for an existing cluster using the GKE API, specify the networkPolicy object inside the cluster object that you provide to projects.zones.clusters.update. Inside the networkPolicy object, set the enabled boolean to false.
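As a sketch, the networkPolicy fragment of that request body would look like the following:

```json
"networkPolicy": {
  "provider": "CALICO",
  "enabled": false
}
```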

Creating a network policy

After you have enabled network policy enforcement for your cluster, you need to define the network policy itself. You define the network policy by using the Kubernetes Network Policy API.

For further details on creating a network policy, see the Network Policies topic in the Kubernetes documentation.
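As a simple illustration of the Kubernetes Network Policy API, the following manifest (using the hypothetical labels app: hello and app: foo) allows ingress to the hello Pods on TCP port 8080, and only from Pods labeled app: foo:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: hello-allow-from-foo
spec:
  # Apply this policy to Pods labeled app: hello.
  podSelector:
    matchLabels:
      app: hello
  policyTypes:
  - Ingress
  # Admit traffic on TCP port 8080 from Pods labeled app: foo only.
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: foo
    ports:
    - protocol: TCP
      port: 8080
```

You apply the manifest with kubectl apply -f, like any other Kubernetes resource.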

Temporarily overriding network policy

You can temporarily disable network policy enforcement on your cluster in case of issues or extraordinary circumstances. For more information, refer to Tigera's documentation on overriding Calico policy.

Overhead, limitations, and caveats

  • Enabling network policy enforcement consumes additional resources on your nodes. Specifically, it increases the memory footprint of the kube-system process by approximately 128 MB and requires approximately 300 millicores of CPU.
  • Enabling network policy enforcement requires that your nodes be recreated. If your cluster has an active maintenance window, your nodes are not automatically recreated until the next maintenance window. If you prefer, you can manually upgrade your cluster at any time.

Limitations and Requirements

  • Your cluster must have at least two nodes of type n1-standard-1 or higher. The recommended minimum cluster size for running network policy enforcement is three n1-standard-1 instances.
  • Network policy is not supported for clusters whose nodes are f1-micro or g1-small instances, as the resource requirements are too high for instances of that size.

For more information about node machine types and allocatable resources, refer to Cluster Architecture - Nodes.
