Using a NAT Gateway with Kubernetes Engine

This tutorial shows how to control the external IP addresses that cluster nodes use for egress traffic by routing that traffic through a network address translation (NAT) gateway.

Under normal circumstances, Google Kubernetes Engine nodes route all egress traffic through the internet gateway associated with the cluster. The internet gateway connection, in turn, is defined by the Compute Engine network associated with the cluster. Each node in the cluster has an ephemeral external IP address, and when nodes are created and destroyed during autoscaling, new node IP addresses are allocated automatically.

The default gateway behavior works well under normal circumstances. However, you might want to modify how ephemeral external IP addresses are allocated in order to:

  • Provide a third-party service with a consistent external IP address.
  • Monitor and filter egress traffic from the Google Kubernetes Engine cluster.

In this tutorial, you learn how to:

  1. Create a NAT gateway instance and configure its routing details for an existing Google Kubernetes Engine cluster.
  2. Create a custom routing rule for the NAT gateway instance.

The following diagram shows an overview of the architecture:

[Diagram: architecture of a NAT gateway]

Objectives

  • Use Terraform to create a NAT gateway instance.
  • Create an egress network routing rule for an existing Google Kubernetes Engine cluster.
  • Verify that outbound traffic from a pod is routed through the NAT gateway.

Costs

This tutorial uses billable components of GCP, including:

  • Google Kubernetes Engine
  • Compute Engine

Use the Pricing Calculator to estimate the cost of your environment. The resources used in this tutorial, including the Google Kubernetes Engine cluster, cost about $3.19 per day.

Before you begin

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. Select or create a GCP project.

    Go to the Manage resources page

  3. Make sure that billing is enabled for your project.

    Learn how to enable billing

  4. Enable the Compute Engine and Google Kubernetes Engine APIs.

    Enable the APIs

Creating the NAT gateway with Terraform

This tutorial uses the Modular NAT Gateway on Compute Engine module for Terraform to automate creation of the NAT gateway managed instance group. Traffic is directed to the gateway by using tag-based routing, so only instances that carry the matching network tag use the NAT gateway route.

Compute Engine routes have a default priority of 1000, and lower numbers indicate higher priority. The Terraform module creates a Compute Engine route with priority 800 that redirects outbound traffic from the tagged Google Kubernetes Engine nodes to the NAT gateway instance instead of the default internet gateway. The example code in the module also creates a static route with priority 700 that sends traffic from the nodes to the Google Kubernetes Engine master through the default internet gateway, so the nodes can still reach the cluster control plane.
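
The two routes can be sketched in Terraform roughly as follows. This is a simplified, illustrative sketch, not the module's actual code: the resource names, the "default" network, and the NAT instance name are assumptions for illustration.

```hcl
# Priority 700 (higher priority): traffic to the GKE master goes through
# the default internet gateway so the nodes can always reach the control plane.
resource "google_compute_route" "gke-master-route" {
  name             = "gke-master-route"              # illustrative name
  dest_range       = "${var.gke_master_ip}/32"       # master endpoint only
  network          = "default"
  next_hop_gateway = "default-internet-gateway"
  tags             = ["${var.gke_node_tag}"]
  priority         = 700
}

# Priority 800 (lower priority): all other outbound traffic from tagged
# GKE nodes is sent to the NAT gateway instance.
resource "google_compute_route" "nat-route" {
  name                   = "nat-route"               # illustrative name
  dest_range             = "0.0.0.0/0"
  network                = "default"
  next_hop_instance      = "nat-gateway-instance"    # illustrative name
  next_hop_instance_zone = "${var.zone}"
  tags                   = ["${var.gke_node_tag}"]
  priority               = 800
}
```

Because the master route has a more specific destination range and a higher priority, master-bound traffic never passes through the NAT gateway.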

After the NAT gateway instance is up and running, the startup script configures IP forwarding and adds the firewall rules needed to perform address translation.
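
Conceptually, the NAT configuration applied by the startup script boils down to something like the following. This is a minimal sketch of the idea, not the module's exact script:

```shell
# Allow the kernel to forward packets that are not addressed to this instance.
sysctl -w net.ipv4.ip_forward=1

# Masquerade outbound packets: rewrite their source address to this
# instance's external-facing address before they leave eth0, so return
# traffic comes back to the gateway and is forwarded to the original node.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```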

  1. Using the following button, open Cloud Shell, clone the GitHub repository, and navigate to the examples/gke-nat-gateway directory:

    OPEN IN CLOUD SHELL

    The repository includes the Modular NAT Gateway on Compute Engine for Terraform.

  2. Configure your Cloud Shell environment to use the latest version of Terraform by installing that version with the helper script:

    curl -sL https://goo.gl/yZS5XU | bash
    source ${HOME}/.bashrc
    

  3. Set variables for the cluster name, region, and zone. You use these variables throughout this tutorial.

    CLUSTER_NAME=dev
    REGION=us-central1
    ZONE=us-central1-f
    

  4. Create a Google Kubernetes Engine cluster in the us-central1-f zone:

    gcloud container clusters create ${CLUSTER_NAME} --zone ${ZONE}

  5. Create the terraform.tfvars file by using the REGION and ZONE variables:

    echo "region = \"${REGION}\"" > terraform.tfvars
    echo "zone = \"${ZONE}\"" >> terraform.tfvars
    

  6. Extract the Google Kubernetes Engine master IP and network tag name using the gcloud command-line tool and add them to the terraform.tfvars file:

    echo "gke_master_ip = \"$(gcloud container clusters describe ${CLUSTER_NAME} --zone ${ZONE} --format='get(endpoint)')\"" >> terraform.tfvars
    echo "gke_node_tag = \"$(gcloud compute instance-templates describe $(gcloud compute instance-templates list --filter=name~gke-${CLUSTER_NAME} --limit=1 --uri) --format='get(properties.tags.items[0])')\"" >> terraform.tfvars
    

  7. Set the GOOGLE_PROJECT environment variable used by Terraform:

    export GOOGLE_PROJECT=$(gcloud config get-value project)

  8. Deploy the NAT gateway by using Terraform commands:

    terraform init
    terraform plan
    terraform apply
    
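For reference, the terraform.tfvars file assembled in steps 5 and 6 resembles the following (all values here are illustrative placeholders):

```hcl
region        = "us-central1"
zone          = "us-central1-f"
gke_master_ip = "203.0.113.10"            # placeholder master endpoint
gke_node_tag  = "gke-dev-12345678-node"   # placeholder node network tag
```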

You now have a Google Kubernetes Engine cluster, a NAT gateway instance, and a Compute Engine route configured for your cluster. The next section shows how to verify that outbound cluster traffic is routed through the NAT gateway instance.

Verifying the NAT gateway routing

  1. Show the external IP address that the cluster's egress traffic uses by running a Kubernetes pod that makes a curl request:

    kubectl run example -i -t --rm --restart=Never --image centos:7 -- curl -s http://ipinfo.io/ip

    The command output shows the external IP address of the NAT gateway. This address is also available as a Terraform output variable.

  2. Extract the value of the external_ip variable:

    terraform output -module=nat -json | jq -r .external_ip.value

    Verify that the pod output from the previous step matches the external IP address of the NAT gateway instance.
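
The -json flag makes the Terraform output machine-readable, and the jq filter extracts the single field. For illustration, here is how the filter behaves against a hypothetical sample of that output; the JSON structure and IP address are placeholders, not real Terraform output:

```shell
# Hypothetical sample of `terraform output -module=nat -json`
# (structure and address are illustrative).
cat > /tmp/nat_output.json <<'EOF'
{
  "external_ip": {
    "sensitive": false,
    "type": "string",
    "value": "203.0.113.10"
  }
}
EOF

# -r prints the raw string without surrounding JSON quotes.
jq -r .external_ip.value /tmp/nat_output.json
# → 203.0.113.10
```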

You have now verified that the NAT gateway for your Google Kubernetes Engine cluster is functioning as expected.

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

  1. Remove the resources that Terraform created:

    terraform destroy

  2. Delete the Google Kubernetes Engine cluster:

    CLUSTER_NAME=dev
    ZONE=us-central1-f
    gcloud container clusters delete ${CLUSTER_NAME} --zone ${ZONE}
    
