Using a NAT Gateway with Kubernetes Engine

This tutorial shows how to define new node IP address mappings by using network address translation (NAT) gateways.

Under normal circumstances, Google Kubernetes Engine nodes route all egress traffic through the internet gateway associated with the cluster's Compute Engine network. Each node in the cluster has an ephemeral external IP address. When nodes are created and destroyed during autoscaling, new node IP addresses are allocated automatically.

The default gateway behavior works well under normal circumstances. However, you might want to modify how ephemeral external IP addresses are allocated in order to:

  • Provide a third-party service with a consistent external IP address.
  • Monitor and filter egress traffic out of the Google Kubernetes Engine cluster.

In this tutorial, you learn how to:

  1. Create a NAT gateway instance and configure its routing details for an existing Google Kubernetes Engine cluster.
  2. Create a custom routing rule for the NAT gateway instance.

The following diagram shows an overview of the architecture:

[Diagram: architecture of a NAT gateway]

Objectives

  • Use Terraform to create a NAT gateway instance.
  • Create an egress network routing rule for an existing Google Kubernetes Engine cluster.
  • Verify that outbound traffic from a pod is routed through the NAT gateway.

Costs

This tutorial uses billable components of GCP, including:

  • Google Kubernetes Engine
  • Compute Engine

Use the Pricing Calculator to estimate the cost of your environment. The resources used in this tutorial, including the Google Kubernetes Engine cluster, cost about $3.19 per day.

Before you begin

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. Select or create a GCP project.

    Go to the Manage resources page

  3. Make sure that billing is enabled for your project.

    Learn how to enable billing

  4. Enable the Compute Engine and Google Kubernetes Engine APIs.

    Enable the APIs

Creating the NAT gateway with Terraform

This tutorial uses the Modular NAT Gateway on Compute Engine for Terraform to automate creation of the NAT gateway managed-instance group. Traffic is directed by using tag-based routing: only instances with matching network tags use the NAT gateway route.

Compute Engine routes have a default priority of 1000, with lower numbers indicating higher priority. The Terraform module creates a Compute Engine route with priority 800 that redirects all outbound traffic from the Google Kubernetes Engine nodes to the NAT gateway instance instead of the default internet gateway. The example code in the module also creates a static route with priority 700 for traffic from the Google Kubernetes Engine nodes to the Google Kubernetes Engine master. Because priority 700 takes precedence over 800, master-bound traffic bypasses the NAT gateway, and this split preserves normal cluster operation.
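The route-selection rule can be illustrated with a toy example: among the routes that match a packet, Compute Engine picks the one with the lowest priority number. The route names below are hypothetical stand-ins for the routes described above.

```shell
# Toy illustration of Compute Engine route selection: among matching
# routes, the lowest priority number wins. Route names are hypothetical.
routes="700 gke-master-route
800 nat-gateway-route
1000 default-internet-gateway"

# A numeric sort puts the winning (lowest-priority-number) route first:
echo "${routes}" | sort -n | head -n 1   # -> 700 gke-master-route
```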

After the NAT gateway instance is up and running, the startup script configures IP forwarding and adds the firewall rules needed to perform address translation.
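The module's startup script is not reproduced here; as a rough sketch, enabling NAT on a Linux instance typically involves commands along these lines (the module's actual script may differ):

```shell
# Rough sketch only; the Terraform module's actual startup script may differ.

# Enable IP forwarding so the instance can route packets on behalf of
# other instances:
sysctl -w net.ipv4.ip_forward=1

# Rewrite the source address of forwarded packets to this instance's own
# external address (masquerading), which performs the address translation:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```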

  1. Open Cloud Shell, and clone the GitHub repository:

    git clone https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway
    cd terraform-google-nat-gateway

    The repository includes the Modular NAT Gateway on Compute Engine for Terraform. This example works with zonal, regional, and private clusters.

  2. Navigate to the examples/gke-nat-gateway directory:

    cd examples/gke-nat-gateway
    

  3. Configure your Cloud Shell environment to use the latest version of Terraform by installing that version with the helper script:

    curl -sLO https://raw.githubusercontent.com/GoogleCloudPlatform/terraform-google-nat-gateway/master/examples/terraform-install.sh
    bash terraform-install.sh
    source ${HOME}/.bashrc
    

  4. Set variables for the cluster name, region, and zone. You use these variables throughout this tutorial.

    CLUSTER_NAME=dev-nat
    REGION=us-central1
    ZONE=us-central1-f
    NETWORK=default
    SUBNETWORK=default
    

  5. Create a Google Kubernetes Engine cluster in the us-central1-f zone:

    gcloud container clusters create ${CLUSTER_NAME:0:20} --zone ${ZONE}
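    The ${CLUSTER_NAME:0:20} expression is Bash substring expansion: it keeps at most the first 20 characters of the cluster name, which helps keep derived Compute Engine resource names short. You can check the behavior locally:

```shell
# ${VAR:offset:length} expands to a substring of VAR's value.
CLUSTER_NAME=a-very-long-development-cluster-name
echo "${CLUSTER_NAME:0:20}"   # prints: a-very-long-developm
```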

  6. Create the terraform.tfvars file by using the REGION, ZONE, NETWORK, and SUBNETWORK variables:

    echo "region = \"${REGION}\"" > terraform.tfvars
    echo "zone = \"${ZONE}\"" >> terraform.tfvars
    echo "network = \"${NETWORK}\"" >> terraform.tfvars
    echo "subnetwork = \"${SUBNETWORK}\"" >> terraform.tfvars
    

  7. Extract the Google Kubernetes Engine master IP and network tag name using the gcloud command-line tool and add them to the terraform.tfvars file:

    NODE_TAG=$(gcloud compute instance-templates describe $(gcloud compute instance-templates list --filter=name~gke-${CLUSTER_NAME:0:20} --limit=1 --uri) --format='get(properties.tags.items[0])')
    echo "gke_master_ip = \"$(gcloud compute firewall-rules describe ${NODE_TAG/-node/-ssh} --format='value(sourceRanges)')\"" >> terraform.tfvars
    echo "gke_node_tag = \"${NODE_TAG}\"" >> terraform.tfvars
    

  8. Set the GOOGLE_PROJECT environment variable used by Terraform:

    export GOOGLE_PROJECT=$(gcloud config get-value project)

  9. Deploy the NAT gateway by using Terraform commands:

    terraform init
    terraform plan -out terraform.tfplan
    terraform apply terraform.tfplan
    

You now have a Google Kubernetes Engine cluster, a NAT gateway instance, and a Compute Engine route configured for your cluster. The next section shows how to verify that outbound cluster traffic is routed through the NAT gateway instance.

Verify the NAT gateway routing

  1. Show the external IP address that the cluster node is using by running a Kubernetes pod that uses curl:

    kubectl run example -i -t --rm --restart=Never --image centos:7 -- curl -s http://ipinfo.io/ip

    The run command output displays the external IP address of the NAT gateway. This address is available as a Terraform output variable.

  2. Extract the value of the ip-nat-gateway output variable:

    terraform output ip-nat-gateway

    Verify that the pod output from the previous step matches the external IP address of the NAT gateway instance.

You have now verified that the NAT gateway for your Google Kubernetes Engine cluster is functioning as expected.

Access cluster nodes with SSH

When the NAT gateway is in place, SSH connections from the GCP Console no longer work for cluster nodes. To SSH into a cluster node, run the following commands, which use the NAT gateway Compute Engine instance as a bastion host:

eval "$(ssh-agent -s)"
gcloud -q compute config-ssh
ssh-add ~/.ssh/google_compute_engine
CLUSTER_NAME=dev-nat
REGION=us-central1
NODE=$(gcloud compute instances list --filter=name~gke-${CLUSTER_NAME}- --limit=1 --format='value(name)')
gcloud compute ssh $(gcloud compute instances list --filter=name~nat-gateway-${REGION} --uri) --ssh-flag="-A" -- ssh ${NODE} -o StrictHostKeyChecking=no

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

  1. Remove the resources that Terraform created:

    terraform destroy

  2. Delete the Google Kubernetes Engine cluster:

    CLUSTER_NAME=dev-nat
    ZONE=us-central1-f
    gcloud container clusters delete ${CLUSTER_NAME} --zone ${ZONE}
    
