Creating GKE Private Clusters with Network Proxies for External Access

When you create a GKE private cluster, the cluster's master (the Kubernetes control plane) is inaccessible from the public internet, but you still need to reach it for administration. This tutorial shows how to configure the master of a GKE private cluster so that it is inaccessible from outside your network, except through a proxy that you create and host in private IP space.

In this tutorial, you reach the master through a proxy that you host in private IP space and expose inside your network with an internal load balancer. This approach lets you retrieve credentials by using gcloud container clusters get-credentials, and it preserves access to the master over a secure connection.

Objectives

  • Create a GKE private cluster with minimal external access.
  • Create and deploy a Docker image to run the proxy.
  • Create a Kubernetes Service to access the proxy.
  • Remove all other external access to the cluster.

Costs

This tutorial uses billable components of Google Cloud Platform, including Google Kubernetes Engine and Compute Engine.

The estimated cost for this solution is approximately $2.42 per day.

Before you begin

  1. Select or create a GCP project.

    Go to the Manage resources page

  2. Make sure that billing is enabled for your project.

    Learn how to enable billing

  3. Make sure you have a client machine with the following configuration:
    • A known external IP address for initial access to the cluster. This is temporary, because after your proxy is set up, you will provide access to a CIDR range. One example is a Compute Engine virtual machine with an external IP address. You must also have permission to create clusters and deploy Pods and Services.
    • Docker installed and authenticated to gcr.io by using gcloud auth configure-docker.
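For example, a suitable client could be created as a Compute Engine VM. This sketch is not part of the original steps; the instance name k8s-client and the zone are illustrative placeholders:

```shell
# Hypothetical client VM with an external IP and broad API scopes so that it
# can create clusters and deploy workloads (name and zone are examples only).
gcloud compute instances create k8s-client \
    --zone us-central1-c \
    --scopes cloud-platform
```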

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. See Cleaning up for more detail.

Creating a private cluster

The first step is to create or select a private cluster to use for this tutorial.

If you already have a cluster that you prefer to use, you can skip the step for creating the cluster, but you must configure some initial form of access on your client machine.

  1. To create a cluster, run the following command. For [YOUR_CLIENT_IP_ADDRESS], use the external, routable IP address or netblock of your client.

    gcloud beta container clusters create frobnitz \
        --zone=us-central1-c \
        --master-ipv4-cidr=172.16.0.64/28 \
        --enable-ip-alias \
        --private-cluster \
        --create-subnetwork="" \
        --master-authorized-networks=[YOUR_CLIENT_IP_ADDRESS]/32 \
        --enable-master-authorized-networks

    The command creates a GKE private cluster named frobnitz with master authorized networks set to allow access only from the client machine.

  2. If instead you have an existing cluster and want to grant access to it, run the following command. For [YOUR_CLIENT_IP_ADDRESS], use the external, routable IP address or netblock of your client. [CLUSTER_NAME] is the name of your existing cluster.

    gcloud beta container clusters update [CLUSTER_NAME] \
        --zone [CLUSTER_ZONE] \
        --enable-master-authorized-networks \
        --master-authorized-networks=[YOUR_CLIENT_IP_ADDRESS]/32
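Either way, you can confirm which networks are currently authorized by describing the cluster. This check is an addition to the tutorial, not one of its steps:

```shell
# Show the CIDR blocks currently allowed to reach the cluster master.
gcloud container clusters describe [CLUSTER_NAME] \
    --zone [CLUSTER_ZONE] \
    --format "value(masterAuthorizedNetworksConfig.cidrBlocks[].cidrBlock)"
```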

Creating the Docker image

Use the following steps to build a Kubernetes API proxy image called k8s-api-proxy, which acts as a forward proxy to the Kubernetes API server.

  1. On your client machine, create a directory and change to that directory:

    mkdir k8s-api-proxy && cd k8s-api-proxy

  2. Create the Dockerfile. The following configuration creates a container from Alpine, which is a lightweight container distribution that has a Privoxy proxy. The Dockerfile also installs curl and jq for container initialization, adds the necessary configuration files, exposes port 8118 to GKE internally, and adds a startup script.

    FROM alpine
    RUN apk add -U curl privoxy jq && \
        mv /etc/privoxy/templates /etc/privoxy-templates && \
        rm -rf /var/cache/apk/* /etc/privoxy/* && \
        mv /etc/privoxy-templates /etc/privoxy/templates
    ADD --chown=privoxy:privoxy config /etc/privoxy/
    ADD --chown=privoxy:privoxy k8s-only.action /etc/privoxy/
    ADD --chown=privoxy:privoxy k8s-rewrite-external.filter /etc/privoxy/
    ADD k8s-api-proxy.sh /
    EXPOSE 8118/tcp
    ENTRYPOINT ["./k8s-api-proxy.sh"]

  3. In the k8s-api-proxy directory, create the config file and add the following content to it:

    # Allow Kubernetes API access only
    actionsfile /etc/privoxy/k8s-only.action
    # Rewrite https://CLUSTER_IP to https://kubernetes.default
    filterfile /etc/privoxy/k8s-rewrite-external.filter
    # Don't show the pod name in errors
    hostname k8s-privoxy
    # Bind to all interfaces, port :8118
    listen-address  :8118
    # User cannot click-through a block
    enforce-blocks 1
    # Allow more than one outbound connection
    tolerate-pipelining 1

  4. In the same directory, create the k8s-only.action file and add the following content to it. Note that CLUSTER_IP will be replaced when k8s-api-proxy.sh runs.

    # Block everything...
    {+block{Not Kubernetes}}
    /
    # ... except the external k8s endpoint, which you rewrite
    # (see k8s-rewrite-external.filter).
    {+client-header-filter{k8s-rewrite-external} -block{Kubernetes}}
    CLUSTER_IP/

  5. Create the k8s-rewrite-external.filter file and add the following content to it. Note that CLUSTER_IP will be replaced when k8s-api-proxy.sh runs.

    CLIENT-HEADER-FILTER: k8s-rewrite-external Rewrite https://CLUSTER_IP/ to https://kubernetes.default/
    s@(CONNECT) CLUSTER_IP:443 (HTTP/\d\.\d)@$1 kubernetes.default:443 $2@ig
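As a quick local sanity check (not part of the tutorial), you can simulate the rewrite that this filter applies to a client's CONNECT request line by running the same substitution with sed. The IP address below is only an example; k8s-api-proxy.sh substitutes the real master IP at startup:

```shell
# Simulate the filter: rewrite "CONNECT <master-ip>:443" to kubernetes.default.
# 35.188.209.220 stands in for the master IP that k8s-api-proxy.sh discovers.
EXTERNAL_IP="35.188.209.220"
REQUEST="CONNECT ${EXTERNAL_IP}:443 HTTP/1.1"
REWRITTEN=$(echo "$REQUEST" | \
    sed -E "s@(CONNECT) ${EXTERNAL_IP}:443 (HTTP/[0-9]\.[0-9])@\1 kubernetes.default:443 \2@")
echo "$REWRITTEN"
# → CONNECT kubernetes.default:443 HTTP/1.1
```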

  6. Create the k8s-api-proxy.sh file and add the following content to it.

    #!/bin/sh

    set -o errexit
    set -o pipefail
    set -o nounset

    # Get the external cluster IP
    EXTERNAL_IP=$(curl -SsL --insecure https://kubernetes.default/api | jq -r '.serverAddressByClientCIDRs[0].serverAddress')

    # Replace CLUSTER_IP in the rewrite filter and action file
    sed -i "s/CLUSTER_IP/${EXTERNAL_IP}/g" /etc/privoxy/k8s-rewrite-external.filter
    sed -i "s/CLUSTER_IP/${EXTERNAL_IP}/g" /etc/privoxy/k8s-only.action

    # Start Privoxy un-daemonized
    privoxy --no-daemon /etc/privoxy/config

  7. Make k8s-api-proxy.sh executable:

    chmod +x k8s-api-proxy.sh

  8. Build and push the container to your project. For [YOUR_PROJECT], substitute the ID of your project.

    docker build -t gcr.io/[YOUR_PROJECT]/k8s-api-proxy:0.1 .
    docker push gcr.io/[YOUR_PROJECT]/k8s-api-proxy:0.1
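Before deploying, you can verify that the image landed in Container Registry. This is an optional check, not one of the tutorial's steps:

```shell
# List tags for the pushed image; the 0.1 tag should appear in the output.
gcloud container images list-tags gcr.io/[YOUR_PROJECT]/k8s-api-proxy
```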

Deploying the image and Service

  1. Create a Kubernetes deployment that exposes the container that you just created. For [YOUR_PROJECT], use your project ID.

    kubectl run k8s-api-proxy \
        --image=gcr.io/[YOUR_PROJECT]/k8s-api-proxy:0.1 \
        --port=8118

  2. In the same directory, create the ilb.yaml file for the internal load balancer and copy the following into it:

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        run: k8s-api-proxy
      name: k8s-api-proxy
      namespace: default
      annotations:
        cloud.google.com/load-balancer-type: "Internal"
    spec:
      ports:
      - port: 8118
        protocol: TCP
        targetPort: 8118
      selector:
        run: k8s-api-proxy
      type: LoadBalancer

  3. Deploy the internal load balancer:

    kubectl create -f ilb.yaml

  4. Check for the Service and wait for an IP address:

    kubectl get service/k8s-api-proxy

    The output will look like the following. When you see an external IP, the proxy is ready.

    NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
    k8s-api-proxy   LoadBalancer   10.24.13.129   10.24.24.3    8118:30282/TCP   2m

    The external IP address from this step is your proxy address.
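    Instead of re-running the get command by hand, a small polling loop (a convenience sketch, not part of the tutorial) can wait for the address and print it:

```shell
# Poll until the internal load balancer reports an ingress IP, then print it.
until PROXY_IP=$(kubectl get service/k8s-api-proxy \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}') && [ -n "$PROXY_IP" ]; do
    echo "Waiting for the internal load balancer IP..."
    sleep 5
done
echo "Proxy address: ${PROXY_IP}:8118"
```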

  5. Determine the cluster's master IP address for testing:

    gcloud container clusters list

    In the results, make a note of the MASTER_IP value for your cluster:

    NAME      LOCATION       MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
    frobnitz  us-central1-c  1.9.7-gke.3     35.188.209.220  n1-standard-1  1.9.7-gke.3   3          RUNNING

  6. To verify that the proxy is usable, create a Compute Engine instance in the same network, access it with SSH, and verify the proxy. For [YOUR_SUBNET], [YOUR_PROXY] and [MASTER_IP], use the values that you got previously.

    gcloud compute instances create proxy-test \
        --subnet=[YOUR_SUBNET]
    gcloud compute ssh proxy-test -- \
        curl --insecure -x [YOUR_PROXY]:8118 https://[MASTER_IP]/api

  7. Set the https_proxy environment variable to the HTTP proxy so that the kubectl command can reach the internal load balancer from anywhere. For [YOUR_PROXY], use the external IP address that you got in the earlier step.

    export https_proxy=[YOUR_PROXY]:8118

  8. Test your proxy and https_proxy variable:

    kubectl get pods
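With the proxy confirmed working, you can complete the objective of removing all other external access by clearing the client machine's IP from the authorized networks list. This sketch assumes that passing --enable-master-authorized-networks with no network list empties the allowlist; verify the flag's behavior for your gcloud version before running it:

```shell
# Replace the authorized networks list with an empty one so the master is
# reachable only through the proxy inside your network.
gcloud beta container clusters update frobnitz \
    --zone us-central1-c \
    --enable-master-authorized-networks
```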

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

  • Delete the cluster.
  • Delete any external IP addresses created for your Compute Engine instance.

Delete the cluster

If you created a cluster just for this tutorial, and if you don't want to use the cluster for another purpose, you can delete it. At the command line of your client machine, run the following command:

gcloud beta container clusters delete frobnitz \
    --zone us-central1-c

Delete external IP address and instance

If you created a Compute Engine instance for the initial external access, delete it and any external IP you associated with it.

Find the name of the created instance and then delete it:

gcloud compute instances list
gcloud compute instances delete [NAME]

Similarly, look up the reserved external address and delete it:

gcloud compute addresses list
gcloud compute addresses delete [NAME]
