Creating GKE private clusters with network proxies for master access

When you create a GKE private cluster with a private cluster master endpoint, the cluster's master node is inaccessible from the public internet, but it needs to be accessible for administration.

By default, the cluster can access the master through its private endpoint, and authorized networks can be defined within the VPC network. However, the master is inaccessible across a VPC Network Peering connection (such as in hub-and-spoke designs) except through a proxy that you create and host in authorized IP address space.

This tutorial shows you how to configure such a proxy within your GKE private cluster.

Objectives

  • Create a GKE private cluster with no external access.
  • Create and deploy a Docker image to run the proxy.
  • Create a Kubernetes Service to access the proxy.
  • Test access to the proxy.

Costs

This tutorial uses billable components of Google Cloud Platform, including Compute Engine and Google Kubernetes Engine.

You can use the pricing calculator to generate a cost estimate based on your projected usage.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up.

Before you begin

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. Select or create a Google Cloud Platform project.

    Go to the Manage resources page

  3. Make sure that billing is enabled for your Google Cloud Platform project.

    Learn how to enable billing

  4. Enable the Compute Engine and Google Kubernetes Engine APIs.

    Enable the APIs

Setting up your environment

In this tutorial, you use Cloud Shell to enter commands. Cloud Shell gives you access to the command line in the GCP Console, and includes the Cloud SDK and other tools that you need to develop in GCP. Cloud Shell appears as a window at the bottom of the GCP Console. It can take several minutes to initialize, but the window appears immediately.

To use Cloud Shell to set up your environment:

  1. In the GCP Console, open Cloud Shell.

    Open Cloud Shell

  2. Make sure you are working in the project that you created or selected. Replace [YOUR_PROJECT_ID] with your GCP project ID.

    gcloud config set project [YOUR_PROJECT_ID]
    export PROJECT_ID=`gcloud config list --format="value(core.project)"`
    
  3. Set the default compute region and zone. For the purposes of this tutorial, they are us-central1 and us-central1-c. If you are deploying to a production environment, deploy to a region of your choice.

    gcloud config set compute/region us-central1
    gcloud config set compute/zone us-central1-c
    export REGION=us-central1
    export ZONE=us-central1-c
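
    Optionally, you can confirm that the project, region, and zone are set the way you expect before continuing:

    gcloud config list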
    
    

Creating a VPC network and client VM

Create a VPC network and subnet that will host the resources.

  1. Create a VPC network:

    gcloud compute networks create k8s-proxy --subnet-mode=custom
    
  2. Create a custom subnet in the newly created VPC network:

    gcloud compute networks subnets create subnet-cluster \
        --network=k8s-proxy --range=10.50.0.0/16
    
  3. Create a client VM named proxy-temp that you will use to deploy resources in the Kubernetes cluster:

    gcloud compute instances create proxy-temp \
        --subnet=subnet-cluster \
        --scopes=cloud-platform
    
  4. Save the internal IP address of the newly created instance in an environment variable:

    export CLIENT_IP=`gcloud compute instances describe proxy-temp \
        --format="value(networkInterfaces[0].networkIP)"`
    
  5. Create a firewall rule to allow SSH access to the VPC network:

    gcloud compute firewall-rules create k8s-proxy-ssh --network k8s-proxy \
        --allow tcp:22
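
    As a quick check that the rule works, you can try connecting to the new VM over SSH and running a trivial command (this assumes your account has SSH access in this project):

    gcloud compute ssh proxy-temp --zone $ZONE --command "hostname"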
    

Creating a private cluster

Now create a private cluster to use for this tutorial.

If you already have a cluster that you prefer to use, you can skip the step for creating the cluster, but you must configure some initial form of access on your client machine.

  • In Cloud Shell, create a cluster:

    gcloud container clusters create frobnitz  \
        --master-ipv4-cidr=172.16.0.64/28 \
        --network k8s-proxy \
        --subnetwork=subnet-cluster \
        --enable-ip-alias \
        --enable-private-nodes \
        --enable-private-endpoint \
        --master-authorized-networks $CLIENT_IP/32 \
        --enable-master-authorized-networks
    

    The command creates a GKE private cluster named frobnitz with master-authorized networks set to allow only the client machine to have access.
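
    If you want to verify that the master received only a private endpoint, one optional check is to print the cluster's private cluster configuration:

    gcloud container clusters describe frobnitz --zone $ZONE \
        --format="flattened(privateClusterConfig)"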

Creating the Docker image

Use the following steps to build a Kubernetes API proxy image called k8s-api-proxy, which acts as a forward proxy to the Kubernetes API server.

  1. In Cloud Shell, create a directory and change to that directory:

    mkdir k8s-api-proxy && cd k8s-api-proxy
  2. Create the Dockerfile. The following configuration builds the container image from Alpine, a lightweight Linux distribution, and installs the Privoxy proxy. It also installs curl and jq, which are used during container initialization, adds the necessary configuration files, exposes port 8118 to GKE internally, and adds a startup script.

    FROM alpine
    RUN apk add -U curl privoxy jq && \
        mv /etc/privoxy/templates /etc/privoxy-templates && \
        rm -rf /var/cache/apk/* /etc/privoxy/* && \
        mv /etc/privoxy-templates /etc/privoxy/templates
    ADD --chown=privoxy:privoxy config /etc/privoxy/
    ADD --chown=privoxy:privoxy k8s-only.action /etc/privoxy/
    ADD --chown=privoxy:privoxy k8s-rewrite-internal.filter /etc/privoxy/
    ADD k8s-api-proxy.sh /
    EXPOSE 8118/tcp
    ENTRYPOINT ["./k8s-api-proxy.sh"]
  3. In the k8s-api-proxy directory, create the config file and add the following content to it:

    #config directory
    confdir /etc/privoxy
    # Allow Kubernetes API access only
    actionsfile /etc/privoxy/k8s-only.action
    # Rewrite https://CLUSTER_IP to https://kubernetes.default
    filterfile /etc/privoxy/k8s-rewrite-internal.filter
    # Don't show the pod name in errors
    hostname k8s-privoxy
    # Bind to all interfaces, port :8118
    listen-address  :8118
    # User cannot click-through a block
    enforce-blocks 1
    # Allow more than one outbound connection
    tolerate-pipelining 1
    
  4. In the same directory, create the k8s-only.action file and add the following content to it. Note that CLUSTER_IP will be replaced when k8s-api-proxy.sh runs.

    # Block everything...
    {+block{Not Kubernetes}}
    /

    # ... except the internal k8s endpoint, which you rewrite
    # (see k8s-rewrite-internal.filter).
    {+client-header-filter{k8s-rewrite-internal} -block{Kubernetes}}
    CLUSTER_IP/
  5. Create the k8s-rewrite-internal.filter file and add the following content to it. Note that CLUSTER_IP will be replaced when k8s-api-proxy.sh runs.

    CLIENT-HEADER-FILTER: k8s-rewrite-internal Rewrite https://CLUSTER_IP/ to https://kubernetes.default/
    s@(CONNECT) CLUSTER_IP:443 (HTTP/\d\.\d)@$1 kubernetes.default:443 $2@ig
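
    To make the rewrite concrete: assuming the master's internal address is 172.16.0.66 (the address shown in the sample output later in this tutorial), the filter transforms the first request line below into the second. This is a sketch of the proxy's CONNECT line only, not literal Privoxy output:

    CONNECT 172.16.0.66:443 HTTP/1.1
    CONNECT kubernetes.default:443 HTTP/1.1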
    
  6. Create the k8s-api-proxy.sh file and add the following content to it.

    #!/bin/sh
    
    set -o errexit
    set -o pipefail
    set -o nounset
    
    # Get the internal cluster IP
    INTERNAL_IP=$(curl -k -SsL https://kubernetes.default/api |
     jq -r '.serverAddressByClientCIDRs[0].serverAddress')
    
    # Replace CLUSTER_IP in the rewrite filter and action file
    sed -i "s/CLUSTER_IP/${INTERNAL_IP}/g" \
        /etc/privoxy/k8s-rewrite-internal.filter
    sed -i "s/CLUSTER_IP/${INTERNAL_IP}/g" \
        /etc/privoxy/k8s-only.action
    
    # Start Privoxy un-daemonized
    privoxy --no-daemon /etc/privoxy/config
    
  7. Make k8s-api-proxy.sh executable:

    chmod +x k8s-api-proxy.sh
  8. Build the container image and push it to your project's Container Registry:

    docker build -t gcr.io/$PROJECT_ID/k8s-api-proxy:0.1 .
    docker push gcr.io/$PROJECT_ID/k8s-api-proxy:0.1
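
    These commands work as-is in Cloud Shell. If you build and push from another machine, Docker might not yet be authorized for Container Registry; in that case, register gcloud as a Docker credential helper first:

    gcloud auth configure-docker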
    

Deploying the image and Service

  1. In Cloud Shell, log in to the client VM you created earlier:

    gcloud compute ssh proxy-temp
    
  2. Install the kubectl tool:

    sudo apt-get install kubectl
    
  3. Save the project ID as an environment variable:

    export PROJECT_ID=`gcloud config list --format="value(core.project)"`
    
  4. Get the cluster credentials:

    gcloud container clusters get-credentials frobnitz \
    --zone us-central1-c --internal-ip
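
    As an optional sanity check, confirm that kubectl can reach the master over its private endpoint. This works from this VM because its IP address is in the cluster's master-authorized networks:

    kubectl get nodes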
    
  5. Create a Kubernetes deployment that exposes the container that you just created:

    kubectl run k8s-api-proxy \
        --image=gcr.io/$PROJECT_ID/k8s-api-proxy:0.1 \
        --port=8118
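
    On newer kubectl versions, kubectl run creates a standalone Pod rather than a Deployment; either way, the workload carries the run=k8s-api-proxy label that the Service in the next step selects on. Once the container is running, an optional check that the startup script replaced CLUSTER_IP with the real master address is:

    kubectl exec "$(kubectl get pods -l run=k8s-api-proxy -o name | head -n 1)" \
        -- cat /etc/privoxy/k8s-only.action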
    
  6. Create the ilb.yaml file for the internal load balancer and copy the following into it:

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        run: k8s-api-proxy
      name: k8s-api-proxy
      namespace: default
      annotations:
        cloud.google.com/load-balancer-type: "Internal"
    spec:
      ports:
      - port: 8118
        protocol: TCP
        targetPort: 8118
      selector:
        run: k8s-api-proxy
      type: LoadBalancer
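
    Note: on newer GKE versions, internal load balancers are requested with the networking.gke.io/load-balancer-type: "Internal" annotation; the cloud.google.com annotation shown here is the older form from the era of this tutorial.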
    
  7. Deploy the internal load balancer:

    kubectl create -f ilb.yaml
  8. Check for the Service and wait for an IP address:

    kubectl get service/k8s-api-proxy

    The output will look like the following. When you see an external IP, the proxy is ready.

    NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
    k8s-api-proxy   LoadBalancer   10.24.13.129   10.24.24.3    8118:30282/TCP   2m
    

    The external IP address from this step is your proxy address.

  9. Save the IP address of the ILB as an environment variable:

    export LB_IP=`kubectl get  service/k8s-api-proxy \
    -o jsonpath='{.status.loadBalancer.ingress[].ip}'`
    
  10. Save the cluster's master IP address in an environment variable:

    export MASTER_IP=`gcloud container clusters describe frobnitz \
    --zone=us-central1-c \
    --format="get(privateClusterConfig.privateEndpoint)"`
    
  11. Verify that the proxy is usable by accessing the Kubernetes API through it:

    curl -k -x $LB_IP:8118 https://$MASTER_IP/api
    
    The output will look like the following:
    {
      "kind": "APIVersions",
      "versions": [
        "v1"
      ],
      "serverAddressByClientCIDRs": [
        {
          "clientCIDR": "0.0.0.0/0",
          "serverAddress": "172.16.0.66:443"
        }
      ]
    }
    
  12. Set the https_proxy environment variable to the proxy's address so that the kubectl command reaches the Kubernetes API through the internal load balancer:

    export https_proxy=$LB_IP:8118
  13. Test your proxy and https_proxy variable by running the kubectl command:

    kubectl get pods

    You will get an output that looks like the following, which means that you successfully connected to the Kubernetes API through the proxy:

    NAME                             READY   STATUS    RESTARTS   AGE
    k8s-api-proxy-766c69dd45-mfqf4   1/1     Running   0          6m15s
    
  14. Exit the client VM:

    exit

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

Delete the project

  1. In the GCP Console, go to the Projects page.

    Go to the Projects page

  2. In the project list, select the project you want to delete and click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.
