Transparent proxy and filtering on Kubernetes with initializers
Contributed by Google employees.
This is a follow-on tutorial to the Transparent proxy and filtering on Kubernetes tutorial. It shows how to simplify the application of a transparent proxy for existing deployments using a deployment initializer. Initializers are one of the dynamic admission control features of Kubernetes.
This tutorial uses the tproxy-initializer Kubernetes initializer to inject the sidecar InitContainer, ConfigMap, and environment variables into a deployment when the annotation "initializer.kubernetes.io/tproxy": "true" is present. It also demonstrates how to deploy the tproxy Helm chart with role-based access control (RBAC).
As in the previous tutorial, the purpose of the tproxy-sidecar container is to create firewall rules in the pod network to block egress traffic. The tproxy-podwatch controller watches for pod changes containing the annotation and automatically adds or removes the local firewall REDIRECT rules to apply the transparent proxy to the pod.
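As a sketch, the REDIRECT rules that the tproxy-podwatch controller manages look roughly like the following. This is illustrative only; the pod IP and proxy port below are placeholder values, not values taken from the chart:

```sh
# Illustrative only: redirect HTTP/HTTPS egress from a single pod to the
# local mitmproxy listener. The pod IP (10.12.1.41) and the proxy port
# (1080) are placeholders for this sketch.
iptables -t nat -A PREROUTING -s 10.12.1.41 -p tcp --dport 80  -j REDIRECT --to-ports 1080
iptables -t nat -A PREROUTING -s 10.12.1.41 -p tcp --dport 443 -j REDIRECT --to-ports 1080
```

When the annotation is removed, the controller deletes the matching rules so traffic from the pod flows directly again.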
Figure 1. Transparent proxy with initializers architecture diagram.
Objectives
- Create a Kubernetes cluster with initializer and RBAC using Google Kubernetes Engine.
- Deploy the tproxy, tproxy-initializer, and the tproxy-podwatch pods using Helm.
- Deploy example apps with annotations to test external access to a Cloud Storage bucket.
Before you begin
This tutorial assumes that you already have a Google Cloud account and are familiar with the high-level concepts of Kubernetes Pods and Deployments.
Costs
This tutorial uses billable components of Google Cloud, including Google Kubernetes Engine.
Use the pricing calculator to estimate the costs for your environment.
Clone the source repository
Open Cloud Shell.
Clone the repository containing the code for this tutorial:
git clone https://github.com/danisla/kubernetes-tproxy
cd kubernetes-tproxy
The remainder of this tutorial is run from the root of the cloned repository directory.
Create Kubernetes Engine cluster and install Helm
Create a Kubernetes Engine cluster with alpha features enabled, RBAC, and a cluster version of at least 1.7 to support initializers:
gcloud container clusters create tproxy-example \
  --zone us-central1-f \
  --enable-kubernetes-alpha
This command also automatically configures the kubectl command to use the cluster.

Create a service account and cluster role binding for Helm to enable RBAC:
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
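For reference, the two commands above are equivalent to applying manifests along these lines. This is a sketch for orientation; on clusters older than 1.8 the apiVersion would be rbac.authorization.k8s.io/v1beta1 rather than v1:

```yaml
# Equivalent manifests for the two kubectl commands above (sketch).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```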
Install Helm in your Cloud Shell instance:
curl -sL https://get.helm.sh/helm-v2.17.0-linux-amd64.tar.gz | tar -xvf - && sudo mv linux-amd64/helm /usr/local/bin/ && rm -Rf linux-amd64
Initialize Helm with the service account:
helm init --service-account=tiller
This installs Tiller, the server-side component of Helm, in the Kubernetes cluster. The Tiller pod may take a minute to start.
Verify that the client and server components have been deployed:
helm version
You should see the Client and Server versions in the output.
Install the Helm chart
Before installing the chart, you must first extract the certificates generated by mitmproxy. The generated CA certificate is used in the example pods to trust the proxy when making HTTPS requests.
Extract the generated certificates using Docker:
cd charts/tproxy
docker run --rm -v ${PWD}/certs/:/home/mitmproxy/.mitmproxy mitmproxy/mitmproxy >/dev/null 2>&1
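The extracted CA certificate is what lets the example pods trust the proxy for HTTPS. As a hedged sketch of the general pattern, a CA certificate like this can be distributed to a pod through a ConfigMap mount; the names and paths below are illustrative placeholders, not the chart's actual values:

```yaml
# Illustrative pod spec fragment: mount a proxy CA certificate from a
# ConfigMap so HTTPS clients in the container trust the proxy.
# The ConfigMap name (tproxy-ca) and mount path are placeholders.
spec:
  containers:
  - name: app
    image: debian:stretch   # illustrative image
    volumeMounts:
    - name: ca-cert
      mountPath: /usr/local/share/ca-certificates/mitmproxy-ca-cert.crt
      subPath: mitmproxy-ca-cert.pem
  volumes:
  - name: ca-cert
    configMap:
      name: tproxy-ca
```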
Install the chart with the initializer and RBAC enabled:
helm install -n tproxy --set tproxy.useInitializer=true,tproxy.useRBAC=true .
The output of this command shows you how to augment your deployments to use the initializer. Example output:

Add this metadata annotation to your deployment specs to apply the tproxy initializer:

  metadata:
    annotations:
      "initializer.kubernetes.io/tproxy": "true"
Get the status of the DaemonSet pods:
kubectl get pods -o wide
Notice in the example output below that there is a tproxy pod for each node:
NAME                  READY     STATUS    RESTARTS   AGE       IP           NODE
tproxy-tproxy-2h7lk   2/2       Running   0          21s       10.128.0.8   gke-tproxy-example-default-pool-1e70b38d-xchn
tproxy-tproxy-4mvtf   2/2       Running   0          21s       10.128.0.7   gke-tproxy-example-default-pool-1e70b38d-hk89
tproxy-tproxy-ljfq9   2/2       Running   0          21s       10.128.0.6   gke-tproxy-example-default-pool-1e70b38d-jsqd
Verify that a single instance of the initializer pod is running in the kube-system namespace:

kubectl get pods -n kube-system --selector=app=tproxy
Example output:
NAME                                        READY     STATUS    RESTARTS   AGE
tproxy-tproxy-initializer-833731286-9kp84   1/1       Running   0          4m
Deploy example apps
Deploy the sample apps to demonstrate using and not using the annotation to trigger the initializer.
Change directories back to the repository root and deploy the example apps:
cd ../../
kubectl create -f examples/debian-app.yaml
kubectl create -f examples/debian-app-locked.yaml
Note that the second deployment is the one that contains the deployment annotation described in the chart post-install notes.
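In context, the annotation sits in the deployment's metadata, along these lines. This is a minimal sketch: only the annotation comes from the chart's post-install notes, and the name, labels, and image are illustrative rather than copied from the repository:

```yaml
# Minimal sketch of a deployment that opts in to the tproxy initializer.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: debian-app-locked
  annotations:
    "initializer.kubernetes.io/tproxy": "true"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: debian-app
        variant: locked
    spec:
      containers:
      - name: app
        image: debian:stretch   # illustrative image
```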
Get the logs for the pod without the tproxy annotation:
kubectl logs --selector=app=debian-app,variant=unlocked --tail=10
Example output:
https://www.google.com: 200
https://storage.googleapis.com/solutions-public-assets/: 200
PING www.google.com (209.85.200.105): 56 data bytes
64 bytes from 209.85.200.105: icmp_seq=0 ttl=52 time=0.758 ms
The output from the example app shows the status codes for the requests and the output of a ping command.
Notice the following:
- The request to https://www.google.com succeeds with status code 200.
- The request to the Cloud Storage bucket succeeds with status code 200.
- The ping to www.google.com succeeds.
Get the logs for the pod with the tproxy annotation:
kubectl logs --selector=app=debian-app,variant=locked --tail=4
Example output:
https://www.google.com: 418
https://storage.googleapis.com/solutions-public-assets/: 200
PING www.google.com (209.85.200.147): 56 data bytes
ping: sending packet: Operation not permitted
Notice the following:
- The proxy blocks the request to https://www.google.com with status code 418.
- The proxy allows the request to the Cloud Storage bucket with status code 200.
- The ping to www.google.com is rejected.
Inspect the logs from the mitmproxy DaemonSet pod to show the intercepted requests and responses:
NODE=$(kubectl get pods --selector=app=debian-app,variant=locked -o=jsonpath={.items..spec.nodeName})
POD=$(kubectl get pods -o wide | awk '/tproxy.*'"$NODE"'/ {print $1}')
kubectl logs "$POD" -c tproxy-tproxy-mode --tail=10
Note that the logs have to be retrieved from the tproxy pod that is running on the same node as the example app.
Example output:
10.12.1.41:37380: clientconnect
10.12.1.41:37380: GET https://www.google.com/ HTTP/2.0 << 418 I'm a teapot 30b
10.12.1.41:37380: clientdisconnect
10.12.1.41:36496: clientconnect
Streaming response from 64.233.191.128
10.12.1.41:36496: GET https://storage.googleapis.com/solutions-public-assets/adtech/dfp_networkimpressions.py HTTP/2.0 << 200 (content missing)
10.12.1.41:36496: clientdisconnect
Notice that the proxy blocks the request to https://www.google.com with status code 418.
Cleanup
Delete the sample apps:
kubectl delete -f examples/debian-app.yaml
kubectl delete -f examples/debian-app-locked.yaml
Delete the tproxy Helm release:
helm delete --purge tproxy
Delete the Kubernetes Engine cluster:
gcloud container clusters delete tproxy-example --zone=us-central1-f
What's next?
- Transparent proxy and filtering on Kubernetes: Original tutorial that works without the initializer feature. Also contains some of the additional chart configuration examples.
- tproxy Helm chart: See all configuration options and deployment methods.
- Istio: A broader approach to traffic filtering and network policy.
- Calico Egress NetworkPolicy: Another way to filter egress traffic at the pod level.