Intranode visibility

This page shows how to set up intranode visibility in a Google Kubernetes Engine cluster, so that all network traffic in your cluster is seen by the Google Cloud Platform network. This means you can see flow logs for all traffic between Pods, including traffic between Pods on the same node. Intranode visibility also allows you to create firewall rules that apply to all traffic among Pods, even Pods on the same node.

With intranode visibility enabled, when a Pod sends a packet to another Pod on the same node, the packet leaves the node and is processed by the GCP network. The packet is then immediately routed back to the same node and forwarded to the destination Pod.

Intranode visibility is disabled by default.

Intranode visibility is available in GKE v1.11.x and higher.

Things to consider

Increased log volume

When intranode visibility is enabled, flow log volume may increase with more traffic being captured by the VPC. You can manage the costs associated with flow logging by adjusting the logging settings.
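For example, you can lower flow log volume for the subnet by reducing the sampling rate and widening the aggregation interval. The values below are illustrative; choose settings that fit your monitoring needs:

```shell
# Reduce flow log volume for the default subnet: sample about 10% of
# flows and aggregate records over 5-minute intervals (example values).
gcloud compute networks subnets update default \
    --region us-central1 \
    --logging-flow-sampling 0.1 \
    --logging-aggregation-interval interval-5-min
```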

All Pod-to-Pod traffic is subject to firewalls

All traffic among Pods, including Pods deployed to the same node, is visible to the VPC when intranode visibility is enabled. Enabling intranode visibility might cause previously unrestricted traffic to become subject to firewall rules. Evaluate your node-level firewall rules to ensure that legitimate traffic is not blocked.
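For example, if your workloads rely on Pod-to-Pod HTTP traffic on port 8080, a rule like the following would keep that traffic allowed once intranode traffic becomes subject to the VPC firewall. The rule name and source range are placeholders; substitute your cluster's Pod address range:

```shell
# Allow TCP traffic on port 8080 between Pods in the cluster.
# Replace 10.52.0.0/14 with your cluster's Pod address range.
gcloud compute firewall-rules create allow-pod-to-pod-8080 \
    --network default \
    --direction INGRESS \
    --source-ranges 10.52.0.0/14 \
    --allow tcp:8080
```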

Before you begin

To prepare for this task, perform the following steps:

  • Ensure that you have enabled the Google Kubernetes Engine API.
  • Ensure that you have installed the Cloud SDK.
  • Set your default project ID:
    gcloud config set project [PROJECT_ID]
  • Update gcloud to the latest version:
    gcloud components update

Overview

Here's the big picture of the steps you perform in this topic:

  1. Enable flow logs for the default subnet in the us-central1 region.

  2. Create a cluster, in the us-central1-a zone, that has one node.

  3. Create two Pods in your cluster.

  4. Send an HTTP request from one Pod to the other Pod.

  5. View the flow log entry for the Pod-to-Pod request.

Enabling flow logs for a subnet

gcloud

Enable flow logs for the default subnet in the us-central1 region:

gcloud compute networks subnets update default --region us-central1 --enable-flow-logs

Verify that your subnet has flow logs enabled:

gcloud compute networks subnets describe default --region us-central1

The output shows that flow logs are enabled:

...
enableFlowLogs: true
...
ipCidrRange: 10.128.0.0/20
region: https://www.googleapis.com/compute/v1/projects/abc-712099/regions/us-central1
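If you prefer a single value instead of the full description, you can extract just the flow-log setting with a format expression; the command should print True when flow logs are enabled:

```shell
# Print only the enableFlowLogs field of the subnet description.
gcloud compute networks subnets describe default \
    --region us-central1 \
    --format="value(enableFlowLogs)"
```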

Console

Follow these steps to enable flow logs for the default subnet in the us-central1 region:

  1. Visit the Google Kubernetes Engine VPC networks page in GCP Console.

  2. In the us-central1 row, click default.

  3. Click Edit.

  4. Under Flow logs, select On.

  5. Click Save.

Creating a cluster

Create a cluster that has one node:

gcloud

gcloud container clusters create [CLUSTER_NAME] \
    --zone us-central1-a \
    --num-nodes 1 \
    --enable-intra-node-visibility

Console

  1. Visit the Create a Kubernetes cluster page in GCP Console.

  2. For Name, enter [CLUSTER_NAME].

  3. For Zone, select us-central1-a.

  4. For Number of nodes, enter 1.

  5. At the bottom of the page, click Advanced options.

  6. Select Enable intranode visibility.

  7. Click Create.

Getting credentials for your cluster

Enter this command to get credentials for your new cluster:

gcloud container clusters get-credentials [CLUSTER_NAME] \
    --zone us-central1-a

The credentials are saved in your kubeconfig file, which is typically at $HOME/.kube/config.
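To confirm that kubectl is now pointing at the new cluster, you can check the active context and list the cluster's nodes; the output should show a single node in the Ready state:

```shell
# Verify the active kubeconfig context and the cluster's single node.
kubectl config current-context
kubectl get nodes
```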

Creating two Pods

Here is a manifest for a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
  - name: container-1
    image: gcr.io/google-samples/hello-app:2.0

Save the manifest to a file named pod-1.yaml, and create the Pod:

kubectl apply -f pod-1.yaml

Here is a manifest for a second Pod:

apiVersion: v1
kind: Pod
metadata:
  name: pod-2
spec:
  containers:
  - name: container-2
    image: gcr.io/google-samples/node-hello:1.0

Save the manifest to a file named pod-2.yaml, and create the Pod:

kubectl apply -f pod-2.yaml

View the Pods:

kubectl get pod pod-1 pod-2 --output wide

The output shows the IP addresses of your Pods. Make a note of these addresses:

NAME      READY     STATUS    RESTARTS   AGE       IP           ...
pod-1     1/1       Running   0          1d        10.56.0.13   ...
pod-2     1/1       Running   0          1d        10.56.0.14   ...
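Instead of copying the addresses by hand, you can capture them in shell variables with kubectl's JSONPath output (the variable names are illustrative):

```shell
# Store each Pod's IP address in a shell variable for later use.
POD_1_IP=$(kubectl get pod pod-1 --output jsonpath='{.status.podIP}')
POD_2_IP=$(kubectl get pod pod-2 --output jsonpath='{.status.podIP}')
echo "pod-1: $POD_1_IP  pod-2: $POD_2_IP"
```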

Sending a request from pod-1 to pod-2

Get a shell to the container in pod-1:

kubectl exec -it pod-1 -- sh

In your shell, send a request to pod-2:

wget -qO- [POD_2_IP_ADDRESS]:8080

where [POD_2_IP_ADDRESS] is the IP address of pod-2 that you found earlier in this exercise.

The output shows the response from the container running in pod-2:

Hello Kubernetes!

Enter exit to leave the shell and return to your main command-line environment.

Viewing flow log entries

gcloud

In your regular command-line window, enter this command to view a flow log entry for the request from pod-1 to pod-2:

gcloud logging read \
    'logName="projects/[PROJECT_ID]/logs/compute.googleapis.com%2Fvpc_flows" AND jsonPayload.connection.src_ip="[POD_1_IP_ADDRESS]"'

where:

  • [PROJECT_ID] is your project ID.
  • [POD_1_IP_ADDRESS] is the IP address of pod-1.

The output shows a flow log entry for a request from pod-1 to pod-2. In this example, pod-1 has IP address 10.56.0.13, and pod-2 has IP address 10.56.0.14.

...
jsonPayload:
  bytes_sent: '0'
  connection:
    dest_ip: 10.56.0.14
    dest_port: 8080
    protocol: 6
    src_ip: 10.56.0.13
    src_port: 35414
...

Console

  1. Visit the Google Kubernetes Engine Stackdriver logs page in GCP Console.

  2. At the top of the page, click the down arrow at the right of the filter box, and select Convert to advanced filter.

  3. Delete any text that is in the filter box, and enter this query:

    resource.type="gce_subnetwork"
    logName="projects/[PROJECT_ID]/logs/compute.googleapis.com%2Fvpc_flows"
    jsonPayload.connection.src_ip="[POD_1_IP_ADDRESS]"
    

    where:

    • [PROJECT_ID] is your project ID.
    • [POD_1_IP_ADDRESS] is the IP address of pod-1.

    Expand the log entry that appears. Under jsonPayload, you can see that the request was sent from pod-1 to pod-2. In this example, pod-1 has IP address 10.56.0.13, and pod-2 has IP address 10.56.0.14.

    jsonPayload: {
      bytes_sent:  "0"
      connection: {
        dest_ip:  "10.56.0.14"
        dest_port:  8080
        protocol:  6
        src_ip:  "10.56.0.13"
        src_port:  35414
      }
    }

Recall that your cluster has only one node. So pod-1 and pod-2 are on the same node. Even so, flow log entries are available for the intranode communication between pod-1 and pod-2.

Enabling intranode visibility on an existing cluster

gcloud

To enable intranode visibility for an existing cluster, enter this command:

gcloud beta container clusters update [CLUSTER_NAME] \
    --enable-intra-node-visibility

where [CLUSTER_NAME] is the name of your existing cluster.

Console

  1. Visit the Google Kubernetes Engine menu in GCP Console.

  2. Click the cluster's edit button, which looks like a pencil.

  3. Select Enable intranode visibility.

  4. Click Save.

When you enable intranode visibility for an existing cluster, components in both the control plane and the worker nodes are restarted.

After you've enabled this feature, you can confirm that it's active by examining the routing rules on one of your nodes (for example, connect to the node with gcloud compute ssh first):

ip rule show

Output resembling the following appears:

0:  from all lookup local
30001:  from all fwmark 0x4000/0x4000 lookup main
30002:  from all iif lo lookup main
30003:  not from all iif eth0 lookup 1
32766:  from all lookup main
32767:  from all lookup default

and:

ip route show table 1

Output resembling the following appears:

default via [GKE_NODE_SUBNET_GW] dev eth0

Disabling intranode visibility

gcloud

To disable intranode visibility for an existing cluster, enter this command:

gcloud beta container clusters update [CLUSTER_NAME] \
  --no-enable-intra-node-visibility

where [CLUSTER_NAME] is the name of your existing cluster.

Console

  1. Visit the Kubernetes clusters page in GCP Console.

  2. Click the cluster's edit button, which looks like a pencil.

  3. From the Intranode visibility drop-down menu, select Disabled.

  4. Click Save.

When you disable intranode visibility for an existing cluster, components in both the control plane and the worker nodes are restarted.
