Intranode visibility

This guide shows you how to set up intranode visibility in a Google Kubernetes Engine (GKE) cluster, so that all network traffic in your cluster is seen by the Google Cloud network. This means you can see flow logs for all traffic between Pods, including traffic between Pods on the same node. Intranode visibility also allows you to create firewall rules that apply to all traffic among Pods, even Pods on the same node.

With intranode visibility enabled, when a Pod sends a packet to another Pod on the same node, the packet leaves the node and is processed by the Google Cloud network. The packet is then immediately sent back to the same node and forwarded to the destination Pod.

Intranode visibility is disabled by default. It is available in GKE version 1.11 and later.

Things to consider

Increased log volume

When intranode visibility is enabled, flow log volume may increase because the VPC captures additional traffic. You can manage the costs associated with flow logging by adjusting the logging settings for your subnet.
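If flow log costs become a concern, you can reduce the volume by lowering the sampling rate and increasing the aggregation interval for the subnet. The flag values below are illustrative; choose settings that fit your own monitoring needs:

```shell
# Sample 10% of flows and aggregate entries over 5-minute intervals
# to reduce flow log volume for the default subnet.
gcloud compute networks subnets update default \
    --region us-central1 \
    --logging-flow-sampling 0.1 \
    --logging-aggregation-interval interval-5-min
```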

All Pod-to-Pod traffic is subject to firewalls

All traffic among Pods, including Pods deployed to the same node, is visible to the VPC when intranode visibility is enabled. Enabling intranode visibility might cause previously unrestricted traffic to become subject to firewall rules. Evaluate your node-level firewall rules by using Connectivity Tests to ensure that legitimate traffic is not blocked.
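As a sketch, you can run a connectivity test between two Pod IP addresses to check whether a firewall rule would block the traffic. The test name pod-to-pod-test and the IP addresses below are placeholders; depending on your setup, you might also need to specify the source network or project:

```shell
# Create a connectivity test between two Pod IP addresses.
# The name and addresses here are illustrative placeholders.
gcloud network-management connectivity-tests create pod-to-pod-test \
    --source-ip-address=10.52.0.13 \
    --destination-ip-address=10.52.0.14 \
    --destination-port=8080 \
    --protocol=TCP

# Inspect the result of the analysis.
gcloud network-management connectivity-tests describe pod-to-pod-test
```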

Before you begin

Before you start, make sure you have performed the following tasks:

Set up default gcloud settings using one of the following methods:

  • Using gcloud init, if you want to be walked through setting defaults.
  • Using gcloud config, to individually set your project ID, zone, and region.

Using gcloud init

  1. Run gcloud init and follow the directions:

    gcloud init

    If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:

    gcloud init --console-only
  2. Follow the instructions to authorize gcloud to use your Google Cloud account.
  3. Create a new configuration or select an existing one.
  4. Choose a Google Cloud project.
  5. Choose a default Compute Engine zone.

Using gcloud config

  • Set your default project ID:
    gcloud config set project project-id
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone compute-zone
  • If you are working with regional clusters, set your default compute region:
    gcloud config set compute/region compute-region
  • Update gcloud to the latest version:
    gcloud components update

Tasks overview

To demonstrate intranode visibility, this guide walks you through the following tasks:

  1. Enable flow logs for the default subnet in the us-central1 region.
  2. Create a single-node cluster that has intranode visibility enabled.
  3. Create two Pods in your cluster.
  4. Send an HTTP request from one Pod to the other Pod.
  5. View the flow log entry for the Pod-to-Pod request.

Examples in this guide use:

  • us-central1 as the default region.
  • us-central1-a as the default zone.

Enabling flow logs for a subnet

You can enable flow logs for a subnet using the gcloud tool or the Google Cloud Console.

gcloud

  1. Enable flow logs for the default subnet in the us-central1 region:

    gcloud compute networks subnets update default --region us-central1 \
        --enable-flow-logs
    
  2. Verify that your subnet has flow logs enabled:

    gcloud compute networks subnets describe default --region us-central1
    

    The output shows that flow logs are enabled:

    ...
    enableFlowLogs: true
    ...
    ipCidrRange: 10.128.0.0/20
    region: https://www.googleapis.com/compute/v1/projects/abc-712099/regions/us-central1
    

Console

  1. Visit the VPC networks page in the Cloud Console.

    Visit the VPC networks page

  2. In the us-central1 row, click default.

  3. Click Edit.

  4. Under Flow logs, select On.

  5. Click Save.

Creating a cluster with intranode visibility enabled

You can create a cluster that has intranode visibility enabled using the gcloud tool or the Google Cloud Console.

gcloud

Create a single-node cluster that has intranode visibility enabled:

gcloud container clusters create cluster-name \
    --zone us-central1-a \
    --num-nodes 1 \
    --enable-intra-node-visibility

Console

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

    Visit the Google Kubernetes Engine menu

  2. Click the Create cluster button.

  3. Enter the Name for your cluster.

  4. For the Location type, select Zonal.

  5. In the Zone drop-down list, select us-central1-a.

  6. From the navigation pane, under Node Pools, click default-pool.

  7. Enter a Name for the node pool.

  8. Choose the Node version for your node.

  9. For the Number of nodes, enter 1.

  10. From the navigation pane, under Cluster, click Networking.

  11. Select the Enable intranode visibility checkbox.

  12. Click Create.

Getting credentials for your cluster

Get the credentials for your new cluster:

gcloud container clusters get-credentials cluster-name \
    --zone us-central1-a

The credentials are saved in your kubeconfig file, which is typically at $HOME/.kube/config.

Creating two Pods

  1. For the first Pod, create a file named pod-1.yaml based on the following sample manifest:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-1
    spec:
      containers:
      - name: container-1
        image: gcr.io/google-samples/hello-app:2.0
    
  2. Create the Pod by running the following command:

    kubectl apply -f pod-1.yaml
    
  3. For the second Pod, create a file named pod-2.yaml based on the following sample manifest:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-2
    spec:
      containers:
      - name: container-2
        image: gcr.io/google-samples/node-hello:1.0
    
  4. Create the Pod by running the following command:

    kubectl apply -f pod-2.yaml
    
  5. View the Pods:

    kubectl get pod pod-1 pod-2 --output wide
    

    The output shows the IP addresses of your Pods. Make a note of these addresses.

    NAME      READY     STATUS    RESTARTS   AGE       IP           ...
    pod-1     1/1       Running   0          1d        10.52.0.13   ...
    pod-2     1/1       Running   0          1d        10.52.0.14   ...
    
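Instead of copying the IP addresses from the table, you can also capture them in shell variables by using JSONPath output. The variable names here are arbitrary:

```shell
# Store each Pod's IP address for use in later commands.
POD1_IP=$(kubectl get pod pod-1 --output jsonpath='{.status.podIP}')
POD2_IP=$(kubectl get pod pod-2 --output jsonpath='{.status.podIP}')
echo "pod-1: ${POD1_IP}  pod-2: ${POD2_IP}"
```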

Sending a request from pod-1 to pod-2

  1. Get a shell to the container in pod-1:

    kubectl exec -it pod-1 -- sh
    
  2. In your shell, send a request to pod-2:

    wget -qO- pod-2-ip-address:8080
    

    where pod-2-ip-address is the IP address of pod-2 that you noted earlier.

    The output shows the response from the container running in pod-2:

    Hello Kubernetes!
    
  3. Type exit to leave the shell and return to your main command-line environment.

Viewing flow log entries

You can view flow log entries using the gcloud tool or the Google Cloud Console.

gcloud

View a flow log entry for the request from pod-1 to pod-2:

gcloud logging read \
    'logName="projects/project-id/logs/compute.googleapis.com%2Fvpc_flows" AND jsonPayload.connection.src_ip="pod-1-ip-address"'

where:

  • project-id is your project ID.
  • pod-1-ip-address is the IP address of pod-1.

The output shows a flow log entry for the request from pod-1 to pod-2. In this example, pod-1 has the IP address 10.52.0.13, and pod-2 has the IP address 10.52.0.14. The protocol value 6 indicates TCP.

...
jsonPayload:
  bytes_sent: '0'
  connection:
    dest_ip: 10.52.0.14
    dest_port: 8080
    protocol: 6
    src_ip: 10.52.0.13
    src_port: 35414
...

Console

  1. Visit the Stackdriver Logging page in the Cloud Console.

    Visit the Stackdriver logs page

  2. At the top of the page, click the drop-down arrow at the right of the filter box, and then select Convert to advanced filter.

  3. Delete any text that is in the filter box, and enter this query:

    resource.type="gce_subnetwork"
    logName="projects/project-id/logs/compute.googleapis.com%2Fvpc_flows"
    jsonPayload.connection.src_ip="pod-1-ip-address"
    

    where:

    • project-id is your project ID.
    • pod-1-ip-address is the IP address of pod-1.
  4. Expand the log entry that appears. Under jsonPayload, you can see that the request was sent from pod-1 to pod-2. In this example, pod-1 has the IP address 10.52.0.13, and pod-2 has the IP address 10.52.0.14.

    jsonPayload: {
      bytes_sent:  "0"
      connection: {
        dest_ip:  "10.52.0.14"
        dest_port:  8080
        protocol:  6
        src_ip:  "10.52.0.13"
        src_port:  35414
      }
    }

Recall that your cluster has only one node, so pod-1 and pod-2 are on the same node. Even so, flow log entries are available for the intranode communication between the two Pods.

Enabling intranode visibility on an existing cluster

You can enable intranode visibility for an existing cluster using the gcloud tool or the Google Cloud Console.

gcloud

gcloud beta container clusters update cluster-name \
    --enable-intra-node-visibility

where cluster-name is the name of your existing cluster.

Console

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

    Visit the Google Kubernetes Engine menu

  2. Click the cluster's edit button, which looks like a pencil.

  3. Enable Intranode visibility.

  4. Click Save.

When you enable intranode visibility for an existing cluster, components in both the control plane and the worker nodes are restarted.

After you enable intranode visibility, you can verify that it is active by examining the routing rules on a node:

  1. Show the IP rules:

    ip rule show
    

    The output looks similar to this:

    0:  from all lookup local
    30001:  from all fwmark 0x4000/0x4000 lookup main
    30002:  from all iif lo lookup main
    30003:  not from all iif eth0 lookup 1
    32766:  from all lookup main
    32767:  from all lookup default
    
  2. Show the IP routes:

    ip route show table 1
    

    The output looks similar to this:

    default via GKE-node-subnet-gw dev eth0
    

Disabling intranode visibility

You can disable intranode visibility for an existing cluster using the gcloud tool or the Google Cloud Console.

gcloud

gcloud beta container clusters update cluster-name \
    --no-enable-intra-node-visibility

where cluster-name is the name of your existing cluster.

Console

  1. Visit the Kubernetes clusters page in Cloud Console.

    Visit Kubernetes clusters page

  2. Click the cluster's edit button, which looks like a pencil.

  3. From the Intranode visibility drop-down menu, select Disabled.

  4. Click Save.

When you disable intranode visibility for an existing cluster, components in both the control plane and the worker nodes are restarted.

Restrictions

Clusters with intranode visibility enabled have the following restrictions:

  • If you enable intranode visibility and use ip-masq-agent configured with the nonMasqueradeCIDRs parameter, the nonMasqueradeCIDRs list must include your cluster's Pod CIDR. Otherwise, you can experience intranode connectivity issues.
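As a sketch, the following creates an ip-masq-agent ConfigMap whose nonMasqueradeCIDRs includes the Pod CIDR. The CIDR ranges shown are illustrative; substitute your cluster's Pod address range and node subnet:

```shell
# Write an ip-masq-agent configuration that includes the Pod CIDR.
# Both CIDR ranges below are placeholders for your own ranges.
cat <<EOF > config
nonMasqueradeCIDRs:
  - 10.52.0.0/14   # Pod CIDR of the cluster (replace)
  - 10.128.0.0/20  # node subnet (replace)
resyncInterval: 60s
EOF

# The ip-masq-agent reads its configuration from a ConfigMap named
# ip-masq-agent in the kube-system namespace.
kubectl create configmap ip-masq-agent \
    --namespace kube-system \
    --from-file config
```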

What's next