Setting up intranode visibility

This guide shows you how to set up intranode visibility in a Google Kubernetes Engine (GKE) cluster, so that all network traffic in your cluster is seen by the Google Cloud network.

Intranode visibility allows you to:

  • See flow logs for all traffic between Pods, including traffic between Pods on the same node.
  • Create firewall rules that apply to all traffic among Pods, even Pods on the same node.
  • Use Packet Mirroring to clone traffic between Pods on the same node and forward it for examination.

When intranode visibility is enabled and a Pod sends a packet to another Pod on the same node, the packet leaves the node and is processed by the Google Cloud network. The packet is then immediately sent back to the same node and delivered to the destination Pod.

Intranode visibility is disabled by default.

Intranode visibility is required if you want your cluster to use a VPC network whose MTU is 1500 bytes. In addition, the cluster's GKE version must be 1.15 or higher. For more information on 1500 MTU VPC networks, see Maximum transmission unit in the VPC documentation.
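
If you plan to use a 1500-byte MTU network, the VPC must be created with that MTU. As a minimal sketch (NETWORK_NAME is a placeholder):

gcloud compute networks create NETWORK_NAME \
    --subnet-mode custom \
    --mtu 1500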

Things to consider

Increased log volume

When intranode visibility is enabled, flow log volume may increase because more traffic is captured by the VPC. You can manage the associated costs by adjusting the flow logging settings for your subnets.
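
For example, you can reduce log volume for a subnet by lowering the sampling rate and lengthening the aggregation interval; the values below are illustrative:

gcloud compute networks subnets update default \
    --region us-central1 \
    --logging-flow-sampling 0.1 \
    --logging-aggregation-interval interval-30-sec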

All Pod-to-Pod traffic is subject to firewalls

All traffic among Pods, including Pods deployed to the same node, is visible to the VPC when intranode visibility is enabled. Enabling intranode visibility might cause previously unrestricted traffic to be subject to firewall rules. Evaluate your node-level firewalls using Connectivity Tests to ensure that legitimate traffic is not obstructed.
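
As a sketch of such a test between the two Pod IP addresses used later in this guide (the test name and addresses are placeholders, and you might also need to specify the endpoint project or network):

gcloud network-management connectivity-tests create pod-to-pod-test \
    --source-ip-address 10.52.0.13 \
    --destination-ip-address 10.52.0.14 \
    --destination-port 8080 \
    --protocol TCP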

Before you begin

Before you start, make sure you have performed the following tasks:

Set up default gcloud settings using one of the following methods:

  • Using gcloud init, if you want to be walked through setting defaults.
  • Using gcloud config, to individually set your project ID, zone, and region.

Using gcloud init

If you receive the error One of [--zone, --region] must be supplied: Please specify location, complete this section.

  1. Run gcloud init and follow the directions:

    gcloud init

    If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:

    gcloud init --console-only
  2. Follow the instructions to authorize gcloud to use your Google Cloud account.
  3. Create a new configuration or select an existing one.
  4. Choose a Google Cloud project.
  5. Choose a default Compute Engine zone for zonal clusters or a region for regional or Autopilot clusters.

Using gcloud config

  • Set your default project ID:
    gcloud config set project PROJECT_ID
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone COMPUTE_ZONE
  • If you are working with Autopilot or regional clusters, set your default compute region:
    gcloud config set compute/region COMPUTE_REGION
  • Update gcloud to the latest version:
    gcloud components update
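
To confirm the defaults you set, list your active configuration:

gcloud config list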

Tasks overview

To demonstrate setting up intranode visibility, here's an overview of the tasks that you perform:

  1. Enable flow logs for the default subnet in the us-central1 region.
  2. Create a single-node cluster that has intranode visibility enabled.
  3. Create two Pods in your cluster.
  4. Send an HTTP request from one Pod to the other Pod.
  5. View the flow log entry for the Pod-to-Pod request.

Examples in this guide use:

  • us-central1 as the default region.
  • us-central1-a as the default zone.

Enabling flow logs for a subnet

You can enable flow logs for a subnet using the gcloud tool or the Google Cloud Console.

gcloud

  1. Enable flow logs for the default subnet in the us-central1 region:

    gcloud compute networks subnets update default --region us-central1 \
        --enable-flow-logs
    
  2. Verify that your subnet has flow logs enabled:

    gcloud compute networks subnets describe default --region us-central1
    

    The output shows that flow logs are enabled:

    ...
    enableFlowLogs: true
    ...
    ipCidrRange: 10.128.0.0/20
    region: https://www.googleapis.com/compute/v1/projects/abc-712099/regions/us-central1
    

Console

  1. Go to the VPC networks page in Cloud Console.

    Go to VPC networks

  2. In the us-central1 row, click default.

  3. On the Subnet details page, click Edit.

  4. Under Flow logs, select On.

  5. Click Save.

Creating a cluster with intranode visibility enabled

You can create a cluster that has intranode visibility enabled using the gcloud tool or the Google Cloud Console.

gcloud

Create a single-node cluster that has intranode visibility enabled:

gcloud container clusters create CLUSTER_NAME \
    --zone us-central1-a \
    --num-nodes 1 \
    --enable-intra-node-visibility

Replace CLUSTER_NAME with the name of your new cluster.

Console

  1. Go to the Google Kubernetes Engine page in Cloud Console.

    Go to Google Kubernetes Engine

  2. Click Create.

  3. Enter the Name for your cluster.

  4. For the Location type, select Zonal.

  5. In the Zone drop-down list, select us-central1-a.

  6. From the navigation pane, under Node Pools, click default-pool.

  7. Enter a Name for the node pool.

  8. If your cluster uses static node versions rather than a release channel, choose the Node version.

  9. For the Number of nodes, enter 1.

  10. From the navigation pane, under Cluster, click Networking.

  11. Select the Enable intranode visibility checkbox.

  12. Click Create.

Getting credentials for your cluster

Get the credentials for your new cluster:

gcloud container clusters get-credentials CLUSTER_NAME \
    --zone us-central1-a

The credentials are saved in your kubeconfig file, which is typically at $HOME/.kube/config.
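
To confirm that kubectl now points at the new cluster, list its nodes; the output should show a single node:

kubectl get nodes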

Creating two Pods

  1. For the first Pod, create a file named pod-1.yaml based on the following sample manifest:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-1
    spec:
      containers:
      - name: container-1
        image: gcr.io/google-samples/hello-app:2.0
    
  2. Create the Pod by running the following command:

    kubectl apply -f pod-1.yaml
    
  3. For the second Pod, create a file named pod-2.yaml based on the following sample manifest:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-2
    spec:
      containers:
      - name: container-2
        image: gcr.io/google-samples/node-hello:1.0
    
  4. Create the Pod by running the following command:

    kubectl apply -f pod-2.yaml
    
  5. View the Pods:

    kubectl get pod pod-1 pod-2 --output wide
    

    The output shows the IP addresses of your Pods. Make a note of these addresses.

    NAME      READY     STATUS    RESTARTS   AGE       IP           ...
    pod-1     1/1       Running   0          1d        10.52.0.13   ...
    pod-2     1/1       Running   0          1d        10.52.0.14   ...
    

Sending a request from pod-1 to pod-2

  1. Get a shell to the container in pod-1:

    kubectl exec -it pod-1 -- sh
    
  2. In your shell, send a request to pod-2:

    wget -qO- POD_2_IP_ADDRESS:8080
    

    Replace POD_2_IP_ADDRESS with the IP address of pod-2 that you noted earlier.

    The output shows the response from the container running in pod-2:

    Hello Kubernetes!
    
  3. Type exit to leave the shell and return to your main command-line environment.
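
Alternatively, you can send the request as a one-liner without opening an interactive shell (POD_2_IP_ADDRESS is the same placeholder as above):

kubectl exec pod-1 -- wget -qO- POD_2_IP_ADDRESS:8080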

Viewing flow log entries

You can view flow log entries using the gcloud tool or the Google Cloud Console.

gcloud

View a flow log entry for the request from pod-1 to pod-2:

gcloud logging read \
    'logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows" AND jsonPayload.connection.src_ip="POD_1_IP_ADDRESS"'

Replace the following:

  • PROJECT_ID: your project ID.
  • POD_1_IP_ADDRESS: the IP address of pod-1.

The output shows a flow log entry for a request from pod-1 to pod-2. In this example, pod-1 has IP address 10.52.0.13, and pod-2 has IP address 10.52.0.14.

...
jsonPayload:
  bytes_sent: '0'
  connection:
    dest_ip: 10.52.0.14
    dest_port: 8080
    protocol: 6
    src_ip: 10.52.0.13
    src_port: 35414
...
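
Flow log entries can take several minutes to appear after the traffic is sent. If the query returns too many entries, you can narrow it; for example, --freshness limits how far back to read and --limit caps the number of entries returned:

gcloud logging read \
    'logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows" AND jsonPayload.connection.src_ip="POD_1_IP_ADDRESS"' \
    --freshness 1h \
    --limit 5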

Console

  1. Go to the Logs Viewer page in Cloud Console.

    Go to Logs Viewer

  2. In the filter field, click the drop-down arrow, then click Convert to advanced filter.

  3. Replace any text in the filter field with the following:

    resource.type="gce_subnetwork"
    logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
    jsonPayload.connection.src_ip="POD_1_IP_ADDRESS"
    

    Replace the following:

    • PROJECT_ID: your project ID.
    • POD_1_IP_ADDRESS: the IP address of pod-1.
  4. Expand the log entry that appears. Under jsonPayload you can see that the request was sent from pod-1 to pod-2. In the following example, pod-1 has an IP address of 10.52.0.13, and pod-2 has an IP address of 10.52.0.14.

    jsonPayload: {
      bytes_sent:  "0"
      connection: {
        dest_ip:  "10.52.0.14"
        dest_port:  8080
        protocol:  6
        src_ip:  "10.52.0.13"
        src_port:  35414
      }
    }
    

Recall that your cluster has only one node, so pod-1 and pod-2 are on the same node. Even so, flow log entries are available for the intranode communication between them.

Enabling intranode visibility on an existing cluster

You can enable intranode visibility for an existing cluster using the gcloud tool or the Google Cloud Console.

gcloud

gcloud beta container clusters update CLUSTER_NAME \
    --enable-intra-node-visibility

Replace CLUSTER_NAME with the name of your existing cluster.

Console

  1. Go to the Google Kubernetes Engine page in Cloud Console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Under Networking, next to the Intranode visibility field, click Edit intranode visibility.

  4. Select the Enable intranode visibility checkbox.

  5. Click Save Changes.

When you enable intranode visibility for an existing cluster, components in both the control plane and the worker nodes are restarted.

After you've enabled this feature, you can confirm that it's active by examining the routing rules on one of your cluster's nodes.
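
The following commands run on the node itself. One way to get a shell on the node is with SSH (NODE_NAME is a placeholder; kubectl get nodes lists the node names):

gcloud compute ssh NODE_NAME --zone us-central1-a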

  1. Show the IP rules:

    ip rule show
    

    The output looks similar to this:

    0:  from all lookup local
    30001:  from all fwmark 0x4000/0x4000 lookup main
    30002:  from all iif lo lookup main
    30003:  not from all iif eth0 lookup 1
    32766:  from all lookup main
    32767:  from all lookup default
    
  2. Show the IP routes:

    ip route show table 1
    

    The output looks similar to this:

    default via GKE-node-subnet-gw dev eth0
    

Disabling intranode visibility

You can disable intranode visibility for an existing cluster using the gcloud tool or the Google Cloud Console.

gcloud

gcloud beta container clusters update CLUSTER_NAME \
    --no-enable-intra-node-visibility

Replace CLUSTER_NAME with the name of your existing cluster.

Console

  1. Go to the Google Kubernetes Engine page in Cloud Console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Under Networking, next to the Intranode visibility field, click Edit intranode visibility.

  4. Clear the Enable intranode visibility checkbox.

  5. Click Save Changes.

When you disable intranode visibility for an existing cluster, components in both the control plane and the worker nodes are restarted.

Restrictions

If you enable intranode visibility and use ip-masq-agent configured with the nonMasqueradeCIDRs parameter, you must include the Pod CIDR range in nonMasqueradeCIDRs to avoid intranode connectivity issues.
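
For illustration, here is a minimal sketch of the ip-masq-agent ConfigMap with the Pod CIDR range included; the 10.52.0.0/14 range is a placeholder, so substitute your cluster's actual Pod range:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
    - 10.52.0.0/14   # Pod CIDR range (placeholder value)
    resyncInterval: 60s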

Known issues

There is a known issue in which, when intranode visibility is enabled, frequent DNS timeouts can occur if the client Pod and the kube-dns Pod are located on the same node. This affects clusters running the following versions:

  • 1.18.16-gke.300 and higher
  • 1.19.7-gke.1500 and higher
  • 1.20.2-gke.1500 and higher

This primarily affects Alpine-based workloads, although Debian-based Pods with glibc 2.9 or later can run into it as well. The timeouts mostly affect external queries such as metadata.internal or googleapis.com; in-cluster names that are looked up by FQDN work fine.

If you experience this issue, try one of the following workarounds:

  • Enable NodeLocalDNS.
  • Add dnsConfig with the single-request or single-request-reopen option to serialize DNS lookups. This workaround works only for non-Alpine Pods, and because it's a per-Pod setting, each Pod manifest must be changed. A complete Pod manifest is sketched after this list.

    dnsConfig:
      options:
      - name: single-request
    
  • Reduce search-path expansion by setting ndots to a low value (for example, 2) in dnsConfig. Setting ndots to 0 avoids search-path expansion entirely.

    dnsConfig:
      options:
        - name: ndots
          value: "2"
    
  • Disable intranode visibility.
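
For reference, here is how the single-request workaround from the list above looks in a complete Pod manifest, reusing the pod-1 example from earlier in this guide:

apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  dnsConfig:
    options:
    - name: single-request
  containers:
  - name: container-1
    image: gcr.io/google-samples/hello-app:2.0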

What's next