This guide shows you how to set up intranode visibility on a Google Kubernetes Engine (GKE) cluster.
Intranode visibility configures networking on each node in the cluster so that traffic sent from one Pod to another Pod is processed by the cluster's Virtual Private Cloud (VPC) network, even if the Pods are on the same node.
Intranode visibility is disabled by default on Standard clusters and enabled by default in Autopilot clusters.
Architecture
Intranode visibility ensures that packets sent between Pods are always processed by the VPC network, which ensures that firewall rules, routes, flow logs, and packet mirroring configurations apply to the packets.
When a Pod sends a packet to another Pod on the same node, the packet leaves the node and is processed by the Google Cloud network. Then the packet is immediately sent back to the same node and forwarded to the destination Pod.
When you enable intranode visibility, GKE deploys the netd DaemonSet to the cluster's nodes.
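You can confirm that the DaemonSet is running; a minimal check, assuming netd is deployed to the kube-system namespace as on current GKE versions:
kubectl get daemonset netd --namespace=kube-system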
Benefits
Intranode visibility provides the following benefits:
- See flow logs for all traffic between Pods, including traffic between Pods on the same node.
- Create firewall rules that apply to all traffic among Pods, including traffic between Pods on the same node.
- Use Packet Mirroring to clone traffic, including traffic between Pods on the same node, and forward it for examination.
Requirements and limitations
Intranode visibility has the following requirements and limitations:
- Your cluster must be on GKE version 1.15 or later.
- Intranode visibility is not supported with Windows Server node pools.
- Intranode visibility is required if you want your cluster to use a VPC network where the maximum transmission unit (MTU) is 1500 bytes. For more information, see the maximum transmission unit overview.
- If you enable intranode visibility and use the ip-masq-agent configured with the nonMasqueradeCIDRs parameter, you must include the Pod CIDR range in nonMasqueradeCIDRs to avoid intranode connectivity issues (see the example ConfigMap after this list).
- When intranode visibility is enabled, you might experience frequent DNS timeouts if the client Pod and the kube-dns Pod are located on the same node. For more information, see DNS timeouts.
- If you enable intranode visibility and network policy, you might experience connection resets for Pod-to-Pod traffic. For more information, see Connection resets.
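As an example of the nonMasqueradeCIDRs requirement, the following ConfigMap sketch keeps intranode Pod traffic unmasqueraded; the 10.52.0.0/14 range is an assumed example value, so substitute your cluster's actual Pod CIDR range:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      # Assumed example value; replace with your cluster's Pod CIDR range.
      - 10.52.0.0/14
    resyncInterval: 60s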
Firewall rules
When you enable intranode visibility, the VPC network processes all packets sent between Pods, including packets sent between Pods on the same node. This means VPC firewall rules and hierarchical firewall policies consistently apply to Pod-to-Pod communication, regardless of Pod location.
If you configure custom firewall rules for communication within the cluster, carefully evaluate your cluster's networking needs to determine the set of egress and ingress allow rules. You can use connectivity tests to ensure that legitimate traffic is not obstructed. For example, Pod-to-Pod communication is required for network policy to function.
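For example, a broad ingress allow rule for Pod-to-Pod traffic could look like the following sketch; the rule name, the NETWORK_NAME placeholder, and the 10.52.0.0/14 source range are assumed example values:
gcloud compute firewall-rules create allow-pod-to-pod \
    --network=NETWORK_NAME \
    --direction=INGRESS \
    --source-ranges=10.52.0.0/14 \
    --allow=tcp,udp,icmp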
Before you begin
Before you start, make sure you have performed the following tasks:
- Ensure that you have enabled the Google Kubernetes Engine API.
- Ensure that you have installed the Google Cloud CLI.
- Set up default Google Cloud CLI settings for your project by using one of the following methods:
  - Use gcloud init, if you want to be walked through setting project defaults.
  - Use gcloud config, to individually set your project ID, zone, and region.
gcloud init
Run gcloud init and follow the directions:
gcloud init
If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:
gcloud init --console-only
- Follow the instructions to authorize the gcloud CLI to use your Google Cloud account.
- Create a new configuration or select an existing one.
- Choose a Google Cloud project.
- Choose a default Compute Engine zone.
- Choose a default Compute Engine region.
gcloud config
- Set your default project ID:
gcloud config set project PROJECT_ID
- Set your default Compute Engine region (for example, us-central1):
gcloud config set compute/region COMPUTE_REGION
- Set your default Compute Engine zone (for example, us-central1-c):
gcloud config set compute/zone COMPUTE_ZONE
- Update gcloud to the latest version:
gcloud components update
By setting default locations, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location.
Enable intranode visibility on a new cluster
You can create a cluster with intranode visibility enabled using the gcloud CLI or the Google Cloud console.
gcloud
To create a cluster that has intranode visibility enabled, use the --enable-intra-node-visibility flag:
gcloud container clusters create CLUSTER_NAME \
--region=COMPUTE_REGION \
--enable-intra-node-visibility
Replace the following:
- CLUSTER_NAME: the name of your new cluster.
- COMPUTE_REGION: the compute region for the cluster.
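To verify the setting after creation, you can describe the cluster; a sketch that assumes the flag is surfaced as networkConfig.enableIntraNodeVisibility in the describe output:
gcloud container clusters describe CLUSTER_NAME \
    --region=COMPUTE_REGION \
    --format="value(networkConfig.enableIntraNodeVisibility)"
The command prints True when intranode visibility is enabled.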
Console
To create a cluster that has intranode visibility enabled, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create.
Enter the Name for your cluster.
In the Configure cluster dialog, next to GKE Standard, click Configure.
Configure your cluster as needed.
From the navigation pane, under Cluster, click Networking.
Select the Enable intranode visibility checkbox.
Click Create.
Enable intranode visibility on an existing cluster
You can enable intranode visibility on an existing cluster using the gcloud CLI or the Google Cloud console.
When you enable intranode visibility for an existing cluster, GKE restarts components in both the control plane and the worker nodes.
gcloud
To enable intranode visibility on an existing cluster, use the --enable-intra-node-visibility flag:
gcloud container clusters update CLUSTER_NAME \
--enable-intra-node-visibility
Replace CLUSTER_NAME with the name of your cluster.
Console
To enable intranode visibility on an existing cluster, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Networking, click Edit intranode visibility.
Select the Enable intranode visibility checkbox.
Click Save Changes.
Disable intranode visibility
You can disable intranode visibility on a cluster using the gcloud CLI or the Google Cloud console.
When you disable intranode visibility for an existing cluster, GKE restarts components in both the control plane and the worker nodes.
gcloud
To disable intranode visibility, use the --no-enable-intra-node-visibility flag:
gcloud container clusters update CLUSTER_NAME \
--no-enable-intra-node-visibility
Replace CLUSTER_NAME with the name of your cluster.
Console
To disable intranode visibility, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Networking, click Edit intranode visibility.
Clear the Enable intranode visibility checkbox.
Click Save Changes.
Exercise: Verify intranode visibility
This exercise shows you the steps required to enable intranode visibility and confirm that it is working for your cluster.
In this exercise, you perform the following steps:
- Enable flow logs for the default subnet in the us-central1 region.
- Create a single-node cluster with intranode visibility enabled in the us-central1-a zone.
- Create two Pods in your cluster.
- Send an HTTP request from one Pod to another Pod.
- View the flow log entry for the Pod-to-Pod request.
Enable flow logs
Enable flow logs for the default subnet:
gcloud compute networks subnets update default \
    --region=us-central1 \
    --enable-flow-logs
Verify that the default subnet has flow logs enabled:
gcloud compute networks subnets describe default \
    --region=us-central1
The output shows that flow logs are enabled, similar to the following:
...
enableFlowLogs: true
...
Create a cluster
Create a single-node cluster with intranode visibility enabled:
gcloud container clusters create flow-log-test \
    --zone=us-central1-a \
    --num-nodes=1 \
    --enable-intra-node-visibility
Get the credentials for your cluster:
gcloud container clusters get-credentials flow-log-test \
    --zone=us-central1-a
Create two Pods
Create a Pod.
Save the following manifest to a file named pod-1.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
  - name: container-1
    image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
Apply the manifest to your cluster:
kubectl apply -f pod-1.yaml
Create a second Pod.
Save the following manifest to a file named pod-2.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
spec:
  containers:
  - name: container-2
    image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0
Apply the manifest to your cluster:
kubectl apply -f pod-2.yaml
View the Pods:
kubectl get pod pod-1 pod-2 --output wide
The output shows the IP addresses of your Pods, similar to the following:
NAME    READY   STATUS    RESTARTS   AGE   IP           ...
pod-1   1/1     Running   0          1d    10.52.0.13   ...
pod-2   1/1     Running   0          1d    10.52.0.14   ...
Note the IP addresses of pod-1 and pod-2.
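Optionally, you can capture the Pod IP addresses in shell variables instead of copying them by hand; a minimal sketch:
# Read each Pod's IP address from its status field.
POD_1_IP_ADDRESS=$(kubectl get pod pod-1 --output jsonpath='{.status.podIP}')
POD_2_IP_ADDRESS=$(kubectl get pod pod-2 --output jsonpath='{.status.podIP}')
You can then substitute, for example, $POD_2_IP_ADDRESS where the following steps use the literal IP address.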
Send a request
Get a shell to the container in pod-1:
kubectl exec -it pod-1 -- sh
In your shell, send a request to pod-2:
wget -qO- POD_2_IP_ADDRESS:8080
Replace POD_2_IP_ADDRESS with the IP address of pod-2.
The output shows the response from the container running in pod-2:
Hello, world!
Version: 2.0.0
Hostname: pod-2
Type exit to leave the shell and return to your main command-line environment.
View flow log entries
To view a flow log entry, use the following command:
gcloud logging read \
'logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows" AND jsonPayload.connection.src_ip="POD_1_IP_ADDRESS" AND jsonPayload.connection.dest_ip="POD_2_IP_ADDRESS"'
Replace the following:
- PROJECT_ID: your project ID.
- POD_1_IP_ADDRESS: the IP address of pod-1.
- POD_2_IP_ADDRESS: the IP address of pod-2.
The output shows a flow log entry for a request from pod-1 to pod-2. In this example, pod-1 has IP address 10.52.0.13, and pod-2 has IP address 10.52.0.14.
...
jsonPayload:
bytes_sent: '0'
connection:
dest_ip: 10.52.0.14
dest_port: 8080
protocol: 6
src_ip: 10.52.0.13
src_port: 35414
...
Clean up
To avoid incurring unwanted charges on your account, perform the following steps to remove the resources you created:
Delete the cluster:
gcloud container clusters delete -q flow-log-test \
    --zone=us-central1-a
Disable flow logs for the default subnet:
gcloud compute networks subnets update default \
    --region=us-central1 \
    --no-enable-flow-logs
Known issues
Connection resets
If you enable both intranode visibility and network policy on a cluster, you might see connection resets for traffic between Pods on the same node.
This issue affects clusters on the following GKE versions:
- 1.19.9-gke.1900 to 1.19.12-gke.500
- 1.20.5-gke.2000 to 1.20.8-gke.600
- 1.21.0-gke.400 to 1.21.2-gke.500
To fix this issue, upgrade your cluster to one of the following GKE versions:
- 1.19.12-gke.700 or later
- 1.20.8-gke.700 or later
- 1.21.2-gke.600 or later
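One way to apply the fix is to upgrade the control plane; a sketch using one of the fixed versions listed above (node pools are upgraded separately with the same command without --master):
gcloud container clusters upgrade CLUSTER_NAME \
    --master \
    --cluster-version=1.21.2-gke.600 \
    --zone=COMPUTE_ZONE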
DNS timeouts
When intranode visibility is enabled, you might experience DNS timeouts if the client Pod and kube-dns Pod are located on the same node.
This issue affects clusters on the following GKE versions:
- 1.18.16-gke.300 and later
- 1.19.7-gke.1500 and later
- 1.20.2-gke.1500 and later
This issue primarily affects Alpine-based workloads, although it can also affect Debian-based Pods with glibc 2.9 or later. DNS requests for external names such as metadata.google.internal or googleapis.com time out. DNS queries within the cluster that use fully qualified domain names (FQDNs) are not affected.
To fix this issue, upgrade your cluster to one of the following GKE versions:
- 1.22.2-gke.1300 or later.
- 1.21.5-gke.1300 or later.
- 1.20.13-gke.1000 or later.
- 1.19.16-gke.8700 or later.
Mitigation
To mitigate this issue, use one of the following workarounds:
- Enable NodeLocal DNSCache.
- Disable intranode visibility.
- Serialize DNS lookups by adding the single-request or single-request-reopen option to your Pod manifest (see the complete Pod manifest sketch after this list):
dnsConfig:
  options:
    - name: single-request
This workaround is only applicable for non-Alpine Pods. You must apply this change to each Pod.
- Reduce search path expansion by setting the ndots option in your Pod manifest to a low value, such as 2:
dnsConfig:
  options:
    - name: ndots
      value: "2"
You can also set ndots to 0 to avoid search path expansion. If you disable search path expansion, the Pod performs lookups on all service names by FQDN. Lookups similar to kubernetes.default do not work; you must use an FQDN similar to kubernetes.default.svc.cluster.local.
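If you choose the NodeLocal DNSCache workaround, you can enable the add-on with a cluster update; a sketch:
gcloud container clusters update CLUSTER_NAME \
    --update-addons=NodeLocalDNS=ENABLED
For the dnsConfig workarounds, the snippets above sit under the Pod spec; a minimal sketch of a complete manifest, with a hypothetical Pod name and an illustrative image:
apiVersion: v1
kind: Pod
metadata:
  # Hypothetical name for illustration.
  name: dns-workaround-example
spec:
  dnsConfig:
    options:
      # Serialize A and AAAA lookups (applies to glibc resolvers only).
      - name: single-request
  containers:
  - name: app
    image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0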
What's next
- Learn how to control the communication between your cluster's Pods and Services by creating a cluster network policy.
- Learn about the benefits of VPC-native clusters.