This guide shows you how to set up intranode visibility in a Google Kubernetes Engine (GKE) cluster, so that all network traffic in your cluster is seen by the Google Cloud network.
Intranode visibility allows you to:
- See flow logs for all traffic between Pods, including traffic between Pods on the same node.
- Create firewall rules that apply to all traffic among Pods, even Pods on the same node.
- Use Packet Mirroring to clone traffic between Pods on the same node and forward it for examination.
With intranode visibility enabled, when a Pod sends a packet to another Pod on the same node, the packet leaves the node and is processed by the Google Cloud network. The packet is then immediately sent back to the same node and forwarded to the destination Pod.
Intranode visibility is disabled by default.
Intranode visibility is required if you want your cluster to use a VPC network whose MTU is 1500 bytes. In addition, the cluster's GKE version must be 1.15 or higher. For more information on 1500 MTU VPC networks, see Maximum transmission unit in the VPC documentation.
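Before deciding whether this requirement applies to you, you can check the MTU of an existing VPC network with `gcloud` (the network name `default` below is an assumption; substitute your own network):

```shell
# Print the MTU of a VPC network (here "default", as an example).
gcloud compute networks describe default --format="value(mtu)"
```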
Things to consider
Increased log volume
When intranode visibility is enabled, flow log volume can increase because traffic between Pods on the same node is now captured by the VPC. You can manage the costs associated with flow logging by adjusting the logging settings.
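One way to reduce flow log volume, as a sketch, is to lower the sampling rate and lengthen the aggregation interval on the subnet. The flag values below are illustrative, not recommendations:

```shell
# Sample roughly 10% of flows and aggregate over 30-second intervals
# to reduce log volume (values shown are examples only).
gcloud compute networks subnets update default \
    --region us-central1 \
    --logging-flow-sampling 0.1 \
    --logging-aggregation-interval interval-30-sec
```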
All Pod-to-Pod traffic is subject to firewalls
All traffic among Pods, including Pods deployed to the same node, is visible to the VPC when intranode visibility is enabled. Enabling intranode visibility might cause previously unrestricted traffic to be subject to firewall rules. Evaluate your node-level firewalls using Connectivity Tests to ensure that legitimate traffic is not obstructed.
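For example, a rule like the following sketch (the rule name, port, and Pod CIDR are hypothetical) would, with intranode visibility enabled, also govern traffic between Pods on the same node:

```shell
# Hypothetical rule: deny TCP 8080 ingress from the Pod range.
# 10.52.0.0/14 is an assumed Pod CIDR; use your cluster's actual range.
gcloud compute firewall-rules create deny-pod-http-example \
    --network default \
    --direction INGRESS \
    --action DENY \
    --rules tcp:8080 \
    --source-ranges 10.52.0.0/14
```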
Before you begin
Before you start, make sure you have performed the following tasks:
- Ensure that you have enabled the Google Kubernetes Engine API.
- Ensure that you have installed the Cloud SDK.
- Set up default gcloud settings using one of the following methods:
  - Using gcloud init, if you want to be walked through setting defaults.
  - Using gcloud config, to individually set your project ID, zone, and region.
Using gcloud init

If you receive the error One of [--zone, --region] must be supplied: Please specify location, complete this section.

- Run gcloud init and follow the directions:

  gcloud init

  If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:

  gcloud init --console-only
- Follow the instructions to authorize gcloud to use your Google Cloud account.
- Create a new configuration or select an existing one.
- Choose a Google Cloud project.
- Choose a default Compute Engine zone for zonal clusters or a region for regional or Autopilot clusters.
Using gcloud config
- Set your default project ID:

  gcloud config set project PROJECT_ID

- If you are working with zonal clusters, set your default compute zone:

  gcloud config set compute/zone COMPUTE_ZONE

- If you are working with Autopilot or regional clusters, set your default compute region:

  gcloud config set compute/region COMPUTE_REGION

- Update gcloud to the latest version:

  gcloud components update
Tasks overview
To demonstrate setting up intranode visibility, here's an overview of the tasks that you perform:
- Enable flow logs for the default subnet in the us-central1 region.
- Create a single-node cluster that has intranode visibility enabled.
- Create two Pods in your cluster.
- Send an HTTP request from one Pod to the other Pod.
- View the flow log entry for the Pod-to-Pod request.

Examples in this guide use:

- us-central1 as the default region.
- us-central1-a as the default zone.
Enabling flow logs for a subnet
You can enable flow logs for a subnet using the gcloud tool or the Google Cloud Console.
gcloud
Enable flow logs for the default subnet in the us-central1 region:

gcloud compute networks subnets update default --region us-central1 \
    --enable-flow-logs
Verify that your subnet has flow logs enabled:
gcloud compute networks subnets describe default --region us-central1
The output shows that flow logs are enabled:

...
enableFlowLogs: true
...
ipCidrRange: 10.128.0.0/20
region: https://www.googleapis.com/compute/v1/projects/abc-712099/regions/us-central1
Console
Visit the VPC networks page in Cloud Console.
In the us-central1 row, click default.
On the Subnet details page, click edit Edit.
Under Flow logs, select On.
Click Save.
Creating a cluster with intranode visibility enabled
You can create a cluster that has intranode visibility enabled using the gcloud tool or the Google Cloud Console.
gcloud
Create a single-node cluster that has intranode visibility enabled:
gcloud container clusters create cluster-name \
--zone us-central1-a \
--num-nodes 1 \
--enable-intra-node-visibility
Console
Visit the Google Kubernetes Engine menu in Cloud Console.
Click add_box Create.
Enter the Name for your cluster.
For the Location type, select Zonal.
In the Zone drop-down list, select us-central1-a.
From the navigation pane, under Node Pools, click default-pool.
Enter a Name for the node pool.
For static version nodes, choose the Node version.
For the Number of nodes, enter 1.
From the navigation pane, under Cluster, click Networking.
Select the Enable intranode visibility checkbox.
Click Create.
Getting credentials for your cluster
Get the credentials for your new cluster:
gcloud container clusters get-credentials cluster-name \
--zone us-central1-a
The credentials are saved in your kubeconfig file, which is typically at $HOME/.kube/config.
Creating two Pods
For the first Pod, create a file named pod-1.yaml based on the following sample manifest:

apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
  - name: container-1
    image: gcr.io/google-samples/hello-app:2.0
Create the Pod by running the following command:
kubectl apply -f pod-1.yaml
For the second Pod, create a file named pod-2.yaml based on the following sample manifest:

apiVersion: v1
kind: Pod
metadata:
  name: pod-2
spec:
  containers:
  - name: container-2
    image: gcr.io/google-samples/node-hello:1.0
Create the Pod by running the following command:
kubectl apply -f pod-2.yaml
View the Pods:
kubectl get pod pod-1 pod-2 --output wide
The output shows the IP addresses of your Pods. Make a note of these addresses.
NAME    READY   STATUS    RESTARTS   AGE   IP           ...
pod-1   1/1     Running   0          1d    10.52.0.13   ...
pod-2   1/1     Running   0          1d    10.52.0.14   ...
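Instead of copying the addresses by hand, you can capture a Pod's IP into a shell variable with kubectl's jsonpath output. This is a small convenience, assuming kubectl is already configured for this cluster:

```shell
# Store pod-2's IP address for use in later commands.
POD2_IP=$(kubectl get pod pod-2 --output jsonpath='{.status.podIP}')
echo "$POD2_IP"
```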
Sending a request from pod-1 to pod-2
Get a shell to the container in pod-1:

kubectl exec -it pod-1 -- sh

In your shell, send a request to pod-2:

wget -qO- pod-2-ip-address:8080

where pod-2-ip-address is the IP address of pod-2 that you noted earlier.

The output shows the response from the container running in pod-2:

Hello Kubernetes!

Type exit to leave the shell and return to your main command-line environment.
Viewing flow log entries
You can view flow log entries using the gcloud tool or the Google Cloud Console.
gcloud
View a flow log entry for the request from pod-1 to pod-2:
gcloud logging read \
'logName="projects/project-id/logs/compute.googleapis.com%2Fvpc_flows" AND jsonPayload.connection.src_ip="pod-1-ip-address"'
where:

- project-id is your project ID.
- pod-1-ip-address is the IP address of pod-1.
The output shows a flow log entry for a request from pod-1 to pod-2. In this example, pod-1 has IP address 10.56.0.13, and pod-2 has IP address 10.56.0.14.
...
jsonPayload:
bytes_sent: '0'
connection:
dest_ip: 10.56.0.14
dest_port: 8080
protocol: 6
src_ip: 10.56.0.13
src_port: 35414
...
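If you save the log entries as JSON, you can pull out the connection tuple with a quick jq filter. The sketch below uses a hard-coded sample entry shaped like the output above, so it runs without any cloud access; in practice you would pipe in the output of gcloud logging read with a JSON format flag:

```shell
# Sample flow log entry shaped like the output above (values are the
# example addresses from this guide, not live data).
cat > entry.json <<'EOF'
{"jsonPayload":{"connection":{"src_ip":"10.56.0.13","src_port":35414,"dest_ip":"10.56.0.14","dest_port":8080,"protocol":6}}}
EOF

# Print the flow as "src -> dest:port".
jq -r '.jsonPayload.connection | "\(.src_ip) -> \(.dest_ip):\(.dest_port)"' entry.json
```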
Console
Visit the Stackdriver Logging page in Cloud Console.
In the filter field, click arrow_drop_down, then click Convert to advanced filter.
Replace any text in the filter field with the following:
resource.type="gce_subnetwork" logName="projects/project-id/logs/compute.googleapis.com%2Fvpc_flows" jsonPayload.connection.src_ip="pod-1-ip-address"
Replace the following:
- project-id is your project ID.
- pod-1-ip-address is the IP address of
pod-1
.
Expand the log entry that appears. Under jsonPayload, you can see that the request was sent from pod-1 to pod-2. In the following example, pod-1 has an IP address of 10.56.0.13, and pod-2 has an IP address of 10.56.0.14.

jsonPayload: {
  bytes_sent: "0"
  connection: {
    dest_ip: "10.56.0.14"
    dest_port: 8080
    protocol: 6
    src_ip: "10.56.0.13"
    src_port: 35414
  }
}
Recall that your cluster has only one node, so pod-1 and pod-2 are on the same node. Even so, flow log entries are available for the intranode communication between pod-1 and pod-2.
Enabling intranode visibility on an existing cluster
You can enable intranode visibility for an existing cluster using the gcloud tool or the Google Cloud Console.
gcloud
gcloud beta container clusters update cluster-name \
--enable-intra-node-visibility
where cluster-name is the name of your existing cluster.
Console
Visit the Google Kubernetes Engine menu in Cloud Console.
In the cluster list, click the name of the cluster you want to modify.
Under Networking, next to the Intranode visibility field, click edit Edit intranode visibility.
Select the Enable intranode visibility checkbox.
Click Save Changes.
When you enable intranode visibility for an existing cluster, components in both the control plane and the worker nodes are restarted.
After you enable this feature, you can confirm that it is active by examining the routing rules on a node:
Show the IP rules:
ip rule show
The output looks similar to this:
0:      from all lookup local
30001:  from all fwmark 0x4000/0x4000 lookup main
30002:  from all iif lo lookup main
30003:  not from all iif eth0 lookup 1
32766:  from all lookup main
32767:  from all lookup default
Show the IP routes:
ip route show table 1
The output looks similar to this:
default via GKE-node-subnet-gw dev eth0
Disabling intranode visibility
You can disable intranode visibility for an existing cluster using the gcloud tool or the Google Cloud Console.
gcloud
gcloud beta container clusters update cluster-name \
--no-enable-intra-node-visibility
where cluster-name is the name of your existing cluster.
Console
Visit the Google Kubernetes Engine page in Cloud Console.
In the cluster list, click the name of the cluster you want to modify.
Under Networking, next to the Intranode visibility field, click edit Edit intranode visibility.
Clear the Enable intranode visibility checkbox.
Click Save Changes.
When you disable intranode visibility for an existing cluster, components in both the control plane and the worker nodes are restarted.
Restrictions
Clusters with intranode visibility enabled have the following restrictions:
- If you enable intranode visibility and use ip-masq-agent configured with the nonMasqueradeCIDRs parameter, then nonMasqueradeCIDRs must include the Pod CIDR; otherwise, you can experience intranode connectivity issues.
What's next
Learn how to control the communication between your cluster's Pods and Services by creating a cluster network policy.
Learn about the benefits of Google Cloud alias IP ranges in VPC-native clusters.