About traffic flows

This page describes how VPC Flow Logs reports flow logs for common use cases. See the following sections for examples of traffic flows sampled by VPC Flow Logs.

VM-to-VM flows in the same VPC network

Figure: VM flows within a VPC network.

For VM-to-VM flows in the same VPC network, flow logs are reported from both requesting and responding VMs, as long as both VMs are in subnets that have VPC Flow Logs enabled. In this example, VM 10.10.0.2 sends a request with 1,224 bytes to VM 10.50.0.2, which is also in a subnet that has logging enabled. In turn, 10.50.0.2 responds to the request with a reply containing 5,342 bytes. Both the request and reply are recorded from both the requesting and responding VMs.

As reported by requesting VM (10.10.0.2)

| request/reply | connection.src_ip | connection.dest_ip | bytes_sent | Annotations |
|---|---|---|---|---|
| request | 10.10.0.2 | 10.50.0.2 | 1,224 | src_instance.*, dest_instance.*, src_vpc.*, dest_vpc.* |
| reply | 10.50.0.2 | 10.10.0.2 | 5,342 | src_instance.*, dest_instance.*, src_vpc.*, dest_vpc.* |

As reported by responding VM (10.50.0.2)

| request/reply | connection.src_ip | connection.dest_ip | bytes_sent | Annotations |
|---|---|---|---|---|
| request | 10.10.0.2 | 10.50.0.2 | 1,224 | src_instance.*, dest_instance.*, src_vpc.*, dest_vpc.* |
| reply | 10.50.0.2 | 10.10.0.2 | 5,342 | src_instance.*, dest_instance.*, src_vpc.*, dest_vpc.* |
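
Flow log records are written to Cloud Logging, so you can confirm that both VMs report this flow by querying for entries that match the connection in either direction. The following is a minimal sketch using the google-cloud-logging Python client; the project ID is a placeholder, and the filter uses the standard vpc_flows log name and gce_subnetwork resource type.

```python
from google.cloud import logging as cloud_logging

# Placeholder project ID; in this example both subnets have VPC Flow Logs enabled.
PROJECT_ID = "my-project"

client = cloud_logging.Client(project=PROJECT_ID)

# Match flow records between the two VMs, in either direction.
flow_filter = f"""
logName="projects/{PROJECT_ID}/logs/compute.googleapis.com%2Fvpc_flows"
resource.type="gce_subnetwork"
(
  (jsonPayload.connection.src_ip="10.10.0.2" AND jsonPayload.connection.dest_ip="10.50.0.2")
  OR
  (jsonPayload.connection.src_ip="10.50.0.2" AND jsonPayload.connection.dest_ip="10.10.0.2")
)
"""

for entry in client.list_entries(
    filter_=flow_filter, order_by=cloud_logging.DESCENDING, max_results=20
):
    flow = entry.payload  # the record's jsonPayload as a dict
    print(
        flow.get("reporter"),            # SRC or DEST: which VM reported this record
        flow["connection"]["src_ip"],
        flow["connection"]["dest_ip"],
        flow.get("bytes_sent"),
    )
```

Because both subnets have logging enabled, you should see both the requesting VM's and the responding VM's records for the same connection.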

VM-to-external-IP-address flows

Figure: VM-to-external-IP-address flows.

For flows that traverse the internet between a VM that's in a VPC network and an endpoint with an external IP address, flow logs are reported from the VM that's in the VPC network only:

  • For egress flows, the logs are reported from the VPC network VM that is the source of the traffic.
  • For ingress flows, the logs are reported from the VPC network VM that is the destination of the traffic.

In this example, VM 10.10.0.2 exchanges packets over the internet with an endpoint that has the external IP address 203.0.113.5. The outbound traffic of 1,224 bytes sent from 10.10.0.2 to 203.0.113.5 is reported from the source VM, 10.10.0.2. The inbound traffic of 5,342 bytes sent from 203.0.113.5 to 10.10.0.2 is reported from the destination of the traffic, VM 10.10.0.2.

| request/reply | connection.src_ip | connection.dest_ip | bytes_sent | Annotations |
|---|---|---|---|---|
| request | 10.10.0.2 | 203.0.113.5 | 1,224 | src_instance.*, src_vpc.*, dest_location.*, internet_routing_details.* |
| reply | 203.0.113.5 | 10.10.0.2 | 5,342 | dest_instance.*, dest_vpc.*, src_location.* |
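
Because only the VM in the VPC network reports these flows, you can tell a record's direction from which annotations it carries: an egress record has src_instance.* (the VM is the source), while an ingress record has dest_instance.* (the VM is the destination). A minimal sketch over illustrative record dicts shaped like the two table rows above; the field layout mirrors the jsonPayload structure, and the vm_name and location values are placeholders.

```python
def flow_direction(record: dict) -> str:
    """Classify a VM <-> external-endpoint record by which VM annotations are present."""
    if "src_instance" in record:
        return "egress"   # the reporting VM is the source of the traffic
    if "dest_instance" in record:
        return "ingress"  # the reporting VM is the destination of the traffic
    return "unknown"

# Illustrative records shaped like the two table rows above.
request = {
    "connection": {"src_ip": "10.10.0.2", "dest_ip": "203.0.113.5"},
    "bytes_sent": "1224",
    "src_instance": {"vm_name": "vm1"},    # placeholder VM name
    "dest_location": {"country": "usa"},   # placeholder location details
}
reply = {
    "connection": {"src_ip": "203.0.113.5", "dest_ip": "10.10.0.2"},
    "bytes_sent": "5342",
    "dest_instance": {"vm_name": "vm1"},
    "src_location": {"country": "usa"},
}

print(flow_direction(request))  # egress
print(flow_direction(reply))    # ingress
```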

VM-to-on-premises flows

Figure: VM-to-on-premises flows.

For flows between a VM that's in a VPC network and an on-premises endpoint with an internal IP address, flow logs are reported from the VM that's in the VPC network only:

  • For egress flows, the logs are reported from the VPC network VM that is the source of the traffic.
  • For ingress flows, the logs are reported from the VPC network VM that is the destination of the traffic.

In this example, VM 10.10.0.2 and on-premises endpoint 10.30.0.2 are connected through a VPN gateway or Cloud Interconnect. The outbound traffic of 1,224 bytes sent from 10.10.0.2 to 10.30.0.2 is reported from the source VM, 10.10.0.2. The inbound traffic of 5,342 bytes sent from 10.30.0.2 to 10.10.0.2 is reported from the destination of the traffic, VM 10.10.0.2.

| request/reply | connection.src_ip | connection.dest_ip | bytes_sent | Annotations |
|---|---|---|---|---|
| request | 10.10.0.2 | 10.30.0.2 | 1,224 | src_instance.*, src_vpc.* |
| reply | 10.30.0.2 | 10.10.0.2 | 5,342 | dest_instance.*, dest_vpc.* |

VM-to-VM flows for Shared VPC

Figure: Shared VPC flows.

For VM-to-VM flows for Shared VPC, you can enable VPC Flow Logs for the subnet in the host project. For example, subnet 10.10.0.0/20 belongs to a Shared VPC network defined in a host project. You can see flow logs from VMs belonging to this subnet, including VMs created by service projects. In this example, the service projects are called "web server", "recommendation", and "database".

For VM-to-VM flows, if both VMs are in the same project, or, in the case of a Shared VPC network, the same host project, annotations such as the project ID are provided for the other endpoint in the connection. If the other VM is in a different project, annotations for the other VM are not provided.

The following table shows a flow as reported by either 10.10.0.10 or 10.10.0.20.

  • src_vpc.project_id and dest_vpc.project_id are for the host project because the VPC subnet belongs to the host project.
  • src_instance.project_id and dest_instance.project_id are for the service projects because the instances belong to the service projects.
| connection.src_ip | src_instance.project_id | src_vpc.project_id | connection.dest_ip | dest_instance.project_id | dest_vpc.project_id |
|---|---|---|---|---|---|
| 10.10.0.10 | web server | host_project | 10.10.0.20 | recommendation | host_project |

Service projects don't own the Shared VPC network and don't have access to the flow logs of the Shared VPC network.
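
Because the instance annotations carry the service project IDs while the VPC annotations carry the host project ID, you can attribute traffic in the host project's flow logs to individual service projects. A minimal sketch over an illustrative record shaped like the table row above; the project names come from this example and the byte count is a placeholder.

```python
from collections import Counter

def bytes_by_service_project(records):
    """Total bytes_sent per source service project (src_instance.project_id)."""
    totals = Counter()
    for record in records:
        service_project = record.get("src_instance", {}).get("project_id", "unknown")
        totals[service_project] += int(record.get("bytes_sent", 0))
    return totals

# Illustrative record shaped like the table row above.
records = [
    {
        "connection": {"src_ip": "10.10.0.10", "dest_ip": "10.10.0.20"},
        "bytes_sent": "1224",
        "src_instance": {"project_id": "web server"},
        "src_vpc": {"project_id": "host_project"},    # always the host project here
        "dest_instance": {"project_id": "recommendation"},
        "dest_vpc": {"project_id": "host_project"},
    },
]

print(bytes_by_service_project(records))  # Counter({'web server': 1224})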

VM-to-VM flows for VPC Network Peering

Figure: VPC Network Peering flows.

Unless both VMs are in the same Google Cloud project, VM-to-VM flows for peered VPC networks are reported in the same way as for external endpoints—project and other annotation information for the other VM are not provided. If both VMs are in the same project, even if in different networks, then project and other annotation information is provided for the other VM as well.

In this example, the subnets of VM 10.10.0.2 in project analytics-prod and VM 10.50.0.2 in project webserver-test are connected through VPC Network Peering. If VPC Flow Logs is enabled in project analytics-prod, the traffic (1,224 bytes) sent from 10.10.0.2 to 10.50.0.2 is reported from VM 10.10.0.2, which is the source of the flow. The traffic (5,342 bytes) sent from 10.50.0.2 to 10.10.0.2 is also reported from VM 10.10.0.2, which is the destination of the flow.

In this example, VPC Flow Logs is not turned on in project webserver-test, so no logs are recorded by VM 10.50.0.2.

| reporter | connection.src_ip | connection.dest_ip | bytes_sent | Annotations |
|---|---|---|---|---|
| source | 10.10.0.2 | 10.50.0.2 | 1,224 | src_instance.*, src_vpc.* |
| destination | 10.50.0.2 | 10.10.0.2 | 5,342 | dest_instance.*, dest_vpc.* |

VM-to-VM flows for internal passthrough Network Load Balancers

Figure: Internal passthrough Network Load Balancer flows.

When you add a VM to the backend service for an internal passthrough Network Load Balancer, the Linux or Windows Guest Environment adds the IP address of the load balancer to the local routing table of the VM. This allows the VM to accept request packets with destinations set to the IP address of the load balancer. When the VM replies, it sends its response directly; however, the source IP address for the response packets is set to the IP address of the load balancer, not the VM being load balanced.

VM-to-VM flows sent through an internal passthrough Network Load Balancer are reported from both source and destination. For an example HTTP request / response pair, the following table explains the fields of the flow log entries observed. For the purpose of this illustration, consider the following network configuration:

  • Browser instance at 192.168.1.2
  • Internal passthrough Network Load Balancer at 10.240.0.200
  • Web server instance at 10.240.0.3
| Traffic Direction | reporter | connection.src_ip | connection.dest_ip | connection.src_instance | connection.dest_instance |
|---|---|---|---|---|---|
| Request | SRC | 192.168.1.2 | 10.240.0.200 | Browser instance | |
| Request | DEST | 192.168.1.2 | 10.240.0.200 | Browser instance | Web server instance |
| Response | SRC | 10.240.0.3 | 192.168.1.2 | Web server instance | Browser instance |
| Response | DEST | 10.240.0.200 | 192.168.1.2 | | Browser instance |

The requesting VM does not know which VM will respond to the request. In addition, because the other VM sends a response with the internal load balancer IP as the source address, it does not know which VM has responded. For these reasons, the requesting VM cannot add dest_instance information to its report, only src_instance information. Because the responding VM does know the IP address of the other VM, it can supply both src_instance and dest_instance information.
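
This asymmetry means that only the backend-reported (DEST) request records name both instances, so those are the records to use when you want to know which backend VM actually served a client. A minimal sketch over illustrative records shaped like the request rows of the table above; the instance names are placeholders.

```python
def backends_serving(records, lb_ip: str) -> set:
    """Backend instances observed serving requests sent to the load balancer IP.

    Only DEST-reported request records carry dest_instance, because the client
    cannot know which backend VM will answer.
    """
    backends = set()
    for record in records:
        if (
            record.get("reporter") == "DEST"
            and record["connection"]["dest_ip"] == lb_ip
            and "dest_instance" in record
        ):
            backends.add(record["dest_instance"]["vm_name"])
    return backends

# Illustrative records shaped like the request rows of the table above.
records = [
    {   # reported by the client; no dest_instance is available
        "reporter": "SRC",
        "connection": {"src_ip": "192.168.1.2", "dest_ip": "10.240.0.200"},
        "src_instance": {"vm_name": "browser-instance"},
    },
    {   # reported by the backend; both instances are known
        "reporter": "DEST",
        "connection": {"src_ip": "192.168.1.2", "dest_ip": "10.240.0.200"},
        "src_instance": {"vm_name": "browser-instance"},
        "dest_instance": {"vm_name": "web-server-instance"},
    },
]

print(backends_serving(records, "10.240.0.200"))  # {'web-server-instance'}
```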

Pod to ClusterIP flow

Figure: Pod to cluster IP flow.

In this example, traffic is sent from a client Pod (10.4.0.2) to the Service cluster-service (10.0.32.2:80). The destination is resolved to the selected server Pod IP address (10.4.0.3) on the target port (8080).

At the node edges, the flow is sampled twice, with the translated IP address and port. At both sampling points, the destination Pod is identified as backing the Service cluster-service on port 8080, and the record is annotated with both the Service details and the Pod details. If the traffic is routed to a Pod on the same node, it doesn't leave the node and is not sampled at all.

In this example, the following records are found.

| reporter | connection.src_ip | connection.dest_ip | bytes_sent | Annotations |
|---|---|---|---|---|
| SRC | 10.4.0.2 | 10.4.0.3 | 1,224 | src_instance.*, src_vpc.*, src_gke_details.cluster.*, src_gke_details.pod.*, dest_instance.*, dest_vpc.*, dest_gke_details.cluster.*, dest_gke_details.pod.*, dest_gke_details.service.* |
| DEST | 10.4.0.2 | 10.4.0.3 | 1,224 | src_instance.*, src_vpc.*, src_gke_details.cluster.*, src_gke_details.pod.*, dest_instance.*, dest_vpc.*, dest_gke_details.cluster.*, dest_gke_details.pod.*, dest_gke_details.service.* |
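
To pull these records from Cloud Logging and confirm the Service attribution, you can filter on the dest_gke_details annotations. The following is a hedged sketch using the google-cloud-logging Python client; the project ID is a placeholder, and the service_name subfield name is an assumption based on the annotation groups shown in the table.

```python
from google.cloud import logging as cloud_logging

PROJECT_ID = "my-project"  # placeholder project ID
client = cloud_logging.Client(project=PROJECT_ID)

# Flow records whose destination Pod backs the cluster-service Service.
gke_filter = f"""
logName="projects/{PROJECT_ID}/logs/compute.googleapis.com%2Fvpc_flows"
resource.type="gce_subnetwork"
jsonPayload.dest_gke_details.service.service_name="cluster-service"
"""

for entry in client.list_entries(filter_=gke_filter, max_results=20):
    flow = entry.payload
    print(
        flow.get("reporter"),                             # SRC and DEST for cross-node flows
        flow["connection"]["src_ip"],
        flow["connection"]["dest_ip"],
        flow.get("dest_gke_details", {}).get("pod", {}),  # details of the backing Pod
    )
```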

GKE external LoadBalancer flows

Figure: External load balancer flows.

Traffic from an external IP address to a GKE Service (35.35.35.35) is first routed to a node in the cluster, 10.0.12.2 in this example. By default, external passthrough Network Load Balancers distribute traffic across all nodes in the cluster, even those not running a relevant Pod. Traffic might take extra hops to get to the relevant Pod. For more information, see Networking outside the cluster.

The traffic is then routed from the node (10.0.12.2) to the selected server Pod (10.4.0.2). Both hops are logged because all node edges are sampled, and the second hop is logged by both nodes' sampling points. If the traffic is routed to a Pod on the same node, 10.4.0.3 in this example, the second hop isn't logged because it doesn't leave the node. For the first hop, the Service is identified based on the load balancer IP address and the Service port (80). For the second hop, the destination Pod is identified as backing the Service on the target port (8080).

In this example, the following records are found.

| reporter | connection.src_ip | connection.dest_ip | bytes_sent | Annotations |
|---|---|---|---|---|
| DEST | 203.0.113.1 | 35.35.35.35 | 1,224 | src_location.*, dest_instance.*, dest_vpc.*, dest_gke_details.cluster.*, dest_gke_details.service.* |
| SRC | 10.0.12.2 | 10.4.0.2 | 1,224 | src_instance.*, src_vpc.*, src_gke_details.cluster.*, dest_instance.*, dest_vpc.*, dest_gke_details.cluster.*, dest_gke_details.pod.*, dest_gke_details.service.* |
| DEST | 10.0.12.2 | 10.4.0.2 | 1,224 | src_instance.*, src_vpc.*, src_gke_details.cluster.*, dest_instance.*, dest_vpc.*, dest_gke_details.cluster.*, dest_gke_details.pod.*, dest_gke_details.service.* |
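
One practical consequence is that the record count tells you whether a connection took the extra node-to-Pod hop: the external hop produces a single DEST record, a cross-node second hop produces both a SRC and a DEST record, and a same-node second hop produces none. A minimal sketch over illustrative records shaped like the three table rows above.

```python
from collections import Counter

def hop_report(records):
    """Count reporters per (src_ip, dest_ip) hop to see which hops were sampled."""
    hops = Counter()
    for record in records:
        conn = record["connection"]
        hops[(conn["src_ip"], conn["dest_ip"], record["reporter"])] += 1
    return hops

# Illustrative records shaped like the three table rows above.
records = [
    {"reporter": "DEST", "connection": {"src_ip": "203.0.113.1", "dest_ip": "35.35.35.35"}},
    {"reporter": "SRC",  "connection": {"src_ip": "10.0.12.2",  "dest_ip": "10.4.0.2"}},
    {"reporter": "DEST", "connection": {"src_ip": "10.0.12.2",  "dest_ip": "10.4.0.2"}},
]

for hop, count in hop_report(records).items():
    print(hop, count)
# The node-to-Pod hop appears with both SRC and DEST reporters; if the Pod had
# been on the same node, that hop would not appear at all.
```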

GKE Ingress flows

Figure: Ingress flows.

A connection from a public IP address to an Ingress destination is terminated at the Load Balancer Service. The connection is mapped to a NodePort Service according to the URL. To serve the request, the load balancer (130.211.0.1) connects to one of the cluster nodes (10.0.12.2) using the Service's NodePort. By default, when you create an Ingress, the GKE Ingress controller configures an HTTP(S) load balancer that distributes traffic across all nodes in the cluster, even those not running a relevant Pod. Traffic might take extra hops to get to the relevant Pod. For more information, see Networking outside the cluster. The traffic is then routed from the node (10.0.12.2) to the selected server Pod (10.4.0.2).

Both hops are logged because all node edges are sampled. For the first hop, the Service is identified based on the Service's NodePort (60000). For the second hop, the destination Pod is identified as backing the Service on the target port (8080). The second hop is logged by both nodes' sampling points. However, if the traffic is routed to a Pod on the same node (10.4.0.3), the second hop is not logged because the traffic doesn't leave the node.

In this example, the following records are found.

| reporter | connection.src_ip | connection.dest_ip | bytes_sent | Annotations |
|---|---|---|---|---|
| DEST | 130.211.0.1 | 10.0.12.2 | 1,224 | dest_instance.*, dest_vpc.*, dest_gke_details.cluster.*, dest_gke_details.service.* |
| SRC | 10.0.12.2 | 10.4.0.2 | 1,224 | src_instance.*, src_vpc.*, src_gke_details.cluster.*, dest_instance.*, dest_vpc.*, dest_gke_details.cluster.*, dest_gke_details.pod.*, dest_gke_details.service.* |
| DEST | 10.0.12.2 | 10.4.0.2 | 1,224 | src_instance.*, src_vpc.*, src_gke_details.cluster.*, dest_instance.*, dest_vpc.*, dest_gke_details.cluster.*, dest_gke_details.pod.*, dest_gke_details.service.* |

GKE Ingress flows using container-native load balancing

Figure: Ingress flows using container-native load balancing.

Requests from a public IP address to an Ingress that uses container-native load balancing are terminated at the load balancer. In this type of Ingress, Pods are core objects for load balancing. A request is then sent from the load balancer (130.211.0.1) directly to a selected Pod (10.4.0.2). The destination Pod is identified as backing the Service on the target port (8080).

In this example, the following record is found.

| reporter | connection.src_ip | connection.dest_ip | bytes_sent | Annotations |
|---|---|---|---|---|
| DEST | 130.211.0.1 | 10.4.0.2 | 1,224 | dest_instance.*, dest_vpc.*, dest_gke_details.cluster.*, dest_gke_details.pod.*, dest_gke_details.service.* |

Pod to external flows

Figure: Pod to external flow.

Traffic from a Pod (10.4.0.3) to an external IP (203.0.113.1) is modified by IP masquerading so that the packets are sent from the node IP (10.0.12.2) instead of the Pod IP. By default, the GKE cluster is configured to masquerade traffic to external destinations. For more information, see IP masquerade agent.

To view Pod annotations for this traffic, you can configure the masquerade agent not to masquerade Pod IP addresses. In that case, to allow traffic to the internet, you can configure Cloud NAT, which processes the Pod IP addresses. For more information about Cloud NAT with GKE, see GKE interaction.

In this example, the following record is found.

| reporter | connection.src_ip | connection.dest_ip | bytes_sent | Annotations |
|---|---|---|---|---|
| SRC | 10.0.12.2 | 203.0.113.1 | 1,224 | src_instance.*, src_vpc.*, src_gke_details.cluster.*, dest_location.*, internet_routing_details.* |
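
You can tell from a record like this whether the Pod traffic was masqueraded: with the default masquerade behavior, the source annotations stop at the node and cluster (no src_gke_details.pod.*), whereas with masquerading disabled and Cloud NAT in place the Pod details are preserved. A minimal sketch over an illustrative record shaped like the table row above; the instance, cluster, and location values are placeholders.

```python
def is_masqueraded_egress(record: dict) -> bool:
    """True if a Pod-to-external record only identifies the node, not the Pod."""
    gke = record.get("src_gke_details", {})
    return "cluster" in gke and "pod" not in gke

# Illustrative record shaped like the table row above: the source IP is the
# node IP (10.0.12.2) and no Pod details are attached.
record = {
    "reporter": "SRC",
    "connection": {"src_ip": "10.0.12.2", "dest_ip": "203.0.113.1"},
    "bytes_sent": "1224",
    "src_instance": {"vm_name": "gke-node-1"},                      # placeholder node name
    "src_gke_details": {"cluster": {"cluster_name": "cluster-1"}},  # placeholder cluster name
    "dest_location": {"country": "usa"},                            # placeholder location
}

print(is_masqueraded_egress(record))  # True
```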

What's next