Using VPC Flow Logs

VPC Flow Logs records a sample of network flows sent from and received by VM instances, including instances used as GKE nodes. These logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization.

This page assumes you are familiar with the concepts described in VPC Flow Logs overview.

Enabling VPC flow logging

When you enable VPC Flow Logs, you enable them for all VMs in the subnet. However, you can reduce the amount of information written to Logging. Refer to Log sampling and aggregation for details on the parameters you can control.

Enabling VPC flow logging when you create a subnet

Console

  1. Go to the VPC networks page in the Google Cloud Console.
    Go to the VPC networks page
  2. Click the network where you want to add a subnet.
  3. Click Add subnet.
  4. Under Flow logs, select On.
  5. If you want to adjust log sampling and aggregation, click Configure logs and adjust any of the following:
    • Aggregation interval
    • Whether to Include metadata in the final log entries
      By default, Include metadata only includes certain fields. Refer to Customizing metadata fields for details. To customize metadata fields, you must use the gcloud command-line tool or the API.
    • The Sample rate. 100% means that all entries are kept.
  6. Populate other fields as appropriate.
  7. Click Add.

gcloud

gcloud compute networks subnets create subnet-name \
    --enable-flow-logs \
    [--logging-aggregation-interval=aggregation-interval] \
    [--logging-flow-sampling=0.0...1.0] \
    [--logging-filter-expr=expression] \
    [--logging-metadata=(include-all | exclude-all | custom)] \
    [--logging-metadata-fields=fields] \
    [other flags as needed]

where

  • --logging-aggregation-interval=aggregation-interval sets the aggregation interval for flow logs in that subnet. The interval can be set to any of the following: 5-sec (default), 30-sec, 1-min, 5-min, 10-min, or 15-min.
  • --logging-flow-sampling is the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
  • --logging-filter-expr=expression limits log collection to only those logs that match the expression. For details, see Log filtering.
  • --logging-metadata=(include-all | exclude-all | custom) turns metadata annotations on or off, or lets you specify a custom set of fields. If set to custom, also specify --logging-metadata-fields. Default is include-all. Note: Not all fields are included by include-all. Refer to Customizing metadata fields for details.
  • --logging-metadata-fields=fields is a comma-separated list of metadata fields you want to include in the logs. Example: --logging-metadata-fields=src_instance,dst_instance. Can be set only if --logging-metadata=custom.
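
For example, a command along the following lines creates a subnet with flow logging enabled at a 25% sampling rate and the default metadata fields. The subnet name, network, region, and IP range shown are placeholder values; replace them with your own.

# Placeholder subnet, network, region, and range values
gcloud compute networks subnets create example-subnet \
    --network=example-network \
    --region=us-central1 \
    --range=10.10.0.0/24 \
    --enable-flow-logs \
    --logging-flow-sampling=0.25 \
    --logging-metadata=include-all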

API

Enable VPC Flow Logs when you create a new subnet.

POST https://www.googleapis.com/compute/v1/projects/project-id/regions/region/subnetworks
{
  "logConfig": {
    "aggregationInterval": "aggregation-interval",
    "flowSampling": sampling-rate,
    "filterExpr": expression,
    "metadata": metadata-setting,
    "metadataFields": metadata-fields,
    "enable": true
  },
  "ipCidrRange": "ip-range",
  "network": "network-url",
  "name": "subnet-name"
}

Replace the placeholders with valid values:

  • project-id is the ID of the project where the subnet will be created.
  • region is the region where the subnet will be created.
  • aggregation-interval sets the aggregation interval for flow logs in the subnet. The interval can be set to any of the following: INTERVAL_5_SEC, INTERVAL_30_SEC, INTERVAL_1_MIN, INTERVAL_5_MIN, INTERVAL_10_MIN, or INTERVAL_15_MIN.
  • sampling-rate is the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
  • expression is the filter expression you use to filter which logs are actually written. For details, see Log filtering.
  • metadata-setting specifies whether all metadata is logged (INCLUDE_ALL_METADATA), no metadata is logged (EXCLUDE_ALL_METADATA), or only specific metadata is logged (CUSTOM_METADATA). If this field is set to CUSTOM_METADATA, also populate the metadataFields field. Not all fields are included in INCLUDE_ALL_METADATA. Refer to Customizing metadata fields for details.
  • metadata-fields is the comma-separated list of metadata fields you want to capture when metadata is set to CUSTOM_METADATA, such as src_instance, src_vpc.project_id.
  • ip-range is the primary internal IP address range of the subnet.
  • network-url is the VPC network URL where the subnet will be created.
  • subnet-name is a name for the subnet.
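
As a sketch, you could send this request with curl, using an access token from the gcloud command-line tool. The project, region, subnet, network, and range values below are placeholders; replace them with your own.

# Placeholder project, region, subnet, network, and range values
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{
      "name": "example-subnet",
      "network": "projects/example-project/global/networks/example-network",
      "ipCidrRange": "10.10.0.0/24",
      "logConfig": {
        "enable": true,
        "aggregationInterval": "INTERVAL_30_SEC",
        "flowSampling": 0.25,
        "metadata": "INCLUDE_ALL_METADATA"
      }
    }' \
    "https://www.googleapis.com/compute/v1/projects/example-project/regions/us-central1/subnetworks"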

For more information, refer to the subnetworks.insert method.

Enabling VPC flow logging for an existing subnet

Console

  1. Go to the VPC networks page in the Google Cloud Console.
    Go to the VPC networks page
  2. Click the subnet you want to update.
  3. Click Edit.
  4. Under Flow logs, select On.
  5. If you want to adjust log sampling and aggregation, click Configure logs and adjust any of the following:
    • Aggregation interval
    • Whether to Include metadata in the final log entries
      By default, Include metadata only includes certain fields. Refer to Customizing metadata fields for details. To customize metadata fields, you must use the gcloud command-line tool or the API.
    • The Sample rate. 100% means that all entries are kept.
  6. Click Save.

gcloud

gcloud compute networks subnets update subnet-name \
    --enable-flow-logs \
    [--logging-aggregation-interval=aggregation-interval] \
    [--logging-flow-sampling=0.0...1.0] \
    [--logging-filter-expr=expression] \
    [--logging-metadata=(include-all | exclude-all | custom)] \
    [--logging-metadata-fields=fields]

where

  • --logging-aggregation-interval=aggregation-interval sets the aggregation interval for flow logs in that subnet. The interval can be set to any of the following: 5-sec (default), 30-sec, 1-min, 5-min, 10-min, or 15-min.
  • --logging-flow-sampling is the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
  • --logging-filter-expr=expression limits log collection to only those logs that match the expression. For details, see Log filtering.
  • --logging-metadata=(include-all | exclude-all | custom) turns metadata annotations on or off, or lets you specify a custom set of fields. If set to custom, also specify --logging-metadata-fields. Default is include-all. Note: include-all does not include GKE annotations. To log GKE annotations, select custom and specify those fields, as shown in the example after this list.
  • --logging-metadata-fields=fields is a comma-separated list of metadata fields you want to include in the logs. Example: --logging-metadata-fields=src_instance,dst_instance. Can be set only if --logging-metadata=custom.
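
For example, assuming an existing subnet named example-subnet in us-central1 (both placeholder values), the following command enables flow logging with a custom metadata set consisting of the source and destination instance fields. To capture GKE annotations, add the GKE fields listed in Customizing metadata fields to --logging-metadata-fields.

# Placeholder subnet name and region
gcloud compute networks subnets update example-subnet \
    --region=us-central1 \
    --enable-flow-logs \
    --logging-metadata=custom \
    --logging-metadata-fields=src_instance,dst_instance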

API

Enable VPC Flow Logs for an existing subnet.

PATCH https://www.googleapis.com/compute/v1/projects/project-id/regions/region/subnetworks/subnet-name
{
  "logConfig": {
    "enable": true
    ...other logging fields.
  },
  "fingerprint": "SUBNETWORK_FINGERPRINT"
}

Replace the placeholders with valid values:

  • project-id is the ID of the project where the subnet is located.
  • region is the region where the subnet is located.
  • subnet-name is the name of the existing subnet.
  • SUBNETWORK_FINGERPRINT is the fingerprint of the existing subnet, which is returned when you describe the subnet.
  • For the other logging fields, see Enabling VPC flow logging when you create a subnet.
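
To retrieve the fingerprint referenced above, you can describe the subnet. For example, with placeholder subnet and region values:

# Placeholder subnet name and region
gcloud compute networks subnets describe example-subnet \
    --region=us-central1 \
    --format="value(fingerprint)"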

For more information, refer to the subnetworks.patch method.

Viewing estimated log volume for existing subnets

The Google Cloud Console provides an estimate of your log volume for existing subnets, which you can use to estimate the cost of enabling flow logs. The estimate is based on flows captured at 5-second intervals for the subnet over the previous 7 days. The size of each log entry also depends on whether you enable metadata annotations.

  1. Go to the VPC networks page in the Google Cloud Console.
    Go to the VPC networks page
  2. Click the subnet that you want to estimate costs for.
  3. Click Edit.
  4. Under Flow logs, select On.
  5. Click Configure logs.
  6. View Estimated logs generated per day to see the estimate.
  7. Click Cancel so that none of your changes are saved.

Viewing which subnets have VPC Flow Logs enabled

You can check which subnets in a network have VPC Flow Logs enabled.

Console

  1. Go to the VPC networks page in the Google Cloud Console.
    Go to the VPC networks page
  2. View the Flow logs column to see if logging is on or off.

gcloud

gcloud compute networks subnets list \
    --project project-id \
    --filter="network=network-url" \
    --format="csv(name,logConfig.enable)"

where

  • project-id is the ID of the project you are querying.
  • network-url is the full resource URL of the network containing the subnets.
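
For example, with placeholder project and network values:

# Placeholder project ID and network URL
gcloud compute networks subnets list \
    --project example-project \
    --filter="network=https://www.googleapis.com/compute/v1/projects/example-project/global/networks/example-network" \
    --format="csv(name,logConfig.enable)"

If you only want to see subnets that have logging turned on, filtering on logConfig.enable=true may also be useful.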

Updating VPC flow logging parameters

You can modify log sampling parameters. See Log sampling and aggregation for details on the parameters you can control.

Console

  1. Go to the VPC networks page in the Google Cloud Console.
    Go to the VPC networks page
  2. Click the subnet you want to update.
  3. Click Edit.
  4. Click Configure logs to adjust log sampling and aggregation:
    • Aggregation interval
    • Whether to Include metadata in the final log entries
      By default, Include metadata only includes certain fields. Refer to Customizing metadata fields for details. To customize metadata fields, you must use the gcloud command-line tool or the API.
    • The Sample rate. 100% means that all entries are kept.
  5. Click Save.

gcloud

gcloud compute networks subnets update subnet-name \
    [--logging-aggregation-interval=aggregation-interval] \
    [--logging-flow-sampling=0.0...1.0] \
    [--logging-filter-expr=expression] \
    [--logging-metadata=(include-all | exclude-all | custom)] \
    [--logging-metadata-fields=fields]

where

  • --logging-aggregation-interval=aggregation-interval sets the aggregation interval for flow logs in that subnet. The interval can be set to any of the following: 5-sec (default), 30-sec, 1-min, 5-min, 10-min, or 15-min.
  • --logging-flow-sampling is the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
  • --logging-filter-expr=expression limits log collection to only those logs that match the expression. For details, see Log filtering.
  • --logging-metadata=(include-all | exclude-all | custom) turns metadata annotations on or off, or lets you specify a custom set of fields. If set to custom, also specify --logging-metadata-fields. Default is include-all. Note: include-all does not include GKE annotations. To log GKE annotations, select custom and specify those fields.
  • --logging-metadata-fields=fields is a comma-separated list of metadata fields you want to include in the logs. Example: --logging-metadata-fields=src_instance,dst_instance. Can be set only if --logging-metadata=custom.
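
For example, assuming flow logs are already enabled on a subnet named example-subnet in us-central1 (placeholder values), a command like the following lowers the sampling rate and limits collection to flows destined for a given range. The filter expression is illustrative; see Log filtering for the exact syntax and fields available.

# Placeholder subnet name, region, and filter expression
gcloud compute networks subnets update example-subnet \
    --region=us-central1 \
    --logging-flow-sampling=0.1 \
    --logging-filter-expr="inIpRange(connection.dest_ip, '10.0.0.0/8')"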

API

Modify the log sampling fields to update VPC Flow Logs behavior.

PATCH https://www.googleapis.com/compute/v1/projects/project-id/regions/region/subnetworks/subnet-name
{
  "logConfig": {
    ...fields to modify
  },
  "fingerprint": "SUBNETWORK_FINGERPRINT"
}

Replace the placeholders with valid values:

  • project-id is the ID of the project where the subnet is located.
  • region is the region where the subnet is located.
  • subnet-name is the name of the existing subnet.
  • SUBNETWORK_FINGERPRINT is the fingerprint of the existing subnet, which is returned when you describe the subnet.
  • For the fields that you can modify, see Enabling VPC flow logging when you create a subnet.

For more information, refer to the subnetworks.patch method.

Disabling VPC flow logging for a subnet

Console

  1. Go to the VPC networks page in the Google Cloud Console.
    Go to the VPC networks page
  2. Click the subnet you want to update.
  3. Click Edit.
  4. Under Flow logs, select Off.
  5. Click Save.

gcloud

gcloud compute networks subnets update subnet-name \
    --no-enable-flow-logs

API

Disable VPC Flow Logs on a subnet to stop collecting log records.

PATCH https://www.googleapis.com/compute/v1/projects/project-id/regions/region/subnetworks/subnet-name
{
  "logConfig": {
    "enable": false
  },
  "fingerprint": "SUBNETWORK_FINGERPRINT"
}

Replace the placeholders with valid values:

  • project-id is the ID of the project where the subnet is located.
  • region is the region where the subnet is located.
  • subnet-name is the name of the existing subnet.
  • SUBNETWORK_FINGERPRINT is the fingerprint of the existing subnet, which is returned when you describe the subnet.

For more information, refer to the subnetworks.patch method.

Accessing logs via Logging

Configuring IAM

Follow the access control guide for Logging.

You view logs through the Logs Viewer page in the Cloud Console.

You need your project ID for the filters in this section.

Accessing all flow logs

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. Select GCE Subnetwork in the first pull-down menu.
  3. Select vpc_flows in the second pull-down menu.
  4. Click OK.

Alternatively:

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
  3. Paste the following into the field. Replace project-id with your project ID.
    resource.type="gce_subnetwork"
    logName="projects/project-id/logs/compute.googleapis.com%2Fvpc_flows"
    
  4. Click Submit filter.

Accessing logs for a specific subnet

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. In the first pull-down menu, move the cursor to GCE Subnetwork, then move it to the right to open the individual subnet selection menu.
  3. In the second pull-down menu, select vpc_flows.
  4. Click OK.

Alternatively:

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
  3. Paste the following into the field. Replace project-id with your project ID and subnet-name with the name of your subnet.
    resource.type="gce_subnetwork"
    logName="projects/project-id/logs/compute.googleapis.com%2Fvpc_flows"
    resource.labels.subnetwork_name="subnet-name"
    
  4. Click Submit filter.

Accessing logs for a specific VM

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
  3. Paste the following into the field. Replace project-id with your project ID and VM_NAME with the name of your VM.
    resource.type="gce_subnetwork"
    logName="projects/project-id/logs/compute.googleapis.com%2Fvpc_flows"
    jsonPayload.src_instance.vm_name="VM_NAME"
    
  4. Click Submit filter.
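
The filter above matches flows that report the VM as the source. To also match flows where the VM is the destination, a filter along these lines (using the same placeholders) should work, assuming the dest_instance annotation is present in your logs:

    resource.type="gce_subnetwork"
    logName="projects/project-id/logs/compute.googleapis.com%2Fvpc_flows"
    (jsonPayload.src_instance.vm_name="VM_NAME" OR jsonPayload.dest_instance.vm_name="VM_NAME")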

Accessing logs for traffic to a specific subnet range

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
  3. Paste the following into the field. Replace project-id with your project ID and subnet-range with a CIDR range (for example, 192.168.1.0/24).
    resource.type="gce_subnetwork"
    logName="projects/project-id/logs/compute.googleapis.com%2Fvpc_flows"
    ip_in_net(jsonPayload.connection.dest_ip, subnet-range)
    
  4. Click Submit filter.

Accessing logs for a specific GKE cluster

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
  3. Paste the following into the field. Replace project-id with your project ID and cluster-name with the name of your cluster.
    resource.type="gce_subnetwork"
    logName="projects/{#project_id}/logs/vpc_flows"
    jsonPayload.src_gke_details.cluster.cluster_name="{#cluster_name}" OR jsonPayload.dest_gke_details.cluster.cluster_name="{#cluster_name}"
    
  4. Click Submit filter.

Accessing logs for specific ports and protocols

For an individual destination port

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
  3. Paste the following into the field. Replace project-id with your project ID, PORT with the destination port, and PROTOCOL with the IANA protocol number (for example, 6 for TCP or 17 for UDP).
    resource.type="gce_subnetwork"
    logName="projects/project-id/logs/compute.googleapis.com%2Fvpc_flows"
    jsonPayload.connection.dest_port=PORT
    jsonPayload.connection.protocol=PROTOCOL
    
  4. Click Submit filter.

For more than one destination port

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
  3. Paste the following into the field. Replace project-id with your project ID, PORT1 and PORT2 with the destination ports, and PROTOCOL with the IANA protocol number.
    resource.type="gce_subnetwork"
    logName="projects/project-id/logs/compute.googleapis.com%2Fvpc_flows"
    jsonPayload.connection.dest_port=(PORT1 OR PORT2)
    jsonPayload.connection.protocol=PROTOCOL
    
  4. Click Submit filter.

Exporting logs to BigQuery, Pub/Sub, and custom targets

You can export flow logs from Logging to a destination of your choice as described in the Logging documentation. Refer to the previous section for example filters.
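
For example, a sink that routes all VPC flow logs to a BigQuery dataset could be created with a command like the following. The sink name, project, and dataset are placeholders, the dataset must already exist, and the sink's writer identity needs permission to write to it.

# Placeholder sink name, project ID, and dataset
gcloud logging sinks create example-vpc-flows-sink \
    bigquery.googleapis.com/projects/example-project/datasets/example_dataset \
    --log-filter='resource.type="gce_subnetwork" AND logName="projects/example-project/logs/compute.googleapis.com%2Fvpc_flows"'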

Troubleshooting

No vpc_flows appear in Logging under the gce_subnetwork resource

  • Confirm that logging is enabled for the given subnet. A quick command-line check is shown after this list.
  • VPC Flow Logs is supported only for VPC networks. If you have a legacy network, you will not see any logs.
  • In Shared VPC networks, logs only appear in the host project, not the service projects. Make sure you look for the logs in the host project.
  • Logging exclusion filters block specified logs. Make sure no exclusion rules discard VPC Flow Logs:
    1. Go to Resource usage.
    2. Click the Exclusions tab.
    3. Review the list for rules that might discard VPC Flow Logs.
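
For example, to check the first point from the command line, a describe command like the following (with placeholder subnet and region values) shows whether logging is enabled:

# Placeholder subnet name and region
gcloud compute networks subnets describe example-subnet \
    --region=us-central1 \
    --format="value(logConfig.enable)"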

No RTT or byte values on some of the logs

  • RTT measurements may be missing if not enough packets were sampled to capture RTT. This is more likely for low-volume connections.
  • No RTT values are available for UDP flows.
  • Some packets are sent with no payload. If header-only packets were sampled, the bytes value will be 0.

Some flows are missing

  • Only UDP and TCP protocols are supported. VPC Flow Logs does not support any other protocols.
  • Logs are sampled. Some packets in very low-volume flows might be missed.

Missing GKE annotations in some logs

Refer to GKE annotations to understand details of GKE annotations.

  • Make sure Google Kubernetes Engine Monitoring is enabled in the cluster. Some annotations can be missing if GKE Monitoring is not enabled. To check whether GKE Monitoring is enabled in the cluster, follow the instructions.
  • If GKE Monitoring is enabled in the cluster and you still see missing GKE annotations, check whether the agent that sends metadata updates to Monitoring is doing so successfully by visiting the Monitoring API dashboard for your project in the Cloud Console. In some cases, errors occur because the quota for the API has been exceeded. Go to the quotas dashboard for the API and check for quota exceeded errors. If there are any, follow the instructions in Managing your quota to request a quota increase.

Missing logs for some GKE flows

Make sure Intranode visibility is enabled in the cluster. Otherwise, flows between Pods on the same node are not logged.
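
As a sketch, intranode visibility can be enabled on an existing cluster with a command along these lines; the cluster name and zone are placeholders, and enabling the feature may restart the cluster's nodes:

# Placeholder cluster name and zone
gcloud container clusters update example-cluster \
    --zone=us-central1-a \
    --enable-intra-node-visibility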

Flow logs appear to be disabled even though you enabled them

  • When you're configuring a proxy-only subnet for internal HTTP(S) load balancers and you're using the gcloud compute networks subnets command to enable VPC Flow Logs, the command appears to succeed, but flow logs aren't actually enabled. The --enable-flow-logs flag doesn't take effect when you also include the --purpose=INTERNAL_HTTPS_LOAD_BALANCER flag.

    When you use the Cloud Console or the API to enable flow logs, you see the error message: "Invalid value for field 'resource.enableFlowLogs': 'true'. Invalid field set in subnetwork with purpose INTERNAL_HTTPS_LOAD_BALANCER."

    Because proxy-only subnets have no VMs, VPC Flow Logs aren't supported. This is intended behavior.

Pricing

Standard pricing for Logging, BigQuery, or Pub/Sub applies. VPC Flow Logs pricing is described in Network Telemetry pricing.

What's next