Using VPC Flow Logs

VPC Flow Logs records a sample of network flows sent from and received by VM instances, including instances used as GKE nodes. These logs can be used for network monitoring, forensics, real-time security analysis, and cost optimization.

This page assumes you are familiar with the concepts described in VPC Flow Logs overview.

Enabling VPC Flow Logs

When you enable VPC Flow Logs, you enable them for all VMs in a subnet. However, you can reduce the amount of information written to Logging. Refer to Log sampling and aggregation for details on the parameters you can control.

Enabling VPC Flow Logs when you create a subnet

Console

  1. Go to the VPC networks page in the Google Cloud Console.
    Go to the VPC networks page
  2. Click the network where you want to add a subnet.
  3. Click Add subnet.
  4. Under Flow logs, select On.
  5. If you want to adjust log sampling and aggregation, click Configure logs and adjust any of the following:
    • the Aggregation interval
    • whether or not to Include metadata in the final log entries
      By default, Include metadata includes all fields. To customize metadata fields, you must use the gcloud command-line interface or the API.
    • the Sample rate. 100% means that all entries are kept.
  6. Populate other fields as appropriate.
  7. Click Add.

gcloud

gcloud compute networks subnets create SUBNET_NAME \
    --enable-flow-logs \
    [--logging-aggregation-interval=AGGREGATION_INTERVAL] \
    [--logging-flow-sampling=SAMPLE_RATE] \
    [--logging-filter-expr=FILTER_EXPRESSION] \
    [--logging-metadata=LOGGING_METADATA] \
    [--logging-metadata-fields=METADATA_FIELDS] \
    [other flags as needed]

Replace the following:

  • AGGREGATION_INTERVAL: the aggregation interval for flow logs in that subnet. The interval can be set to any of the following: 5-sec (default), 30-sec, 1-min, 5-min, 10-min, or 15-min.
  • SAMPLE_RATE: the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
  • FILTER_EXPRESSION: an expression that defines which logs you want to keep. For details, see Log filtering.
  • LOGGING_METADATA: the metadata annotations that you want to include in the logs:

    • include-all to include all metadata annotations
    • exclude-all to exclude all metadata annotations (default)
    • custom to include a custom list of metadata fields that you specify in METADATA_FIELDS.
  • METADATA_FIELDS: a comma-separated list of metadata fields you want to include in the logs. For example, src_instance,dst_instance. Can only be set if LOGGING_METADATA is set to custom.

API

Enable VPC Flow Logs when you create a new subnet.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks
{
  "logConfig": {
    "aggregationInterval": "AGGREGATION_INTERVAL",
    "flowSampling": SAMPLING_RATE,
    "filterExpr": "EXPRESSION",
    "metadata": "METADATA_SETTING",
    "metadataFields": ["METADATA_FIELDS"],
    "enable": true
  },
  "ipCidrRange": "IP_RANGE",
  "network": "NETWORK_URL",
  "name": "SUBNET_NAME"
}

Replace the placeholders with valid values:

  • PROJECT_ID is the ID of the project where the subnet will be created.
  • REGION is the region where the subnet will be created.
  • AGGREGATION_INTERVAL sets the aggregation interval for flow logs in the subnet. The interval can be set to any of the following: INTERVAL_5_SEC, INTERVAL_30_SEC, INTERVAL_1_MIN, INTERVAL_5_MIN, INTERVAL_10_MIN, or INTERVAL_15_MIN.
  • SAMPLING_RATE is the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
  • EXPRESSION is the filter expression you use to filter which logs are actually written. For details, see Log filtering.
  • METADATA_SETTING specifies whether all metadata is logged (INCLUDE_ALL_METADATA), no metadata is logged (EXCLUDE_ALL_METADATA), or only specific metadata is logged (CUSTOM_METADATA). If this field is set to CUSTOM_METADATA, also populate the metadataFields field. Default is EXCLUDE_ALL_METADATA. Refer to metadata annotations for details.

  • METADATA_FIELDS are the metadata fields to capture when you have set metadata: CUSTOM_METADATA. This is a comma-separated list of metadata fields, such as src_instance,src_vpc.project_id.

  • IP_RANGE is the primary internal IP address range of the subnet.

  • NETWORK_URL is the VPC network URL where the subnet will be created.

  • SUBNET_NAME is a name for the subnet.

For more information, refer to the subnetworks.insert method.
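The request body shown above can also be assembled programmatically before you send it. The following Python sketch builds a subnetworks.insert body using the field names documented in this section; the project, network URL, and subnet values are hypothetical placeholders, and sending the request (with an authenticated client) is left out.

```python
import json

def build_subnet_body(name, ip_range, network_url,
                      interval="INTERVAL_5_SEC", sampling=0.5,
                      metadata="EXCLUDE_ALL_METADATA", metadata_fields=None):
    """Build a subnetworks.insert request body that enables VPC Flow Logs."""
    log_config = {
        "enable": True,
        "aggregationInterval": interval,
        "flowSampling": sampling,
        "metadata": metadata,
    }
    # metadataFields is only meaningful when metadata is CUSTOM_METADATA.
    if metadata == "CUSTOM_METADATA":
        log_config["metadataFields"] = metadata_fields or []
    return {
        "name": name,
        "ipCidrRange": ip_range,
        "network": network_url,
        "logConfig": log_config,
    }

body = build_subnet_body(
    "log-subnet", "10.20.0.0/24",
    "projects/my-project/global/networks/my-net",  # hypothetical network URL
    metadata="CUSTOM_METADATA",
    metadata_fields=["src_instance", "dst_instance"],
)
print(json.dumps(body, indent=2))
```

Defaulting metadata to EXCLUDE_ALL_METADATA mirrors the API default described above, so callers only pay for metadata they ask for.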

Terraform

You can use a Terraform module to create a custom mode VPC network and subnets.

The following example creates three subnets:

  • subnet-01 has VPC Flow Logs disabled. When you create a subnet, VPC Flow Logs are disabled unless you explicitly enable them.
  • subnet-02 has VPC Flow Logs enabled with the default flow log settings.
  • subnet-03 has VPC Flow Logs enabled with some custom settings.

module "test-vpc-module" {
  source       = "terraform-google-modules/network/google"
  version      = "~> 3.2.0"
  project_id   = var.project_id # Replace this with your project ID in quotes
  network_name = "my-custom-mode-network"
  mtu          = 1460

  subnets = [
    {
      subnet_name   = "subnet-01"
      subnet_ip     = "10.10.10.0/24"
      subnet_region = "us-west1"
    },
    {
      subnet_name           = "subnet-02"
      subnet_ip             = "10.10.20.0/24"
      subnet_region         = "us-west1"
      subnet_private_access = "true"
      subnet_flow_logs      = "true"
    },
    {
      subnet_name               = "subnet-03"
      subnet_ip                 = "10.10.30.0/24"
      subnet_region             = "us-west1"
      subnet_flow_logs          = "true"
      subnet_flow_logs_interval = "INTERVAL_10_MIN"
      subnet_flow_logs_sampling = 0.7
      subnet_flow_logs_metadata = "INCLUDE_ALL_METADATA"
    }
  ]
}

Enabling VPC Flow Logs for an existing subnet

Console

  1. Go to the VPC networks page in the Google Cloud Console.
    Go to the VPC networks page
  2. Click the subnet you want to update.
  3. Click Edit.
  4. Under Flow logs, select On.
  5. If you want to adjust log sampling and aggregation, click Configure logs and adjust any of the following:
    • the Aggregation interval
    • whether or not to Include metadata in the final log entries
      By default, Include metadata includes all fields. To customize metadata fields, you must use the gcloud command-line interface or the API.
    • the Sample rate. 100% means that all entries are kept.
  6. Click Save.

gcloud

gcloud compute networks subnets update SUBNET_NAME \
    --enable-flow-logs \
    [--logging-aggregation-interval=AGGREGATION_INTERVAL] \
    [--logging-flow-sampling=SAMPLE_RATE] \
    [--logging-filter-expr=FILTER_EXPRESSION] \
    [--logging-metadata=LOGGING_METADATA] \
    [--logging-metadata-fields=METADATA_FIELDS] \
    [other flags as needed]

Replace the following:

  • AGGREGATION_INTERVAL: the aggregation interval for flow logs in that subnet. The interval can be set to any of the following: 5-sec (default), 30-sec, 1-min, 5-min, 10-min, or 15-min.
  • SAMPLE_RATE: the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
  • FILTER_EXPRESSION: an expression that defines which logs you want to keep. For details, see Log filtering.
  • LOGGING_METADATA: the metadata annotations that you want to include in the logs:

    • include-all to include all metadata annotations
    • exclude-all to exclude all metadata annotations (default)
    • custom to include a custom list of metadata fields that you specify in METADATA_FIELDS.
  • METADATA_FIELDS: a comma-separated list of metadata fields you want to include in the logs. For example, src_instance,dst_instance. Can only be set if LOGGING_METADATA is set to custom.

API

Enable VPC Flow Logs for an existing subnet.

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME
{
  "logConfig": {
    "enable": true,
    ...other logging fields
  },
  "fingerprint": "SUBNETWORK_FINGERPRINT"
}

Replace the placeholders with valid values:

  • PROJECT_ID is the ID of the project where the subnet is located.
  • REGION is the region where the subnet is located.
  • SUBNET_NAME is the name of the existing subnet.
  • SUBNETWORK_FINGERPRINT is the fingerprint ID for the existing subnet, which is provided when you describe a subnet.
  • For the other logging fields, see Enabling VPC Flow Logs when you create a subnet.

For more information, refer to the subnetworks.patch method.

Viewing estimated log volume for existing subnets

The Google Cloud Console provides an estimate of your log volume for existing subnets, which you can use to estimate the cost of enabling flow logs. The estimate is based on flows captured at 5-second intervals for the subnet over the previous 7 days. The size of each log also depends on whether you enable metadata annotations.

  1. Go to the VPC networks page in the Google Cloud Console.
    Go to the VPC networks page
  2. Click the subnet that you want to estimate costs for.
  3. Click Edit.
  4. Under Flow logs, select On.
  5. Click Configure logs.
  6. View Estimated logs generated per day to see the estimate.
  7. Click Cancel so that none of your changes are saved.
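As a rough back-of-the-envelope check (not the console's exact algorithm), generated log volume scales approximately linearly with the sampling rate, so you can project the effect of a different sample rate from a known estimate:

```python
def projected_daily_volume(estimated_gib_per_day, current_rate, new_rate):
    """Linearly scale an estimated daily flow-log volume to a new sampling rate.

    This is a rough approximation: real volume also depends on the
    aggregation interval and on whether metadata annotations are included.
    """
    if not (0.0 < current_rate <= 1.0 and 0.0 <= new_rate <= 1.0):
        raise ValueError("sampling rates must be in (0, 1] and [0, 1]")
    return estimated_gib_per_day * (new_rate / current_rate)

# Example: if the console estimates 10 GiB/day at a 0.5 sample rate,
# dropping to 0.1 projects roughly one fifth of that volume.
print(projected_daily_volume(10.0, 0.5, 0.1))
```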

Viewing which subnets have VPC Flow Logs enabled

You can check which subnets in a network have VPC Flow Logs enabled.

Console

  1. Go to the VPC networks page in the Google Cloud Console.
    Go to the VPC networks page
  2. View the Flow logs column to see if logging is on or off.

gcloud

gcloud compute networks subnets list \
    --project PROJECT_ID \
    --filter="network=NETWORK_URL" \
    --format="csv(name,logConfig.enable)"

Replace the following:

  • PROJECT_ID: the ID of the project you are querying.
  • NETWORK_URL: the full URL of the network containing the subnets.
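If you prefer JSON output (--format=json), you can post-process the listing with a short script. This Python sketch reads the kind of JSON that the subnets list command emits and reports which subnets have flow logs enabled; the sample data below is illustrative, not real command output.

```python
import json

# In practice, parse the output of:
#   gcloud compute networks subnets list --format=json
sample = json.loads("""
[
  {"name": "subnet-01", "logConfig": {"enable": false}},
  {"name": "subnet-02", "logConfig": {"enable": true}},
  {"name": "subnet-03"}
]
""")

def flow_log_status(subnets):
    """Map subnet name -> whether VPC Flow Logs is enabled.

    A subnet with no logConfig has never had flow logs enabled.
    """
    return {s["name"]: s.get("logConfig", {}).get("enable", False)
            for s in subnets}

print(flow_log_status(sample))
```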

Updating VPC Flow Logs parameters

You can modify log sampling parameters. See Log sampling and aggregation for details on the parameters you can control.

Console

  1. Go to the VPC networks page in the Google Cloud Console.
    Go to the VPC networks page
  2. Click the subnet you want to update.
  3. Click Edit.
  4. Click Configure logs to adjust log sampling and aggregation:
    • the Aggregation interval
    • whether or not to Include metadata in the final log entries
      By default, Include metadata includes all fields. To customize metadata fields, you must use the gcloud command-line interface or the API.
    • the Sample rate. 100% means that all entries are kept.
  5. Click Save.

gcloud

gcloud compute networks subnets update SUBNET_NAME \
    [--logging-aggregation-interval=AGGREGATION_INTERVAL] \
    [--logging-flow-sampling=SAMPLE_RATE] \
    [--logging-filter-expr=FILTER_EXPRESSION] \
    [--logging-metadata=LOGGING_METADATA] \
    [--logging-metadata-fields=METADATA_FIELDS]

Replace the following:

  • AGGREGATION_INTERVAL: the aggregation interval for flow logs in that subnet. The interval can be set to any of the following: 5-sec (default), 30-sec, 1-min, 5-min, 10-min, or 15-min.
  • SAMPLE_RATE: the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
  • FILTER_EXPRESSION: an expression that defines which logs you want to keep. For details, see Log filtering.
  • LOGGING_METADATA: the metadata annotations that you want to include in the logs:

    • include-all to include all metadata annotations
    • exclude-all to exclude all metadata annotations (default)
    • custom to include a custom list of metadata fields that you specify in METADATA_FIELDS.
  • METADATA_FIELDS: a comma-separated list of metadata fields you want to include in the logs. For example, src_instance,dst_instance. Can only be set if LOGGING_METADATA is set to custom.

API

Modify the log sampling fields to update VPC Flow Logs behaviors.

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME
{
  "logConfig": {
    ...fields to modify
  },
  "fingerprint": "SUBNETWORK_FINGERPRINT"
}

Replace the placeholders with valid values:

  • PROJECT_ID is the ID of the project where the subnet is located.
  • REGION is the region where the subnet is located.
  • SUBNET_NAME is the name of the existing subnet.
  • SUBNETWORK_FINGERPRINT is the fingerprint ID for the existing subnet, which is provided when you describe a subnet.
  • For the fields that you can modify, see Enabling VPC Flow Logs when you create a subnet.

For more information, refer to the subnetworks.patch method.

Disabling VPC Flow Logs for a subnet

Console

  1. Go to the VPC networks page in the Google Cloud Console.
    Go to the VPC networks page
  2. Click the subnet you want to update.
  3. Click Edit.
  4. Under Flow logs, select Off.
  5. Click Save.

gcloud

gcloud compute networks subnets update SUBNET_NAME \
    --no-enable-flow-logs

API

Disable VPC Flow Logs on a subnet to stop collecting log records.

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME
{
  "logConfig": {
    "enable": false
  },
  "fingerprint": "SUBNETWORK_FINGERPRINT"
}

Replace the placeholders with valid values:

  • PROJECT_ID is the ID of the project where the subnet is located.
  • REGION is the region where the subnet is located.
  • SUBNET_NAME is the name of the existing subnet.
  • SUBNETWORK_FINGERPRINT is the fingerprint ID for the existing subnet, which is provided when you describe a subnet.

For more information, refer to the subnetworks.patch method.

Accessing logs using Logging

Configuring IAM

Follow the access control guide for Logging.

View logs through the Logs Viewer page.

You need your project ID for some of the following filters.

Accessing all flow logs

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. Select Subnetwork in the first pull-down menu.
  3. Select compute.googleapis.com/vpc_flows in the second pull-down menu.
  4. Click OK.

Alternatively:

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
  3. Paste the following into the field. Replace PROJECT_ID with your project ID.
    resource.type="gce_subnetwork"
    logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
    
  4. Click Submit filter.
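The advanced filters in this section and the ones that follow all share the same two base lines. If you find yourself building them often, they can be assembled programmatically; this sketch composes the filter strings from the resource type and log name shown above.

```python
def vpc_flows_filter(project_id, subnet_name=None, vm_name=None):
    """Assemble a Logging advanced filter for VPC Flow Logs entries.

    Lines in an advanced filter are implicitly ANDed together.
    """
    lines = [
        'resource.type="gce_subnetwork"',
        f'logName="projects/{project_id}/logs/'
        'compute.googleapis.com%2Fvpc_flows"',
    ]
    if subnet_name:
        lines.append(f'resource.labels.subnetwork_name="{subnet_name}"')
    if vm_name:
        lines.append(f'jsonPayload.src_instance.vm_name="{vm_name}"')
    return "\n".join(lines)

print(vpc_flows_filter("my-project", subnet_name="subnet-02"))
```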

Accessing logs for a specific subnet

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. In the first pull-down menu, move the cursor to Subnetwork, then move it to the right to open up the individual subnet selection menu.
  3. In the second pull-down menu, select compute.googleapis.com/vpc_flows.
  4. Click OK.

Alternatively:

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
  3. Paste the following into the field. Replace PROJECT_ID with your project ID and SUBNET_NAME with your subnetwork.
    resource.type="gce_subnetwork"
    logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
    resource.labels.subnetwork_name="SUBNET_NAME"
    
  4. Click Submit filter.

Accessing logs for a specific VM

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
  3. Paste the following into the field. Replace PROJECT_ID with your project ID and VM_NAME with your VM.
    resource.type="gce_subnetwork"
    logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
    jsonPayload.src_instance.vm_name="VM_NAME"
    
  4. Click Submit filter.

Accessing logs for traffic to a specific subnet range

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
  3. Paste the following into the field. Replace PROJECT_ID with your project ID and SUBNET_RANGE with a CIDR range (for example, 192.168.1.0/24).
    resource.type="gce_subnetwork"
    logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
    ip_in_net(jsonPayload.connection.dest_ip, SUBNET_RANGE)
    
  4. Click Submit filter.
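The ip_in_net function above is evaluated by Cloud Logging itself. If you export log entries and want to apply the same test locally, Python's standard ipaddress module provides an equivalent check; the sample entry below is illustrative.

```python
import ipaddress

def dest_in_subnet(entry, cidr):
    """Return True if the entry's destination IP falls inside the given range.

    Mirrors the ip_in_net(jsonPayload.connection.dest_ip, RANGE) filter.
    """
    dest_ip = entry["jsonPayload"]["connection"]["dest_ip"]
    return ipaddress.ip_address(dest_ip) in ipaddress.ip_network(cidr)

entry = {"jsonPayload": {"connection": {"dest_ip": "192.168.1.25"}}}
print(dest_in_subnet(entry, "192.168.1.0/24"))  # in range
print(dest_in_subnet(entry, "10.0.0.0/8"))      # out of range
```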

Accessing logs for a specific GKE cluster

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
  3. Paste the following into the field. Replace PROJECT_ID with your project ID and CLUSTER_NAME with the name of your GKE cluster.
    resource.type="gce_subnetwork"
    logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
    (jsonPayload.src_gke_details.cluster.cluster_name="CLUSTER_NAME" OR jsonPayload.dest_gke_details.cluster.cluster_name="CLUSTER_NAME")
    
  4. Click Submit filter.

Accessing logs for specific ports and protocols

For an individual destination port

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
  3. Paste the following into the field. Replace PROJECT_ID with your project ID, PORT with the destination port, and PROTOCOL with the protocol.
    resource.type="gce_subnetwork"
    logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
    jsonPayload.connection.dest_port=PORT
    jsonPayload.connection.protocol=PROTOCOL
    
  4. Click Submit filter.

For more than one destination port

  1. Go to the Logs page in the Google Cloud Console.
    Go to the Logs page
  2. On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
  3. Paste the following into the field. Replace PROJECT_ID with your project ID, PORT1 and PORT2 with the destination ports, and PROTOCOL with the protocol.
    resource.type="gce_subnetwork"
    logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
    jsonPayload.connection.dest_port=(PORT1 OR PORT2)
    jsonPayload.connection.protocol=PROTOCOL
    
  4. Click Submit filter.
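Note that the protocol field in flow log records is an IANA protocol number (for example, 6 for TCP and 17 for UDP), not a name. If you post-process exported entries instead of filtering in Logging, the same port-and-protocol test looks like this in Python; the sample entries are illustrative.

```python
TCP, UDP = 6, 17  # IANA protocol numbers, as used in the protocol field

def matches(entry, ports, protocol):
    """Replicate the dest_port/protocol advanced filter on an exported entry."""
    conn = entry["jsonPayload"]["connection"]
    return conn["dest_port"] in ports and conn["protocol"] == protocol

entries = [
    {"jsonPayload": {"connection": {"dest_port": 443, "protocol": 6}}},
    {"jsonPayload": {"connection": {"dest_port": 53,  "protocol": 17}}},
]
https = [e for e in entries if matches(e, {80, 443}, TCP)]
print(len(https))  # only the first sample entry matches
```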

Exporting logs to BigQuery, Pub/Sub, and custom targets

You can export flow logs from Logging to a destination of your choice as described in the Logging documentation. Refer to the previous section for example filters.

Troubleshooting

No vpc_flows appear in Logging under the gce_subnetwork resource

  • Confirm that logging is enabled for the given subnet.
  • VPC Flow Logs is supported only for VPC networks. If you have a legacy network, you will not see any logs.
  • In Shared VPC networks, logs only appear in the host project, not the service projects. Make sure you look for the logs in the host project.
  • Logging exclusion filters block specified logs. Make sure there are no exclusion rules that discard VPC Flow Logs:
    1. Go to Resource usage.
    2. Click the Exclusions tab.
    3. Verify that no exclusion rules discard VPC Flow Logs.

No RTT or byte values on some of the logs

  • RTT measurements may be missing if not enough packets were sampled to capture RTT. This is more likely to happen for low volume connections.
  • RTT values are available only for TCP flows.
  • Some packets are sent with no payload. If header-only packets were sampled, the bytes value will be 0.

Some flows are missing

  • Only the TCP, UDP, ICMP, and GRE protocols are supported; flows that use other protocols are not logged.
  • Logs are sampled. Some packets in very low volume flows might be missed.

Missing GKE annotations in some logs

Refer to GKE annotations to understand details of GKE annotations.

  • Make sure Google Kubernetes Engine Monitoring is enabled in the cluster. Some annotations may be missing if GKE Monitoring is not enabled. To check whether GKE Monitoring is enabled in the cluster, follow the instructions.
  • If GKE Monitoring is enabled in the cluster and you still see missing GKE annotations, check whether the agent that sends metadata updates to Monitoring is reporting successfully by visiting the Monitoring API dashboard for your project in the Cloud Console. Errors can occur when the API quota is exceeded; go to the quotas dashboard for the API and check for quota exceeded errors. If there are any, follow the instructions in Managing your quota to request a quota increase.

Missing logs for some GKE flows

Make sure Intranode visibility is enabled in the cluster. Otherwise, flows between Pods on the same node are not logged.

Flow logs appear to be disabled even though you enabled them

  • When you're configuring a proxy-only subnet for internal HTTP(S) load balancers and you're using the gcloud compute networks subnets command to enable VPC Flow Logs, the command appears to succeed, but flow logs aren't actually enabled. The --enable-flow-logs flag doesn't take effect when you also include the --purpose=INTERNAL_HTTPS_LOAD_BALANCER flag.

    When you use the Cloud Console or the API to enable flow logs, you see the error message: "Invalid value for field 'resource.enableFlowLogs': 'true'. Invalid field set in subnetwork with purpose INTERNAL_HTTPS_LOAD_BALANCER."

    Because proxy-only subnets have no VMs, VPC Flow Logs is not supported. This is intended behavior.

What's next