VPC Flow Logs records a sample of network flows sent from and received by VM instances, including instances used as GKE nodes. These logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization.
This page assumes you are familiar with the concepts described in VPC Flow Logs overview.
Enabling VPC Flow Logs
When you enable VPC Flow Logs, you enable them for all VMs in a subnet. However, you can reduce the amount of information written to Logging. Refer to Log sampling and aggregation for details on the parameters you can control.
Enabling VPC Flow Logs when you create a subnet
Console
- Go to the VPC networks page in the Google Cloud Console.
- Click the network where you want to add a subnet.
- Click Add subnet.
- Under Flow logs, select On.
- If you want to adjust log sampling and aggregation, click Configure logs and adjust any of the following:
  - the Aggregation interval
  - whether or not to Include metadata in the final log entries. By default, Include metadata includes all fields. To customize metadata fields, you must use the gcloud command-line interface or the API.
  - the Sample rate. 100% means that all entries are kept.
- Populate other fields as appropriate.
- Click Add.
gcloud
gcloud compute networks subnets create SUBNET_NAME \
    --enable-flow-logs \
    [--logging-aggregation-interval=AGGREGATION_INTERVAL] \
    [--logging-flow-sampling=SAMPLE_RATE] \
    [--logging-filter-expr=FILTER_EXPRESSION] \
    [--logging-metadata=LOGGING_METADATA] \
    [--logging-metadata-fields=METADATA_FIELDS] \
    [other flags as needed]
Replace the following:
- AGGREGATION_INTERVAL: the aggregation interval for flow logs in that subnet. The interval can be set to any of the following: 5-sec (default), 30-sec, 1-min, 5-min, 10-min, or 15-min.
- SAMPLE_RATE: the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
- FILTER_EXPRESSION: an expression that defines which logs you want to keep. For details, see Log filtering.
- LOGGING_METADATA: the metadata annotations that you want to include in the logs:
  - include-all to include all metadata annotations
  - exclude-all to exclude all metadata annotations (default)
  - custom to include a custom list of metadata fields that you specify in METADATA_FIELDS
- METADATA_FIELDS: a comma-separated list of metadata fields you want to include in the logs. For example, src_instance,dst_instance. Can only be set if LOGGING_METADATA is set to custom.
API
Enable VPC Flow Logs when you create a new subnet.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks

{
  "logConfig": {
    "aggregationInterval": "AGGREGATION_INTERVAL",
    "flowSampling": SAMPLING_RATE,
    "filterExpr": EXPRESSION,
    "metadata": METADATA_SETTING,
    "metadataFields": METADATA_FIELDS,
    "enable": true
  },
  "ipCidrRange": "IP_RANGE",
  "network": "NETWORK_URL",
  "name": "SUBNET_NAME"
}
Replace the placeholders with valid values:
- PROJECT_ID: the ID of the project where the subnet will be created.
- REGION: the region where the subnet will be created.
- AGGREGATION_INTERVAL: the aggregation interval for flow logs in the subnet. The interval can be set to any of the following: INTERVAL_5_SEC, INTERVAL_30_SEC, INTERVAL_1_MIN, INTERVAL_5_MIN, INTERVAL_10_MIN, or INTERVAL_15_MIN.
- SAMPLING_RATE: the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
- EXPRESSION: the filter expression you use to filter which logs are actually written. For details, see Log filtering.
- METADATA_SETTING: specifies whether all metadata is logged (INCLUDE_ALL_METADATA), no metadata is logged (EXCLUDE_ALL_METADATA), or only specific metadata is logged (CUSTOM_METADATA). If this field is set to CUSTOM_METADATA, also populate the metadataFields field. Default is EXCLUDE_ALL_METADATA. Refer to metadata annotations for details.
- METADATA_FIELDS: the metadata fields to capture when you have set metadata: CUSTOM_METADATA. This is a comma-separated list of metadata fields, such as src_instance,src_vpc.project_id.
- IP_RANGE: the primary internal IP address range of the subnet.
- NETWORK_URL: the URL of the VPC network where the subnet will be created.
- SUBNET_NAME: a name for the subnet.
For more information, refer to the subnetworks.insert method.
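To make the request body above concrete, here is a minimal sketch in plain Python (no client library) that assembles and sanity-checks a subnetworks.insert payload with flow logging enabled. The project, network, and subnet names are hypothetical:

```python
import json

# Hypothetical values -- substitute your own project, network, and range.
body = {
    "name": "example-subnet",
    "ipCidrRange": "10.10.0.0/24",
    "network": ("https://www.googleapis.com/compute/v1/"
                "projects/example-project/global/networks/example-net"),
    "logConfig": {
        "enable": True,                           # turn VPC Flow Logs on
        "aggregationInterval": "INTERVAL_5_SEC",  # default interval
        "flowSampling": 0.5,                      # default sample rate
        "metadata": "EXCLUDE_ALL_METADATA",       # default metadata setting
    },
}

# Sanity checks mirroring the documented constraints.
valid_intervals = {"INTERVAL_5_SEC", "INTERVAL_30_SEC", "INTERVAL_1_MIN",
                   "INTERVAL_5_MIN", "INTERVAL_10_MIN", "INTERVAL_15_MIN"}
assert body["logConfig"]["aggregationInterval"] in valid_intervals
assert 0.0 <= body["logConfig"]["flowSampling"] <= 1.0

print(json.dumps(body, indent=2))
```

Sending this body to the API endpoint is left out on purpose; the sketch only shows the shape of the payload.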
Enabling VPC Flow Logs for an existing subnet
Console
- Go to the VPC networks page in the Google Cloud Console.
- Click the subnet you want to update.
- Click Edit.
- Under Flow logs, select On.
- If you want to adjust log sampling and aggregation, click Configure logs and adjust any of the following:
  - the Aggregation interval
  - whether or not to Include metadata in the final log entries. By default, Include metadata includes all fields. To customize metadata fields, you must use the gcloud command-line interface or the API.
  - the Sample rate. 100% means that all entries are kept.
- Click Save.
gcloud
gcloud compute networks subnets update SUBNET_NAME \
    --enable-flow-logs \
    [--logging-aggregation-interval=AGGREGATION_INTERVAL] \
    [--logging-flow-sampling=SAMPLE_RATE] \
    [--logging-filter-expr=FILTER_EXPRESSION] \
    [--logging-metadata=LOGGING_METADATA] \
    [--logging-metadata-fields=METADATA_FIELDS] \
    [other flags as needed]
Replace the following:
- AGGREGATION_INTERVAL: the aggregation interval for flow logs in that subnet. The interval can be set to any of the following: 5-sec (default), 30-sec, 1-min, 5-min, 10-min, or 15-min.
- SAMPLE_RATE: the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
- FILTER_EXPRESSION: an expression that defines which logs you want to keep. For details, see Log filtering.
- LOGGING_METADATA: the metadata annotations that you want to include in the logs:
  - include-all to include all metadata annotations
  - exclude-all to exclude all metadata annotations (default)
  - custom to include a custom list of metadata fields that you specify in METADATA_FIELDS
- METADATA_FIELDS: a comma-separated list of metadata fields you want to include in the logs. For example, src_instance,dst_instance. Can only be set if LOGGING_METADATA is set to custom.
API
Enable VPC Flow Logs for an existing subnet.
PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME

{
  "logConfig": {
    "enable": true
    ...other logging fields
  },
  "fingerprint": "SUBNETWORK_FINGERPRINT"
}
Replace the placeholders with valid values:
- PROJECT_ID: the ID of the project where the subnet is located.
- REGION: the region where the subnet is located.
- SUBNET_NAME: the name of the existing subnet.
- SUBNETWORK_FINGERPRINT: the fingerprint ID for the existing subnet, which is provided when you describe a subnet.
- For the other logging fields, see Enabling VPC Flow Logs when you create a subnet.
For more information, refer to the subnetworks.patch method.
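Because subnetworks.patch requires the subnet's current fingerprint, a typical flow is: fetch the subnet, copy its fingerprint into the PATCH body, and send only the fields you change. This sketch stubs the GET response with made-up values rather than calling the API:

```python
# Stubbed subnetworks.get response -- in a real flow this comes from the API;
# the name and fingerprint here are made up for illustration.
current = {
    "name": "example-subnet",
    "fingerprint": "abc123==",
    "logConfig": {"enable": False},
}

# The PATCH body carries only the fields being changed, plus the current
# fingerprint, which guards against concurrent modifications.
patch_body = {
    "logConfig": {"enable": True},
    "fingerprint": current["fingerprint"],
}

print(patch_body)
```

If the fingerprint is stale (the subnet changed since you fetched it), the API rejects the patch, so re-fetch and retry in that case.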
Viewing estimated log volume for existing subnets
The Google Cloud Console provides an estimate of your log volume for existing subnets, which you can use to estimate the cost of enabling flow logs. The estimate is based on flows captured at 5-second aggregation intervals for the subnet over the previous 7 days. The size of each log entry also depends on whether you enable metadata annotations.
- Go to the VPC networks page in the Google Cloud Console.
- Click the subnet that you want to estimate costs for.
- Click Edit.
- Under Flow logs, select On.
- Click Configure logs.
- View Estimated logs generated per day to see the estimate.
- Click Cancel so that none of your changes are saved.
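The console's estimate can be cross-checked with rough arithmetic: daily volume is the number of sampled entries times the average entry size. The numbers below are purely illustrative assumptions, since real entry size depends on your metadata settings:

```python
# Hypothetical numbers -- the real entry size depends on metadata settings
# and the entry count depends on your traffic and sample rate.
entries_per_day = 2_000_000   # sampled flow-log entries per day
bytes_per_entry = 500         # assumed average entry size in bytes
sample_rate = 0.5             # flowSampling in effect

daily_bytes = entries_per_day * bytes_per_entry
daily_gib = daily_bytes / 2**30
print(f"~{daily_gib:.2f} GiB of flow logs per day at {sample_rate:.0%} sampling")
```

Multiplying the daily figure by 30 and by your Logging ingestion price gives a ballpark monthly cost.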
Viewing which subnets have VPC Flow Logs enabled
You can check which subnets in a network have VPC Flow Logs enabled.
Console
- Go to the VPC networks page in the Google Cloud Console.
- View the Flow logs column to see if logging is on or off.
gcloud
gcloud compute networks subnets list \
    --project PROJECT_ID \
    --filter="network=NETWORK_URL" \
    --format="csv(name,logConfig.enable)"
Replace the following:
- PROJECT_ID: the ID of the project you are querying.
- NETWORK_URL: the full URL of the network containing the subnets.
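The gcloud filter above amounts to checking each subnet's logConfig.enable field. As a sketch, the same check applied to a (hypothetical) parsed subnet list, such as the JSON output of a subnets list command:

```python
# Hypothetical parsed output of a `subnets list` call in JSON format.
subnets = [
    {"name": "subnet-a", "logConfig": {"enable": True}},
    {"name": "subnet-b", "logConfig": {"enable": False}},
    {"name": "subnet-c"},  # logConfig absent: flow logs never configured
]

# Subnets with VPC Flow Logs enabled; a missing logConfig counts as off.
enabled = [s["name"] for s in subnets
           if s.get("logConfig", {}).get("enable")]
print(enabled)
```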
Updating VPC Flow Logs parameters
You can modify log sampling parameters. See Log sampling and aggregation for details on the parameters you can control.
Console
- Go to the VPC networks page in the Google Cloud Console.
- Click the subnet you want to update.
- Click Edit.
- Click Configure logs to adjust log sampling and aggregation:
  - the Aggregation interval
  - whether or not to Include metadata in the final log entries. By default, Include metadata includes all fields. To customize metadata fields, you must use the gcloud command-line interface or the API.
  - the Sample rate. 100% means that all entries are kept.
- Click Save.
gcloud
gcloud compute networks subnets update SUBNET_NAME \
    [--logging-aggregation-interval=AGGREGATION_INTERVAL] \
    [--logging-flow-sampling=SAMPLE_RATE] \
    [--logging-filter-expr=FILTER_EXPRESSION] \
    [--logging-metadata=LOGGING_METADATA] \
    [--logging-metadata-fields=METADATA_FIELDS]
Replace the following:
- AGGREGATION_INTERVAL: the aggregation interval for flow logs in that subnet. The interval can be set to any of the following: 5-sec (default), 30-sec, 1-min, 5-min, 10-min, or 15-min.
- SAMPLE_RATE: the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
- FILTER_EXPRESSION: an expression that defines which logs you want to keep. For details, see Log filtering.
- LOGGING_METADATA: the metadata annotations that you want to include in the logs:
  - include-all to include all metadata annotations
  - exclude-all to exclude all metadata annotations (default)
  - custom to include a custom list of metadata fields that you specify in METADATA_FIELDS
- METADATA_FIELDS: a comma-separated list of metadata fields you want to include in the logs. For example, src_instance,dst_instance. Can only be set if LOGGING_METADATA is set to custom.
API
Modify the log sampling fields to update VPC Flow Logs behaviors.
PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME

{
  "logConfig": {
    ...fields to modify
  },
  "fingerprint": "SUBNETWORK_FINGERPRINT"
}
Replace the placeholders with valid values:
- PROJECT_ID: the ID of the project where the subnet is located.
- REGION: the region where the subnet is located.
- SUBNET_NAME: the name of the existing subnet.
- SUBNETWORK_FINGERPRINT: the fingerprint ID for the existing subnet, which is provided when you describe a subnet.
- For the fields that you can modify, see Enabling VPC Flow Logs when you create a subnet.
For more information, refer to the subnetworks.patch method.
Disabling VPC Flow Logs for a subnet
Console
- Go to the VPC networks page in the Google Cloud Console.
- Click the subnet you want to update.
- Click Edit.
- Under Flow logs, select Off.
- Click Save.
gcloud
gcloud compute networks subnets update SUBNET_NAME \ --no-enable-flow-logs
API
Disable VPC Flow Logs on a subnet to stop collecting log records.
PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME

{
  "logConfig": {
    "enable": false
  },
  "fingerprint": "SUBNETWORK_FINGERPRINT"
}
Replace the placeholders with valid values:
- PROJECT_ID: the ID of the project where the subnet is located.
- REGION: the region where the subnet is located.
- SUBNET_NAME: the name of the existing subnet.
- SUBNETWORK_FINGERPRINT: the fingerprint ID for the existing subnet, which is provided when you describe a subnet.
For more information, refer to the subnetworks.patch method.
Accessing logs using Logging
Configuring IAM
Follow the access control guide for Logging.
View logs through the Logs Viewer page. You need your project ID for the queries in the following sections.
Accessing all flow logs
- Go to the Logs page in the Google Cloud Console.
- Select Subnetwork in the first pull-down menu.
- Select compute.googleapis.com/vpc_flows in the second pull-down menu.
- Click OK.
Alternatively:
- Go to the Logs page in the Google Cloud Console.
- On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
- Paste the following into the field. Replace PROJECT_ID with your project ID.

  resource.type="gce_subnetwork"
  logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
- Click Submit filter.
Accessing logs for a specific subnet
- Go to the Logs page in the Google Cloud Console.
- In the first pull-down menu, move the cursor to Subnetwork, then move it to the right to open the individual subnet selection menu.
- In the second pull-down menu, select compute.googleapis.com/vpc_flows.
- Click OK.
Alternatively:
- Go to the Logs page in the Google Cloud Console.
- On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
- Paste the following into the field. Replace PROJECT_ID with your project ID and SUBNET_NAME with your subnet.

  resource.type="gce_subnetwork"
  logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
  resource.labels.subnetwork_name="SUBNET_NAME"
- Click Submit filter.
Accessing logs for a specific VM
- Go to the Logs page in the Google Cloud Console.
- On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
- Paste the following into the field. Replace PROJECT_ID with your project ID and VM_NAME with your VM.

  resource.type="gce_subnetwork"
  logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
  jsonPayload.src_instance.vm_name="VM_NAME"
- Click Submit filter.
Accessing logs for traffic to a specific subnet range
- Go to the Logs page in the Google Cloud Console.
- On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
- Paste the following into the field. Replace PROJECT_ID with your project ID and SUBNET_RANGE with a CIDR range (for example, 192.168.1.0/24).

  resource.type="gce_subnetwork"
  logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
  ip_in_net(jsonPayload.connection.dest_ip, "SUBNET_RANGE")
- Click Submit filter.
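Logging's ip_in_net() performs an ordinary CIDR membership test. If you want to verify which addresses a range matches before using it in a filter, the same check can be reproduced locally with Python's ipaddress module (a sketch with made-up addresses):

```python
import ipaddress

def ip_in_net(ip: str, net: str) -> bool:
    """CIDR membership test, mirroring the Logging ip_in_net() filter function."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(net)

print(ip_in_net("192.168.1.42", "192.168.1.0/24"))  # True
print(ip_in_net("10.0.0.1", "192.168.1.0/24"))      # False
```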
Accessing logs for a specific GKE cluster
- Go to the Logs page in the Google Cloud Console.
- On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
- Paste the following into the field. Replace PROJECT_ID with your project ID and CLUSTER_NAME with your cluster name.

  resource.type="gce_subnetwork"
  logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
  (jsonPayload.src_gke_details.cluster.cluster_name="CLUSTER_NAME" OR jsonPayload.dest_gke_details.cluster.cluster_name="CLUSTER_NAME")
- Click Submit filter.
Accessing logs for specific ports and protocols
For an individual destination port
- Go to the Logs page in the Google Cloud Console.
- On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
- Paste the following into the field. Replace PROJECT_ID with your project ID, PORT with the destination port, and PROTOCOL with the protocol.

  resource.type="gce_subnetwork"
  logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
  jsonPayload.connection.dest_port=PORT
  jsonPayload.connection.protocol=PROTOCOL
- Click Submit filter.
For more than one destination port
- Go to the Logs page in the Google Cloud Console.
- On the right side of the Filter by label or text search field, click the down arrow and select Convert to advanced filter.
- Paste the following into the field. Replace PROJECT_ID with your project ID, PORT1 and PORT2 with the destination ports, and PROTOCOL with the protocol.

  resource.type="gce_subnetwork"
  logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
  jsonPayload.connection.dest_port=(PORT1 OR PORT2)
  jsonPayload.connection.protocol=PROTOCOL
- Click Submit filter.
Exporting logs to BigQuery, Pub/Sub, and custom targets
You can export flow logs from Logging to a destination of your choice as described in the Logging documentation. Refer to the previous section for example filters.
Troubleshooting
No vpc_flows appear in Logging under the gce_subnetwork resource
- Confirm that logging is enabled for the given subnet.
- VPC Flow Logs is supported only for VPC networks. If you have a legacy network, you will not see any logs.
- In Shared VPC networks, logs only appear in the host project, not the service projects. Make sure you look for the logs in the host project.
- Logging exclusion filters can block specified logs. Make sure there are no exclusion rules that discard VPC Flow Logs:
  - Go to Resource usage.
  - Click the Exclusions tab.
  - Verify that no exclusion rules discard VPC Flow Logs.
No RTT or byte values on some of the logs
- RTT measurements may be missing if not enough packets were sampled to capture RTT. This is more likely to happen for low volume connections.
- No RTT values are available for UDP flows.
- Some packets are sent with no payload. If header-only packets were sampled, the bytes value will be 0.
Some flows are missing
- Only UDP and TCP protocols are supported. VPC Flow Logs does not support any other protocols.
- Logs are sampled. Some packets in very low volume flows might be missed.
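The effect of sampling on low-volume flows is easy to quantify. If packets are sampled independently at rate p, the chance that a flow of n packets contributes no log entry at all is (1 - p)^n. This is a rough illustrative model only, not a description of the actual sampler:

```python
# Probability that a small flow is entirely missed under an assumed
# independent per-packet sampling model at rate p (illustration only).
def p_missed(n_packets: int, sample_rate: float) -> float:
    return (1.0 - sample_rate) ** n_packets

for n in (1, 5, 20):
    print(n, round(p_missed(n, 0.5), 6))
```

Under this model a single-packet flow at 50% sampling is missed half the time, while a 20-packet flow is almost always captured.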
Missing GKE annotations in some logs
Refer to GKE annotations to understand details of GKE annotations.
- Make sure Google Kubernetes Engine Monitoring is enabled in the cluster. Some annotations can be missing if GKE Monitoring is not enabled. To check whether GKE Monitoring is enabled in the cluster, follow the instructions.
- If GKE Monitoring is enabled in the cluster and you still see missing GKE annotations, check whether the agent that sends metadata updates to Monitoring is sending updates successfully by visiting the Monitoring API dashboard for your project in the Cloud Console. Errors can occur if the quota for the API is exceeded. Go to the quotas dashboard for the API and check for quota exceeded errors. If there are any, follow the instructions in Managing your quota to request a quota increase.
Missing logs for some GKE flows
Make sure Intranode visibility is enabled in the cluster. Otherwise, flows between Pods on the same node are not logged.
Flow logs appear to be disabled even though you enabled them
When you're configuring a proxy-only subnet for internal HTTP(S) load balancers and you're using the gcloud compute networks subnets command to enable VPC Flow Logs, the command appears to succeed, but flow logs aren't actually enabled. The --enable-flow-logs flag doesn't take effect when you also include the --purpose=INTERNAL_HTTPS_LOAD_BALANCER flag.

When you use the Cloud Console or the API to enable flow logs, you see the error message: "Invalid value for field 'resource.enableFlowLogs': 'true'. Invalid field set in subnetwork with purpose INTERNAL_HTTPS_LOAD_BALANCER."
Because proxy-only subnets have no VMs, VPC Flow Logs is not supported. This is intended behavior.
What's next
- View Logging documentation
- View Logging export documentation