Use VPC Flow Logs
VPC Flow Logs records a sample of network flows sent from and received by VM instances, including instances used as GKE nodes. You can use these logs for network monitoring, forensics, real-time security analysis, and cost optimization.
This page assumes you are familiar with the concepts described in VPC Flow Logs.
Enable VPC Flow Logs
When you enable VPC Flow Logs, you enable them for all VMs in a subnet. However, you can reduce the amount of information written to Logging. See Log sampling and aggregation for details on the parameters you can control.
VPC Flow Logs and GKE annotations
Viewing Google Kubernetes Engine annotations in VPC Flow Logs is supported from GKE version 1.12.7. The GKE version you use determines whether you need to enable Cloud Operations for GKE.
GKE cluster version 1.15 or later: Cloud Operations for GKE is not required to view GKE annotations.
GKE cluster version 1.14: Cloud Operations for GKE is required to view GKE annotations, but it is enabled by default. No further action is required.
GKE cluster version earlier than 1.14: Cloud Operations for GKE is required to view GKE annotations. Enabling Cloud Operations for GKE support in the GKE cluster ensures that GKE metadata updates are sent to Logging.
Cloud Operations for GKE support can be configured when you create new clusters or when you modify existing clusters. For more information, see Configuring Cloud Operations for GKE.
Enable VPC Flow Logs when you create a subnet
Console
In the Google Cloud console, go to the VPC networks page.
Click the network where you want to add a subnet.
Click Add subnet.
For Flow logs, select On.
If you want to adjust log sampling and aggregation, click Configure logs and adjust any of the following:
- The Aggregation interval.
- Whether to include metadata in the final log entries. By default, Include metadata includes all fields. To customize metadata fields, you must use the Google Cloud CLI or the API.
- The Sample rate. 100% means that all entries are kept.
Populate other fields as appropriate.
Click Add.
gcloud
gcloud compute networks subnets create SUBNET_NAME \
    --enable-flow-logs \
    [--logging-aggregation-interval=AGGREGATION_INTERVAL] \
    [--logging-flow-sampling=SAMPLE_RATE] \
    [--logging-filter-expr=FILTER_EXPRESSION] \
    [--logging-metadata=LOGGING_METADATA] \
    [--logging-metadata-fields=METADATA_FIELDS] \
    [other flags as needed]
Replace the following:
- AGGREGATION_INTERVAL: the aggregation interval for flow logs in that subnet. The interval can be set to any of the following: 5-sec (default), 30-sec, 1-min, 5-min, 10-min, or 15-min.
- SAMPLE_RATE: the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
- FILTER_EXPRESSION: an expression that defines which logs you want to keep. For details, see Log filtering.
- LOGGING_METADATA: the metadata annotations that you want to include in the logs:
  - include-all to include all metadata annotations
  - exclude-all to exclude all metadata annotations (default)
  - custom to include a custom list of metadata fields that you specify in METADATA_FIELDS
- METADATA_FIELDS: a comma-separated list of metadata fields that you want to include in the logs. For example, src_instance,dst_instance. Can only be set if LOGGING_METADATA is set to custom.
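For example, a command along the following lines might create a subnet with flow logs enabled, 50% sampling, and a custom metadata list. The subnet, network, region, and range are hypothetical placeholders, not values from this guide:

```shell
# Hypothetical example: subnet, network, region, and range are placeholders.
# Creates a subnet with VPC Flow Logs enabled, 50% sampling, and only
# two metadata fields included in each log entry.
gcloud compute networks subnets create log-test-subnet \
    --network=my-network \
    --region=us-central1 \
    --range=10.10.0.0/24 \
    --enable-flow-logs \
    --logging-flow-sampling=0.5 \
    --logging-metadata=custom \
    --logging-metadata-fields=src_instance,dst_instance
```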
API
Enable VPC Flow Logs when you create a new subnet.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks

{
  "logConfig": {
    "aggregationInterval": "AGGREGATION_INTERVAL",
    "flowSampling": SAMPLING_RATE,
    "filterExpr": EXPRESSION,
    "metadata": METADATA_SETTING,
    "metadataFields": METADATA_FIELDS,
    "enable": true
  },
  "ipCidrRange": "IP_RANGE",
  "network": "NETWORK_URL",
  "name": "SUBNET_NAME"
}
Replace the placeholders with valid values:
- PROJECT_ID: the ID of the project where the subnet will be created.
- REGION: the region where the subnet will be created.
- AGGREGATION_INTERVAL: the aggregation interval for flow logs in the subnet. The interval can be set to any of the following: INTERVAL_5_SEC, INTERVAL_30_SEC, INTERVAL_1_MIN, INTERVAL_5_MIN, INTERVAL_10_MIN, or INTERVAL_15_MIN.
- SAMPLING_RATE: the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
- EXPRESSION: the filter expression that you use to filter which logs are actually written. For details, see Log filtering.
- METADATA_SETTING: specifies whether all metadata is logged (INCLUDE_ALL_METADATA), no metadata is logged (EXCLUDE_ALL_METADATA), or only specific metadata is logged (CUSTOM_METADATA). If this field is set to CUSTOM_METADATA, also populate the metadataFields field. Default is EXCLUDE_ALL_METADATA. Refer to metadata annotations for details.
- METADATA_FIELDS: the metadata fields to capture when you have set metadata: CUSTOM_METADATA. This is a comma-separated list of metadata fields, such as src_instance,src_vpc.project_id.
- IP_RANGE: the primary internal IP address range of the subnet.
- NETWORK_URL: the URL of the VPC network where the subnet will be created.
- SUBNET_NAME: a name for the subnet.
For more information, refer to the subnetworks.insert method.
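As a sketch, the request can be issued with curl, using gcloud only to mint an access token. PROJECT_ID, REGION, and the body values below are placeholders:

```shell
# Sketch: sends a subnetworks.insert request with flow logs enabled.
# PROJECT_ID, REGION, and all body values are placeholders.
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks" \
    -d '{
      "name": "log-test-subnet",
      "network": "projects/PROJECT_ID/global/networks/my-network",
      "ipCidrRange": "10.10.0.0/24",
      "logConfig": {
        "enable": true,
        "aggregationInterval": "INTERVAL_1_MIN",
        "flowSampling": 0.5,
        "metadata": "INCLUDE_ALL_METADATA"
      }
    }'
```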
Terraform
You can use a Terraform module to create a custom mode VPC network and subnets.
The following example creates three subnets:

- subnet-01 has VPC Flow Logs disabled. When you create a subnet, VPC Flow Logs are disabled unless you explicitly enable them.
- subnet-02 has VPC Flow Logs enabled with the default flow log settings.
- subnet-03 has VPC Flow Logs enabled with custom settings.
To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.
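The example configuration itself is not reproduced here. As a sketch only (resource names, region, ranges, and the network reference are hypothetical), flow logs on a subnet are controlled through the google_compute_subnetwork resource's log_config block:

```terraform
# Sketch only: names, region, ranges, and network reference are hypothetical.
# subnet-02 enables flow logs with default settings; subnet-03 customizes
# the aggregation interval, sampling rate, and metadata.
resource "google_compute_subnetwork" "subnet_02" {
  name          = "subnet-02"
  ip_cidr_range = "10.10.20.0/24"
  region        = "us-west1"
  network       = google_compute_network.vpc_network.id

  # An empty log_config block enables flow logs with default settings.
  log_config {}
}

resource "google_compute_subnetwork" "subnet_03" {
  name          = "subnet-03"
  ip_cidr_range = "10.10.30.0/24"
  region        = "us-west1"
  network       = google_compute_network.vpc_network.id

  log_config {
    aggregation_interval = "INTERVAL_10_MIN"
    flow_sampling        = 0.7
    metadata             = "INCLUDE_ALL_METADATA"
  }
}
```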
Enable VPC Flow Logs for an existing subnet
Console
In the Google Cloud console, go to the VPC networks page.
Click the subnet that you want to update.
Click Edit.
For Flow logs, select On.
If you want to adjust log sampling and aggregation, click Configure logs and adjust any of the following:
- The Aggregation interval.
- Whether to include metadata in the final log entries. By default, Include metadata includes all fields. To customize metadata fields, you must use the Google Cloud CLI or the API.
- The Sample rate. 100% means that all entries are kept.
Click Save.
gcloud
gcloud compute networks subnets update SUBNET_NAME \
    --enable-flow-logs \
    [--logging-aggregation-interval=AGGREGATION_INTERVAL] \
    [--logging-flow-sampling=SAMPLE_RATE] \
    [--logging-filter-expr=FILTER_EXPRESSION] \
    [--logging-metadata=LOGGING_METADATA] \
    [--logging-metadata-fields=METADATA_FIELDS] \
    [other flags as needed]
Replace the following:
- AGGREGATION_INTERVAL: the aggregation interval for flow logs in that subnet. The interval can be set to any of the following: 5-sec (default), 30-sec, 1-min, 5-min, 10-min, or 15-min.
- SAMPLE_RATE: the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
- FILTER_EXPRESSION: an expression that defines which logs you want to keep. For details, see Log filtering.
- LOGGING_METADATA: the metadata annotations that you want to include in the logs:
  - include-all to include all metadata annotations
  - exclude-all to exclude all metadata annotations (default)
  - custom to include a custom list of metadata fields that you specify in METADATA_FIELDS
- METADATA_FIELDS: a comma-separated list of metadata fields that you want to include in the logs. For example, src_instance,dst_instance. Can only be set if LOGGING_METADATA is set to custom.
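For example, to turn on flow logs for a hypothetical existing subnet and keep only flows to destination port 443, a command along these lines might be used. The subnet name, region, and filter expression are illustrative placeholders:

```shell
# Hypothetical example: subnet name and region are placeholders, and the
# filter expression is illustrative. Enables flow logs at full sampling
# and keeps only flows whose destination port is 443.
gcloud compute networks subnets update my-subnet \
    --region=us-central1 \
    --enable-flow-logs \
    --logging-flow-sampling=1.0 \
    --logging-filter-expr='connection.dest_port == 443'
```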
API
Enable VPC Flow Logs for an existing subnet.
PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME

{
  "logConfig": {
    "enable": true
    ...other logging fields
  },
  "fingerprint": "SUBNETWORK_FINGERPRINT"
}
Replace the placeholders with valid values:
- PROJECT_ID: the ID of the project where the subnet is located.
- REGION: the region where the subnet is located.
- SUBNET_NAME: the name of the existing subnet.
- SUBNETWORK_FINGERPRINT: the fingerprint ID for the existing subnet, which is provided when you describe a subnet.
- For the other logging fields, see Enable VPC Flow Logs when you create a subnet.
For more information, refer to the subnetworks.patch method.
View estimated log volume for existing subnets
The Google Cloud console provides an estimate of your log volume for existing subnets, which you can then use to estimate the cost of enabling flow logs. The estimate is based on flows captured at 5-second intervals for the subnet over the previous 7 days. Also, the size of each log depends on whether you enable metadata annotations.
Console
In the Google Cloud console, go to the VPC networks page.
Click the subnet that you want to estimate costs for.
Click Edit.
For Flow logs, select On.
Click Configure logs.
View Estimated logs generated per day to see the estimate.
Click Cancel so that none of your changes are saved.
View which subnets have VPC Flow Logs enabled
You can check which subnets in a network have VPC Flow Logs enabled.
Console
In the Google Cloud console, go to the VPC networks page.
View the Flow logs column to see if logging is on or off.
gcloud
gcloud compute networks subnets list \
    --project PROJECT_ID \
    --network="NETWORK" \
    --format="csv(name,region,logConfig.enable)"
Replace the following:
- PROJECT_ID: the ID of the project you are querying.
- NETWORK: the name of the network containing the subnets.
Update VPC Flow Logs parameters
You can modify log sampling parameters. See Log sampling and aggregation for details on the parameters you can control.
Console
In the Google Cloud console, go to the VPC networks page.
Click the subnet that you want to update.
Click Edit.
If you want to adjust log sampling and aggregation, click Configure logs and adjust any of the following:
- The Aggregation interval.
- Whether to include metadata in the final log entries. By default, Include metadata includes all fields. To customize metadata fields, you must use the Google Cloud CLI or the API.
- The Sample rate. 100% means that all entries are kept.
Click Save.
gcloud
gcloud compute networks subnets update SUBNET_NAME \
    [--logging-aggregation-interval=AGGREGATION_INTERVAL] \
    [--logging-flow-sampling=SAMPLE_RATE] \
    [--logging-filter-expr=FILTER_EXPRESSION] \
    [--logging-metadata=LOGGING_METADATA] \
    [--logging-metadata-fields=METADATA_FIELDS]
Replace the following:
- AGGREGATION_INTERVAL: the aggregation interval for flow logs in that subnet. The interval can be set to any of the following: 5-sec (default), 30-sec, 1-min, 5-min, 10-min, or 15-min.
- SAMPLE_RATE: the flow sampling rate. Flow sampling can be set from 0.0 (no sampling) to 1.0 (all logs). Default is 0.5.
- FILTER_EXPRESSION: an expression that defines which logs you want to keep. For details, see Log filtering.
- LOGGING_METADATA: the metadata annotations that you want to include in the logs:
  - include-all to include all metadata annotations
  - exclude-all to exclude all metadata annotations (default)
  - custom to include a custom list of metadata fields that you specify in METADATA_FIELDS
- METADATA_FIELDS: a comma-separated list of metadata fields that you want to include in the logs. For example, src_instance,dst_instance. Can only be set if LOGGING_METADATA is set to custom.
API
Modify the log sampling fields to update VPC Flow Logs behaviors.
PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME

{
  "logConfig": {
    ...fields to modify
  },
  "fingerprint": "SUBNETWORK_FINGERPRINT"
}
Replace the placeholders with valid values:
- PROJECT_ID: the ID of the project where the subnet is located.
- REGION: the region where the subnet is located.
- SUBNET_NAME: the name of the existing subnet.
- SUBNETWORK_FINGERPRINT: the fingerprint ID for the existing subnet, which is provided when you describe a subnet.
- For the fields that you can modify, see Enable VPC Flow Logs when you create a subnet.
For more information, refer to the subnetworks.patch method.
Disable VPC Flow Logs for a subnet
Console
In the Google Cloud console, go to the VPC networks page.
Click the subnet that you want to update.
Click Edit.
For Flow logs, select Off.
Click Save.
gcloud
gcloud compute networks subnets update SUBNET_NAME \
    --no-enable-flow-logs
API
Disable VPC Flow Logs on a subnet to stop collecting log records.
PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME

{
  "logConfig": {
    "enable": false
  },
  "fingerprint": "SUBNETWORK_FINGERPRINT"
}
Replace the placeholders with valid values:
- PROJECT_ID: the ID of the project where the subnet is located.
- REGION: the region where the subnet is located.
- SUBNET_NAME: the name of the existing subnet.
- SUBNETWORK_FINGERPRINT: the fingerprint ID for the existing subnet, which is provided when you describe a subnet.
For more information, refer to the subnetworks.patch method.
Access logs by using Logging
You can view VPC Flow Logs by using the Logs Explorer. To use the following queries, you need your project's project ID.
Configure IAM
To configure access control for logging, see the access control guide for Logging.
Access all flow logs
Console
In the Google Cloud console, go to the Logs Explorer page.
Click Resource.
In the Select resource list, click Subnetwork, and then click Apply.
Click Log name.
In the Select log names list, click vpc_flows, and then click Apply.
Alternatively:
Console
In the Google Cloud console, go to the Logs Explorer page.
If you don't see the query editor field in the Query pane, click the Show query toggle.
Paste the following into the query editor field. Replace PROJECT_ID with your project ID.

resource.type="gce_subnetwork"
logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
Click Run query.
Access logs for a specific subnet
Console
In the Google Cloud console, go to the Logs Explorer page.
Click Resource.
In the Select resource list, click Subnetwork.
In the Subnetwork ID list, select the subnet, and then click Apply.
In the Select log names list, click vpc_flows, and then click Apply.
Alternatively:
Console
In the Google Cloud console, go to the Logs Explorer page.
If you don't see the query editor field in the Query pane, click the Show query toggle.
Paste the following into the query editor field. Replace PROJECT_ID with your project ID and SUBNET_NAME with the name of your subnetwork.

resource.type="gce_subnetwork"
logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
resource.labels.subnetwork_name="SUBNET_NAME"
Click Run query.
Access logs for a specific VM
Console
In the Google Cloud console, go to the Logs Explorer page.
If you don't see the query editor field in the Query pane, click the Show query toggle.
Paste the following into the query editor field. Replace PROJECT_ID with your project ID and VM_NAME with the name of your VM.

resource.type="gce_subnetwork"
logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
jsonPayload.src_instance.vm_name="VM_NAME"
Click Run query.
Access logs for traffic to a specific subnet range
Console
In the Google Cloud console, go to the Logs Explorer page.
If you don't see the query editor field in the Query pane, click the Show query toggle.
Paste the following into the query editor field. Replace PROJECT_ID with your project ID and SUBNET_RANGE with a CIDR range such as 192.168.1.0/24.

resource.type="gce_subnetwork"
logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
ip_in_net(jsonPayload.connection.dest_ip, SUBNET_RANGE)
Click Run query.
Access logs for a specific GKE cluster
Console
In the Google Cloud console, go to the Logs Explorer page.
If you don't see the query editor field in the Query pane, click the Show query toggle.
Paste the following into the query editor field. Replace PROJECT_ID with your project ID and CLUSTER_NAME with the name of your cluster.

resource.type="k8s_cluster"
logName="projects/PROJECT_ID/logs/vpc_flows"
resource.labels.cluster_name="CLUSTER_NAME"
Click Run query.
Access logs for only egress traffic from a subnet
Console
In the Google Cloud console, go to the Logs Explorer page.
If you don't see the query editor field in the Query pane, click the Show query toggle.
Paste the following into the query editor field. Replace PROJECT_ID with the ID of your project and SUBNET_NAME with the name of the subnet to view egress traffic from.

logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
AND jsonPayload.reporter="SRC"
AND jsonPayload.src_vpc.subnetwork_name="SUBNET_NAME"
AND (jsonPayload.dest_vpc.subnetwork_name!="SUBNET_NAME" OR NOT jsonPayload.dest_vpc.subnetwork_name:*)
Click Run query.
Access logs for all egress traffic from a VPC network
Console
In the Google Cloud console, go to the Logs Explorer page.
If you don't see the query editor field in the Query pane, click the Show query toggle.
Paste the following into the query editor field. Replace PROJECT_ID with the ID of your project and VPC_NAME with the name of your VPC network.

logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
AND jsonPayload.reporter="SRC"
AND jsonPayload.src_vpc.vpc_name="VPC_NAME"
AND (jsonPayload.dest_vpc.vpc_name!="VPC_NAME" OR NOT jsonPayload.dest_vpc:*)
Click Run query.
Access logs for specific ports and protocols
For an individual destination port
Console
In the Google Cloud console, go to the Logs Explorer page.
If you don't see the query editor field in the Query pane, click the Show query toggle.
Paste the following into the query editor field. Replace PROJECT_ID with your project ID, PORT with the destination port, and PROTOCOL with the protocol.

resource.type="gce_subnetwork"
logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
jsonPayload.connection.dest_port=PORT
jsonPayload.connection.protocol=PROTOCOL
Click Run query.
For more than one destination port
Console
In the Google Cloud console, go to the Logs Explorer page.
If you don't see the query editor field in the Query pane, click the Show query toggle.
Paste the following into the query editor field. Replace PROJECT_ID with your project ID, PORT1 and PORT2 with the destination ports, and PROTOCOL with the protocol.

resource.type="gce_subnetwork"
logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"
jsonPayload.connection.dest_port=(PORT1 OR PORT2)
jsonPayload.connection.protocol=PROTOCOL
Click Run query.
Route logs to BigQuery, Pub/Sub, and custom targets
You can route flow logs from Logging to a destination of your choice as described in the Routing and storage overview in the Logging documentation. Refer to the previous section for example filters.
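For instance, a log sink that routes all VPC flow logs to a BigQuery dataset could be sketched as follows. The sink name, project ID, and dataset name are hypothetical placeholders:

```shell
# Sketch: sink name, PROJECT_ID, and dataset name are placeholders.
# Creates a sink that routes all vpc_flows entries to a BigQuery dataset.
gcloud logging sinks create my-vpc-flows-sink \
    bigquery.googleapis.com/projects/PROJECT_ID/datasets/vpc_flows_dataset \
    --log-filter='resource.type="gce_subnetwork" logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"'
```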
Troubleshooting
No vpc_flows appear in Logging for the gce_subnetwork resource
- Confirm that logging is enabled for the given subnet.
- VPC Flow Logs is supported only for VPC networks. If you have a legacy network, you won't see any logs.
- In Shared VPC networks, logs only appear in the host project, not the service projects. Make sure you look for the logs in the host project.
- Logging exclusion filters block specified logs. Make sure there are no exclusion rules that discard VPC Flow Logs:
  - Go to Resource usage.
  - Click the Exclusions tab.
  - Verify that no exclusion rules discard VPC Flow Logs.
No RTT or byte values on some of the logs
- RTT measurements may be missing if not enough packets were sampled to capture RTT. This is more likely to happen for low volume connections.
- RTT values are available only for TCP flows.
- Some packets are sent with no payload. If header-only packets were sampled, the bytes value will be 0.
Some flows are missing
- Only TCP, UDP, ICMP, ESP, and GRE protocols are supported. VPC Flow Logs does not support any other protocols.
- Logs are sampled. Some packets in very low volume flows might be missed.
Missing GKE annotations in some logs
- Make sure that your GKE cluster is at version 1.12.7 or later.
- If your GKE cluster is at version 1.14 or earlier, make sure that Cloud Operations for GKE is enabled in the cluster; some annotations might be missing if it is not. To check whether Cloud Operations for GKE is enabled in the cluster, see Which monitoring and logging support does my cluster use.
- If Cloud Operations for GKE is enabled in the cluster and you are still seeing missing GKE annotations, there might be an issue with quotas or permissions:
- Check the Stackdriver quota for Requests per minute per user. If the quota is exceeded, you can request an increase. For more information, see Managing your quota.
- Check that the service account used by the cluster to send data has the Stackdriver Resource Metadata Writer (roles/stackdriver.resourceMetadata.writer) role. To identify which service account is used by the cluster, go to the Stackdriver API metrics page. In the Select Graphs drop-down menu, select Traffic by credential.
Missing logs for some GKE flows
Make sure Intranode visibility is enabled in the cluster. Otherwise, flows between Pods on the same node are not logged.
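Intranode visibility can be turned on for an existing cluster with a command like the following; the cluster name and region are placeholders:

```shell
# Cluster name and region are placeholders. Enables intranode visibility
# so that Pod-to-Pod flows on the same node pass through the VPC network
# and can appear in VPC Flow Logs.
gcloud container clusters update my-cluster \
    --region=us-central1 \
    --enable-intra-node-visibility
```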
Flow logs appear to be disabled even though you enabled them
- When you configure a proxy-only subnet for internal HTTP(S) load balancers and use the gcloud compute networks subnets command to enable VPC Flow Logs, the command appears to succeed, but flow logs aren't actually enabled. The --enable-flow-logs flag doesn't take effect when you also include the --purpose=INTERNAL_HTTPS_LOAD_BALANCER flag.
- When you use the Google Cloud console or the API to enable flow logs, you see the error message: "Invalid value for field 'resource.enableFlowLogs': 'true'. Invalid field set in subnetwork with purpose INTERNAL_HTTPS_LOAD_BALANCER."
Because proxy-only subnets have no VMs, VPC Flow Logs is not supported. This is intended behavior.
What's next
- View Logging documentation
- View Logging sinks documentation