This page provides a conceptual overview of export sinks, which control how Cloud Logging routes logs.
Using sinks, you can route some or all of your logs to supported destinations or exclude log entries from being stored in Cloud Logging. You might want to control how your logs are routed for the following reasons:
- To store logs that are unlikely to be read but that must be retained for compliance purposes.
- To use big-data analysis tools on your logs.
- To stream your logs to other applications, other repositories, or third parties.
How sinks work
All log entries written to the Cloud Logging API pass through the Log Router. Every time a log entry arrives in a Cloud project, folder, billing account, or organization resource, Logging compares the log entry to the filters of the sinks associated with the resource.
Depending on the log sink's configuration, which includes filters and a destination, every log entry received by Cloud Logging falls into one of the following categories:
- Stored in Cloud Logging and not routed elsewhere
- Stored in Cloud Logging and routed to a supported destination
- Not stored in Cloud Logging but routed to a supported destination
- Neither stored in Cloud Logging nor routed to a destination; these log entries are excluded entirely
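These four outcomes can be sketched as a small simulation. This is a hypothetical, simplified model: the `classify` function and predicate-style filters below are illustrative stand-ins for real sinks, which use the Logging query language.

```python
# Simplified model of how the Log Router classifies a log entry.
# Real sinks use the Logging query language; here a "filter" is just
# a Python predicate over an entry dict (an illustrative stand-in).

def classify(entry, storage_sinks, routing_sinks):
    """Return one of the four routing categories for a log entry.

    storage_sinks: sinks whose destination is a Cloud Logging log bucket.
    routing_sinks: sinks whose destination is outside Cloud Logging
    (Cloud Storage, BigQuery, Pub/Sub).
    """
    stored = any(f(entry) for f in storage_sinks)
    routed = any(f(entry) for f in routing_sinks)
    if stored and routed:
        return "stored and routed"
    if stored:
        return "stored only"
    if routed:
        return "routed only"
    return "excluded entirely"

# Hypothetical sinks: store only ERROR entries, route audit logs elsewhere.
store_errors = lambda e: e["severity"] == "ERROR"
route_audit = lambda e: e["log_name"].endswith("cloudaudit.googleapis.com")

entry = {"severity": "ERROR", "log_name": "projects/p/logs/app"}
print(classify(entry, [store_errors], [route_audit]))  # stored only
```

The point of the sketch is that storage and routing are decided independently per entry: each sink's filters are evaluated against every entry that arrives, and the combination of matches determines the category.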
Cloud Logging provides two predefined log sinks for each Google Cloud project: _Required and _Default. All logs that are generated in a Google Cloud project are automatically processed through these two log sinks and then stored in the correspondingly named log buckets.
Log sinks act independently of each other. Regardless of how the predefined log sinks process your log entries, you can create sinks to route some or all of your logs to various supported destinations or to exclude them entirely from being stored by Cloud Logging.
Sinks are usually created in Google Cloud projects. To set up sinks at the organization, folder, or billing account levels, use aggregated sinks.
For more details on how sinks route logs, see Log Router overview.
Routing logs involves creating a sink with a filter that selects the log entries you want to route, and choosing a destination from the following options:
- Cloud Storage: JSON files stored in Cloud Storage buckets; provides inexpensive, long-term storage.
- BigQuery: Tables created in BigQuery datasets; provides big data analysis capabilities.
- Pub/Sub: JSON-formatted messages delivered to Pub/Sub topics; supports third-party integrations, such as Splunk, with Logging.
- Cloud Logging: Log entries held in log buckets; provides storage in Cloud Logging with customizable retention periods.
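As an illustrative sketch, creating such a sink with the gcloud command-line tool might look like the following; the sink name, bucket, project, and filter are hypothetical placeholders, not values from this page.

```shell
# Hypothetical example: route Compute Engine instance logs to a
# Cloud Storage bucket. Replace my-gce-sink, my-logs-bucket, and
# my-project with your own names.
gcloud logging sinks create my-gce-sink \
  storage.googleapis.com/my-logs-bucket \
  --project=my-project \
  --log-filter='resource.type="gce_instance"'
```

After the sink is created, you must still grant its writer identity permission to write to the destination, as described under Sink properties and terminology.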
Sink properties and terminology
Sinks have several properties, including filters and a destination:
Sink identifier: A name for the sink.
Parent resource: The Google Cloud resource in which you create the sink. The parent can be any of the following:
- Cloud projects
- Organizations
- Folders
- Billing accounts
A sink routes only the logs that belong to its parent resource. You can also use aggregated sinks to combine and route logs from all the Cloud projects, folders, and billing accounts of a Google Cloud organization.
The full resource name of a sink includes its parent resource and sink identifier. For example, a sink in a Cloud project has a full resource name of the form projects/PROJECT_ID/sinks/SINK_NAME.
Inclusion filter: Selects which log entries to route through this sink. For inclusion filter examples, see Sample queries.
Exclusion filter: Selects which log entries to explicitly exclude from routing, even if the log entries match the sink's inclusion filter.
A sink can contain multiple exclusion filters; if a log entry matches any one of them, the log entry is excluded from routing.
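The interaction between the inclusion filter and exclusion filters can be sketched as follows. The predicates here are plain Python stand-ins for Logging query-language expressions, and the field names are hypothetical.

```python
# A sink routes an entry only if it matches the inclusion filter
# AND matches none of the exclusion filters.

def sink_routes(entry, inclusion, exclusions):
    if not inclusion(entry):
        return False
    # Any single matching exclusion filter removes the entry.
    return not any(excl(entry) for excl in exclusions)

# Hypothetical filters: include all Compute Engine logs, but drop
# DEBUG-level entries and health-check request logs.
include_gce = lambda e: e["resource_type"] == "gce_instance"
drop_debug = lambda e: e["severity"] == "DEBUG"
drop_health = lambda e: "healthz" in e.get("path", "")

entry = {"resource_type": "gce_instance", "severity": "DEBUG"}
print(sink_routes(entry, include_gce, [drop_debug, drop_health]))  # False
```

Note the asymmetry: an entry must match the single inclusion filter, but matching any one exclusion filter is enough to drop it.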
Destination: A place to send the log entries matching your filter. The following are the names of the supported destinations:
Cloud Storage buckets: storage.googleapis.com/BUCKET_ID
BigQuery datasets: bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID
Pub/Sub topics: pubsub.googleapis.com/projects/PROJECT_ID/topics/TOPIC_ID
Logging log buckets: logging.googleapis.com/projects/PROJECT_ID/locations/LOCATION_ID/buckets/BUCKET_ID
You can route logs to a destination in any Cloud project, as long as the destination's owner authorizes the sink's service account as a writer.
Writer identity: The name of a service account. The destination's owner must give this service account permission to write to the destination; when routing logs, Logging adopts this identity for authorization. For increased security, each new sink is given its own unique service account.
For more information, see destination permissions.
includeChildren: This property is described in Aggregated sinks. It is only relevant to sinks created for organizations, folders, or billing accounts.
For more details about sinks, see the LogSink type.
To create or modify a sink, you must have the Identity and Access Management (IAM) roles Owner or Logging/Logs Configuration Writer in the sink's parent resource. To view existing sinks, you must have the IAM roles Viewer or Logging/Logs Viewer in the sink's parent resource. For more information, see Access control.
To route logs to a destination, the sink's writer service account must be permitted to write to the destination. For more information about writer identities, read Sink properties on this page.
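As a hypothetical sketch, granting a sink's writer identity the ability to write to a Cloud Storage destination could look like the following; the sink and bucket names are placeholders, and the writer identity is read from the sink's writerIdentity field rather than typed by hand.

```shell
# Hypothetical sink and bucket names. The sink's writer identity
# (already prefixed with "serviceAccount:") is read from the sink itself.
WRITER=$(gcloud logging sinks describe my-gce-sink \
  --format='value(writerIdentity)')

# Grant the writer identity object-creation rights on the destination bucket.
gcloud storage buckets add-iam-policy-binding gs://my-logs-bucket \
  --member="$WRITER" \
  --role='roles/storage.objectCreator'
```

Until this grant is in place, the sink exists but its routed log entries cannot be written to the destination.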
Cloud Logging doesn't charge to route logs, but destination charges might apply. For details, review the appropriate service's pricing details:
Note also that if you route your Virtual Private Cloud (VPC) flow logs to a destination and then exclude them from Cloud Logging storage, VPC flow log generation charges apply in addition to the destination charges.
Route your logs
To learn how to create and manage sinks to route your logs, see the following pages:
- To use the Google Cloud Console, see Exporting with the Log Router.
- To use the Logging API, see Exporting logs in the API.
- To use the gcloud command-line tool, see gcloud logging.
Find and use your logs
To learn about the format of routed log entries and how the logs are organized in destinations, see Using exported logs.
Explore Logging routing scenarios
The following tutorials describe scenarios for which you might want to route logs. Each tutorial details the requirements, setup, and usage, and shows how to share the routed logs.
- Scenario – Export for compliance requirements
- Scenario – Export for security and access analytics
- Scenario – Export to Splunk
- Scenario – Export to Elasticsearch