This document explains how Cloud Logging processes log entries, and describes the key components of Logging routing and storage. Routing refers to the process that Cloud Logging uses to determine what to do with a newly arrived log entry. You can route log entries to destinations like Logging buckets, which store the log entry, or to Pub/Sub. By subscribing to the Pub/Sub topic, you can export the log entry from Google Cloud and then forward it to another destination, like a third-party platform.
At a high level, the following sections describe how Cloud Logging routes and stores log entries.
Ingesting and routing logs with the Logs Router
The following sections explain how logs are ingested by Logging and routed through the Logs Router using sinks.
Cloud Logging receives log entries through the Cloud Logging API, where they pass through the Logs Router. The sinks in the Logs Router check each log entry against their inclusion and exclusion filters, which determine the destinations, including Cloud Logging buckets, to which the log entry is sent. You can use combinations of sinks to route logs to multiple destinations.
To route logs reliably, the Logs Router also stores the logs temporarily, which buffers against temporary disruptions on any sink. Note that the Logs Router's temporary storage is distinct from the longer-term storage provided by Logging buckets.
Incoming log entries with timestamps that are more than the logs retention period in the past or that are more than 24 hours in the future are discarded.
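The acceptance window described above can be sketched as follows. This is an illustrative check, not Logging's actual implementation; the 30-day retention period is an assumed example, since the real past bound depends on the destination bucket's retention.

```python
from datetime import datetime, timedelta, timezone

# Assumed example values: the past bound depends on the bucket's retention
# period; the 24-hour future bound is the one stated above.
RETENTION = timedelta(days=30)
FUTURE_LIMIT = timedelta(hours=24)

def is_accepted(entry_timestamp, now):
    """Return True if a log entry's timestamp falls inside the accepted window."""
    return now - RETENTION <= entry_timestamp <= now + FUTURE_LIMIT

now = datetime(2024, 1, 31, tzinfo=timezone.utc)
print(is_accepted(now - timedelta(days=10), now))   # True: within retention
print(is_accepted(now - timedelta(days=40), now))   # False: older than retention
print(is_accepted(now + timedelta(hours=48), now))  # False: too far in the future
```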
Sinks control how Cloud Logging routes logs. Using sinks, you can route some or all of your logs to supported destinations. Some of the reasons that you might want to control how your logs are routed include the following:
- To store logs that are unlikely to be read but that must be retained for compliance purposes.
- To organize your logs in buckets in a format that is useful to you.
- To use big-data analysis tools on your logs.
- To stream your logs to other applications, other repositories, or third parties. For example, if you want to export your logs from Google Cloud so that you can view them on a third-party platform, then configure a sink to route your log entries to Pub/Sub.
Sinks belong to a given Google Cloud resource: a Cloud project, billing account, folder, or organization. When the resource receives a log entry, it routes the log entry according to the sinks contained by that resource and, if enabled, any sinks inherited from ancestors in the resource hierarchy. The log entry is sent to the destination associated with each matching sink.
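As a rough sketch of this hierarchy-aware matching, the following hypothetical helper collects the sinks that apply to a resource: the resource's own sinks plus any ancestor sinks configured to include child resources (aggregated sinks). All names and the data layout are illustrative, not part of any Logging API.

```python
def applicable_sinks(resource, hierarchy, sinks):
    """hierarchy maps each resource to its parent; sinks maps a resource to
    a list of (sink_name, include_children) tuples."""
    # The resource's own sinks always apply.
    result = [name for name, _ in sinks.get(resource, [])]
    # Ancestor sinks apply only when configured to include children.
    parent = hierarchy.get(resource)
    while parent is not None:
        result += [name for name, include in sinks.get(parent, []) if include]
        parent = hierarchy.get(parent)
    return result

hierarchy = {"project-a": "folder-1", "folder-1": "org"}
sinks = {
    "project-a": [("proj-sink", False)],
    "folder-1": [("folder-agg-sink", True)],   # aggregated: applies to children
    "org": [("org-sink", False)],              # not aggregated: org-level only
}
print(applicable_sinks("project-a", hierarchy, sinks))
# ['proj-sink', 'folder-agg-sink']
```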
Cloud Logging provides two predefined sinks for each Cloud project, billing account, folder, and organization: _Required and _Default. All logs that are generated in a resource are automatically processed through these two sinks and then stored in the correspondingly named _Required and _Default buckets.
Sinks act independently of each other. Regardless of how the predefined sinks process your log entries, you can create your own sinks to route some or all of your logs to various supported destinations or to exclude them from being stored by Cloud Logging.
The routing behavior for each sink is controlled by configuring the inclusion filter and exclusion filters for that sink. Depending on the sink's configuration, every log entry received by Cloud Logging falls into one or more of these categories:
- Stored in Cloud Logging and not routed elsewhere.
- Stored in Cloud Logging and routed to a supported destination.
- Not stored in Cloud Logging but routed to a supported destination.
- Neither stored in Cloud Logging nor routed elsewhere.
You can't route log entries that Logging received before your sink was created because routing happens as logs pass through the Logging API, and new routing rules only apply to logs written after those rules have been created. If you need to route log entries retroactively, see Copy logs.
For any new sink, if you don't specify filters, all logs match and are routed to the sink's destination. You can configure the sink to select specific logs by setting an inclusion filter. You can also set one or more exclusion filters to exclude logs from the sink's destination.
Every log entry received by Logging is routed based on these filtering rules:
The sink's exclusion filters override its inclusion filter. If a log entry matches any exclusion filter in the sink, then it doesn't match the sink, regardless of the inclusion filter; the log entry isn't routed to that sink's destination.
If the sink doesn't contain an inclusion filter, then the following happens:
- If the log entry matches any exclusion filter, it isn't routed to the sink's destination.
- If the log entry doesn't match any exclusion filter, it is routed to the sink's destination. An empty inclusion filter selects all logs.
If the sink contains an inclusion filter, then the following happens:
- If the log entry matches the inclusion filter, it is routed to the sink's destination.
- If the log entry doesn't match the inclusion filter, it isn't routed to the sink's destination.
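These rules can be summarized in a small sketch. Real filters are written in the Logging query language; here each filter is modeled as a plain Python predicate for illustration.

```python
def matches_sink(entry, inclusion=None, exclusions=()):
    """Decide whether a log entry is routed to a sink's destination."""
    # Exclusion filters always win: any match means the entry is not routed.
    if any(excl(entry) for excl in exclusions):
        return False
    # An empty inclusion filter selects all remaining entries.
    if inclusion is None:
        return True
    return inclusion(entry)

entry = {"severity": "ERROR", "resource": "gce_instance"}
is_error = lambda e: e["severity"] == "ERROR"
is_gce = lambda e: e["resource"] == "gce_instance"

print(matches_sink(entry, inclusion=is_error))                       # True
print(matches_sink(entry, inclusion=is_error, exclusions=[is_gce]))  # False: exclusion wins
print(matches_sink(entry))                                           # True: no filters selects all logs
```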
When you create a sink, you can set multiple exclusion filters, letting you exclude matching log entries from being routed to the sink's destination or from being ingested by Cloud Logging. You create exclusion filters by using the Logging query language.
Log entries are excluded after they are received by the Logging API, and therefore these log entries consume entries.write API quota. You can't reduce the number of entries.write API calls by excluding log entries.
Excluded log entries aren't available in the Logs Explorer or Cloud Debugger.
Log entries that aren't routed to at least one log bucket, either explicitly with exclusion filters or because they don't match any sinks with a Logging storage destination, are also excluded from Error Reporting. Therefore, these logs aren't available to help troubleshoot failures.
User-defined log-based metrics are computed from both included and excluded log entries. For more information, see Monitor your logs.
Log entries that aren't excluded might result in charges. For more information, see Cloud Logging pricing.
You can use the Logs Router to route certain logs to supported destinations in any Cloud project. Logging supports the following sink destinations:
- Cloud Storage: JSON files stored in Cloud Storage buckets; provides inexpensive, long-term storage.
- BigQuery: Tables created in BigQuery datasets; provides big data analysis capabilities.
- Pub/Sub: JSON-formatted messages delivered to Pub/Sub topics; supports third-party integrations, such as Splunk, with Logging.
- Cloud Logging: Log entries held in log buckets; provides storage in Cloud Logging with customizable retention periods.
For more information on routing logs to supported destinations, see Configure sinks.
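A sink's destination is named by a service-specific path; the prefixes below follow the documented destination formats for each supported destination type. The helper function and the example project, topic, and bucket names are illustrative.

```python
# Documented destination path prefixes for each supported sink destination.
DESTINATION_PREFIXES = {
    "storage.googleapis.com/": "Cloud Storage bucket",
    "bigquery.googleapis.com/": "BigQuery dataset",
    "pubsub.googleapis.com/": "Pub/Sub topic",
    "logging.googleapis.com/": "Cloud Logging bucket",
}

def destination_type(destination):
    """Classify a sink destination path by its service prefix (hypothetical helper)."""
    for prefix, kind in DESTINATION_PREFIXES.items():
        if destination.startswith(prefix):
            return kind
    raise ValueError("unsupported sink destination: " + destination)

print(destination_type("pubsub.googleapis.com/projects/my-project/topics/my-topic"))
# Pub/Sub topic
print(destination_type("storage.googleapis.com/my-bucket"))
# Cloud Storage bucket
```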
Storing, viewing, and managing logs
The following section details how logs are stored in Cloud Logging, and how you can view and manage them.
Cloud Logging uses log buckets as containers in your Google Cloud projects, billing accounts, folders, and organizations to store and organize your logs data. The logs that you store in Cloud Logging are indexed, optimized, and delivered to let you analyze your logs in real time. Cloud Logging buckets are distinct from the similarly named Cloud Storage buckets.
For each Cloud project, billing account, folder, and organization, Logging automatically creates two log buckets: _Required and _Default. Logging also automatically creates sinks named _Required and _Default that, in the default configuration, route logs to the correspondingly named buckets.

You can disable the _Default sink, which routes logs to the _Default bucket. To change the behavior of the _Default sinks created for any new Cloud projects or folders created in your organization, you can configure default settings for your organization. You can't change the routing rules for the _Required sink.
Additionally, you can create user-defined buckets for any Cloud project.
You create sinks to route all, or just a subset, of your logs to any log bucket. This flexibility allows you to choose the Cloud project in which your logs are stored and what other logs are stored with them.
For more information, see Configure log buckets.
_Required log bucket
Cloud Logging automatically routes Admin Activity audit logs, System Event audit logs, and Access Transparency logs to the _Required bucket. Cloud Logging retains the logs in this bucket for 400 days; you can't change this retention period.

You can't modify or delete the _Required bucket, and you can't disable the _Required sink, which routes logs to the _Required bucket.

Neither ingestion pricing nor storage pricing applies to the logs data stored in the _Required log bucket.
_Default log bucket
Any log entry that isn't ingested by the _Required bucket is routed by the _Default sink to the _Default bucket, unless you disable or otherwise edit the _Default sink. For instructions on modifying sinks, see Configure and manage sinks. You can't delete the _Default bucket.

Logs held in the _Default bucket are retained for 30 days, unless you configure custom retention for the bucket. Cloud Logging pricing applies to the logs data held in the _Default bucket.
User-defined log buckets
You can also create user-defined log buckets in any Cloud project. By applying sinks to your user-defined log buckets, you can route any subset of your logs to any log bucket, letting you choose which Cloud project your logs are stored in and which other logs are stored with them.
For example, for any log generated in Project-A, you can configure a sink to route that log to user-defined buckets in Project-A or Project-B.
Cloud Logging pricing applies to the logs data held in this bucket, regardless of the log type.
You can configure custom retention for the bucket.
For information on managing your user-defined log buckets, including deleting or updating them, see Configure and manage log buckets.
Log buckets are regional resources. The infrastructure that stores, indexes, and searches your logs is located in a specific geographical location. Google manages that infrastructure so that your applications are available redundantly across the zones within that region.
Supported regions include locations such as Salt Lake City; for the full table of region names and descriptions, see Data regionality for Cloud Logging.
In addition to these regions, you can set the location to global, which means that you don't need to specify where your logs are physically stored.

If you want to automatically apply a particular storage region to the _Default and _Required buckets created in your organization, you can configure a default resource location.
For more information on logs data location, see Data regionality for Cloud Logging.
You can create an organization policy to ensure that your organization meets your compliance and regulatory needs. Using an organization policy, you can specify in which regions your organization can create new log buckets. You can also restrict your organization from creating new log buckets in specified regions.
Cloud Logging doesn't enforce your newly created organization policy on existing log buckets; it only enforces the policy on new log buckets.
For information on creating a location-based organization policy, see Restrict resource locations.
In addition, you can configure a default resource location to choose which storage region to apply to the _Default and _Required buckets created in your organization.
Cloud Logging retains logs according to the retention rules that apply to the type of log bucket where the logs are held.
You can configure Cloud Logging to retain logs between 1 day and 3650 days. Custom retention rules apply to all the logs in a bucket, regardless of the log type or whether that log has been copied from another location.
For information on setting retention rules for a log bucket, see Configure custom retention.
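The retention rules above can be sketched as a simple validation: the _Required bucket's 400-day retention is fixed, while other buckets accept a custom value between 1 and 3650 days. The helper is hypothetical, not part of any Logging API.

```python
def validate_retention(bucket_name, days):
    """Check a requested retention period (in days) against the rules above."""
    if bucket_name == "_Required":
        # The _Required bucket's retention can't be changed.
        if days != 400:
            raise ValueError("_Required retention is fixed at 400 days")
        return days
    if not 1 <= days <= 3650:
        raise ValueError("custom retention must be between 1 and 3650 days")
    return days

print(validate_retention("_Default", 90))    # 90
print(validate_retention("_Required", 400))  # 400
```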
Log views let you control who has access to the logs within your log buckets.
For every log bucket, Cloud Logging automatically creates the _AllLogs view, which shows all logs stored in that bucket. Cloud Logging also creates a view for the _Default bucket called _Default; this view shows all logs except Data Access audit logs. The _AllLogs and _Default views aren't editable, and you can't delete them.
Custom log views provide you with an advanced and granular way to control access to your logs data. For example, consider a scenario in which you store all of your organization's logs in a central Cloud project. Because log buckets can contain logs from multiple Cloud projects, you might want to control which Cloud projects different users can view logs from. Using custom log views, you can give one user access to logs only from a single Cloud project, while you give another user access to logs from all the Cloud projects.
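The scenario above can be sketched as a filter over a shared bucket: a custom view scoped to one source project shows only that project's entries. The data layout and names are illustrative, not the actual log view mechanism.

```python
# Hypothetical entries in a central bucket that aggregates several projects.
bucket = [
    {"source_project": "proj-a", "text": "a1"},
    {"source_project": "proj-b", "text": "b1"},
    {"source_project": "proj-a", "text": "a2"},
]

def view(entries, source_project):
    """A view scoped to one project: only that project's entries are visible."""
    return [e for e in entries if e["source_project"] == source_project]

print([e["text"] for e in view(bucket, "proj-a")])  # ['a1', 'a2']
```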
For information about configuring log views, see Manage access to log buckets.
Using logs in the Google Cloud ecosystem
The following section provides information on using logs in the broader Google Cloud ecosystem.
Log-based metrics are Cloud Monitoring metrics that are derived from the content of log entries. For example, if Cloud Logging receives a log entry for a Cloud project that matches the filters of one of the Cloud project's metrics, then that log entry is counted in the metric data.
Log-based metrics interact with routing differently, depending on whether the log-based metrics are defined by the system or by you. The following sections describe these differences.
Log-based metrics and exclusion filters
Sink exclusion filters apply to system-defined log-based metrics, which count only logs that are included for ingestion by the Cloud project.
Sink exclusion filters don't apply to user-defined log-based metrics. Even if you exclude logs from being ingested by Cloud Logging API and the logs aren't stored in any Logging buckets, you could see those logs counted in these metrics.
Scope of log-based metrics
System-defined log-based metrics apply at the Cloud project level. These metrics are calculated by the Logs Router and apply to logs only in the Cloud project in which they're received.
User-defined log-based metrics can apply at either the Cloud project level or at the level of a specific log bucket:
- Project-level metrics are calculated like system-defined log-based metrics; these user-defined log-based metrics apply to logs only in the Cloud project in which they're received.
- Preview: Bucket-level metrics apply to logs in the log bucket in which they're received, regardless of the Cloud project in which the log entries originated.
With bucket-level log-based metrics, you can create log-based metrics that can evaluate logs in the following cases:
- Logs that are routed from one project to a bucket in another project.
- Logs that are routed into a bucket through an aggregated sink.
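The difference in scope can be illustrated with a small count over some hypothetical entries: a project-level metric counts entries as they're received by a project, while a bucket-level metric counts entries as they land in a bucket, wherever they originated.

```python
# Hypothetical entries: each records where it originated and where it's stored.
entries = [
    {"origin_project": "proj-a", "bucket": "proj-b/central-bucket"},
    {"origin_project": "proj-b", "bucket": "proj-b/central-bucket"},
    {"origin_project": "proj-a", "bucket": "proj-a/_Default"},
]

# Project-level scope: entries received by proj-a, regardless of destination.
project_level = sum(1 for e in entries if e["origin_project"] == "proj-a")
# Bucket-level scope: entries stored in the central bucket, regardless of origin.
bucket_level = sum(1 for e in entries if e["bucket"] == "proj-b/central-bucket")

print(project_level)  # 2
print(bucket_level)   # 2
```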
For more information, see Log-based metrics overview.
Finding logs in supported destinations
To learn about the format of routed log entries and how the logs are organized in destinations, see View logs in sink destinations.
Common use cases
To address common use cases for routing and storing logs, see the following documents and tutorials:
For best practices on using routing for data governance, see the following documents:
- Access control with IAM
To understand ingestion and storage pricing, see the Cloud Logging pricing information.
Cloud Logging doesn't charge to route logs, but destination charges might apply. For details, review the appropriate service's pricing details:
Note also that if you send your Virtual Private Cloud flow logs to Cloud Logging and then exclude them, VPC flow log generation charges apply in addition to the destination charges.
To help you route and store Cloud Logging data, see the following documents:
- To create sinks to route logs to supported destinations, see Configure sinks.
- For routing and sinks troubleshooting information, see Troubleshoot routing and sinks.