This document explains how Cloud Logging processes log entries, and describes the key components of Logging routing and storage. Routing refers to the process that Cloud Logging uses to determine what to do with a newly-arrived log entry. You can route log entries to destinations like Logging buckets, which store the log entry, or to Pub/Sub. To export your logs into third-party destinations, route your logs to a Pub/Sub topic, and then authorize the third-party destination to subscribe to the Pub/Sub topic.
At a high level, the following sections describe how Cloud Logging routes and stores log entries.
Route logs with the Log Router
The following sections explain how Logging routes logs with the Log Router by using sinks.
Log Router
A log entry is sent to the Google Cloud resource specified in its logName field during its entries.write call.
Cloud Logging receives log entries with the Cloud Logging API, where they pass through the Log Router. The sinks in the Log Router check each log entry against their inclusion filter and exclusion filters, and then determine which destinations, including Cloud Logging buckets, the log entry should be sent to. You can use combinations of sinks to route a log entry to multiple destinations.
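The entries.write call is usually made for you by an agent or a client library. As a minimal sketch of that path, the following Python example writes one structured log entry through the Cloud Logging API; the project ID and log name are placeholders.

```python
# Minimal sketch: write a log entry through the Cloud Logging API
# (entries.write). The project ID and log name are placeholders.
from google.cloud import logging

client = logging.Client(project="my-project-id")
logger = client.logger("my-application-log")  # becomes part of the logName

# The entry is sent to the Log Router of "my-project-id", where sinks
# decide which destinations (log buckets, Pub/Sub, and so on) receive it.
logger.log_struct(
    {"message": "user signed in", "user": "alice"},
    severity="INFO",
)
```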
The Log Router stores the log entry temporarily. This behavior buffers against temporary disruptions and outages that might occur when a sink routes a log entry to a destination. The buffering doesn't protect against sink configuration errors. If your sink is configured incorrectly, then it doesn't route log entries, an error log is generated, and an email notifying you of a sink configuration error is sent. When log entries can't be routed, they are discarded.
The Log Router's temporary storage is distinct from the longer term storage provided by Logging buckets.
Incoming log entries with timestamps that are more than the logs retention period in the past or that are more than 24 hours in the future are discarded.
Sinks
Sinks control how Cloud Logging routes logs. Using sinks, you can route some or all of your logs to supported destinations. Some of the reasons that you might want to control how your logs are routed include the following:
- To store logs that are unlikely to be read but that must be retained for compliance purposes.
- To organize your logs in buckets in a format that is useful to you.
- To use big-data analysis tools on your logs.
- To stream your logs to other applications, other repositories, or third parties. For example, if you want to export your logs from Google Cloud so that you can view them on a third-party platform, then configure a sink to route your log entries to Pub/Sub.
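As a sketch of that last case, the following Python example creates a sink that routes Compute Engine log entries to a Pub/Sub topic; the project, sink, topic, and filter values are placeholders, and the topic is assumed to already exist.

```python
# Sketch: create a sink that routes matching log entries to a Pub/Sub
# topic. All names are placeholders.
from google.cloud import logging

client = logging.Client(project="my-project-id")

sink = client.sink(
    "route-gce-to-pubsub",                    # hypothetical sink name
    filter_='resource.type="gce_instance"',   # inclusion filter
    destination="pubsub.googleapis.com/projects/my-project-id/topics/my-topic",
)
sink.create()
# Grant the sink's writer identity (sink.writer_identity) permission to
# publish to the topic so that routed entries can be delivered.
```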
Sinks belong to a given Google Cloud resource: Google Cloud projects, billing accounts, folders, and organizations. When the resource receives a log entry, it routes the log entry according to the sinks contained by that resource and, if enabled, any ancestral sinks in the resource hierarchy. The log entry is sent to the destination associated with each matching sink.
Cloud Logging provides two predefined sinks for each Google Cloud project, billing account, folder, and organization: _Required and _Default. All logs that are generated in a resource are automatically processed through these two sinks and then stored either in the correspondingly named _Required or _Default buckets.
Sinks act independently of each other. Regardless of how the predefined sinks process your log entries, you can create your own sinks to route some or all of your logs to various supported destinations or to exclude them from being stored by Cloud Logging.
Which log entries are routed by a sink is controlled by configuring the sink's inclusion filter and exclusion filters. Depending on the sink's configuration, every log entry received by Cloud Logging falls into one or more of these categories:
- Stored in Cloud Logging and not routed elsewhere.
- Stored in Cloud Logging and routed to a supported destination.
- Not stored in Cloud Logging but routed to a supported destination.
- Neither stored in Cloud Logging nor routed elsewhere.
You usually create sinks at the Google Cloud project level, but if you want to combine and route logs from the resources contained by a Google Cloud organization or folder, you can create aggregated sinks.
Sinks only route log entries that arrive after the sink is created because routing happens as logs pass through the Logging API. If you need to route log entries retroactively, see Copy logs.
Inclusion filters
When a sink doesn't specify any filters, all log entries match and are routed to the sink's destination. You can configure the sink to select specific log entries by setting an inclusion filter. You can also set one or more exclusion filters to exclude log entries from being routed.
When you configure a sink, you specify its filters by using the Logging query language.
A log entry is routed by a sink based on these rules:
- If the log entry doesn't match the inclusion filter, then it isn't routed. When a sink doesn't specify an inclusion filter, then every log entry matches that filter.
- If the log entry matches the inclusion filter and at least one exclusion filter, then it isn't routed.
- If the log entry matches the inclusion filter and doesn't match any exclusion filter, then it is routed to the sink's destination.
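The following Python sketch is not part of the Logging API; it only illustrates the decision order that the rules above describe. The `matches` callable stands in for evaluation of a Logging query language filter.

```python
# Illustration only: the order in which a sink's filters are evaluated.
def sink_routes_entry(entry, inclusion_filter, exclusion_filters, matches):
    # No inclusion filter means every log entry matches it.
    if inclusion_filter and not matches(entry, inclusion_filter):
        return False
    # A match on any exclusion filter prevents routing.
    if any(matches(entry, f) for f in exclusion_filters):
        return False
    # Matches the inclusion filter and no exclusion filter: route it.
    return True
```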
Exclusion filters
When you create a sink, you can set multiple exclusion filters. Exclusion filters let you exclude log entries that match the inclusion filter from being routed to the sink's destination or from being stored in a log bucket. You define exclusion filters by using the Logging query language.
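As a sketch, sink-level exclusions can be set through the lower-level logging_v2 API, where the LogSink resource carries a list of exclusions. The example below assumes the logging_v2 types and fields shown; the project, sink, bucket, and filter values are placeholders.

```python
# Sketch (assumes the google.cloud.logging_v2 types shown): a sink whose
# inclusion filter selects Compute Engine logs and whose exclusion filter
# drops DEBUG-severity entries before they reach the destination bucket.
from google.cloud.logging_v2.services.config_service_v2 import ConfigServiceV2Client
from google.cloud.logging_v2.types import LogExclusion, LogSink

config_client = ConfigServiceV2Client()

sink = LogSink(
    name="gce-to-central-bucket",  # hypothetical sink name
    destination=(
        "logging.googleapis.com/projects/central-project/"
        "locations/global/buckets/central-bucket"
    ),
    filter='resource.type="gce_instance"',
    exclusions=[
        LogExclusion(name="drop-debug", filter='severity="DEBUG"'),
    ],
)

config_client.create_sink(parent="projects/my-project-id", sink=sink)
```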
Excluded log entries consume entries.write API quota because they are excluded after they are received by the Logging API. You can't reduce the number of entries.write API calls by excluding log entries.
Excluded log entries aren't available in the Logs Explorer.
Log entries that aren't routed to at least one log bucket, either explicitly with exclusion filters or because they don't match any sinks with a Logging storage destination, are also excluded from Error Reporting. Therefore, these log entries aren't available to help troubleshoot failures. User-defined log-based metrics are computed from both included and excluded log entries. For more information, see Monitor your logs.
Supported destinations
You can use the Log Router to route certain log entries to supported destinations in any Google Cloud project. Logging supports the following sink destinations:
- Cloud Logging bucket: Provides storage in Cloud Logging. A log bucket can store log entries that are received by multiple Google Cloud projects. The log bucket can be in the same project in which log entries originate, or in a different project. For information about viewing log entries stored in log buckets, see Query and view logs overview and View logs routed to Cloud Logging buckets.
  You can combine your Cloud Logging data with other data by upgrading a log bucket to use Log Analytics, and then creating a linked dataset, which is a read-only dataset that can be queried by the BigQuery Studio and Looker Studio pages.
- BigQuery dataset: Provides storage of log entries in a writeable BigQuery dataset. The BigQuery dataset can be in the same project in which log entries originate, or in a different project. You can use big data analysis capabilities on the stored log entries. For information about viewing log entries routed to BigQuery, see View logs routed to BigQuery.
- Cloud Storage bucket: Provides storage of log entries in Cloud Storage. The Cloud Storage bucket can be in the same project in which log entries originate, or in a different project. Log entries are stored as JSON files. For information about viewing log entries routed to Cloud Storage, see View logs routed to Cloud Storage.
- Pub/Sub topic: Provides support for third-party integrations. Log entries are formatted into JSON and then routed to a Pub/Sub topic. The topic can be in the same project in which log entries originate, or in a different project. For information about viewing log entries routed to Pub/Sub, see View logs routed to Pub/Sub.
- Google Cloud project: Route log entries to another Google Cloud project. In this configuration, the sinks in the destination project process the log entries.
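In a sink, each destination is identified by a service-specific path. The values below are illustrative placeholders that show the general form of each destination type; substitute your own project, location, dataset, bucket, and topic names.

```python
# Illustrative sink destination formats (placeholder names throughout).
DESTINATIONS = {
    "log_bucket": "logging.googleapis.com/projects/my-project/locations/global/buckets/my-bucket",
    "bigquery_dataset": "bigquery.googleapis.com/projects/my-project/datasets/my_dataset",
    "storage_bucket": "storage.googleapis.com/my-gcs-bucket",
    "pubsub_topic": "pubsub.googleapis.com/projects/my-project/topics/my-topic",
    "project": "logging.googleapis.com/projects/other-project",
}
```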
For more information, see Route logs to supported destinations.
Storing, viewing, and managing logs
The following section details how logs are stored in Cloud Logging, and how you can view and manage them.
Log buckets
Cloud Logging uses log buckets as containers in your Google Cloud projects, billing accounts, folders, and organizations to store and organize your logs data. The log entries that you store in Cloud Logging are indexed, optimized, and delivered to let you analyze your logs in real time. Cloud Logging buckets are different storage entities than the similarly named Cloud Storage buckets.
For each Google Cloud project, billing account, folder, and organization, Logging automatically creates two log buckets: _Required and _Default. Logging automatically creates sinks named _Required and _Default that, in the default configuration, route log entries to the correspondingly named buckets.
You can disable the _Default sink, which routes log entries to the _Default log bucket. You can also change the behavior of the _Default sinks created for any new Google Cloud projects or folders. For more information, see Configure default settings for organizations and folders.
You can't change routing rules for the _Required bucket.
Additionally, you can create user-defined buckets for any Google Cloud project.
You create sinks to route all, or just a subset, of your log entries to any log bucket. This flexibility lets you choose the Google Cloud project in which your log entries are stored, and lets you store log entries from multiple resources in one location.
For more information, see Configure log buckets.
_Required log bucket
Cloud Logging automatically routes the following types of log entries to the _Required bucket:
- Admin Activity audit logs
- System Event audit logs
- Google Workspace Admin Audit logs
- Enterprise Groups Audit logs
- Login Audit logs
- Access Transparency logs. For information about enabling Access Transparency logs, see the Access Transparency logs documentation.
Cloud Logging retains the log entries in the _Required bucket for 400 days; you can't change this retention period.
You can't modify or delete the _Required bucket. You can't disable the _Required sink, which routes log entries to the _Required bucket.
_Default log bucket
Any log entry that isn't stored in the _Required bucket is routed by the _Default sink to the _Default bucket, unless you disable or otherwise edit the _Default sink. For instructions on modifying sinks, see Manage sinks.
For example, Cloud Logging automatically routes log entries such as Data Access audit logs to the _Default bucket.
Cloud Logging retains the log entries in the _Default bucket for 30 days, unless you configure custom retention for the bucket.
You can't delete the _Default bucket.
User-defined log buckets
You can also create user-defined log buckets in any Google Cloud project. By applying sinks to your user-defined log buckets, you can route any subset of your log entries to any log bucket. This lets you choose the Google Cloud project in which your log entries are stored, and lets you store log entries from multiple resources in one location.
For example, for any log generated in project A, you can configure a sink to route those log entries to a user-defined bucket in project A or to a log bucket in project B.
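A minimal sketch of that example, using the Python client library with placeholder project and bucket names: the sink is created in project-a and its destination is a user-defined log bucket in project-b, which is assumed to already exist.

```python
# Sketch: a sink in project-a that routes its logs to a user-defined
# log bucket in project-b. All names are placeholders.
from google.cloud import logging

client = logging.Client(project="project-a")

sink = client.sink(
    "route-to-project-b",   # hypothetical sink name; no filter routes all entries
    destination=(
        "logging.googleapis.com/projects/project-b/"
        "locations/global/buckets/my-central-bucket"
    ),
)
sink.create()
# Grant the sink's writer identity the roles/logging.bucketWriter role on
# project-b so that the routed entries can be written.
```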
For information about managing your user-defined log buckets, including deleting or updating them, see Configure and manage log buckets.
Regionalization
Log buckets are regional resources. The infrastructure that stores, indexes, and searches your log entries is located in a specific geographical location. Google Cloud manages that infrastructure so that your applications are available redundantly across the zones within that region.
When you create a log bucket or set an organization-level regional policy, you can choose where to store your logs.
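For example, a user-defined bucket's region is part of the parent path when the bucket is created. The following sketch uses the lower-level logging_v2 config client with placeholder names and assumes the request shape shown.

```python
# Sketch (assumes the logging_v2 config API shape shown): create a
# user-defined log bucket in the europe-west1 region.
from google.cloud.logging_v2.services.config_service_v2 import ConfigServiceV2Client
from google.cloud.logging_v2.types import LogBucket

config_client = ConfigServiceV2Client()

bucket = config_client.create_bucket(
    request={
        "parent": "projects/my-project-id/locations/europe-west1",
        "bucket_id": "eu-app-logs",  # hypothetical bucket name
        "bucket": LogBucket(
            description="Logs that must stay in europe-west1",
            retention_days=90,
        ),
    }
)
print(bucket.name)
```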
The following regions are supported by Cloud Logging:
Global
Region name | Region description |
---|---|
global | Logs stored in any data centers in the world. Logs might be moved to different data centers. No additional redundancy guarantees. |
Multi-regions: EU and US
Region name | Region description |
---|---|
eu | Logs stored in any data centers within the European Union. Logs might be moved to different data centers. No additional redundancy guarantees. |
us | Logs stored in any data centers within the United States. Logs might be moved to different data centers. No additional redundancy guarantees. |
Africa
Region name | Region description |
---|---|
africa-south1 | Johannesburg |
Americas
Region name | Region description |
---|---|
northamerica-northeast1 | Montréal |
northamerica-northeast2 | Toronto |
northamerica-south1 | Mexico |
southamerica-east1 | São Paulo |
southamerica-west1 | Santiago |
us-central1 | Iowa |
us-east1 | South Carolina |
us-east4 | Northern Virginia |
us-east5 | Columbus |
us-south1 | Dallas |
us-west1 | Oregon |
us-west2 | Los Angeles |
us-west3 | Salt Lake City |
us-west4 | Las Vegas |
Asia Pacific
Region name | Region description |
---|---|
asia-east1 | Taiwan |
asia-east2 | Hong Kong |
asia-northeast1 | Tokyo |
asia-northeast2 | Osaka |
asia-northeast3 | Seoul |
asia-south1 | Mumbai |
asia-south2 | Delhi |
asia-southeast1 | Singapore |
asia-southeast2 | Jakarta |
australia-southeast1 | Sydney |
australia-southeast2 | Melbourne |
Europe
Region name | Region description |
---|---|
europe-central2 | Warsaw |
europe-north1 | Finland |
europe-southwest1 | Madrid |
europe-west1 | Belgium |
europe-west2 | London |
europe-west3 | Frankfurt |
europe-west4 | Netherlands |
europe-west6 | Zurich |
europe-west8 | Milan |
europe-west9 | Paris |
europe-west10 | Berlin |
europe-west12 | Turin |
Middle East
Region name | Region description |
---|---|
me-central1 | Doha |
me-central2 | Dammam |
me-west1 | Tel Aviv |
When you set the location to global, you don't need to specify where your log entries are physically stored.
You can automatically apply a particular storage region to the _Default and _Required buckets created in an organization or folder. For more information, see Configure default settings for organizations and folders.
Organization policy
You can create an organization policy to ensure that your organization meets your compliance and regulatory needs. Using an organization policy, you can specify in which regions your organization can create new log buckets. You can also restrict your organization from creating new log buckets in specified regions.
Cloud Logging doesn't enforce your newly created organization policy on existing log buckets; it only enforces the policy on new log buckets.
For information about creating a location-based organization policy, see Restrict resource locations.
In addition, you can configure a default storage location for the _Default and _Required buckets in an organization or in a folder. If you configure an organization policy that constrains where data can be stored, then you must ensure that the default storage location you specify is consistent with that constraint. For more information, see Configure default settings for organizations and folders.
Retention
Cloud Logging retains logs according to the retention rules that apply to the type of log bucket where the logs are held. For information about the retention periods for different types of logs, see Quotas and limits.
You can configure Cloud Logging to retain logs between 1 day and 3650 days. Custom retention rules apply to all the logs in a bucket, regardless of the log type or whether that log has been copied from another location.
For information about setting retention rules for a log bucket, see Configure custom retention.
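As a sketch, retention on an existing user-defined bucket can be changed through the logging_v2 config API by updating the bucket's retention_days field; the bucket path is a placeholder and the request shape shown is an assumption.

```python
# Sketch (assumes the logging_v2 config API shape shown): set a 400-day
# custom retention period on an existing user-defined log bucket.
from google.cloud.logging_v2.services.config_service_v2 import ConfigServiceV2Client
from google.cloud.logging_v2.types import LogBucket
from google.protobuf import field_mask_pb2

config_client = ConfigServiceV2Client()

config_client.update_bucket(
    request={
        "name": "projects/my-project-id/locations/global/buckets/my-user-bucket",
        "bucket": LogBucket(retention_days=400),
        "update_mask": field_mask_pb2.FieldMask(paths=["retention_days"]),
    }
)
```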
Log views
Log views let you grant a user access to only a subset of the log entries stored in a log bucket. For information about how to configure log views, and how to grant access to specific log views, see Configure log views on a log bucket.
For every log bucket, Cloud Logging automatically creates the _AllLogs view, which shows all logs stored in that bucket. Cloud Logging also creates a view for the _Default bucket called _Default. The _Default view for the _Default bucket shows all logs except Data Access audit logs. The _AllLogs and _Default views aren't editable, and you can't delete the _Default log view.
Custom log views provide you with an advanced and granular way to control access to your logs data. For example, consider a scenario in which you store all of your organization's logs in a central Google Cloud project. Because log buckets can contain logs from multiple Google Cloud projects, you might want to control which Google Cloud projects different users can view logs from. Using custom log views, you can give one user access to logs only from a single Google Cloud project, while you give another user access to logs from all the Google Cloud projects.
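When reading logs programmatically, a specific log view can be named as the resource to read from. The following sketch uses the Python client library with placeholder project, bucket, and view names; the caller needs access to that view.

```python
# Sketch: read log entries through a specific log view.
from google.cloud import logging

client = logging.Client(project="central-project")

view = (
    "projects/central-project/locations/global/"
    "buckets/central-bucket/views/team-a-view"   # hypothetical view path
)

for entry in client.list_entries(
    resource_names=[view],
    filter_='severity>="WARNING"',
    max_results=10,
):
    print(entry.timestamp, entry.payload)
```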
Using logs in the Google Cloud ecosystem
The following section provides information about how to use logs in the broader Google Cloud ecosystem.
Log-based metrics
Log-based metrics are Cloud Monitoring metrics that are derived from the content of log entries. For example, if Cloud Logging receives a log entry for a Google Cloud project that matches the filters of one of the Google Cloud project's metrics, then that log entry is counted in the metric data.
Log-based metrics interact with routing differently, depending on whether the log-based metrics are defined by the system or by you. The following sections describe these differences.
Log-based metrics and exclusion filters
Sink exclusion filters apply to system-defined log-based metrics, which count only logs that are stored in log buckets.
Sink exclusion filters don't apply to user-defined log-based metrics. Even if you exclude logs from being stored in any Logging buckets, you could see those logs counted in these metrics.
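For example, a user-defined, project-level counter metric can be created with the Python client library; the metric name and filter below are placeholders.

```python
# Sketch: create a user-defined, project-level log-based counter metric.
# The metric counts matching entries even if a sink exclusion filter
# keeps those entries out of every log bucket.
from google.cloud import logging

client = logging.Client(project="my-project-id")

metric = client.metric(
    "severe-errors",   # hypothetical metric name
    filter_='severity>="ERROR"',
    description="Count of ERROR-or-worse log entries",
)
metric.create()
```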
Scope of log-based metrics
System-defined log-based metrics apply at the Google Cloud project level. These metrics are calculated by the Log Router and apply to logs only in the Google Cloud project in which they're received.
User-defined log-based metrics can apply at either the Google Cloud project level or at the level of a specific log bucket:
- Project-level metrics are calculated like system-defined log-based metrics; these user-defined log-based metrics apply to logs only in the Google Cloud project in which they're received.
- Bucket-scoped metrics apply to logs in the log bucket in which they're received, regardless of the Google Cloud project in which the log entries originated.
With bucket-scoped log-based metrics, you can create log-based metrics that can evaluate logs in the following cases:
- Logs that are routed from one project to a bucket in another project.
- Logs that are routed into a bucket through an aggregated sink.
For more information, see Log-based metrics overview.
Finding logs in supported destinations
To learn about the format of routed log entries and how the logs are organized in destinations, see View logs in sink destinations.
Common use cases
To address common use cases for routing and storing logs, see the following documents and tutorials:
Compliance needs
For best practices about using routing for data governance, see the following documents:
Access control with IAM
For information about how you use Identity and Access Management (IAM) roles and permissions to control access to Cloud Logging data, see Access control with IAM.
Pricing
Cloud Logging doesn't charge to route logs to a supported destination; however, the destination might apply charges.
With the exception of the _Required log bucket, Cloud Logging charges to stream logs into log buckets and for storage longer than the default retention period of the log bucket.
Cloud Logging doesn't charge for copying logs, for defining log scopes, or for queries issued through the Logs Explorer or Log Analytics pages.
For more information, see the following documents:
- Cloud Logging pricing summary
Destination costs:
- VPC flow log generation charges apply when you send and then exclude your Virtual Private Cloud flow logs from Cloud Logging.
What's next
To help you route and store Cloud Logging data, see the following documents:
- To create sinks to route log entries to supported destinations, see Route logs to supported destinations.
- To learn how to create aggregated sinks that can route log entries from the resources in folders or organizations, see Collate and route organization- and folder-level logs to supported destinations.
- To learn how to grant access to a subset of log entries that are stored in a log bucket, see Configure log views on a log bucket.
- Error Reporting can analyze log entries and report errors to you. For more information about this service, including information about when log entries can be analyzed, see Error Reporting overview.