This document provides an overview of Cloud Logging, which is a real-time log-management system with storage, search, analysis, and monitoring support. Cloud Logging automatically collects logs from Google Cloud resources. You can also collect logs from your applications, on-premises resources, and resources from other cloud providers. You can configure alerts to notify you if certain kinds of events are reported in your logs, and for regulatory or security reasons, you can determine where your log data is stored.
Collect logs from your applications and third-party software
You can collect logs from applications that you write by instrumenting them with a client library. However, it's not always necessary to instrument your application. For example, in some configurations you can use the Ops Agent to send logs that were written to stderr to your Google Cloud project.
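As a minimal sketch of the stderr approach, the following Python snippet writes each log entry as one JSON line to stderr. The field names (`severity`, `message`, `timestamp`) follow common structured-logging conventions that agents can parse, but the exact schema your agent expects depends on its configuration; the function and label names here are illustrative.

```python
import json
import sys
from datetime import datetime, timezone

def format_entry(severity: str, message: str, **labels) -> str:
    """Render one log entry as a single JSON line."""
    entry = {
        "severity": severity,
        "message": message,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    if labels:
        entry["labels"] = labels
    return json.dumps(entry)

def log(severity: str, message: str, **labels) -> None:
    # Write to stderr so an agent tailing process output can collect the line.
    print(format_entry(severity, message, **labels), file=sys.stderr)

log("ERROR", "payment failed", service="checkout")
```

Emitting one self-contained JSON object per line keeps entries easy for a collector to parse without multi-line framing.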
You can also collect log data from your third-party applications by installing the Ops Agent and then configuring it to write logs from those applications to your Google Cloud project.
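For example, a file-based Ops Agent configuration along these lines tails an application's log files; the receiver name and file path are placeholders for your own application, and the available receiver types depend on your agent version.

```yaml
# /etc/google-cloud-ops-agent/config.yaml
# "myapp" and the include path are placeholders for your application.
logging:
  receivers:
    myapp:
      type: files
      include_paths:
        - /var/log/myapp/*.log
  service:
    pipelines:
      myapp_pipeline:
        receivers: [myapp]
```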
See Which should you use: Logging agent or client library? for information that can help you decide which approach best suits your requirements.
Troubleshoot and analyze logs
You can view and analyze your log data by using the Google Cloud console, with either the Logs Explorer or the Log Analytics page. You can query and view logs with both interfaces; however, they use different query languages and they have different capabilities.
When you want to troubleshoot and analyze the performance of your services and applications, we recommend that you use the Logs Explorer. This interface is designed to let you view individual log entries and find related log entries. For example, when a log entry is part of an error group, that entry is annotated with a menu of options that you can use to access more information about the error.
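A Logs Explorer query is a set of restrictions on log entry fields, one per line, combined with AND. The resource type and timestamp below are illustrative values:

```
resource.type="gce_instance"
severity>=ERROR
timestamp>="2024-06-01T00:00:00Z"
```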
When you're interested in performing aggregate operations on your logs, for example, to compute the average latency for HTTP requests issued to a specific URL over time, use the Log Analytics interface. With this interface, you use SQL to query your log data, and therefore you can leverage the capabilities of SQL to help you understand your log data.
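As a hedged sketch of a Log Analytics query, the following SQL counts entries per severity over the last day. The table path (project, location, and `_Default` bucket) is a placeholder for your upgraded log bucket's view, and similar aggregations can be written over other LogEntry fields, such as `http_request`:

```sql
-- Table path is a placeholder for your own Log Analytics view.
SELECT
  severity,
  COUNT(*) AS entries
FROM `my-project.global._Default._AllLogs`
WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
GROUP BY severity
ORDER BY entries DESC
```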
If you prefer to query your log data programmatically, you can use the Cloud Logging API or the Google Cloud CLI to export log data from your Cloud project.
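For example, with the Google Cloud CLI, a command like the following reads matching entries as JSON; the project ID and filter are placeholders:

```
gcloud logging read 'severity>=ERROR AND timestamp>="2024-06-01T00:00:00Z"' \
    --project=my-project --limit=10 --format=json
```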
For more information, see Query and view logs overview.
Monitor your logs
You can configure Cloud Logging to notify you when certain kinds of events occur in your logs. These notifications might be sent when a particular pattern appears in a log entry, or when a trend is detected in your log data. If you're interested in viewing the error rates of your Google Cloud services, then you can view the Cloud Logging dashboard, which is preconfigured.
For example, if you want to be notified when a particular message, like a critical security-related event, occurs, then you can create a log-based alert. A log-based alert monitors your logs for a specific pattern, and if that pattern is found, it sends a notification and creates an incident. Log-based alerts are well suited for important but rare events, like the following:
- You want to be notified when an event appears in an audit log; for example, a user accesses the security key of a service account.
- Your application writes deployment messages to logs, and you want to be notified when a deployment change is logged.
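For the deployment scenario above, a log-based alert's filter might look like the following. The payload field and message text are hypothetical; your application determines what it writes, and the `:` operator performs a substring match:

```
severity>=NOTICE
jsonPayload.message:"deployment complete"
```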
Alternatively, you might want to monitor trends or the occurrence of events over time. For these situations, you can create a log-based metric. A log-based metric can count the number of log entries that match some criterion, or it can extract and organize information like response times into histograms. You can also configure alerts that notify you when performance changes occur, for example, when the response time increases to an unacceptable level. Log-based metrics are suitable when you want to do any of the following:
- Count the occurrences of a message, like a warning or error, in your logs and receive a notification when the number of occurrences crosses a threshold.
- Observe trends in your data, like latency values in your logs, and receive a notification if the values change in an unacceptable way.
- Create charts to display the numeric data extracted from your logs.
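The two kinds of log-based metric can be sketched as a toy model: a counter metric tallies matching entries, and a distribution metric buckets a numeric field into a histogram. Cloud Logging performs this server-side; the sample entries and bucket bounds here are invented for illustration.

```python
from bisect import bisect_right

# Invented sample entries standing in for ingested log data.
entries = [
    {"severity": "ERROR", "latency_ms": 820},
    {"severity": "INFO", "latency_ms": 95},
    {"severity": "ERROR", "latency_ms": 1430},
    {"severity": "WARNING", "latency_ms": 240},
]

# Counter-style metric: number of entries at ERROR severity.
error_count = sum(1 for e in entries if e["severity"] == "ERROR")

# Distribution-style metric: latencies organized into histogram buckets.
bounds = [100, 500, 1000]           # upper bounds in ms; 4 buckets total
histogram = [0] * (len(bounds) + 1)
for e in entries:
    histogram[bisect_right(bounds, e["latency_ms"])] += 1

print(error_count, histogram)  # → 2 [1, 1, 1, 1]
```

An alerting policy on such a metric would then fire when `error_count` crosses a threshold or the latency distribution shifts.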
For more information, see Monitor your logs.
Store and route your logs
You don't have to configure the location where logs are stored. By default, your Cloud project automatically stores all logs it receives in a Cloud Logging log bucket. For example, if your Cloud project contains a Compute Engine instance, then all logs Compute Engine generates are automatically stored for you. However, if you need to, you can configure several aspects of your log storage, such as which logs are stored, which are discarded, and where the logs are stored.
You can route, or forward, log entries to the following destinations, which can be in the same Google Cloud project or in a different Google Cloud project:
- Cloud Logging log buckets: Provides storage in Cloud Logging. A log bucket can store logs ingested by multiple Google Cloud projects. You can combine your Cloud Logging data with other data by storing your logs in log buckets that are upgraded to use Log Analytics, and then creating a linked BigQuery dataset. For information about viewing logs, see Query and view logs overview and View logs routed to Cloud Logging buckets.
- Google Cloud projects: Route log entries to a different Google Cloud project. When you route logs to a different Cloud project, the destination project's Log Router receives the logs and processes them. The project from which you are routing logs doesn't need to define how the destination project handles the logs.
- Pub/Sub topics: Provides support for third-party integrations, such as Splunk, with Logging. Log entries are formatted into JSON and then delivered to a Pub/Sub topic. For information about viewing these logs, their organization, and how to configure a third-party integration, see View logs routed to Pub/Sub.
- BigQuery datasets: Provides storage of log entries in BigQuery datasets. You can use big data analysis capabilities on the stored logs. To combine your Cloud Logging data with other data sources, we recommend that you store your logs in log buckets that are upgraded to use Log Analytics and then create a linked BigQuery dataset. For information about viewing logs routed to BigQuery, see View logs routed to BigQuery.
- Cloud Storage buckets: Provides inexpensive, long-term storage of log data in Cloud Storage. Log entries are stored as JSON files. For information about viewing these logs, how they are organized, and how late-arriving logs are handled, see View logs routed to Cloud Storage.
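Routing to any of these destinations is configured through a sink. For example, a command like the following routes ERROR-and-above entries to a Cloud Storage bucket; the sink and bucket names are placeholders:

```
gcloud logging sinks create my-error-sink \
    storage.googleapis.com/my-log-archive \
    --log-filter='severity>=ERROR'
```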
For more information, including data regionality support, see Routing and storage overview.
Categories of logs
Log categories are meant to help describe the logging information available to you; the categories aren't mutually exclusive:
Platform logs are logs written by your Google Cloud services. These logs can help you debug and troubleshoot issues, and help you better understand the Google Cloud services you're using. For example, VPC Flow Logs record a sample of network flows sent from and received by VM instances.
Component logs are similar to platform logs, but they are generated by Google-provided software components that run on your systems. For example, GKE provides software components that users can run on their own VM or in their own data center. Logs are generated from the user's GKE instances and sent to a user's Cloud project. GKE uses the logs or their metadata to provide user support.
Security logs help you answer "who did what, where, and when":
Cloud Audit Logs provide information about administrative activities and accesses within your Google Cloud resources. Enabling audit logs helps your security, auditing, and compliance entities monitor Google Cloud data and systems for possible vulnerabilities or external data misuse. For a list of Google Cloud supported services, see Google services with audit logs.
Access Transparency provides you with logs of actions taken by Google staff when accessing your Google Cloud content. Access Transparency logs can help you track compliance with your legal and regulatory requirements for your organization. For a list of Google Cloud supported services, see Google services with Access Transparency logs.
User-written logs are logs written by custom applications and services. Typically, these logs are written to Cloud Logging by using one of the following methods:
- Ops Agent or the Logging agent: For a list of the logs available, see Default logging agent logs.
- Cloud Logging API, called directly or through a client library.
Multi-cloud logs and Hybrid-cloud logs refer to logs from other cloud providers like Microsoft Azure and logs from on-premises infrastructure.
Data model for logs
The data model that Cloud Logging uses to organize your log data determines the dimensions over which you can query that data. For example, because a log is a named collection of individual entries, you can query your data by the name of the log. Similarly, because each log is composed of log entries that are formatted as LogEntry objects, you can write queries that retrieve only those log entries where the value of a LogEntry field matches some criteria. For example, you can display only those log entries whose severity field has a specific value, such as ERROR.
Each log entry records status or describes a specific event, such as the creation of a VM instance, and minimally consists of the following:
- A timestamp that indicates either when the event took place or when it was received by Cloud Logging.
- Information about the source of the log entry. This source is called the monitored resource. Examples of monitored resources include individual Compute Engine VM instances and Google Kubernetes Engine containers. For a complete listing of monitored resource types, see Monitored resources and services.
- A payload, also known as a message, provided either as unstructured textual data or as structured textual data in JSON format.
- The name of the log to which it belongs. The name of a log includes the full path of the resource to which the log entries belong, followed by an identifier, for example, projects/PROJECT_ID/logs/LOG_ID.
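The minimal fields above can be sketched as a dictionary shaped like a LogEntry; the values here are invented, and the real LogEntry schema has many more optional fields.

```python
import json

# A minimal sketch of the fields every log entry carries, with invented values.
entry = {
    "logName": "projects/my-project/logs/my-app",   # resource path + log ID
    "timestamp": "2024-06-01T12:00:00Z",            # when the event occurred
    "resource": {                                   # the monitored resource
        "type": "gce_instance",
        "labels": {"instance_id": "1234567890", "zone": "us-central1-a"},
    },
    "textPayload": "VM instance created",           # unstructured payload
}
print(json.dumps(entry, indent=2))
```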
Identity and Access Management roles control the ability for a principal to access logs. You can grant predefined roles to principals, or you can create custom roles. For more information about required permissions, see Access control.
Log entries are stored in log buckets for a specified length of time and are then deleted. For more information, see Routing and storage overview: retention.
For information about pricing, see Cloud Logging pricing.