This page explains how you can find and use log entries that you export from Cloud Logging.
Exporting log entries involves writing a filter that selects the log entries you want to export, and choosing a destination from the following options:
- Cloud Storage: JSON files stored in Cloud Storage buckets.
- BigQuery: Tables created in BigQuery datasets.
- Pub/Sub: JSON messages delivered to Pub/Sub topics. Supports third-party integrations, such as Splunk, with Logging.
- Another Google Cloud project: Log entries held in Cloud Logging logs buckets.
The filter and destination are held in an object called a sink. Sinks can be created in Google Cloud projects, organizations, folders, and billing accounts.
For a conceptual overview of exporting logs using Cloud Logging, see Overview of logs exports.
Exporting logs
For instructions on how to create sinks in Cloud Logging to export your logs, refer to the following pages:
- To use the Cloud Console, go to Exporting logs with the Google Cloud Console.
- To use the Logging API, go to Exporting logs in the API.
- To use the gcloud command-line tool, go to `gcloud logging`.
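If you manage sinks programmatically, the same create operation is available through the Cloud Logging client libraries. The following is a minimal Python sketch, assuming the google-cloud-logging client library; the sink name, filter, and bucket name are hypothetical placeholders, and after creating a sink this way you still need to grant its writer identity access to the destination.

from google.cloud import logging

client = logging.Client()

# Destination formats include:
#   storage.googleapis.com/BUCKET_NAME                               (Cloud Storage)
#   bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID  (BigQuery)
#   pubsub.googleapis.com/projects/PROJECT_ID/topics/TOPIC_ID        (Pub/Sub)
sink = client.sink(
    "my-error-sink",                    # hypothetical sink name
    filter_="severity>=ERROR",          # selects the log entries to export
    destination="storage.googleapis.com/my-logs-bucket",  # hypothetical bucket
)

if not sink.exists():
    sink.create()
    print("Created sink {}".format(sink.name))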
Cloud Storage
To view your exported logs in Cloud Storage, do the following:
- Go to the Cloud Storage Browser in the Cloud Console.
- Select the Cloud Storage bucket you are using for logs export.

For details on how logs are organized in the Cloud Storage bucket, go to Exported logs organization on this page.
Exported logs availability
If there aren't any exported logs, check the export system metrics, which tell you how many log entries are exported and how many are dropped due to errors. If the metrics indicate that no log entries were exported, check your filter to verify that log entries matching your filter have recently arrived in Logging.
Log entries are saved to Cloud Storage buckets in hourly batches. It might take from 2 to 3 hours before the first entries begin to appear.
Exported logs organization
When you export logs to a Cloud Storage bucket, Logging writes a set of files to the bucket.
The files are organized in directory hierarchies by log type and date. The log type, referred to as `[LOG_ID]` in the LogEntry reference, can be a simple name like `syslog` or a compound name like `appengine.googleapis.com/request_log`. If these logs were stored in a bucket named `my-gcs-bucket`, then the directories would be named as in the following example:
my-gcs-bucket/syslog/YYYY/MM/DD/
my-gcs-bucket/appengine.googleapis.com/request_log/YYYY/MM/DD/
A single Cloud Storage bucket can contain logs from multiple resource types.
Logging doesn't guarantee deduplication of log entries from sinks containing identical or overlapping queries; log entries from those sinks might be written multiple times to a Cloud Storage bucket.
The leaf directories (`DD/`) contain multiple files, each of which holds the exported log entries for a time period specified in the file name. The files are sharded and their names end in a shard number, `Sn` or `An` (n=0, 1, 2, ...). For example, here are two files that might be stored within the directory `my-gcs-bucket/syslog/2015/01/13/`:
08:00:00_08:59:59_S0.json
08:00:00_08:59:59_S1.json
These two files together contain the `syslog` log entries for all instances during the hour beginning 0800 UTC. The log entry timestamps are expressed in UTC (Coordinated Universal Time).
To get all the log entries, you must read all the shards for each time period—in this case, file shards 0 and 1. The number of file shards written can change for every time period depending on the volume of log entries.
Within the individual sharded files, log entries are stored as a list of `LogEntry` objects. For an example of a `syslog` entry, go to Log entries organization on this page.
Note that the sort order of log entries within the files is neither uniform nor otherwise guaranteed.
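Because the number of shards per time period can vary, reading exported entries back out of Cloud Storage means listing every object under the relevant prefix rather than assuming a fixed set of file names. The following minimal Python sketch assumes a recent version of the google-cloud-storage client library and the example bucket and layout above; it also assumes the files hold one JSON-encoded LogEntry per line, so adjust the parsing if your export files store entries as a single JSON array.

import json

from google.cloud import storage

client = storage.Client()
prefix = "syslog/2015/01/13/"  # one day of the example layout above

entries = []
for blob in client.list_blobs("my-gcs-bucket", prefix=prefix):
    # Read every shard (both Sn and An suffixes) for the 08:00 UTC hour.
    if not blob.name.startswith(prefix + "08:00:00_08:59:59_"):
        continue
    for line in blob.download_as_text().splitlines():
        if line.strip():
            entries.append(json.loads(line))  # each line is one LogEntry

print("Read {} log entries".format(len(entries)))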
Filters
For examples of filters for exporting logs to Cloud Storage, go to Sample queries.
BigQuery
To view your exported logs in BigQuery, do the following:
- Go to the BigQuery page in the Cloud Console.
- Select the dataset used as your sink's destination.
- Select one of the dataset's tables. The log entries are visible on the Details tab, or you can query the table to return your data.
For more information, go to Table organization to learn how the tables are organized, and to BigQuery schema for exported logs to learn how the exported log entry fields are named.
Exported logs availability
If there aren't any exported logs, check the export system metrics, which tell you how many log entries are exported and how many are dropped due to errors. If the metrics indicate that no log entries were exported, check your filter to verify that log entries matching your filter have recently arrived in Logging.
When a new table is created as Logging exports the log entries to BigQuery, it might take several minutes before the first log entries appear in the new table. Subsequent log entries usually appear within a minute. For more information, read Table organization below.
Table organization
When you export logs to a BigQuery dataset, Logging creates dated tables to hold the exported log entries. Log entries are placed in tables whose names are based on the entries' log names and timestamps.¹ The following table shows examples of how log names and timestamps are mapped to table names:

Log name | Log entry timestamp¹ | BigQuery table name |
---|---|---|
syslog | 2017-05-23T18:19:22.135Z | syslog_20170523 |
apache-access | 2017-01-01T00:00:00.000Z | apache_access_20170101 |
compute.googleapis.com/activity_log | 2017-12-31T23:59:59.999Z | compute_googleapis_com_activity_log_20171231 |

¹ The log entry timestamps are expressed in UTC (Coordinated Universal Time).
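You never need to compute this mapping yourself, but as an illustration of the pattern in the table (not an official API), the following Python sketch reproduces the examples above: characters in the log ID that aren't letters, digits, or underscores become underscores, and the table name carries a `_YYYYMMDD` suffix taken from the entry's UTC timestamp.

import re
from datetime import datetime, timezone

def bq_table_name(log_id, timestamp):
    """Illustrative only: reproduce the naming pattern shown in the table above."""
    ts = datetime.fromisoformat(timestamp.replace("Z", "+00:00")).astimezone(timezone.utc)
    return re.sub(r"[^a-zA-Z0-9_]", "_", log_id) + ts.strftime("_%Y%m%d")

print(bq_table_name("compute.googleapis.com/activity_log", "2017-12-31T23:59:59.999Z"))
# -> compute_googleapis_com_activity_log_20171231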
Schemas and fields
BigQuery table schemas for exported logs are based on the structure of the LogEntry type and the contents of the log payloads. You can view the table schema by selecting a table with exported log entries in the BigQuery Web UI.
The BigQuery table schema used to represent complex log entry payloads can be confusing and, in the case of exported audit logs, some special naming rules are used. For more information, read BigQuery schema for exported Logs.
Queries
For examples of queries for exporting logs to BigQuery, go to Sample queries.
For more information on BigQuery query syntax, review the Query reference. Especially useful are Table wildcard functions, which let you query across multiple tables, and the Flatten operator, which lets you display data from repeated fields.
A sample Compute Engine logs query
The following BigQuery query retrieves log entries from multiple days and multiple log types:
- The query searches the last three days of the `syslog` and `apache-access` logs. The query was made on 23-Feb-2015 and covers all log entries received on 21-Feb and 22-Feb, plus log entries received on 23-Feb up to the time the query was issued.
- The query retrieves results for a single Compute Engine instance, `1554300700000000000`.
SELECT
  timestamp AS Time,
  logName AS Log,
  textPayload AS Message
FROM
  (TABLE_DATE_RANGE(my_bq_dataset.syslog_,
    DATE_ADD(CURRENT_TIMESTAMP(), -2, 'DAY'), CURRENT_TIMESTAMP())),
  (TABLE_DATE_RANGE(my_bq_dataset.apache_access_,
    DATE_ADD(CURRENT_TIMESTAMP(), -2, 'DAY'), CURRENT_TIMESTAMP()))
WHERE
  resource.type == 'gce_instance'
  AND resource.labels.instance_id == '1554300700000000000'
ORDER BY time;
Here are some example output rows:
Row | Time | Log | Message |
---|---|---|---|
5 | 2015-02-21 03:40:14 UTC | projects/project-id/logs/syslog | Feb 21 03:40:14 my-gce-instance collectd[24281]: uc_update: Value too old: name = 15543007601548826368/df-tmpfs/df_complex-used; value time = 1424490014.269; last cache update = 1424490014.269; |
6 | 2015-02-21 04:17:01 UTC | projects/project-id/logs/syslog | Feb 21 04:17:01 my-gce-instance /USR/SBIN/CRON[8082]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) |
7 | 2015-02-21 04:49:58 UTC | projects/project-id/logs/apache-access | 128.61.240.66 - - [21/Feb/2015:04:49:58 +0000] "GET / HTTP/1.0" 200 536 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)" |
8 | 2015-02-21 05:17:01 UTC | projects/project-id/logs/syslog | Feb 21 05:17:01 my-gce-instance /USR/SBIN/CRON[9104]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) |
9 | 2015-02-21 05:30:50 UTC | projects/project-id/logs/apache-access | 92.254.50.61 - - [21/Feb/2015:05:30:50 +0000] "GET /tmUnblock.cgi HTTP/1.1" 400 541 "-" "-" |
A sample App Engine logs query
The following BigQuery query retrieves unsuccessful App Engine requests from the last month:
SELECT
  timestamp AS Time,
  protoPayload.host AS Host,
  protoPayload.status AS Status,
  protoPayload.resource AS Path
FROM
  (TABLE_DATE_RANGE(my_bq_dataset.appengine_googleapis_com_request_log_,
    DATE_ADD(CURRENT_TIMESTAMP(), -1, 'MONTH'), CURRENT_TIMESTAMP()))
WHERE
  protoPayload.status != 200
ORDER BY time
Here are some of the results:
Row | Time | Host | Status | Path |
---|---|---|---|---|
6 | 2015-02-12 19:35:02 UTC | default.my-gcp-project-id.appspot.com | 404 | /foo?thud=3 |
7 | 2015-02-12 19:35:21 UTC | default.my-gcp-project-id.appspot.com | 404 | /foo |
8 | 2015-02-16 20:17:19 UTC | my-gcp-project-id.appspot.com | 404 | /favicon.ico |
9 | 2015-02-16 20:17:34 UTC | my-gcp-project-id.appspot.com | 404 | /foo?thud=%22what???%22 |
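You can also run these queries programmatically. The following minimal Python sketch assumes the google-cloud-bigquery client library and a dataset named my_bq_dataset; because TABLE_DATE_RANGE and dot-separated field names are legacy SQL, legacy SQL must be enabled on the query job.

from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT timestamp AS Time, protoPayload.host AS Host,
       protoPayload.status AS Status, protoPayload.resource AS Path
FROM (TABLE_DATE_RANGE(my_bq_dataset.appengine_googleapis_com_request_log_,
      DATE_ADD(CURRENT_TIMESTAMP(), -1, 'MONTH'), CURRENT_TIMESTAMP()))
WHERE protoPayload.status != 200
ORDER BY time
"""

# TABLE_DATE_RANGE is a legacy SQL function, so disable standard SQL for this job.
job_config = bigquery.QueryJobConfig(use_legacy_sql=True)

for row in client.query(sql, job_config=job_config).result():
    print(row.Time, row.Host, row.Status, row.Path)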
Pub/Sub
We recommend using Pub/Sub for integrating Cloud Logging logs with third-party software.
Logs exported to Pub/Sub are generally available within seconds, with 99% of logs available in less than 60 seconds.
To view your exported logs as they are streamed through Pub/Sub, do the following:
- Go to the Pub/Sub page in the Cloud Console.
- Find or create a subscription to the topic used for logs export, and pull a log entry from it. You might have to wait for a new log entry to be published.
For details on how logs are organized in Pub/Sub, see Exported logs organization on this page.
Exported logs availability
If there aren't any exported logs, check the export system metrics, which tell you how many log entries are exported and how many are dropped due to errors. If the metrics indicate that no log entries were exported, check your filter to verify that log entries matching your filter have recently arrived in Logging.
When you export logs to a Pub/Sub topic, Logging publishes each log entry as a Pub/Sub message as soon as Logging receives that log entry.
Exported logs organization
The `data` field of each message is a base64-encoded `LogEntry` object. As an example, a Pub/Sub subscriber might pull the following object from a topic that is receiving log entries. The object shown contains a list with a single message, although Pub/Sub might return several messages if several log entries are available. The `data` value (about 600 characters) and the `ackId` value (about 200 characters) have been shortened to make the example easier to read:
{ "receivedMessages": [ { "ackId": "dR1JHlAbEGEIBERNK0EPKVgUWQYyODM...QlVWBwY9HFELH3cOAjYYFlcGICIjIg", "message": { "data": "eyJtZXRhZGF0YSI6eyJzZXZ0eSI6Il...Dk0OTU2G9nIjoiaGVsbG93b3JsZC5sb2cifQ==", "attributes": { "compute.googleapis.com/resource_type": "instance", "compute.googleapis.com/resource_id": "123456" }, "messageId": "43913662360" } } ] }
If you decode the `data` field and format it, you get the following `LogEntry` object:
{ "log": "helloworld.log", "insertId": "2015-04-15|11:41:00.577447-07|10.52.166.198|-1694494956", "textPayload": "Wed Apr 15 20:40:51 CEST 2015 Hello, world!", "timestamp": "2015-04-15T18:40:56Z", "labels": { "compute.googleapis.com\/resource_type": "instance", "compute.googleapis.com\/resource_id": "123456" }, "severity": "WARNING" } }
Third-party integration with Pub/Sub
Logging supports integration with third parties, such as Splunk. For a current list of integrations, see Partners for Google Cloud's operations suite integrations.
You export your logs through a Pub/Sub topic and the third party receives your logs by subscribing to the same topic.
To perform the integration, expect to do something like the following:
Obtain from the third party a Google Cloud service account name created from their Google Cloud project; for example, `12345-xyz@developer.gserviceaccount.com`. You use this name to give the third party permission to receive your logs.

In your project containing the logs:
- Enable the Pub/Sub API.
- Create a Pub/Sub topic. You can do this when you configure a log sink, or by following these steps:
- Go to the Pub/Sub topic list.
- Select Create topic and enter a topic name; for example, `projects/my-project-id/topics/my-pubsub-topic`. You will export your logs to this topic. Each message sent to the topic includes the timestamp of the exported log entry in the Pub/Sub message `attributes`; for example:

  "attributes": { "logging.googleapis.com/timestamp": "2018-10-01T00:00:00Z" }
- Click Create.
Authorize Logging to export logs to the topic. For instructions, go to Setting permissions for Pub/Sub.
Authorize the third party to subscribe to your topic:
- Stay in the Pub/Sub topic list for your project in the Cloud Console.
- Select your new topic.
- Select Permissions.
- Enter the third party's service account name.
- In the Select a role menu, select Pub/Sub Subscriber.
- Click Add.
Provide the third party with the name of your Pub/Sub topic; for example, `projects/my-project-number/topics/my-pubsub-topic`. They should subscribe to the topic before you start exporting.

Start exporting the logs once your third party has subscribed to the topic:
- In your project containing the logs you want to export, click Create Export above the search-query box. This opens the Edit Export panel.
- Enter a Sink Name.
- In the Sink Service menu, select Cloud Pub/Sub.
- In the Sink Destination menu, select the Pub/Sub topic to which the third party is subscribed.
- Select Create Sink to begin the export.
- A Sink created dialog appears, indicating that your export sink was successfully created with permission to write future matching logs to the destination you selected.
Your third party should begin receiving the log entries right away.
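If you prefer to script the topic setup instead of using the Cloud Console, the following hedged Python sketch creates the topic and grants the third party's service account the Pub/Sub Subscriber role on it, assuming a recent version of the google-cloud-pubsub client library; the project, topic, and service account names are the hypothetical examples used above.

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project-id", "my-pubsub-topic")

# Create the topic that the sink will export to.
publisher.create_topic(request={"name": topic_path})

# Grant the third party's service account the Pub/Sub Subscriber role on the topic.
policy = publisher.get_iam_policy(request={"resource": topic_path})
policy.bindings.add(
    role="roles/pubsub.subscriber",
    members=["serviceAccount:12345-xyz@developer.gserviceaccount.com"],
)
publisher.set_iam_policy(request={"resource": topic_path, "policy": policy})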
For a further exploration of common logging export scenarios using Pub/Sub, refer to Design patterns for exporting to Cloud Logging: logging export scenarios.
Cloud Logging
Logs buckets are Cloud Logging storage containers in your Google Cloud projects that hold your logs data. You can create logs sinks to route all, or just a subset, of your logs to any logs bucket. This flexibility allows you to choose which Cloud project your logs are stored in and what other logs are stored with them.
For instructions on creating and then listing the logs buckets associated with your Google Cloud project, see Managing buckets.
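When a sink's destination is a Cloud Logging log bucket, possibly in a different Cloud project, the destination string identifies the bucket through the logging.googleapis.com service. The following is a minimal Python sketch, assuming the google-cloud-logging client library and hypothetical project and bucket names:

from google.cloud import logging

client = logging.Client(project="source-project-id")

sink = client.sink(
    "route-to-central-bucket",            # hypothetical sink name
    filter_='resource.type="gce_instance"',
    destination=(
        "logging.googleapis.com/projects/central-project-id/"
        "locations/global/buckets/my-central-bucket"
    ),
)
sink.create()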
Log entries organization
Logging log entries are objects of type LogEntry.
Log entries with the same log type, referred to as `[LOG_ID]` in the LogEntry reference, usually have the same format. The following samples show three log types:
syslog
The Compute Engine `syslog` is a custom log type produced by the logging agent, `google-fluentd`, which runs on virtual machine instances:
{
logName: "projects/my-gcp-project-id/logs/syslog",
timestamp: "2015-01-13T19:17:01Z",
resource: {
type: "gce_instance",
labels: {
instance_id: "12345",
zone: "us-central1-a",
project_id: "my-gcp-project-id"
}
},
insertId: "abcde12345",
textPayload: "Jan 13 19:17:01 my-gce-instance /USR/SBIN/CRON[29980]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)"
}
request_log
The App Engine `request_log` has log entries containing `protoPayload` fields, which hold objects of type `RequestLog`:
{
logName: "projects/my-gcp-project-id/logs/appengine.googleapis.com%2Frequest_log",
timestamp: "2015-01-13T19:00:39.796169Z",
resource: {
type: "gae_app",
labels: {
module_id: "default",
zone: "us6",
project_id: "my-gcp-project-id",
version_id: "20150925t173233"
}
}
httpRequest: {
status: 200
}
insertId: "abcde12345",
operation: {
id: "abc123",
producer: "appengine.googleapis.com/request_id",
first: true,
last: true
}
protoPayload: {
@type: "type.googleapis.com/google.appengine.logging.v1.RequestLog"
versionId: "20150925t173233",
status: 200,
startTime: "2017-01-13T19:00:39.796169Z",
# ...
appId: "s~my-gcp-project-id",
appEngineRelease: "1.9.17",
}
}
activity
The `activity` log is an Admin Activity audit log. Its payload is a JSON representation of the `AuditLog` type:
{
logName: "projects/my-gcp-project-id/logs/cloudaudit.googleapis.com%2Factivity"
timestamp: "2017-04-22T13:41:32.245Z"
severity: "NOTICE"
resource: {
type: "gce_instance"
labels: {
instance_id: "2403273232180765234"
zone: "us-central1-b"
project_id: "my-gcp-project-id"
}
}
insertId: "54DC1882F4B49.A4996C2.6A02F4C1"
operation: {
id: "operation-1492868454262-54dc185e9a4f0-249fe233-f73d472a"
producer: "compute.googleapis.com"
last: true
}
protoPayload: {
@type: "type.googleapis.com/google.cloud.audit.AuditLog"
authenticationInfo: {
principalEmail: "649517127304@cloudservices.gserviceaccount.com"
}
requestMetadata: {…}
serviceName: "compute.googleapis.com"
methodName: "v1.compute.instances.delete"
resourceName: "projects/my-gcp-project-id/zones/us-central1-b/instances/abc123"
}
}
Late-arriving log entries
Exported log entries are saved to Cloud Storage buckets in hourly batches. It might take from 2 to 3 hours before the first entries begin to appear. Exported log file shards with the suffix `An` ("Append") hold log entries that arrived late.
If the export destination experiences an outage, then Cloud Logging buffers the data until the outage is over.
App Engine log entries
App Engine combines multiple sub-entries of type `google.appengine.logging.v1.LogLine` (also called AppLog or AppLogLine) under a primary log entry of type `google.appengine.logging.v1.RequestLog` for the request that causes the log activity. The log lines each have a "request ID" that identifies the primary entry. The Logs Explorer displays the log lines with the request log entry. Logging attempts to put all the log lines into the batch with the original request, even if their timestamps would place them in the next batch. If that isn't possible, the request log entry might be missing some log lines, and there might be "orphan" log lines without a request in the next batch. If this possibility is important to you, be prepared to reconnect the pieces of the request when you process your logs.
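How you reconnect those pieces depends on your pipeline, but the following hedged Python sketch shows one approach based on the `operation.id` field shown in the request_log sample above (its producer is `appengine.googleapis.com/request_id`): group exported entries by request ID so that late or orphaned lines end up alongside their primary entry.

from collections import defaultdict

def group_by_request(entries):
    """Group exported App Engine entries (as dicts) by their request ID."""
    by_request = defaultdict(list)
    for entry in entries:
        operation = entry.get("operation", {})
        if operation.get("producer") == "appengine.googleapis.com/request_id":
            key = operation.get("id")      # same request, possibly split across batches
        else:
            key = entry.get("insertId")    # unrelated entries keep their own key
        by_request[key].append(entry)
    return by_request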
Pricing
Exported logs don't incur Cloud Logging charges, but destination charges might apply. For details, review the pricing page for the destination product.
Note also that if you send and then exclude your Virtual Private Cloud flow logs from Cloud Logging, VPC flow log generation charges apply in addition to the destination charges.