View logs in sink destinations


This document explains how you can find log entries that you routed from Cloud Logging to supported destinations:

Destination | Routing frequency
--- | ---
Cloud Storage buckets | Hourly batches
BigQuery tables | Near real-time
Pub/Sub topics | Near real-time
Cloud Logging buckets | Near real-time
Third-party destinations (Splunk) | Near real-time

For a conceptual discussion of sinks, see Routing and storage overview: Sinks.

For instructions on how to route your logs, see Configure and manage sinks.

Cloud Storage

To view your routed logs in Cloud Storage, do the following:

  1. Go to Cloud Storage Browser in the Google Cloud console:

    Go to Cloud Storage browser

  2. Select the Cloud Storage bucket you are using as your routing destination.

For details about how logs are organized in the Cloud Storage bucket, see Cloud Storage organization in this document.

Routing frequency

Log entries are saved to Cloud Storage buckets in hourly batches. It might take from 2 to 3 hours before the first entries begin to appear.

Logs organization

When you route logs to a Cloud Storage bucket, Logging writes a set of files to the bucket.

The files are organized in directory hierarchies by log type and date. The log type, referred to as [LOG_ID] in the LogEntry reference, can be a simple name like syslog or a compound name like compute.googleapis.com/activity. If these logs were stored in a bucket named my-gcs-bucket, then the directories would be named as in the following example:

  my-gcs-bucket/syslog/YYYY/MM/DD/
  my-gcs-bucket/compute.googleapis.com/activity/YYYY/MM/DD/
A single Cloud Storage bucket can contain logs from multiple resource types. Each file is approximately 3.5 GiB.

Logging doesn't guarantee deduplication of log entries from sinks containing identical or overlapping queries; log entries from those sinks might be written multiple times to a Cloud Storage bucket.
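Because deduplication isn't guaranteed, a consumer reading the bucket can deduplicate entries itself, since insertId together with timestamp identifies a log entry. A minimal Python sketch (the helper name is ours, not part of any Logging API):

```python
def dedupe_entries(entries):
    """Drop duplicate log entries, keyed on (insertId, timestamp).

    Two sinks with overlapping queries can write the same entry to the
    bucket twice; insertId plus timestamp identifies a log entry.
    """
    seen = set()
    unique = []
    for entry in entries:
        key = (entry.get("insertId"), entry.get("timestamp"))
        if key not in seen:
            seen.add(key)
            unique.append(entry)
    return unique

entries = [
    {"insertId": "abc", "timestamp": "2015-01-13T19:17:01Z"},
    {"insertId": "abc", "timestamp": "2015-01-13T19:17:01Z"},  # duplicate
    {"insertId": "def", "timestamp": "2015-01-13T19:17:02Z"},
]
print(len(dedupe_entries(entries)))  # 2
```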

The leaf directories (DD/) contain multiple files, each of which holds the routed log entries for a time period specified in the file name. The files are sharded and their names end in a shard number, Sn or An (n=0, 1, 2, ...). For example, here are two files that might be stored within the directory my-gcs-bucket/syslog/2015/01/13/:

  08:00:00_08:59:59_S0.json
  08:00:00_08:59:59_S1.json
These two files together contain the syslog log entries for all instances during the hour beginning 08:00:00 UTC and ending 08:59:59 UTC. The log entry timestamps are expressed in UTC (Coordinated Universal Time).

Log entries that arrive with a receiveTimestamp within the 60-minute aligned window of their timestamp are written to main shard files. For example, a log entry with a timestamp of 08:00:00 and a receiveTimestamp of 08:10:00 is stored in the main shard file.

These files include a numbered main shard in the suffix: _Sn.json.

Log entries that arrive with a timestamp in a different 60-minute aligned window than their receiveTimestamp are written to addendum shard files. For example, a log entry with a timestamp of 08:00:00 and a receiveTimestamp of 09:10:00 is stored in an addendum shard file.

These files include a numbered addendum shard with the suffix: _An:Unix_timestamp.json.
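The main-versus-addendum rule comes down to whether timestamp and receiveTimestamp fall into the same 60-minute aligned window. A small illustrative check in Python (the function name is ours, not part of any Logging API):

```python
from datetime import datetime

def shard_kind(timestamp: str, receive_timestamp: str) -> str:
    """Return 'main' when both timestamps fall into the same 60-minute
    aligned (top-of-hour) window, else 'addendum'."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    ts = datetime.strptime(timestamp, fmt)
    rts = datetime.strptime(receive_timestamp, fmt)
    # Truncate both timestamps to the top of the hour and compare.
    same_window = ts.replace(minute=0, second=0) == rts.replace(minute=0, second=0)
    return "main" if same_window else "addendum"

print(shard_kind("2021-03-25T08:00:00Z", "2021-03-25T08:10:00Z"))  # main
print(shard_kind("2021-03-25T08:50:00Z", "2021-03-25T09:10:00Z"))  # addendum
```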

For example, a log entry that has a timestamp between 08:00:00 and 08:59:59 but a receiveTimestamp in a different 60-minute aligned window is written to a file with the _An:Unix_timestamp.json suffix, where the Unix timestamp identifies the time the file was routed to Cloud Storage. If a log entry had a timestamp of 08:50:00 and a receiveTimestamp of 09:10:00, and was routed at 09:15:00 on March 25, 2021, the addendum file would be written as follows:

  08:00:00_08:59:59_A0:1616663700.json
To get all the log entries, you must read all the shards for each time period—in this case, file shards 0 and 1. The number of file shards written can change for every time period depending on the volume of log entries.
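A consumer that reads a full hour back therefore has to collect every shard file for that period. The shard suffixes described above can be parsed with a sketch like the following (the regex is ours, built from the naming pattern this section describes):

```python
import re

# Shard file names look like 08:00:00_08:59:59_S0.json (main) or
# 08:00:00_08:59:59_A0:1616663700.json (addendum).
SHARD_RE = re.compile(
    r"^(?P<start>\d{2}:\d{2}:\d{2})_(?P<end>\d{2}:\d{2}:\d{2})"
    r"_(?P<kind>[SA])(?P<shard>\d+)(?::(?P<routed_at>\d+))?\.json$"
)

def parse_shard(filename: str) -> dict:
    """Split a shard file name into its time period, kind, and number."""
    match = SHARD_RE.match(filename)
    if match is None:
        raise ValueError(f"not a shard file name: {filename}")
    return match.groupdict()

print(parse_shard("08:00:00_08:59:59_S1.json")["shard"])  # 1
print(parse_shard("08:00:00_08:59:59_A0:1616663700.json")["kind"])  # A
```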

Within the individual sharded files, log entries are stored as a list of LogEntry objects. For an example of a syslog entry, see LogEntry type in this document.

Note that the sort order of log entries within the files is neither uniform nor otherwise guaranteed.


For examples of filters for routing logs to Cloud Storage, see Sample queries.


BigQuery

To view your routed logs in BigQuery, do the following:

  1. Go to the BigQuery page in the Google Cloud console:

    Go to BigQuery

  2. Select the dataset used as your sink's destination.

  3. Select one of the dataset's tables. The log entries are visible on the Details tab, or you can query the table to return your data.

To learn how the tables are organized, see Table organization.

To learn how the routed log entry fields are named, see BigQuery schema for logs.

Routing frequency

When Logging creates a new table while routing log entries to BigQuery, it might take several minutes before the first log entries appear in the new table. Subsequent log entries usually appear within a minute.

Table organization

When you route logs to a BigQuery dataset, Logging creates dated tables to hold the routed log entries. Log entries are placed in tables whose names are based on the entries' log names and timestamps.¹ The following table shows examples of how log names and timestamps are mapped to table names:

Log name | Log entry timestamp¹ | BigQuery table name
--- | --- | ---
syslog | 2017-05-23T18:19:22.135Z | syslog_20170523
apache-access | 2017-01-01T00:00:00.000Z | apache_access_20170101
compute.googleapis.com/activity_log | 2017-12-31T23:59:59.999Z | compute_googleapis_com_activity_log_20171231

¹ The log entry timestamps are expressed in UTC (Coordinated Universal Time).
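The naming pattern in the table can be approximated by replacing characters that aren't allowed in BigQuery table names with underscores and appending the entry's UTC date. A rough Python sketch of the pattern, not Logging's documented algorithm:

```python
import re
from datetime import datetime

def bq_table_name(log_id: str, timestamp: str) -> str:
    """Approximate the dated BigQuery table name for a routed log entry."""
    # Characters such as '.', '/', and '-' become underscores.
    sanitized = re.sub(r"[^A-Za-z0-9_]", "_", log_id)
    # Tables are dated by the entry's UTC timestamp.
    date = datetime.strptime(timestamp, "%Y-%m-%dT%H:%M:%S.%fZ")
    return f"{sanitized}_{date:%Y%m%d}"

print(bq_table_name("syslog", "2017-05-23T18:19:22.135Z"))         # syslog_20170523
print(bq_table_name("apache-access", "2017-01-01T00:00:00.000Z"))  # apache_access_20170101
```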

Schemas and fields

BigQuery table schemas for routed logs are based on the structure of the LogEntry type and the contents of the log payloads. You can view the table schema by selecting a table with routed log entries in the BigQuery UI.

The BigQuery table schema used to represent complex log entry payloads can be confusing and, in the case of routed audit logs, some special naming rules are used. For more information, see BigQuery schema for logs.


For examples of queries for routing logs to BigQuery, see Sample queries.

For more information on BigQuery query syntax, see Query reference. Especially useful are table wildcard functions, which allow you to query across multiple tables, and the flatten operator, which allows you to display data from repeated fields.

Sample Compute Engine query

The following BigQuery query retrieves log entries from multiple days and multiple log types:

  • The query searches the last three days of the logs syslog and apache-access. The query was made on 23-Feb-2015 and it covers all log entries received on 21-Feb and 22-Feb, plus log entries received on 23-Feb up to the time the query was issued.

  • The query retrieves results for a single Compute Engine instance, 1554300700000000000.

SELECT
  timestamp AS Time,
  logName as Log,
  textPayload AS Message
FROM
  -- my_bq_dataset is an example dataset name; substitute your own.
  (TABLE_DATE_RANGE(my_bq_dataset.syslog_,
    DATE_ADD(CURRENT_TIMESTAMP(), -2, 'DAY'), CURRENT_TIMESTAMP())),
  (TABLE_DATE_RANGE(my_bq_dataset.apache_access_,
    DATE_ADD(CURRENT_TIMESTAMP(), -2, 'DAY'), CURRENT_TIMESTAMP()))
WHERE
  resource.type == 'gce_instance'
  AND resource.labels.instance_id == '1554300700000000000'
ORDER BY time;

Here are some example output rows:

Row | Time                    | Log                                         | Message
--- | ----------------------- | ------------------------------------------- | ----------------------------------------------------------------------------------------------------------------
 5  | 2015-02-21 03:40:14 UTC | projects/project-id/logs/syslog             | Feb 21 03:40:14 my-gce-instance collectd[24281]: uc_update: Value too old: name = 15543007601548826368/df-tmpfs/df_complex-used; value time = 1424490014.269; last cache update = 1424490014.269;
 6  | 2015-02-21 04:17:01 UTC | projects/project-id/logs/syslog             | Feb 21 04:17:01 my-gce-instance /USR/SBIN/CRON[8082]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
 7  | 2015-02-21 04:49:58 UTC | projects/project-id/logs/apache-access      | - - [21/Feb/2015:04:49:58 +0000] "GET / HTTP/1.0" 200 536 "-" "masscan/1.0 ("
 8  | 2015-02-21 05:17:01 UTC | projects/project-id/logs/syslog             | Feb 21 05:17:01 my-gce-instance /USR/SBIN/CRON[9104]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
 9  | 2015-02-21 05:30:50 UTC | projects/project-id/logs/apache-access      | - - [21/Feb/2015:05:30:50 +0000] "GET /tmUnblock.cgi HTTP/1.1" 400 541 "-" "-"

Sample App Engine query

The following BigQuery query retrieves unsuccessful App Engine requests from the last month:

SELECT
  timestamp AS Time,
  protoPayload.host AS Host,
  protoPayload.status AS Status,
  protoPayload.resource AS Path
FROM
  -- my_bq_dataset is an example dataset name; substitute your own.
  (TABLE_DATE_RANGE(my_bq_dataset.appengine_googleapis_com_request_log_,
    DATE_ADD(CURRENT_TIMESTAMP(), -1, 'MONTH'), CURRENT_TIMESTAMP()))
WHERE
  protoPayload.status != 200
ORDER BY time;

Here are some of the results:

Row | Time                    | Host                                  | Status | Path
--- | ----------------------- | ------------------------------------- | ------ | ------
 6  | 2015-02-12 19:35:02 UTC | |    404 | /foo?thud=3
 7  | 2015-02-12 19:35:21 UTC | |    404 | /foo
 8  | 2015-02-16 20:17:19 UTC |         |    404 | /favicon.ico
 9  | 2015-02-16 20:17:34 UTC |         |    404 | /foo?thud=%22what???%22


Pub/Sub

We recommend using Pub/Sub for integrating Cloud Logging logs with third-party software.

Logs routed to Pub/Sub are generally available within seconds, with 99% of logs available in less than 60 seconds.

To view your routed logs as they are streamed through Pub/Sub, do the following:

  1. Go to the Pub/Sub page in the Google Cloud console:

    Go to Pub/Sub

  2. Find or create a subscription to the topic used in the log sink, and pull a log entry from it. You might have to wait for a new log entry to be published.

For details on how logs are organized in Pub/Sub, see Logs organization in this document.

Routing frequency

When you route logs to a Pub/Sub topic, Logging publishes each log entry as a Pub/Sub message as soon as Logging receives that log entry.

Logs organization

The data field of each message is a base64-encoded LogEntry object. As an example, a Pub/Sub subscriber might pull the following object from a topic that is receiving log entries. The object shown contains a list with a single message, although Pub/Sub might return several messages if several log entries are available. The data value (about 600 characters) and the ackId value (about 200 characters) have been shortened to make the example easier to read:

{
 "receivedMessages": [
  {
   "ackId": "...",
   "message": {
    "data": "eyJtZXRhZGF0YSI6eyJzZXZ0eSI6Il...Dk0OTU2G9nIjoiaGVsbG93b3JsZC5sb2cifQ==",
    "attributes": {
     "": "instance",
     "": "123456"
    },
    "messageId": "43913662360"
   }
  }
 ]
}

If you decode the data field and format it, you get the following LogEntry object:

{
  "log": "helloworld.log",
  "insertId": "2015-04-15|11:41:00.577447-07||-1694494956",
  "textPayload": "Wed Apr 15 20:40:51 CEST 2015 Hello, world!",
  "timestamp": "2015-04-15T18:40:56Z",
  "labels": {
    "\/resource_type": "instance",
    "\/resource_id": "123456"
  },
  "severity": "WARNING"
}
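The decoding step, base64 then JSON, looks like this in Python; the payload below is a made-up round-trip example, not a real routed entry:

```python
import base64
import json

# A sample LogEntry-like payload, encoded the way Pub/Sub delivers it
# in the message's data field (this payload is invented for the example).
entry = {"textPayload": "Hello, world!", "severity": "WARNING"}
data = base64.b64encode(json.dumps(entry).encode("utf-8")).decode("ascii")

# A subscriber reverses the encoding: base64-decode, then parse JSON.
decoded = json.loads(base64.b64decode(data))
print(decoded["severity"])  # WARNING
```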

Third-party integration with Pub/Sub

Logging supports logging integration with third parties, such as Splunk. For a current list of integrations, see Partners for Google Cloud's operations suite integrations.

You route your logs through a Pub/Sub topic and the third party receives your logs by subscribing to the same topic.

To perform the integration, expect to do something like the following:

  1. Obtain from the third party a Google Cloud service account name created from their Google Cloud project. You use this name to give the third party permission to receive your logs.

  2. In your project containing the logs, make sure that the Pub/Sub API is enabled.

  3. If the API isn't enabled, enable it:

    Enable the API

  4. Create a Pub/Sub topic. You can create a topic when you configure a log sink, or by following these steps:

    1. Go to the Pub/Sub topic list.
    2. Select Create topic and enter a topic name. For example, projects/my-project-id/topics/my-pubsub-topic. You will route your logs to this topic.

      Each message sent to the topic includes the timestamp of the routed log entry in the Pub/Sub message attributes; for example:

      "attributes": {
        "": "2018-10-01T00:00:00Z"
      }
    3. Click Create.

    4. Authorize Logging to route logs to the topic. For instructions, see Setting permissions for Pub/Sub.

  5. Authorize the third party to subscribe to your topic:

    1. Stay in the Pub/Sub topic list for your project in the Google Cloud console.
    2. Select your new topic.
    3. Select Permissions.
    4. Enter the third party's service account name.
    5. In the Select a role menu, select Pub/Sub Subscriber.
    6. Click Add.
  6. Provide the third party with the name of your Pub/Sub topic; for example, projects/my-project-number/topics/my-pubsub-topic. They should subscribe to the topic before you start routing.

  7. Start routing the logs once your third party has subscribed to the topic:

    1. In your project containing the logs you want to route, click Create Export above the search-query box. This opens the Edit Export panel.
    2. Enter a Sink Name.
    3. In the Sink Service menu, select Cloud Pub/Sub.
    4. In the Sink Destination menu, select the Pub/Sub topic to which the third party is subscribed.
    5. Select Create Sink.
    6. A dialog with the message Sink created appears. This message indicates that your sink was successfully created with permissions to write future matching logs to the destination you selected.

Your third party should begin receiving the log entries right away.

For an exploration of common logs routing scenarios using Pub/Sub, see Design patterns for routing to Cloud Logging: logging export scenarios.

Cloud Logging

Logs buckets are Cloud Logging storage containers in your Google Cloud projects that hold your logs data. You can create logs sinks to route all, or just a subset, of your logs to any bucket in Cloud Logging. This flexibility allows you to choose which Cloud project your logs are stored in and what other logs are stored with them.

For instructions on creating and then listing the logs buckets associated with your Cloud project, see Configure and manage log buckets.

Log entries organization

Logging log entries are objects of type LogEntry.

Log entries with the same log type, referred to as [LOG_ID] in the LogEntry reference, usually have the same format. The following are sample log entries:


The Compute Engine syslog is a custom log type produced by the logging agent, google-fluentd, which runs on virtual machine instances:

{
  logName: "projects/my-gcp-project-id/logs/syslog",
  timestamp: "2015-01-13T19:17:01Z",
  resource: {
    type: "gce_instance",
    labels: {
      instance_id: "12345",
      zone: "us-central1-a",
      project_id: "my-gcp-project-id"
    }
  },
  insertId: "abcde12345",
  textPayload: "Jan 13 19:17:01 my-gce-instance /USR/SBIN/CRON[29980]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)"
}


The App Engine request_log has log entries containing protoPayload fields which hold objects of type RequestLog:

{
  logName: "projects/my-gcp-project-id/logs/",
  timestamp: "2015-01-13T19:00:39.796169Z",
  resource: {
    type: "gae_app",
    labels: {
      module_id: "default",
      zone: "us6",
      project_id: "my-gcp-project-id",
      version_id: "20150925t173233"
    }
  },
  httpRequest: {
    status: 200
  },
  insertId: "abcde12345",
  operation: {
    id: "abc123",
    producer: "",
    first: true,
    last: true
  },
  protoPayload: {
    @type: "",
    versionId: "20150925t173233",
    status: 200,
    startTime: "2017-01-13T19:00:39.796169Z",
    # ...
    appId: "s~my-gcp-project-id",
    appEngineRelease: "1.9.17",
    # ...
  }
}


The activity log is an Admin Activity audit log. Its payload is a JSON representation of the AuditLog type:

{
 logName: "projects/my-gcp-project-id/logs/"
 timestamp: "2017-04-22T13:41:32.245Z"
 severity: "NOTICE"
 resource: {
  type: "gce_instance"
  labels: {
   instance_id: "2403273232180765234"
   zone: "us-central1-b"
   project_id: "my-gcp-project-id"
  }
 }
 insertId: "54DC1882F4B49.A4996C2.6A02F4C1"
 operation: {
  id: "operation-1492868454262-54dc185e9a4f0-249fe233-f73d472a"
  producer: ""
  last: true
 }
 protoPayload: {
  @type: ""
  authenticationInfo: {
   principalEmail: ""
  }
  requestMetadata: {…}
  serviceName: ""
  methodName: "v1.compute.instances.delete"
  resourceName: "projects/my-gcp-project-id/zones/us-central1-b/instances/abc123"
 }
}


Splunk

Logging supports integration with Splunk by routing logs through a Pub/Sub topic. For information on how to create a Pub/Sub topic and authorize Splunk to subscribe to the topic, see Third-party integration with Pub/Sub.

Late-arriving log entries

Routed log entries are saved to Cloud Storage buckets in hourly batches. It might take from 2 to 3 hours before the first entries begin to appear. Routed log file shards with the suffix An (addendum) hold log entries that arrived late.

If the destination experiences an outage, then Cloud Logging buffers the data until the outage is over.

If there aren't any logs in your sink's destination, check the export system metrics. The export system metrics indicate how many log entries are routed and how many are dropped due to errors. If the export system metrics indicate that no log entries were routed to the destination, check your filter to verify that log entries matching your filter have recently arrived in Logging:

Go to Log Router

App Engine log entries

App Engine combines multiple sub-entries of type google.appengine.logging.v1.LogLine (also called AppLog or AppLogLine) under a primary log entry of type google.appengine.logging.v1.RequestLog for the request that causes the log activity. The log lines each have a "request ID" that identifies the primary entry. The Logs Explorer displays the log lines with the request log entry. Logging attempts to put all the log lines into the batch with the original request, even if their timestamps would place them in the next batch. If that isn't possible, the request log entry might be missing some log lines, and there might be "orphan" log lines without a request in the next batch. If this possibility is important to you, be prepared to reconnect the pieces of the request when you process your logs.
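If you process routed App Engine logs yourself, reattaching orphan log lines to their request entries might be sketched like this; the field names requestId, lines, and message are illustrative, not the exact RequestLog schema:

```python
def attach_orphan_lines(requests, orphan_lines):
    """Reattach 'orphan' log lines from a later batch to their request
    entries, matching on a shared request ID; returns unmatched lines."""
    by_id = {req["requestId"]: req for req in requests}
    unmatched = []
    for line in orphan_lines:
        req = by_id.get(line["requestId"])
        if req is not None:
            req.setdefault("lines", []).append(line)
        else:
            unmatched.append(line)
    return unmatched

requests = [{"requestId": "r1", "lines": []}]
orphans = [
    {"requestId": "r1", "message": "late line"},  # parent request is present
    {"requestId": "r2", "message": "no parent"},  # still orphaned
]
leftover = attach_orphan_lines(requests, orphans)
print(len(requests[0]["lines"]), len(leftover))  # 1 1
```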


Troubleshooting

If logs seem to be missing from your sink's destination or you otherwise suspect that your sink isn't properly routing logs, then see Troubleshoot routing and sinks.


Pricing

Cloud Logging doesn't charge to route logs, but destination charges might apply. For details, see the pricing details for the appropriate service: Cloud Storage, BigQuery, or Pub/Sub.

Note also that if you send and then exclude your Virtual Private Cloud flow logs from Cloud Logging, VPC flow log generation charges apply in addition to the destination charges.