Using Exported Logs

This guide explains how you can find your exported log entries in Cloud Storage, BigQuery, and Cloud Pub/Sub.

For an overview of exporting logs, see Overview of Logs Export.

To learn how to configure export sinks, see Exporting Logs.

Cloud Storage

If you are exporting log entries to Cloud Storage, go to the Cloud Storage browser in the GCP Console. Select the bucket you are using for logs export. See Organization for how logs are organized in the bucket.


If you don't see any exported logs, visit the Logs Viewer. Check that your export sink is running properly and that log entries matching your export filter have recently arrived in Stackdriver Logging.

Go to the Logs Viewer page

Log entries are saved to Cloud Storage buckets in hourly batches. It might take from 2 to 3 hours before the first entries begin to appear.


Organization

When you export logs to a Cloud Storage bucket, Stackdriver Logging writes a set of files to the bucket. The files are organized in directory hierarchies by log type and date. The log type can be a simple name like syslog or a compound name like compute.googleapis.com/activity_log. If these logs were stored in a bucket named my-gcs-bucket, then the directories would be named as in the following example:

my-gcs-bucket/syslog/YYYY/MM/DD/
my-gcs-bucket/compute.googleapis.com/activity_log/YYYY/MM/DD/

A single bucket can contain logs from multiple resource types.

Stackdriver Logging does not guarantee deduplication of log entries from sinks containing identical or overlapping filters; log entries from those sinks might be written multiple times to a Cloud Storage bucket.

The leaf directories (DD/) contain multiple files, each of which holds the exported log entries for a time period specified in the file name. The files are sharded and their names end in a shard number, Sn or An (n=0, 1, 2, ...). For example, here are two files that might be stored within the directory my-gcs-bucket/syslog/2015/01/13/:

08:00:00_08:59:59_S0.json
08:00:00_08:59:59_S1.json

These two files together contain the syslog log entries for all instances during the hour beginning 0800 UTC. The log entry timestamps are expressed in UTC (Coordinated Universal Time).

To get all the log entries, you must read all the shards for each time period—in this case, file shards 0 and 1. The number of file shards written can change for every time period depending on the volume of log entries.
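The shard-grouping step above can be sketched in Python. The HH:MM:SS_HH:MM:SS_Sn.json object-name layout used here is an assumption for illustration; check the actual object names in your own bucket:

```python
import re
from collections import defaultdict

# Hypothetical object names from a logs bucket; the
# "HH:MM:SS_HH:MM:SS_Sn.json" layout is assumed for illustration.
object_names = [
    "syslog/2015/01/13/08:00:00_08:59:59_S0.json",
    "syslog/2015/01/13/08:00:00_08:59:59_S1.json",
    "syslog/2015/01/13/09:00:00_09:59:59_S0.json",
]

def group_shards(names):
    """Group shard file names by log type, date, and time period.
    Reading every file in a group yields all entries for that period."""
    shard_re = re.compile(r"^(?P<period>.+)_[SA]\d+\.json$")
    groups = defaultdict(list)
    for name in names:
        match = shard_re.match(name)
        if match:
            groups[match.group("period")].append(name)
    return dict(groups)

groups = group_shards(object_names)
# The 0800 UTC hour has two shards; both must be read for full coverage.
```

In a real pipeline the names would come from listing the bucket's objects, for example with the Cloud Storage client libraries.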

Within the individual sharded files, log entries are stored as a list of LogEntry objects. For an example syslog entry, see Log entry objects on this page.

Note that sort order of log entries within the files is not uniform or otherwise guaranteed.


BigQuery

To see your exported logs in BigQuery, do the following:

  1. Go to the BigQuery Web UI:

    Go to the BigQuery UI

  2. Select the dataset used as your sink's destination.

  3. Select one of the dataset's tables. The log entries are visible on the Details tab, or you can query the table to return your data.

For more information, see Table organization to learn how the tables are named, and Schemas and fields to learn how the exported log entry fields are named in BigQuery.


If you don't see any exported logs, visit the Logs Viewer. Check that your export sink is running properly and that log entries matching your export filter have recently arrived in Stackdriver Logging.

Go to the Logs Viewer page

Log entries are saved to BigQuery in batches. It might take several minutes before the first entries begin to appear.

Table organization

When you export logs to a BigQuery dataset, Stackdriver Logging creates dated tables to hold the exported log entries. Log entries are placed in tables whose names are based on the entries' log names and timestamps.¹ The following table shows examples of how log names and timestamps are mapped to table names:

Log name                            | Log entry timestamp¹     | BigQuery table name
----------------------------------- | ------------------------ | --------------------------------------------
syslog                              | 2017-05-23T18:19:22.135Z | syslog_20170523
apache-access                       | 2017-01-01T00:00:00.000Z | apache_access_20170101
compute.googleapis.com/activity_log | 2017-12-31T23:59:59.999Z | compute_googleapis_com_activity_log_20171231

¹ The log entry timestamps are expressed in UTC (Coordinated Universal Time).
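The mapping above can be sketched as a small Python helper. This is an illustration of the apparent rule (non-alphanumeric characters become underscores and the entry's UTC date is appended), not an official API; verify against the table names in your own dataset:

```python
from datetime import datetime

def bq_table_name(log_id, timestamp):
    """Sketch: map a log ID and an RFC 3339 UTC timestamp to the
    dated BigQuery table name."""
    # Non-alphanumeric characters (periods, slashes, hyphens) become "_".
    sanitized = "".join(c if c.isalnum() else "_" for c in log_id)
    date = datetime.strptime(timestamp, "%Y-%m-%dT%H:%M:%S.%fZ")
    return "%s_%s" % (sanitized, date.strftime("%Y%m%d"))

print(bq_table_name("syslog", "2017-05-23T18:19:22.135Z"))  # syslog_20170523
```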

Schemas and fields

BigQuery table schemas for exported logs are based on the structure of the LogEntry type and the contents of the log payloads. You can see the table schema by selecting a table with exported log entries in the BigQuery Web UI.

There are a few naming conventions that apply to the log entry fields:

  • For log entry fields that are part of the LogEntry type, the corresponding BigQuery field names are exactly the same as the log entry fields.
  • For any user-supplied fields, letter case is normalized to lowercase, but naming is otherwise preserved.
    • For fields in structured payloads, as long as the @type specifier is not present, letter case is normalized to lowercase, but naming is otherwise preserved.

The following examples show how these naming conventions are applied:

Log entry field               | LogEntry type mapping          | BigQuery field name
----------------------------- | ------------------------------ | ------------------------------
insertId                      | insertId                       | insertId
textPayload                   | textPayload                    | textPayload
httpRequest.status            | httpRequest.status             | httpRequest.status
httpRequest.requestMethod.GET | httpRequest.requestMethod.[ABC] | httpRequest.requestMethod.get
resource.labels.moduleid      | resource.labels.[ABC]          | resource.labels.moduleid
jsonPayload.MESSAGE           | jsonPayload.[ABC]              | jsonPayload.message
jsonPayload.myField.mySubfield | jsonPayload.[ABC].[XYZ]       | jsonPayload.myfield.mysubfield
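A sketch of these conventions as a Python helper. The LOG_ENTRY_FIELDS set is a small, illustrative subset of the LogEntry schema, not the complete list:

```python
# Segments defined by the LogEntry type keep their spelling; user-supplied
# segments (payload fields, resource labels, and so on) are lowercased.
# This set is an illustrative subset, not the full LogEntry schema.
LOG_ENTRY_FIELDS = {
    "insertId", "textPayload", "jsonPayload", "protoPayload",
    "httpRequest", "httpRequest.status", "httpRequest.requestMethod",
    "resource", "resource.labels",
}

def bq_field_name(path):
    """Sketch: map a dotted log entry field path to its BigQuery name."""
    segments = path.split(".")
    out = []
    for i, segment in enumerate(segments):
        prefix = ".".join(segments[: i + 1])
        # Preserve LogEntry-defined fields; lowercase user-supplied ones.
        out.append(segment if prefix in LOG_ENTRY_FIELDS else segment.lower())
    return ".".join(out)

print(bq_field_name("httpRequest.requestMethod.GET"))
# httpRequest.requestMethod.get
```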

The mapping of structured payload fields to BigQuery field names is more complicated when the structured field contains a @type specifier. This is discussed in the following section.

Fields with @type

This section discusses special BigQuery schema field names for log entries whose payloads contain type specifiers (@type fields). This includes all exported audit log entries held in BigQuery. For example, this section explains why an audit log entry's protoPayload field might be mapped to the BigQuery schema field protopayload_auditlog.

If you are not concerned about these naming rules, then skip ahead to Queries. You can always inspect your BigQuery table schema to see what names are actually being generated.

Schema naming rules

Payloads in log entries can contain structured data, and that structured data can have nested structured fields. Any structured field can include an optional type specifier in the following format:

@type: "[TYPE]"

Structured fields that have type specifiers are customarily given BigQuery field names that have a version of [TYPE] appended to their field name.

For example, the following table shows the mapping of the top-level structured payload fields to BigQuery field names:

Payload      | Payload @type | Payload field | BigQuery field name
------------ | ------------- | ------------- | -------------------------------
jsonPayload  | (none)        | statusCode    | jsonPayload.statusCode
jsonPayload  | abc.xyz       | statusCode    | jsonPayload_abc_xyz.statuscode
protoPayload | (none)        | statusCode    | protoPayload.statuscode
protoPayload | abc.xyz       | statusCode    | protopayload_abc_xyz.statuscode

If jsonPayload or protoPayload contains other structured fields, then those inner fields are mapped as follows:

  • If the nested structured field does not have a @type specifier, then its BigQuery field name is the same as the original field name, except that it is normalized to lowercase letters.
  • If the nested structured field does have a @type specifier, then its BigQuery field name has [TYPE] (with periods replaced by underscores) appended to the field name, and the result is normalized to lowercase letters.


This example shows how structured payload fields are named and used when exported to BigQuery.

Assume that a log entry's payload is structured like the following:

jsonPayload: {
  name_a: {
    sub_a: "A value"
  },
  name_b: {
    @type: "google.cloud.v1.SubType",
    sub_b: 22
  }
}

The mapping to BigQuery fields is as follows:

  • The fields jsonPayload and name_a are structured, but they do not have @type specifiers. Their BigQuery names are jsonPayload and name_a, respectively.

  • The fields sub_a and sub_b are not structured, so their BigQuery names are sub_a and sub_b, respectively.

  • The field name_b has a @type specifier whose [TYPE] is google.cloud.v1.SubType. Therefore, its BigQuery name is name_b_google_cloud_v1_subtype.
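The suffix rule used for name_b can be sketched as follows; the typed_field_name helper is hypothetical, and type_name stands for the [TYPE] portion of the specifier:

```python
def typed_field_name(field_name, type_name):
    """Sketch: BigQuery name for a structured field with a @type
    specifier. The respelled [TYPE] (periods become underscores,
    everything lowercased) is appended to the lowercased field name."""
    return "%s_%s" % (field_name.lower(), type_name.replace(".", "_").lower())

print(typed_field_name("name_b", "google.cloud.v1.SubType"))
# name_b_google_cloud_v1_subtype
```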

In summary, the following 5 BigQuery names are defined for the log entry's payload:

jsonPayload
jsonPayload.name_a
jsonPayload.name_a.sub_a
jsonPayload.name_b_google_cloud_v1_subtype
jsonPayload.name_b_google_cloud_v1_subtype.sub_b


Exceptions to the rule

There are two exceptions to the preceding rule for fields with type specifiers:

  • In App Engine request logs, the payload's name in exported logs in BigQuery is protoPayload, even though the payload has a type specifier. You can see this in the example App Engine logs query in Queries.

  • Certain audit log @type specifiers are shortened in BigQuery field names. This is discussed in the next section.

Audit log schema fields

The BigQuery field names for the following audit log fields are specially constructed to keep them from becoming too long. These fields all have type specifiers (@type fields), and they are given the names listed in the following table:

Field                    | BigQuery schema name
------------------------ | ----------------------------------------------------
protoPayload             | protopayload_auditlog
protoPayload.serviceData | protopayload_auditlog.servicedata_v1_bigquery
protoPayload.request     | protopayload_auditlog.request_[Vn]_[PROTO_NAME] (for example, protopayload_auditlog.request_v2_listlogsinksrequest)
protoPayload.response    | protopayload_auditlog.response_[Vn]_[PROTO_NAME] (for example, protopayload_auditlog.response_v2_listlogsinksresponse)


  • The serviceData naming rule is specific to audit logs that are generated by BigQuery and that are then exported to BigQuery. Those audit log entries contain a serviceData field whose @type specifier identifies BigQuery's audit data type.

  • The naming rules for the request and response fields assume that their type specifiers follow a common pattern:

    [MODULE].[Vn].[PROTO_NAME]

    For example, in a request field, the specifier could be:

    google.logging.v2.ListLogSinksRequest

    After replacing periods with underscores and changing to lower case, this results in the following customary field name:

    request_google_logging_v2_listlogsinksrequest

    The specially constructed schema field name is the following:

    request_v2_listlogsinksrequest

    If you encounter a type specifier with a different pattern, then look at some exported log entries to see how the exported field names are shortened.
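Under the assumption that the specifier follows the [MODULE].[Vn].[PROTO_NAME] pattern, the shortening can be sketched as a hypothetical helper:

```python
def shortened_name(field, type_name):
    """Sketch: special audit log naming for request/response fields.
    Only the version and the proto name survive in the BigQuery name."""
    # e.g. ["google", "logging", "v2", "ListLogSinksRequest"]
    parts = type_name.split(".")
    version, proto = parts[-2], parts[-1]
    return "%s_%s_%s" % (field, version.lower(), proto.lower())

print(shortened_name("request", "google.logging.v2.ListLogSinksRequest"))
# request_v2_listlogsinksrequest
```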


An audit log entry generated by BigQuery has a field with the following name:

protoPayload.serviceData.tableInsertRequest

If this log entry were then exported to BigQuery, how would the tableInsertRequest field be referenced? Before the name shortening, the corresponding exported field name would be:

protopayload_google_cloud_audit_auditlog.servicedata_google_cloud_bigquery_logging_v1_auditdata.tableinsertrequest

After the name shortening, the same field is referenced in BigQuery tables like this:

protopayload_auditlog.servicedata_v1_bigquery.tableinsertrequest

Queries

See the Query Reference for more information on BigQuery queries. Especially useful are Table wildcard functions, which let you query across multiple tables, and the FLATTEN operator, which lets you display data from repeated fields.

A sample Compute Engine logs query

The following BigQuery query retrieves log entries from multiple days and multiple log types:

  • The query searches the last three days of the logs syslog and apache-access. The query was made on 23-Feb-2015 and it covers all log entries received on 21-Feb and 22-Feb, plus log entries received on 23-Feb up to the time the query was issued.

  • The query retrieves results for a single Compute Engine instance, 1554300700000000000.

  • The query ignores traffic from the Stackdriver Monitoring endpoint health checker, Stackdriver_terminus_bot.

SELECT
  timestamp AS Time,
  logName AS Log,
  textPayload AS Message
FROM
  (TABLE_DATE_RANGE(my_bq_dataset.syslog_,
    DATE_ADD(CURRENT_TIMESTAMP(), -2, 'DAY'), CURRENT_TIMESTAMP())),
  (TABLE_DATE_RANGE(my_bq_dataset.apache_access_,
    DATE_ADD(CURRENT_TIMESTAMP(), -2, 'DAY'), CURRENT_TIMESTAMP()))
WHERE
  resource.type == 'gce_instance'
  AND resource.labels.instance_id == '1554300700000000000'
  AND NOT (textPayload CONTAINS 'Stackdriver_terminus_bot')
ORDER BY Time;

Here are some example output rows:

Row | Time                    | Log                                         | Message
--- | ----------------------- | ------------------------------------------- | ----------------------------------------------------------------------------------------------------------------
 5  | 2015-02-21 03:40:14 UTC | projects/project-id/logs/syslog             | Feb 21 03:40:14 my-gce-instance collectd[24281]: uc_update: Value too old: name = 15543007601548826368/df-tmpfs/df_complex-used; value time = 1424490014.269; last cache update = 1424490014.269;
 6  | 2015-02-21 04:17:01 UTC | projects/project-id/logs/syslog             | Feb 21 04:17:01 my-gce-instance /USR/SBIN/CRON[8082]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
 7  | 2015-02-21 04:49:58 UTC | projects/project-id/logs/apache-access      | - - [21/Feb/2015:04:49:58 +0000] "GET / HTTP/1.0" 200 536 "-" "masscan/1.0 ("
 8  | 2015-02-21 05:17:01 UTC | projects/project-id/logs/syslog             | Feb 21 05:17:01 my-gce-instance /USR/SBIN/CRON[9104]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
 9  | 2015-02-21 05:30:50 UTC | projects/project-id/logs/apache-access      | - - [21/Feb/2015:05:30:50 +0000] "GET /tmUnblock.cgi HTTP/1.1" 400 541 "-" "-"

A sample App Engine logs query

The following BigQuery query retrieves unsuccessful App Engine requests from the last month:

SELECT
  timestamp AS Time,
  protoPayload.host AS Host,
  protoPayload.status AS Status,
  protoPayload.resource AS Path
FROM
  (TABLE_DATE_RANGE(my_bq_dataset.appengine_googleapis_com_request_log_,
    DATE_ADD(CURRENT_TIMESTAMP(), -1, 'MONTH'), CURRENT_TIMESTAMP()))
WHERE
  protoPayload.status != 200
ORDER BY Time;

Here are some of the results:

Row | Time                    | Host                                  | Status | Path
--- | ----------------------- | ------------------------------------- | ------ | ------
 6  | 2015-02-12 19:35:02 UTC | |    404 | /foo?thud=3
 7  | 2015-02-12 19:35:21 UTC | |    404 | /foo
 8  | 2015-02-16 20:17:19 UTC |         |    404 | /favicon.ico
 9  | 2015-02-16 20:17:34 UTC |         |    404 | /foo?thud=%22what???%22

Cloud Pub/Sub

If you are streaming log entries to Cloud Pub/Sub, go to the Cloud Pub/Sub page in the GCP Console, find or create a subscription to the topic used for logs export, and pull a log entry from it. You might have to wait for a new log entry to be published.


If you don't see any exported logs, visit the Logs Viewer and check that log entries have arrived after you configured your sink:

Go to the Logs Viewer page

Log entries are streamed to Cloud Pub/Sub topics and should appear right away.


When you export logs to a Cloud Pub/Sub topic, Stackdriver Logging publishes each log entry as a Cloud Pub/Sub message as soon as Stackdriver Logging receives that log entry. The data field of each message is a base64-encoded LogEntry object. As an example, a Cloud Pub/Sub subscriber might pull the following object from a topic that is receiving log entries. The object shown contains a list with a single message, although Cloud Pub/Sub might return several messages if several log entries are available. The data value (about 600 characters) and the ackId value (about 200 characters) have been shortened to make the example easier to read:

{
 "receivedMessages": [
  {
   "message": {
    "data": "eyJtZXRhZGF0YSI6eyJzZXZ0eSI6Il...Dk0OTU2G9nIjoiaGVsbG93b3JsZC5sb2cifQ==",
    "attributes": {
     "compute.googleapis.com/resource_type": "instance",
     "compute.googleapis.com/resource_id": "123456"
    },
    "messageId": "43913662360"
   },
   "ackId": "..."
  }
 ]
}
If you decode the data field and format it, you get the following LogEntry object:

{
  "log": "helloworld.log",
  "insertId": "2015-04-15|11:41:00.577447-07||-1694494956",
  "textPayload": "Wed Apr 15 20:40:51 CEST 2015 Hello, world!",
  "timestamp": "2015-04-15T18:40:56Z",
  "labels": {
    "compute.googleapis.com\/resource_type": "instance",
    "compute.googleapis.com\/resource_id": "123456"
  },
  "severity": "WARNING"
}
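The decoding step can be sketched in Python. The message below is a hypothetical pulled message whose data field encodes a minimal LogEntry:

```python
import base64
import json

def decode_log_entry(message):
    """Decode the base64-encoded data field of a pulled Cloud Pub/Sub
    message into a LogEntry dict."""
    return json.loads(base64.b64decode(message["data"]).decode("utf-8"))

# Hypothetical message, built here so the example is self-contained.
entry = {"log": "helloworld.log", "severity": "WARNING"}
message = {
    "data": base64.b64encode(json.dumps(entry).encode("utf-8")).decode("ascii"),
    "messageId": "43913662360",
}

decoded = decode_log_entry(message)
# decoded["log"] is "helloworld.log"
```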

Log entry objects

Stackdriver Logging log entries are objects of type LogEntry. The most important fields of the log entry are shown in the following table:

Field name   | Type              | Description
------------ | ----------------- | -----------------------------------------------------------------------------------------------
logName      | string            | The log name to which the entry belongs: projects/[PROJECT_ID]/logs/[LOG_ID]. The [LOG_ID] is URL-encoded.
timestamp    | string            | The time the event occurred: "YYYY-MM-DDTHH:MM:SS.SSSSSSSSSZ".
severity     | LogSeverity       | The severity of the logged event.
resource     | MonitoredResource | The resource, such as a VM instance or database, where the event occurred.
httpRequest  | HttpRequest       | Information about an associated HTTP request, if any.
labels       | map               | Optional information about the log entry.
Only one of the following: | |
textPayload  | string            | The content of the log entry as a text string.
protoPayload | object            | The content of the log entry as a protocol buffer.
jsonPayload  | object            | The content of the log entry as a JSON object.

It is customary for all the log entries with a particular [LOG_ID] to have the same format. Each log type documents the contents of its payload field. See the Stackdriver Logging logs index for examples. Some sample log entries are shown in the following table:


Compute Engine's syslog is a custom log type produced by the logging agent, google-fluentd, which runs on virtual machine instances:

{
  logName: "projects/my-gcp-project-id/logs/syslog",
  timestamp: "2015-01-13T19:17:01Z",
  resource: {
    type: "gce_instance",
    labels: {
      instance_id: "12345",
      zone: "us-central1-a",
      project_id: "my-gcp-project-id"
    }
  },
  insertId: "abcde12345",
  textPayload: "Jan 13 19:17:01 my-gce-instance /USR/SBIN/CRON[29980]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)"
}


App Engine's request_log has log entries containing protoPayload fields which hold objects of type RequestLog:

{
  logName: "projects/my-gcp-project-id/logs/",
  timestamp: "2015-01-13T19:00:39.796169Z",
  resource: {
    type: "gae_app",
    labels: {
      module_id: "default",
      zone: "us6",
      project_id: "my-gcp-project-id",
      version_id: "20150925t173233"
    }
  },
  httpRequest: {
    status: 200
  },
  insertId: "abcde12345",
  operation: {
    id: "abc123",
    producer: "",
    first: true,
    last: true
  },
  protoPayload: {
    @type: "google.appengine.logging.v1.RequestLog",
    versionId: "20150925t173233",
    status: 200,
    startTime: "2017-01-13T19:00:39.796169Z",
    # ...
    appId: "s~my-gcp-project-id",
    appEngineRelease: "1.9.17"
  }
}


The activity log is an Admin Activity audit log. Its payload is a JSON representation of the AuditLog type:

{
 logName: "projects/my-gcp-project-id/logs/"
 timestamp: "2017-04-22T13:41:32.245Z"
 severity: "NOTICE"
 resource: {
  type: "gce_instance"
  labels: {
   instance_id: "2403273232180765234"
   zone: "us-central1-b"
   project_id: "my-gcp-project-id"
  }
 }
 insertId: "54DC1882F4B49.A4996C2.6A02F4C1"
 operation: {
  id: "operation-1492868454262-54dc185e9a4f0-249fe233-f73d472a"
  producer: ""
  last: true
 }
 protoPayload: {
  @type: "google.cloud.audit.AuditLog"
  authenticationInfo: {
   principalEmail: ""
  }
  requestMetadata: {…}
  serviceName: ""
  methodName: "v1.compute.instances.delete"
  resourceName: "projects/my-gcp-project-id/zones/us-central1-b/instances/abc123"
 }
}

Late-arriving log entries

Log entries exported to Cloud Storage are stored in hourly batches of files. Exported log file shards with the suffix An ("Append") hold log entries that arrived late.

Also, App Engine combines multiple sub-entries of type google.appengine.logging.v1.LogLine (also called AppLog or AppLogLine) under a primary log entry of type google.appengine.logging.v1.RequestLog for the request that causes the log activity. The log lines each have a "request ID" that identifies the primary entry. The Logs Viewer displays the log lines with the request log entry. Stackdriver Logging attempts to put all the log lines into the batch with the original request, even if their timestamps would place them in the next batch. If that is not possible, the request log entry might be missing some log lines, and there might be "orphan" log lines without a request in the next batch. If this possibility is important to you, be prepared to reconnect the pieces of the request when you process your logs.
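One way to reconnect the pieces can be sketched as follows; the requestId field name and the record shapes here are illustrative, not the actual RequestLog/LogLine schema:

```python
def reattach_log_lines(request_logs, orphan_lines):
    """Sketch: attach orphan log lines from a later batch to their
    request log entry, matching on a shared request ID."""
    by_request = {r["requestId"]: r for r in request_logs}
    leftovers = []
    for line in orphan_lines:
        request = by_request.get(line["requestId"])
        if request is not None:
            request.setdefault("lines", []).append(line)
        else:
            leftovers.append(line)  # still no matching request entry
    return leftovers

requests = [{"requestId": "abc123"}]
orphans = [
    {"requestId": "abc123", "message": "late line"},
    {"requestId": "zzz999", "message": "request not seen yet"},
]
leftovers = reattach_log_lines(requests, orphans)
# requests[0]["lines"] now holds the late line; one orphan remains.
```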

Third party integration with Cloud Pub/Sub

Stackdriver Logging supports logging integration with third parties. See Stackdriver Partnerships for a current list of integrations.

You export your logs through a Cloud Pub/Sub topic and the third party receives your logs by subscribing to the same topic.

To perform the integration, expect to do something like the following:

  1. Obtain from the third party a Google Cloud Platform (GCP) service account name created from their GCP project. You use this name to give the third party permission to receive your logs.

  2. In your project containing the logs, enable the Cloud Pub/Sub API:

    Enable the API

  3. Create a Pub/Sub topic. You can do this when you configure a log sink, or by following these steps:

    1. Go to the Pub/Sub topic list.
    2. Select Create topic and enter a topic name. For example, projects/my-project-id/topics/my-pubsub-topic. You will export your logs to this topic.
    3. Select Create.
    4. Authorize Stackdriver Logging to export logs to the topic. See Setting permissions for Cloud Pub/Sub.
  4. Authorize the third party to subscribe to your topic:

    1. Stay in the Pub/Sub topic list for your project in the GCP Console.
    2. Select your new topic.
    3. Select Permissions.
    4. Enter the third party's service account name.
    5. In the Select a role menu, select Pub/Sub Subscriber.
    6. Select Add.
  5. Give the third party the name of your Cloud Pub/Sub topic. For example, projects/my-project-number/topics/my-pubsub-topic. They should subscribe to the topic before you start exporting.

  6. Start exporting the logs once your third party has subscribed to the topic:

    1. In your project containing the logs you want to export, click on Create Export above the search-filter box. This opens the Edit Export panel:

      Edit Export panel

    2. Enter a Sink Name.

    3. In the Sink Service menu, select Cloud Pub/Sub.
    4. In the Sink Destination menu, select the Cloud Pub/Sub topic to which the third party is subscribed.
    5. Select Create Sink to begin the export.
    6. A Sink created dialog appears, indicating that your export sink was successfully created with permission to write future matching logs to the destination you selected.

Your third party should begin receiving the log entries right away.
