This document explains how you can find log entries that you routed from Cloud Logging to Cloud Storage buckets.
Log entries are saved to Cloud Storage buckets in hourly batches. It might take from 2 to 3 hours before the first entries begin to appear.
Before you begin
For a conceptual discussion of sinks, see Overview of routing and storage models: Sinks.
For instructions on how to route your logs, see Route logs to supported destinations.
To view your logs routed to Cloud Storage, do the following:
In the Google Cloud console, select Cloud Storage, and then click Buckets.
Select the Cloud Storage bucket you are using as your routing destination.
When you route logs to a Cloud Storage bucket, Logging writes a set of files to the bucket.
The files are organized in directory hierarchies by log type and date. The log type, referred to as [LOG_ID], can be a simple name like syslog or a compound name like appengine.googleapis.com/request_log. If these logs were stored in a bucket, then the directories would be named as in the following example:
A single Cloud Storage bucket can contain logs from multiple resource types. The maximum file size is 3.5 GiB.
Logging doesn't guarantee deduplication of log entries from sinks containing identical or overlapping queries; log entries from those sinks might be written multiple times to a Cloud Storage bucket.
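As an illustration of the layout described above, the following sketch builds the per-day object-name prefix for one log type. The exact [LOG_ID]/YYYY/MM/DD/ shape is an assumption here, since the original example isn't shown:

```python
from datetime import date

def log_prefix(log_id: str, day: date) -> str:
    """Build a per-day object prefix for routed logs, organized by
    log type ([LOG_ID]) and date (assumed YYYY/MM/DD hierarchy)."""
    return f"{log_id}/{day:%Y/%m/%d}/"

print(log_prefix("syslog", date(2021, 3, 25)))
# A compound log ID contributes its own path segments:
print(log_prefix("appengine.googleapis.com/request_log", date(2021, 3, 25)))
```

Listing objects under such a prefix would then return every shard file routed for that log type on that day.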
The leaf directories (DD/) contain multiple files, each of which holds the routed log entries for a time period specified in the file name. The files are sharded, and their names end in a shard number, Sn or An (n = 0, 1, 2, ...). For example, here are two files that might be stored within the directory:
These two files together contain the log entries for all instances during the hour beginning 08:00:00 UTC and ending 08:59:59 UTC. The log entry timestamps are expressed in UTC (Coordinated Universal Time).
Log entries that arrive with a timestamp within the 60-minute aligned window of their receiveTimestamp are written to main shard files. For example, a log entry with a timestamp of 08:00:00 and a receiveTimestamp of 08:10:00 is stored in a main shard file. These files include a numbered main shard in the suffix:
Log entries that arrive with a timestamp in a different 60-minute aligned window than their receiveTimestamp are written to addendum shard files. For example, a log entry with a timestamp of 08:00:00 and a receiveTimestamp of 09:10:00 is stored in an addendum shard file. These files include a numbered addendum shard in the suffix:
For example, a log entry that has a timestamp between 08:00:00 and 08:59:59 but a receiveTimestamp in a different 60-minute aligned window is written to a file with the addendum suffix, where the Unix timestamp in the suffix identifies the time the file was routed to Cloud Storage. If a log entry had a timestamp of 08:50:00 and a receiveTimestamp of 09:10:00, and was routed at 09:15:00 on March 25, 2021, the addendum file would be written as follows:
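The main-versus-addendum rule above can be sketched as a small check on whether timestamp and receiveTimestamp fall in the same 60-minute aligned window. The field names follow the text; everything else here is illustrative:

```python
from datetime import datetime

def aligned_window(ts: datetime) -> datetime:
    """Return the start of the 60-minute aligned window containing ts."""
    return ts.replace(minute=0, second=0, microsecond=0)

def shard_kind(timestamp: datetime, receive_timestamp: datetime) -> str:
    """Classify which kind of shard file a log entry lands in."""
    if aligned_window(timestamp) == aligned_window(receive_timestamp):
        return "main"      # timestamp and receiveTimestamp share a window
    return "addendum"      # entry arrived outside its timestamp's window

# The examples from the text:
print(shard_kind(datetime(2021, 3, 25, 8, 0), datetime(2021, 3, 25, 8, 10)))   # main
print(shard_kind(datetime(2021, 3, 25, 8, 50), datetime(2021, 3, 25, 9, 10)))  # addendum
```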
To get all the log entries, you must read all the shards for each time period—in this case, file shards 0 and 1. The number of file shards written can change for each time period.
Note that the sort order of log entries within the files is neither uniform nor otherwise guaranteed.
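Because the shard count varies and ordering isn't guaranteed, a consumer has to merge every shard for the period and then impose an order itself. A minimal sketch, using hypothetical in-memory stand-ins for two shard files of the same hour:

```python
# Hypothetical entries standing in for two shard files of one hour.
shard_0 = [{"timestamp": "2021-03-25T08:20:00Z", "textPayload": "second"}]
shard_1 = [{"timestamp": "2021-03-25T08:05:00Z", "textPayload": "first"}]

# Read ALL shards for the time period, then sort yourself, because
# sort order within the files is not guaranteed.
entries = shard_0 + shard_1
entries.sort(key=lambda e: e["timestamp"])  # RFC 3339 UTC strings sort lexically

print([e["textPayload"] for e in entries])
```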
Late-arriving log entries
Routed log entries are saved to Cloud Storage buckets in hourly batches. It might take from 2 to 3 hours before the first entries begin to appear. Routed log file shards with the suffix An ("Append") hold log entries that arrived late.
If the destination experiences an outage, then Cloud Logging buffers the data until the outage is over.
If there aren't any logs in your sink's destination, check the export system metrics. The export system metrics indicate how many log entries are routed and how many are dropped due to errors. If the export system metrics indicate that no log entries were routed to the destination, check your filter to verify that log entries matching your filter have recently arrived in Logging.
In the Google Cloud console, select Logging, and then select Log Router.
App Engine log entries
App Engine combines multiple sub-entries of type
google.appengine.logging.v1.LogLine (also called AppLog or
AppLogLine) under a primary log entry of type
google.appengine.logging.v1.RequestLog for the request that
causes the log activity. The log lines each have a "request ID" that identifies
the primary entry. The Logs Explorer displays the log lines with the request
log entry. Logging attempts to put all the log lines into the
batch with the original request, even if their timestamps would place them in
the next batch. If that isn't possible, the request log entry might be missing
some log lines, and there might be "orphan" log lines without a request in the
next batch. If this possibility is important to you, be prepared to reconnect
the pieces of the request when you process your logs.
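One way to reconnect those pieces is to group log lines under their request entry by request ID, keeping any orphans aside until the matching request arrives in a later batch. The dict shapes below are illustrative, not the actual RequestLog/LogLine schema:

```python
def attach_log_lines(request_logs, log_lines):
    """Group AppLog lines under their RequestLog entry by request ID.

    Returns (requests, orphans); orphans are lines whose request entry
    presumably landed in a different batch.
    """
    by_id = {r["request_id"]: {**r, "lines": []} for r in request_logs}
    orphans = []
    for line in log_lines:
        req = by_id.get(line["request_id"])
        if req is None:
            orphans.append(line)      # request is in another batch
        else:
            req["lines"].append(line)
    return list(by_id.values()), orphans

requests, orphans = attach_log_lines(
    [{"request_id": "r1", "resource": "/home"}],
    [{"request_id": "r1", "message": "handled"},
     {"request_id": "r2", "message": "orphaned line"}],
)
print(len(requests[0]["lines"]), len(orphans))  # 1 1
```

Orphans can be buffered and retried against the next batch's request entries.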
If logs seem to be missing from your sink's destination or you otherwise suspect that your sink isn't properly routing logs, then see Troubleshoot routing logs.
Cloud Logging doesn't charge to route logs to a supported destination; however, the destination might apply charges. For information about destination costs, see Cloud Storage pricing.
If you send and then exclude your Virtual Private Cloud flow logs from Cloud Logging, VPC flow log generation charges apply in addition to the destination charges.