Cloud Logging API overview

The Cloud Logging API lets you programmatically accomplish logging-related tasks, including reading and writing log entries, creating log-based metrics, and managing sinks to route logs.

For details on the limits that apply to your usage of the Logging API, see Logging API quotas and limits.

Enable the Logging API

The Logging API must be enabled before it can be used. For instructions, see Enable the Logging API.

Access the API

You can invoke the API by using a command-line interface or a client library for your preferred programming language. See the following reference documentation:

  • For the command-line interface to the Logging API, see the gcloud logging command.

  • For instructions on setting up client libraries and authorizing the Logging API, with sample code, see Client libraries.

To try the API without writing any code, you can use the APIs Explorer. The APIs Explorer appears on REST API method reference pages in a panel titled Try this API. For instructions, see Using the APIs Explorer.
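As a minimal sketch of the client-library path, the following assumes the google-cloud-logging Python package is installed and application credentials are configured. The `build_filter` helper is hypothetical (it is not part of the library); `Client.list_entries` and the `DESCENDING` sort constant are part of the Python client.

```python
from itertools import islice


def build_filter(severity="ERROR", log_name=None):
    """Compose a Logging query filter string (hypothetical helper)."""
    parts = ["severity>=" + severity]
    if log_name:
        parts.append('logName="{}"'.format(log_name))
    return " AND ".join(parts)


def read_recent_errors(project_id, limit=10):
    """Return up to `limit` recent ERROR-level entries for a project."""
    # Imported inside the function so the sketch is optional to run.
    from google.cloud import logging

    client = logging.Client(project=project_id)
    entries = client.list_entries(
        filter_=build_filter(),
        order_by=logging.DESCENDING,  # newest entries first
        page_size=limit,
    )
    return list(islice(entries, limit))
```

The same query can be issued from the command line with `gcloud logging read`, using the same filter syntax.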

Optimize usage of the API

The following tips can help you use the Logging API effectively.

Read and list logs efficiently

To efficiently use your entries.list quota, try the following:

  • Set a large pageSize: In the request body, you can set the pageSize parameter up to and including the maximum value of an int32 (2,147,483,647). Setting the pageSize parameter to a higher value lets Logging return more entries per query, reducing the number of queries needed to retrieve the full set of entries that you're targeting.

  • Set a large deadline: When a query nears its deadline, Logging terminates the query early and returns the log entries scanned so far. If you set a large deadline, then Logging can retrieve more entries per query.

  • Retry quota errors with exponential backoff: If your use case isn't time-sensitive, then you can wait for the quota to replenish before retrying your query. The pageToken parameter is still valid after a delay.
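The retry tip above can be sketched as follows. This is a generic sketch, not a Logging API method: `fetch_page` stands in for a caller-supplied function that issues one `entries.list` request and returns `(entries, next_page_token)`, raising an error when the quota is exhausted. Because the page token stays valid, the loop can simply wait and retry with the same token.

```python
import random
import time


def list_with_backoff(fetch_page, max_retries=5, base_delay=1.0):
    """Drain a paginated listing, retrying quota errors with exponential backoff.

    `fetch_page(page_token)` returns (entries, next_page_token) and raises
    RuntimeError on a quota error (an assumption of this sketch).
    """
    entries, token, retries = [], None, 0
    while True:
        try:
            page, token = fetch_page(token)
        except RuntimeError:
            if retries >= max_retries:
                raise
            # The pageToken remains valid, so wait and retry the same request.
            # Jitter avoids synchronized retries from multiple clients.
            time.sleep(base_delay * (2 ** retries) + random.uniform(0, 0.1))
            retries += 1
            continue
        retries = 0
        entries.extend(page)
        if token is None:
            return entries
```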

Write logs efficiently

To efficiently use your entries.write quota, batch multiple log entries into each request. Larger batches reduce the number of API calls that you make. Logging supports requests with up to 10 MB of data.
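One way to batch writes is to group serialized entries so that each request stays under the size limit. The following is a sketch of that grouping logic only; entry sizes here approximate request size and ignore envelope overhead.

```python
def batch_entries(entries, max_bytes=10 * 1024 * 1024):
    """Group serialized log entries into batches below the request size limit.

    `entries` is an iterable of bytes payloads; each returned batch fits
    within `max_bytes` (entries larger than the limit get their own batch).
    """
    batches, current, size = [], [], 0
    for entry in entries:
        if current and size + len(entry) > max_bytes:
            batches.append(current)   # flush the full batch
            current, size = [], 0
        current.append(entry)
        size += len(entry)
    if current:
        batches.append(current)
    return batches
```

If you use the Python client library, its `logger.batch()` context manager provides similar batching: entries logged inside the `with` block are sent in a single commit.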

Bulk retrieval of log entries

The entries.list method retrieves log entries, but it isn't intended for high-volume retrieval. Using it that way can quickly exhaust your quota for read requests.

If you need near real-time or continuous querying, or bulk retrieval of log entries, then configure sinks to send your logs to Pub/Sub. When you create a Pub/Sub sink, Logging sends the log entries that you want to process to a Pub/Sub topic, and you then consume the log entries from there.

This approach has the following advantages:

  • It doesn't exhaust your read-request quota. For more on quotas, see Logging usage limits.
  • It captures log entries that might have been written out of order, without workarounds to seek back and re-read recent entries to ensure nothing was missed.
  • It automatically buffers the log entries if the logs consumer becomes unavailable.
  • The log entries don't count towards your free allotment because they aren't stored in log buckets.

You can create Pub/Sub sinks to route logs to a variety of analytics platforms. For an example, see Scenarios for routing Cloud Logging data: Splunk.
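A consumer for such a sink can be sketched as follows. It assumes the google-cloud-pubsub package is installed, credentials are configured, and the sink's subscription already exists; the project and subscription names are hypothetical. Sinks publish each LogEntry as a JSON object in the Pub/Sub message's data field, which `decode_log_entry` parses.

```python
import json


def decode_log_entry(message_data):
    """Parse one exported LogEntry from a Pub/Sub message's data field."""
    return json.loads(message_data.decode("utf-8"))


def main():
    # Imported here so the sketch is optional to run.
    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    # Hypothetical project and subscription names.
    subscription = subscriber.subscription_path("my-project", "logging-sink-sub")

    def callback(message):
        entry = decode_log_entry(message.data)
        print(entry.get("severity"), entry.get("logName"))
        message.ack()  # acknowledge so the entry isn't redelivered

    future = subscriber.subscribe(subscription, callback=callback)
    future.result()  # block and consume until interrupted


if __name__ == "__main__":
    main()
```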