The Cloud Logging API lets you programmatically accomplish logging-related tasks, including reading and writing log entries, creating log-based metrics, and managing sinks to route logs.
See the following reference documentation for the Logging API:
- For the REST version of the API, see REST reference.
- For the gRPC version of the API, see gRPC reference.
For details on the limits that apply to your usage of the Logging API, see Logging API quotas and limits.
## Enable the Logging API
The Logging API must be enabled before it can be used. For instructions, see Enable the Logging API.
## Access the Logging API
You can indirectly invoke the Logging API by using a command-line interface or a client library written to support a high-level programming language. For more information, see the following reference documentation:
- For the command-line interface to the Logging API, see the `gcloud logging` command.
- To learn how to set up client libraries and authorize the Logging API, with sample code, see Client libraries.
- To try the API without writing any code, you can use the APIs Explorer. The APIs Explorer appears on REST API method reference pages in a panel titled Try this API. For instructions, see Using the APIs Explorer.
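For example, the following sketch uses the Python client library (`google-cloud-logging`) to write a log entry and then list entries from the same log; the log name and payload are placeholders, and the client assumes Application Default Credentials are configured:

```python
from google.cloud import logging

# Instantiate a client; the project and credentials are picked up from
# the environment (Application Default Credentials).
client = logging.Client()

# Write a text entry to a log named "my-log" (a placeholder name).
logger = client.logger("my-log")
logger.log_text("Hello, Cloud Logging!")

# Read recent entries back from the same log. Newly written entries can
# take a few seconds to become visible to entries.list.
log_filter = f'logName="projects/{client.project}/logs/my-log"'
for entry in client.list_entries(filter_=log_filter, max_results=5):
    print(entry.timestamp, entry.payload)
```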
## Optimize usage of the Logging API
Following are some tips for using the Logging API effectively.
### Read and list logs efficiently
To efficiently use your `entries.list` quota, try the following:

- Set a large `pageSize`: In the request body, you can set the `pageSize` parameter up to and including the maximum value of an `int32` (2,147,483,647). Setting the `pageSize` parameter to a higher value lets Logging return more entries per query, reducing the number of queries needed to retrieve the full set of entries that you're targeting.
- Set a large deadline: When a query nears its deadline, Logging terminates the query early and returns the log entries scanned thus far. If you set a large deadline, then Logging can retrieve more entries per query.
- Retry quota errors with exponential backoff: If your use case isn't time-sensitive, then you can wait for the quota to replenish before retrying your query. The `pageToken` parameter is still valid after a delay.
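The following Python sketch illustrates the page-size and backoff tips with the `google-cloud-logging` client library; the filter, page size, and backoff schedule are example values, not recommendations:

```python
import time

from google.api_core.exceptions import ResourceExhausted
from google.cloud import logging

client = logging.Client()

def list_entries_with_backoff(filter_str, page_size=1000):
    """Yield log entries, retrying quota errors with exponential backoff."""
    token = None  # the pageToken stays valid after a delay
    delay = 1.0
    while True:
        try:
            # A larger page_size returns more entries per entries.list
            # call, so fewer calls are needed overall.
            iterator = client.list_entries(
                filter_=filter_str, page_size=page_size, page_token=token)
            for page in iterator.pages:
                for entry in page:
                    yield entry
                token = iterator.next_page_token
                delay = 1.0  # reset the backoff after a successful page
            return
        except ResourceExhausted:
            # Quota error: wait for the quota to replenish, doubling the
            # delay each time, then resume from the saved page token.
            time.sleep(delay)
            delay = min(delay * 2, 64.0)

for entry in list_entries_with_backoff("severity>=ERROR"):
    print(entry.timestamp, entry.log_name)
```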
### Write logs efficiently
To efficiently use your `entries.write` quota, batch your log entries so that each request carries a larger number of entries; this reduces the total number of write requests you make. Logging supports requests with up to 10 MB of data.
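For example, with the Python client library, a batch groups several entries into a single `entries.write` call; the log name and payloads here are placeholders:

```python
from google.cloud import logging

client = logging.Client()
logger = client.logger("my-batched-log")  # placeholder log name

# Accumulate entries locally; nothing is sent until commit() is called,
# which writes the whole batch in a single entries.write request.
batch = logger.batch()
batch.log_text("Job started")
batch.log_struct({"event": "progress", "percent": 50})
batch.log_text("Job finished", severity="NOTICE")
batch.commit()
```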
## Bulk retrieval of log entries
The method you use to retrieve log entries is `entries.list`, but this method isn't intended for high-volume retrieval of log entries. Using it that way might quickly exhaust your quota for read requests.
If you need near-real-time or continuous querying, or bulk retrieval of log entries, then configure sinks to route your log entries to Pub/Sub. When you create a Pub/Sub sink, you send the log entries that you want to process to a Pub/Sub topic, and then consume the log entries from there.
This approach has the following advantages:
- It doesn't exhaust your read-request quota. For more on quotas, see Logging usage limits.
- It captures log entries that might have been written out of order, without workarounds to seek back and re-read recent entries to ensure nothing was missed.
- It automatically buffers the log entries if the consumer becomes unavailable.
- The log entries don't count towards your free allotment because they aren't stored in log buckets.
You can create Pub/Sub sinks to route log entries to a variety of analytics platforms. For an example, see Scenarios for routing Cloud Logging data: Splunk.
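As an illustration, the following Python sketch creates a sink that routes matching entries to a Pub/Sub topic; the topic name, sink name, and filter are placeholders, and the topic is assumed to already exist:

```python
from google.cloud import logging

client = logging.Client()

# Pub/Sub destinations use the pubsub.googleapis.com scheme. The topic
# itself is assumed to already exist.
destination = (
    f"pubsub.googleapis.com/projects/{client.project}/topics/my-log-topic"
)

sink = client.sink(
    "my-pubsub-sink",             # placeholder sink name
    filter_="severity>=WARNING",  # example filter
    destination=destination,
)

if not sink.exists():
    sink.create()
```

After the sink is created, grant its writer identity permission to publish to the topic; subscribers to that topic then receive the routed log entries.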