
More support for structured logs in new version of Go logging library

July 1, 2022
Leonid Yankulin

Developer Relations Engineer

The new version of the Google logging client library for Go has been released. Version 1.5 adds new features and bug fixes, including new structured logging capabilities that complete last year's effort to enrich structured logging support across the Google logging client libraries.

Here are a few of the new features in v1.5:

  • A faster and more robust way to detect and capture the Google Cloud resource that the application is running on.
  • Automatic source location detection to support log observability for debugging and troubleshooting.
  • Support for the W3C traceparent header for capturing tracing information in logged entries.
  • Better control over batched ingestion of log entries through the partialSuccess flag on Logger instances.
  • Support for out-of-process ingestion by redirecting logs to stdout and stderr in a structured logging format.

Let's take a closer look at each.

Resource detection

Resource detection is an existing feature of the logging library. It detects the resource on which an application is running, retrieves the resource's metadata, and implicitly adds that metadata to each log entry the application ingests using the library. It is especially useful for applications that run on Google Cloud, since it collects many of the resource's attributes from the resource's metadata server. These attributes enrich ingested logs with additional information such as the location of a VM, the name of a container, or the service ID of an App Engine service. The JSON below shows a sample of the information retrieved after detecting the resource as a GKE container and retrieving resource metadata according to the documentation.

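An illustrative sketch of what the detected monitored resource might look like for a GKE workload; the label values below are placeholders:

    {
      "type": "k8s_container",
      "labels": {
        "project_id": "my-project",
        "location": "us-central1-c",
        "cluster_name": "my-cluster",
        "namespace_name": "default",
        "pod_name": "my-app-7bc6b9cdd-x2fqp",
        "container_name": "my-app"
      }
    }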

The implementation is optimized to avoid performance degradation during the data collection process. Previously, the heuristic for identifying the resource relied heavily on environment variables, which could produce many false positives, and the implementation performed too many queries to the metadata server, which could sometimes delay responses. In the 1.5 release the heuristic was updated to use additional signals besides environment variables, and the number of queries to the metadata server was reduced to a bare minimum. As a result, false detection of GCP resources is reduced by an order of magnitude, and the performance penalty of running the heuristic on non-GCP resources is reduced as well. The change does not affect the ingestion process and does not require any changes to the application's code.

Source location capturing

It is useful to capture the location in code where a log entry was ingested. While the main use is troubleshooting and debugging, it can be helpful in other circumstances too. In this version of the library you can configure your logger instance to capture source location metadata for each log entry ingested using the Logger.Log() or Logger.LogSync() functions. Just pass the output of SourceLocationPopulation() as a LoggerOption argument in the call to Client.Logger() when creating a new logger instance. The following snippet creates a logger instance that adds source location metadata to each ingested log with severity set to Debug:

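A minimal sketch of such a setup; the project ID "my-project" and the log name "debug-log" are placeholders:

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/logging"
    )

    func main() {
        ctx := context.Background()

        // Create a Cloud Logging client for the (placeholder) project.
        client, err := logging.NewClient(ctx, "my-project")
        if err != nil {
            log.Fatalf("failed to create logging client: %v", err)
        }
        // Close flushes any buffered log entries.
        defer client.Close()

        // Capture source location metadata only for entries with Debug severity.
        logger := client.Logger("debug-log",
            logging.SourceLocationPopulation(logging.PopulateSourceLocationForDebugEntries))

        logger.Log(logging.Entry{
            Severity: logging.Debug,
            Payload:  "a debug message carrying its source location",
        })
    }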

The function SourceLocationPopulation() accepts the following constants:

  • logging.DoNotPopulateSourceLocation ‒ the default configuration, which prevents capturing the source location in ingested logs.

  • logging.PopulateSourceLocationForDebugEntries ‒ adds source location metadata to logs with Debug severity.

  • logging.AlwaysPopulateSourceLocation ‒ populates the source location in all ingested logs.

This feature has to be enabled explicitly because capturing the source location in Go may increase the total execution time of log ingestion by a factor of two. Enabling it for all ingested logs is strongly discouraged.

Use W3C context header for tracing

Previous versions of the library already let you add tracing information to your logs. One way was to do it directly, by providing the trace and span identifiers and, optionally, the sampling flag. The following code demonstrates setting the trace and span identifiers manually:

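A sketch of such a call; the trace and span identifiers are placeholders, and logger is assumed to be a previously created *logging.Logger:

    logger.Log(logging.Entry{
        Payload:      "a log entry with explicit tracing information",
        Trace:        "projects/my-project/traces/06796866738c859f2f19b7cfb3214824",
        SpanID:       "000000000000004a",
        TraceSampled: true,
    })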

Another way was to do it indirectly, by passing an instance of http.Request as part of the HTTP request metadata:

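For example, inside an HTTP handler where r is the incoming *http.Request (again a sketch, using the same logger as above):

    logger.Log(logging.Entry{
        Payload: "a log entry that takes tracing information from the request",
        HTTPRequest: &logging.HTTPRequest{
            Request: r, // tracing headers are read from this request
        },
    })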

In the latter case, the library tries to pull tracing information from the X-Cloud-Trace-Context header. Starting with this release, the library also supports the W3C tracing context header. If both headers are present, the tracing information is captured from the W3C traceparent header.

Out-of-process log ingestion

By default the library supports synchronous and asynchronous log ingestion by calling the Cloud Logging API directly. In some cases it is better to delegate log ingestion to an external logging agent or to a managed service's built-in log collection. In this release, you can configure a logger instance to write logs to stdout or stderr instead of sending them to Cloud Logging directly. The following example creates a logger that redirects logs to stdout as specially formatted JSON strings:

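A minimal sketch, reusing the client from the earlier example; the log name "stdout-log" is a placeholder, and the os package must be imported:

    // Write entries to stdout as structured JSON instead of calling the
    // Cloud Logging API directly.
    logger := client.Logger("stdout-log", logging.RedirectAsJSON(os.Stdout))

    logger.Log(logging.Entry{
        Severity: logging.Info,
        Payload:  "hello from out-of-process ingestion",
    })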

The above code will print something like the following line to standard output (the exact fields and formatting depend on the entry and the library version):

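    {"message":"hello from out-of-process ingestion","severity":"INFO","timestamp":"2022-07-01T10:30:00.000000000Z"}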

In some circumstances, when the standard output cannot be used for printing logs, the logger can be configured to redirect output to the standard error (os.Stderr) with the same effect.

There are a couple of things to be aware of when you use out-of-process logging:

  • The Logger.Log() and Logger.LogSync() methods behave the same way when the logger is configured with the out-of-process logging option: they write the JSON-formatted logs to the provided io.Writer, and an external logging agent handles the logs' collection and ingestion.

  • You do not have control over the log ID. All logs that are ingested by the logging agent or by the built-in support of the managed service (e.g. Cloud Run) use a log ID that is determined out-of-process.

More control over batch ingestion

When you ingest logs using the Logger.Log() function, the asynchronous ingestion batches multiple log entries together and ingests them using the entries.write Logging API. If the ingestion of any log in the batch fails, no logs get ingested. Starting with this release you can control this behavior by opting in to the partial success flag. When the flag is set, the Logging API tries to ingest all logs in the batch, even if another log entry fails due to a permanent error such as INVALID_ARGUMENT or PERMISSION_DENIED. You can opt in when creating a new logger by passing the PartialSuccess logger option:

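A sketch, again reusing an existing client; the log name "batched-log" is a placeholder:

    // Ask the Logging API to ingest every entry of a batch that it can,
    // even if other entries in the same batch fail with a permanent error.
    logger := client.Logger("batched-log", logging.PartialSuccess())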

Wrapping up

When you upgrade to version 1.5 you get a more robust and deterministic resource detection algorithm while the behavior of the library otherwise remains unchanged. Additional functionality such as out-of-process ingestion, source location capture, and batch ingestion control can be opted into using logger options.

Learn more about the release at pkg.go.dev. Please also visit the library's project on GitHub.
