Profiling Go Code

This page describes setting up Stackdriver Profiler for profiling Go code. For Go, Profiler offers CPU, heap, contention, and thread profiling. Contention profiling captures information about Go mutexes. Thread profiling captures information about goroutines, not operating-system threads. See Profiling Concepts for more information.

You can use the profiling agent on Linux in the following environments:

  • Compute Engine
  • Kubernetes Engine
  • App Engine flexible environment

You can also profile Go code on non-GCP systems. See Profiling Outside Google Cloud Platform for more information.

Enabling the Profiler API

Before you use the profiling agent, ensure that the underlying Profiler API is enabled. You can check the status of the API and enable it if necessary by using either the Cloud SDK gcloud command-line tool or the Cloud Console:

Cloud SDK

  1. If you have not already installed the Cloud SDK on your workstation, see Google Cloud SDK.

  2. To see if the Profiler API is enabled, run the following command on your workstation:

    gcloud services list

    If cloudprofiler.googleapis.com appears in the output, the API is enabled.

  3. If the API is not enabled, run the following command to enable it:

    gcloud services enable cloudprofiler.googleapis.com

For more information, see gcloud services.

Cloud Console

Go to APIs & services

  1. Select the project you will use to access the API.
  2. Click the Enable APIs and Services button.
  3. Search for “Stackdriver”.
  4. In the search results, click through to “Profiler API”.
  5. If “API enabled” is displayed, then the API is already enabled. If not, click the Enable button.

Using Stackdriver Profiler

In all of the supported environments, you use the Profiler by importing the cloud.google.com/go/profiler package in your app and then initializing the Profiler as early as possible in your code.

By default, the profiling agent for Go has the following profile types enabled:

  • CPU
  • Heap
  • Threads

Mutex-contention profiling (“Contention” in the interface) can be enabled by setting the MutexProfiling configuration option to true.
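For example, a minimal sketch of enabling mutex-contention profiling at agent start-up (the service name here is illustrative):

```go
// Sketch: enabling the "Contention" profile type.
// "myservice" is a placeholder service name.
if err := profiler.Start(profiler.Config{
	Service:        "myservice",
	MutexProfiling: true, // enables mutex-contention profiling
}); err != nil {
	// TODO: Handle error.
}
```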

For more information on the Profiler API, including all the configuration options, see the public API docs.

In heap profiles, the values for total allocated bytes and total allocated objects represent the total number of bytes or object allocated since the process started. These values will reset each time the app is redeployed. The relative, percentage-based values provide information about which allocations are causing the most work for the garbage collector.
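As a toy illustration of those relative values, the percentage shown for one allocation site is its share of the total allocated bytes; the per-function byte counts below are invented:

```go
package main

import "fmt"

// share returns the percentage of the total allocated bytes that b represents.
func share(b, total int64) float64 {
	return 100 * float64(b) / float64(total)
}

func main() {
	// Invented per-function totals of bytes allocated since process start.
	allocs := map[string]int64{
		"decodeImage": 600_000,
		"resizeImage": 300_000,
		"encodeImage": 100_000,
	}
	var total int64
	for _, b := range allocs {
		total += b
	}
	// Each site's relative value is its fraction of the total.
	for name, b := range allocs {
		fmt.Printf("%s: %.0f%%\n", name, share(b, total))
	}
}
```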

Compute Engine and Kubernetes Engine

For Compute Engine and Kubernetes Engine, the code additions look like this:


import (
	"cloud.google.com/go/profiler"
)

func main() {
	// Profiler initialization, best done as early as possible.
	if err := profiler.Start(profiler.Config{
		Service:        "myservice",
		ServiceVersion: "1.0.0",
		// ProjectID must be set if not running on GCP.
		// ProjectID: "my-project",
	}); err != nil {
		// TODO: Handle error.
	}
	// ...
}
In both the Compute Engine and Kubernetes Engine environments, the profiler.Config includes two parameters:

  • Service: A name for the service being profiled
  • ServiceVersion: (optional) The version of the service being profiled

See Service name and version arguments for more information on these configuration options.

App Engine flexible environment

For App Engine flexible environment, the code additions are nearly identical to those for Compute Engine and Kubernetes Engine, with one exception: in the App Engine flexible environment, the Service and ServiceVersion parameters can be derived from the environment, so they do not have to be specified. The line containing profiler.Config looks like this:

   if err := profiler.Start(profiler.Config{}); err != nil {

However, if you run the app locally, set the ProjectID (the ID of your GCP project) and Service parameters in profiler.Config, since they cannot be derived from a local environment. You do not need to set ServiceVersion.
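For a local run, the configuration might look like this sketch (the project ID and service name are placeholders):

```go
// Sketch: running outside GCP, where ProjectID and Service cannot be
// derived from the environment. "my-project" and "myservice" are placeholders.
if err := profiler.Start(profiler.Config{
	ProjectID: "my-project",
	Service:   "myservice",
}); err != nil {
	// TODO: Handle error.
}
```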

To run the app:

  1. Update the dependencies:

    go get -u cloud.google.com/go/profiler
  2. Deploy the app to your App Engine flexible environment as usual.

Service name and version arguments

When you load the Profiler agent, you specify a service-name argument and an optional service-version argument to configure it.

The service name lets Profiler collect profiling data for all replicas of that service. The profiler service ensures a collection rate of one profile per minute, on average, for each service name, across each combination of service versions and zones.

For example, if you have a service with two versions running across replicas in three zones, the profiler will create an average of 6 profiles per minute for that service.
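The arithmetic behind that example can be sketched as follows (the numbers come from the example above):

```go
package main

import "fmt"

// profilesPerMinute estimates the average number of profiles collected per
// minute for one service name: one per (service version, zone) combination.
func profilesPerMinute(versions, zones int) int {
	return versions * zones
}

func main() {
	// Two versions of the service running in three zones, as above.
	fmt.Println(profilesPerMinute(2, 3)) // prints 6
}
```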

If you use different service names for your replicas, then your service will be profiled more often than necessary, with a correspondingly higher overhead.

When selecting a service name:

  • Choose a name that clearly represents the service in your application architecture. The choice of service name is less important if you only run a single service or application. It is more important if your application runs as a set of micro-services, for example.

  • Make sure to not use any process-specific values, like a process ID, in the service-name string.

  • The service-name string must match this regular expression:

    ^[a-z]([-a-z0-9_.]{0,253}[a-z0-9])?$

A good guideline is to use a static string like imageproc-service as the service name.
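As a sketch, a candidate name can be checked against the pattern assumed here for Profiler service names (a lowercase letter, then lowercase letters, digits, hyphens, underscores, and periods, ending in a letter or digit); verify the exact pattern against the current documentation:

```go
package main

import (
	"fmt"
	"regexp"
)

// serviceNameRE is the service-name pattern assumed for this sketch;
// check it against the current Profiler documentation.
var serviceNameRE = regexp.MustCompile(`^[a-z]([-a-z0-9_.]{0,253}[a-z0-9])?$`)

// validServiceName reports whether name matches the assumed pattern.
func validServiceName(name string) bool {
	return serviceNameRE.MatchString(name)
}

func main() {
	fmt.Println(validServiceName("imageproc-service")) // true
	fmt.Println(validServiceName("ImageProc"))         // false: uppercase not allowed
}
```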

The service version is optional. If you specify the service version, Profiler can aggregate profiling information from multiple instances and display it correctly. It can be used to mark different versions of your services as they get deployed. The Profiler UI lets you filter the data by service version; this way, you can compare the performance of older and newer versions of the code.

The value of the service-version argument is a free-form string, but values for this argument typically look like version numbers, for example, 1.0.0 or 2.1.2.

Agent logging

The profiling agent can report debug information in its logs. To enable this logging in the profiling agent, set the DebugLogging option to true when starting the agent.

profiler.Start(profiler.Config{..., DebugLogging: true})

Running with Linux Alpine

When running your application using Docker images that run with Linux Alpine (such as golang:alpine or just alpine), you may see the following authentication error:

connection error: desc = "transport: authentication handshake failed: x509: failed to load system roots and no roots provided"

Note that to see the error, you must have agent logging enabled; by default, the agent does not output any log messages.

The error indicates that Docker images with Linux Alpine do not have the root SSL certificates installed by default. Those certificates are necessary for the profiling agent to communicate with the Profiler API. To resolve this, add the following apk commands to your Dockerfile:

FROM alpine
RUN apk update \
 && apk add --no-cache ca-certificates

You then need to rebuild and redeploy your app.
