Profiling Node.js applications

This page describes how to modify your Node.js application to capture profiling data and have that data sent to your Google Cloud project. For general information about profiling, see Profiling concepts.

Profile types for Node.js:

  • Heap
  • Wall time

Supported Node.js language versions:

  • 8.9.4 or higher on the 8.x version branch.
  • 10.4.1 or higher on the 10.x version branch.

Supported operating systems:

  • Linux. Profiling Node.js applications is supported on Linux systems whose standard C library is implemented with glibc or with musl. For configuration information specific to Linux Alpine, see Running with Linux Alpine.

Supported environments:

  • Compute Engine
  • Google Kubernetes Engine (GKE)
  • App Engine flexible environment
  • App Engine standard environment

Enabling the Profiler API

Before you use the profiling agent, ensure that the underlying Profiler API is enabled. You can check the status of the API and enable it if necessary by using either the Cloud SDK gcloud command-line tool or the Cloud Console:

Cloud SDK

  1. If you have not already installed the Cloud SDK on your workstation, see Google Cloud SDK.

  2. Run the following command:

    gcloud services enable cloudprofiler.googleapis.com
    

For more information, see gcloud services.

Cloud Console

  1. Go to the APIs & Services dashboard:

    Go to APIs & services

  2. Select the project you will use to access the API.

  3. Click the Add APIs and Services button.

  4. Search for Profiler API.

  5. In the search results, select Stackdriver Profiler API.

  6. If API enabled is displayed, then the API is already enabled. If not, click the Enable button.

Using Stackdriver Profiler

In all of the supported environments, you use the Profiler by installing the package @google-cloud/profiler, adding a require statement to your application, and then deploying the application in the usual way.

Before you install @google-cloud/profiler

The package @google-cloud/profiler depends on a native module. Pre-built binaries for this native module are available for all supported language and platform combinations. To determine which pre-built binary to install, @google-cloud/profiler uses node-pre-gyp.

Installation

To install the latest version of Stackdriver Profiler, do the following:

    npm install --save @google-cloud/profiler

If you are also using the Trace agent, then when you modify your application, import the Profiler package after the Trace agent package (@google-cloud/trace-agent). For more information, see Setting up Stackdriver Trace for Node.js.
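
The following is a minimal sketch of that ordering, assuming both agents are started at the top of your application's entry point; the serviceContext values are placeholders:

    // Start the Trace agent before the Profiler, as described above.
    // The serviceContext values are placeholders for your own service.
    require('@google-cloud/trace-agent').start();
    require('@google-cloud/profiler').start({
      serviceContext: {
        service: 'your-service',
        version: '1.0.0'
      }
    });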

Compute Engine

For Compute Engine, do the following:

  1. Install the latest version of Stackdriver Profiler:

    npm install --save @google-cloud/profiler
    
  2. Modify your application's require code to pass a serviceContext object that sets service to the name of the service being profiled. Optionally, set version to the version of the service being profiled. See Service name and version arguments for more information on these configuration options:

    require('@google-cloud/profiler').start({
      serviceContext: {
        service: 'your-service',
        version: '1.0.0'
      }
    });

GKE

For GKE, do the following:

  1. Modify your Dockerfile to install the Profiler package:

    FROM node:10
    ...
    RUN npm install @google-cloud/profiler
    
  2. Modify your application's require code to pass a serviceContext object that sets service to the name of the service being profiled. Optionally, set version to the version of the service being profiled. See Service name and version arguments for more information on these configuration options:

    require('@google-cloud/profiler').start({
      serviceContext: {
        service: 'your-service',
        version: '1.0.0'
      }
    });


Istio on Google Kubernetes Engine

If you are using Istio on Google Kubernetes Engine, then also do the following:
  1. Ensure you are using GKE 1.13.11-gke.11 or later.
  2. Grant the profiling agent access to the Google Cloud metadata server and to the API server by running the following command:
    kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: whitelist-egress-googleapis
    spec:
      hosts:
      - "accounts.google.com" # Used to get token
      - "*.googleapis.com"
      ports:
      - number: 80
        protocol: HTTP
        name: http
      - number: 443
        protocol: HTTPS
        name: https
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: whitelist-egress-google-metadata
    spec:
      hosts:
      - metadata.google.internal
      - metadata.google.internal.
      addresses:
      - 169.254.169.254 # metadata server
      ports:
      - number: 80
        name: http
        protocol: HTTP
      - number: 443
        name: https
        protocol: HTTPS
    EOF
  3. To verify the configuration, run the following command:
    kubectl get serviceentry
    The following is an example of the response when the configuration was successful:
    Output
    NAMESPACE   NAME                               AGE
    default     whitelist-egress-google-metadata   20h
    default     whitelist-egress-googleapis        20h
    
  4. Configure the Istio pilot-agent to enable HTTP 1.0 in the outbound HTTP listeners:
    kubectl set env deployment/istio-pilot -n istio-system PILOT_HTTP10=1
  5. Nest the profiling agent's initialization in a retry loop that keeps retrying until Istio has completed its own initialization and is handling network traffic. For an example of a retry mechanism, see Running on Istio; a minimal sketch also follows this list.
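
The following is a minimal sketch of such a retry loop, not the official Running on Istio sample. It assumes that start() returns a promise that rejects while the Istio sidecar is not yet routing traffic; the retry count, delay, and serviceContext values are illustrative:

    // Keep retrying profiler.start() until the Istio sidecar is ready.
    const profiler = require('@google-cloud/profiler');

    async function startProfiler() {
      for (let attempt = 1; attempt <= 10; attempt++) {
        try {
          await profiler.start({
            serviceContext: {
              service: 'your-service',
              version: '1.0.0'
            }
          });
          return;
        } catch (err) {
          console.error(`Failed to start profiler (attempt ${attempt}):`, err);
          // Wait a few seconds so Istio can finish initializing.
          await new Promise(resolve => setTimeout(resolve, 5000));
        }
      }
    }

    startProfiler();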

App Engine

For App Engine flexible environment and for App Engine standard environment, the require code is similar to the following:

require('@google-cloud/profiler').start();

In App Engine, the service and version parameters are derived from the environment, so you don't need to specify them or create a serviceContext object.
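
You can still pass other options to start() on App Engine. For example, the following minimal sketch sets only the logging level described in Agent logging below (the value shown is illustrative):

    // On App Engine, service and version come from the environment, so only
    // additional options, such as the logging level, need to be provided.
    require('@google-cloud/profiler').start({logLevel: 3});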

Analyzing data

After Profiler has collected data, you can view and analyze this data using the Profiler interface. To get started using this interface, see Opening the Profiler interface.

Service name and version arguments

When you load the Profiler agent, you specify a service-name argument and an optional service-version argument to configure it.

The service name lets Profiler collect profiling data for all replicas of that service. The profiler service ensures a collection rate of one profile per minute, on average, for each service name across each combination of service versions and zones.

For example, if you have a service with two versions running across replicas in three zones (2 versions × 3 zones = 6 combinations), the profiler creates an average of 6 profiles per minute for that service.

If you use different service names for your replicas, then your service will be profiled more often than necessary, with a correspondingly higher overhead.

When selecting a service name:

  • Choose a name that clearly represents the service in your application architecture. The choice of service name matters less if you run only a single service or application; it matters more if, for example, your application runs as a set of microservices.

  • Do not use any process-specific values, such as a process ID, in the service-name string.

  • The service-name string must match this regular expression:

    ^[a-z]([-a-z0-9_.]{0,253}[a-z0-9])?$

A good guideline is to use a static string like imageproc-service as the service name.
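
As an illustration only, you can check a candidate service name against the pattern above before wiring it into the profiler configuration:

    // Illustrative check of candidate service names against the pattern above.
    const SERVICE_NAME_PATTERN = /^[a-z]([-a-z0-9_.]{0,253}[a-z0-9])?$/;

    console.log(SERVICE_NAME_PATTERN.test('imageproc-service')); // true
    console.log(SERVICE_NAME_PATTERN.test('ImageProc-Service')); // false: uppercase is not allowed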

The service version is optional. If you specify the service version, Profiler can aggregate profiling information from multiple instances and display it correctly. Use the service version to mark different versions of your services as they are deployed. The Profiler UI lets you filter the data by service version, so you can compare the performance of older and newer versions of the code.

The value of the service-version argument is a free-form string, but values for this argument typically look like version numbers, for example, 1.0.0 or 2.1.2.

Agent logging

The profiling agent can report logging information. To enable logging, set the logLevel option when starting the agent. The supported logLevel values are:

  • 0: disables all agent logging.
  • 1: enables error logging.
  • 2: enables warning logging (default).
  • 3: enables info logging.
  • 4: enables debug logging.

Set the logLevel value in the same object that provides the service context:

require('@google-cloud/profiler').start({
    serviceContext: { ... },
    logLevel:       3
});

Running with Linux Alpine

If you use Docker images that run with Linux Alpine (such as node:alpine or just alpine), you might see the following authentication error:

connection error: desc = "transport: authentication handshake failed: x509: failed to load system roots and no roots provided"

Note that to see the error you must have agent logging enabled.

This error occurs because Docker images based on Linux Alpine don't have the root SSL certificates installed by default. Those certificates are necessary for the profiling agent to communicate with the Profiler API. To resolve this error, add the following apk command to your Dockerfile:

FROM alpine
...
RUN apk add --no-cache ca-certificates

You then need to rebuild and redeploy your application.

Known issues

The following are known issues in the beta release of Stackdriver Profiler for Node.js:

  • The profiling agent for Node.js interferes with the normal exit of the program; it can take up to an hour for the program to terminate after all the tasks in the program have completed. Forcibly exiting the program, for example, by using Ctrl-C, causes the program to terminate immediately.
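
If your program should exit once its work is done, one workaround is to end the process explicitly. The following is a minimal sketch of that approach, not an official recommendation:

    // A sketch of exiting explicitly once the application's work is done,
    // instead of waiting for the process to wind down on its own.
    async function main() {
      // ...your application's work goes here...
      console.log('All tasks completed; exiting.');
      process.exit(0);
    }

    main();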

What's next

To learn about the Profiler graph and controls, go to Using the Stackdriver Profiler Interface. For advanced information, go to the following: