ESP Startup Options

The Extensible Service Proxy (ESP) is an NGINX-based proxy that enables Cloud Endpoints to provide API management features. You configure ESP by specifying custom options when you start the ESP Docker container. The startup script writes the NGINX configuration file and launches ESP.

Specifying startup options

The Extensible Service Proxy (ESP) Docker container image and the daemon script in the ESP Debian package both rely on the same startup script to configure ESP.

To run the startup script:

  1. Invoke the script with the minimum required options, as follows (the ESP arguments go after the image name):

    sudo docker run \
        --detach \
        [DOCKER_ARGUMENTS] \
        gcr.io/endpoints-release/endpoints-runtime:1

  2. If you need to specify startup options, append the ESP arguments from the following table to the command above:

-s SERVICE_NAME, --service SERVICE_NAME
  Sets the name of the Endpoints service.

none, --rollout_strategy ROLLOUT_STRATEGY
  Available in ESP version 1.7.0 and later. Specify either managed or fixed. The --rollout_strategy=managed option configures ESP to use the latest deployed service configuration. When you specify this option, within a minute after you deploy a new service configuration, ESP detects the change and automatically begins using it. We recommend that you specify this option instead of a specific configuration ID for ESP to use. You do not need to specify the --version option when you set --rollout_strategy to managed.

-v CONFIG_ID, --version CONFIG_ID
  Sets the service configuration ID to be used by ESP. See Getting the Service Name and Configuration ID for the information you need to set this option. When you specify --rollout_strategy=fixed, or when you do not include the --rollout_strategy option, you must include the --version option and specify a configuration ID. In this case, every time you deploy a new Endpoints configuration, you must restart ESP with the new configuration ID.

-p HTTP1_PORT, --http_port HTTP1_PORT
  Sets the ports to be exposed by ESP for HTTP/1.x connections.¹

-P HTTP2_PORT, --http2_port HTTP2_PORT
  Sets the ports to be exposed by ESP for HTTP/2 connections.¹

-S SSL_PORT, --ssl_port SSL_PORT
  Sets the ports to be exposed by ESP for HTTPS connections.¹

-a BACKEND, --backend BACKEND
  Sets the address of the HTTP/1.x application backend server. The default backend address is http://127.0.0.1:8081. You can specify a protocol prefix, for example: grpc://127.0.0.1:8081. If you do not specify a protocol prefix, the default is http. If your backend server is running on Compute Engine in a container, you can specify the container name and the port, for example: CONTAINER_NAME:PORT.

-N STATUS_PORT, --status_port STATUS_PORT
  Sets the status port (default: 8090).

none, --disable_trace_sampling
  By default, ESP samples a small number of requests to your API every second to get traces that it sends to Stackdriver Trace. Specify this option to disable trace sampling. See Tracing your API for more information.

-n NGINX_CONFIG, --nginx_config NGINX_CONFIG
  Sets the location of a custom NGINX configuration file. If you specify this option, ESP fetches the specified configuration file and then immediately launches NGINX with it.

-k SERVICE_ACCOUNT_KEY, --service_account_key SERVICE_ACCOUNT_KEY
  Sets the service account credentials JSON file. If provided, ESP uses the service account to generate an access token to call Service Infrastructure APIs. You only need to specify this option when ESP is running on platforms other than Google Cloud Platform (GCP), such as your local desktop, Kubernetes, or another cloud provider. See Creating a service account for more information.

-z HEALTHZ_PATH, --healthz HEALTHZ_PATH
  Defines a health-checking endpoint on the same ports as the application backend. For example, -z healthz makes ESP return HTTP 200 for the path /healthz instead of forwarding the request to the backend. Default: not used.

¹ These ports are optional and must be distinct from each other. If you do not specify any port option, ESP accepts HTTP/1.x connections on port 8080. For HTTPS connections, ESP expects the TLS secrets to be located at /etc/nginx/ssl/nginx.crt and /etc/nginx/ssl/nginx.key.
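For local testing of HTTPS, one way to satisfy those expected file names is to generate a self-signed certificate and key and mount the directory into the container at /etc/nginx/ssl. The following is a sketch for local testing only (the ssl directory name and the subject are arbitrary choices, not requirements):

```shell
# Create a self-signed certificate and key with the file names ESP
# expects at /etc/nginx/ssl (nginx.crt and nginx.key).
# For local testing only; use a real certificate in production.
mkdir -p ssl
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -subj "/CN=localhost" \
    -keyout ssl/nginx.key \
    -out ssl/nginx.crt
```

You would then add a Docker flag such as --volume="$PWD/ssl:/etc/nginx/ssl" when starting the container so that the files appear at the paths ESP expects.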

Sample command line invocations:

The following examples show how to use some of the command line arguments:

To start ESP so that it handles requests coming in at HTTP/1.x port 80 and HTTPS port 443 and sends application requests to your backend server:

sudo docker run \
    --detach \
    [DOCKER_ARGUMENTS] \
    gcr.io/endpoints-release/endpoints-runtime:1 \
    --rollout_strategy=managed \
    --http_port=80 \
    --ssl_port=443 \
    --backend=BACKEND

To start ESP with a custom NGINX configuration, using the service account credentials file to fetch the service configuration and connect to Service Control:

sudo docker run \
    --detach \
    --volume=$HOME/Downloads:/esp \
    gcr.io/endpoints-release/endpoints-runtime:1 \
    --rollout_strategy=managed \
    --service_account_key=/esp/serviceaccount.json \
    --nginx_config=/esp/nginx.conf

Note that you must use the Docker --volume or --mount flag to mount the JSON file containing the service account's private key and the custom NGINX configuration file as volumes inside ESP's Docker container. The example above maps the $HOME/Downloads directory on the local computer to the /esp directory in the container. (When you create a service account key, the private key file is typically saved to your Downloads directory; you can copy it to another directory if you prefer.) See Manage data in Docker for more information.
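If you prefer Docker's --mount syntax over --volume, the same bind mount can be expressed as shown below. This is a sketch, not an official invocation: the image name is the publicly documented ESP v1 image (gcr.io/endpoints-release/endpoints-runtime:1), and you should substitute your own ESP arguments. Saving the command to a script lets you review and syntax-check it before running it on a host where Docker is installed:

```shell
# Sketch: a --volume bind mount rewritten with Docker's --mount syntax.
# Saved to a script so it can be reviewed and syntax-checked before
# running on a machine with Docker installed.
cat > run-esp.sh <<'EOF'
sudo docker run \
    --detach \
    --mount type=bind,source="$HOME/Downloads",target=/esp \
    gcr.io/endpoints-release/endpoints-runtime:1 \
    --rollout_strategy=managed \
    --service_account_key=/esp/serviceaccount.json
EOF
bash -n run-esp.sh && echo "syntax OK"
```

Both forms mount $HOME/Downloads at /esp inside the container; --mount is simply more explicit about the mount type, source, and target.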

Customizing Extensible Service Proxy on Compute Engine

To customize ESP on a Compute Engine VM instance using the Debian package, provide custom arguments to the NGINX daemon script by editing /etc/default/nginx:

  1. Use PORT=80 to change the default port (equivalent to the --http_port option).

  2. Use STATUS_PORT=8090 to change the status port (equivalent to the --status_port option).

  3. Modify the DAEMON_ARGS variable to specify any additional command line arguments to the startup script.

  4. After editing the file, run the following command to restart ESP:

    sudo service nginx restart
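Putting the steps above together, a hypothetical /etc/default/nginx might look like the following. SERVICE_NAME is a placeholder, and the contents of DAEMON_ARGS are an illustration of passing startup-script arguments, not required values:

```shell
# Hypothetical /etc/default/nginx for ESP installed from the Debian package.
# PORT and STATUS_PORT override the defaults; DAEMON_ARGS passes any
# additional command line arguments to the startup script.
PORT=80
STATUS_PORT=8090
DAEMON_ARGS="--service=SERVICE_NAME --rollout_strategy=managed"
```

After saving the file, restart ESP with sudo service nginx restart so the new values take effect.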
