Running ESP Locally or on Another Platform

This page explains how to configure and run an instance of the Extensible Service Proxy (ESP) on a local machine or on another cloud provider, such as Amazon Web Services (AWS).

You can run ESP locally on a Linux or macOS computer. Windows is not supported. Hosting a local instance of ESP allows you to:

  • Try out ESP before actually deploying it to a production platform.
  • Verify that security settings are configured and working properly, and that metrics and logs appear in the Endpoints dashboard as expected.

You can deploy your application and ESP on the same host or on different hosts. You can also run ESP on a Linux VM or AWS.

Prerequisites

As a starting point, this page assumes that you have a Google Cloud project and that you have installed the Cloud SDK (gcloud) and Docker, both of which are used in the steps that follow.

If you need an API for testing with ESP, you can configure and deploy the sample code in the Optional: Using a sample API section. If you have already configured and deployed your Cloud Endpoints API, skip to Creating a service account.

Optional: Using a sample API

This section walks you through configuring and deploying the Python Endpoints Getting Started sample locally. Do the steps in this section only if you do not have an API for testing with ESP.

The Endpoints Getting Started sample is available in other languages. See the Samples page for the GitHub location of the getting-started sample in your preferred language. Follow the instructions in the sample's README.md for running locally, and then follow the instructions in this section to configure Endpoints and to deploy the Endpoints configuration.

Getting required software

If you do not have a Python development environment set up yet, see Setting Up a Python Development Environment for guidance. Make sure that you have Python, pip, and virtualenv installed; they are used in the Starting your local server section.

Getting the sample code

  1. Clone the sample app repository to your local machine:

    git clone https://github.com/GoogleCloudPlatform/python-docs-samples
    

  2. Change to the directory that contains the sample code:

    cd python-docs-samples/endpoints/getting-started
    

Configuring Endpoints

  1. In the sample code directory, open the openapi.yaml configuration file.

    swagger: "2.0"
    info:
      description: "A simple Google Cloud Endpoints API example."
      title: "Endpoints Example"
      version: "1.0.0"
    host: "echo-api.endpoints.YOUR-PROJECT-ID.cloud.goog"

  2. In the host field, replace YOUR-PROJECT-ID with your own Cloud project ID.

  3. Save openapi.yaml.
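If you script your deployments, the substitution in step 2 can be done with sed instead of an editor. The sketch below is self-contained: it writes a stub openapi.yaml so you can run it anywhere, and uses a hypothetical project ID (example-project-12345). In the sample directory, skip the first command and run sed against the real file with your own project ID.

```shell
# Write a stub openapi.yaml so this sketch is self-contained;
# in the sample repo the real file already exists.
cat > openapi.yaml <<'EOF'
swagger: "2.0"
host: "echo-api.endpoints.YOUR-PROJECT-ID.cloud.goog"
EOF

# Substitute a (hypothetical) project ID into the host field.
# -i.bak keeps a backup copy and works with both GNU and BSD sed.
sed -i.bak 's/YOUR-PROJECT-ID/example-project-12345/' openapi.yaml

grep '^host:' openapi.yaml
# -> host: "echo-api.endpoints.example-project-12345.cloud.goog"
```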

Deploying the Endpoints configuration

To deploy the Endpoints configuration, you use the gcloud endpoints services deploy command. This command uses Service Infrastructure, Google’s foundational services platform, which Cloud Endpoints and other services use to create and manage APIs and services.

  1. Update the Cloud SDK:

    gcloud components update
    
  2. Make sure that Cloud SDK (gcloud) is authorized to access your data and services on Google Cloud Platform:

    gcloud auth login
    

    A new browser tab opens and you are prompted to choose an account.

  3. Set the default project to your project ID:

    gcloud config set project [YOUR-PROJECT-ID]
    

    Replace [YOUR-PROJECT-ID] with the project ID of the Cloud project that you specified in openapi.yaml.

  4. Deploy your configuration:

    gcloud endpoints services deploy openapi.yaml
    

Service Management uses the text that you specified in the host field in the openapi.yaml file to create a new Cloud Endpoints service with the name echo-api.endpoints.YOUR-PROJECT-ID.cloud.goog (if it does not exist), and then configures the service according to your OpenAPI configuration file.

As it is creating and configuring the service, Service Management outputs a great deal of information to the terminal. You can safely ignore the warnings about the paths in openapi.yaml not requiring an API key. On successful completion, you will see a line like the following that displays the service configuration ID and the service name within square brackets:

Service Configuration [2017-02-13r0] uploaded for service [echo-api.endpoints.example-project-12345.cloud.goog]

In the example above, 2017-02-13r0 is the service configuration ID, and echo-api.endpoints.example-project-12345.cloud.goog is the service name.
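If you deploy from a script, the configuration ID and service name can be parsed out of that line. A minimal sketch using sed, run here against the example output line shown above:

```shell
# Example output line from `gcloud endpoints services deploy` (copied from above).
LINE='Service Configuration [2017-02-13r0] uploaded for service [echo-api.endpoints.example-project-12345.cloud.goog]'

# Extract the bracketed configuration ID and service name.
CONFIG_ID=$(echo "$LINE" | sed -n 's/.*Service Configuration \[\([^]]*\)\].*/\1/p')
SERVICE_NAME=$(echo "$LINE" | sed -n 's/.*service \[\([^]]*\)\].*/\1/p')

echo "$CONFIG_ID"     # 2017-02-13r0
echo "$SERVICE_NAME"  # echo-api.endpoints.example-project-12345.cloud.goog
```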

Starting your local server

  1. Create a virtualenv, activate it, and install the application dependencies.

    virtualenv env
    source env/bin/activate
    pip install -r requirements.txt
    

  2. Start the server:

    python main.py
    

  3. Open another terminal window and use curl to send a request:

    curl --request POST \
        --header "content-type:application/json" \
        --data '{"message":"hello world"}' \
        http://localhost:8080/echo
    

    The API echoes back the message that you send it, and responds with the following:

    {
      "message": "hello world"
    }
    

Creating a service account

To provide management for your API, ESP requires the services in Service Infrastructure. To call these services, ESP must use access tokens. When you deploy ESP to GCP platforms such as GKE, Compute Engine, or App Engine flexible environment, ESP obtains access tokens for you through the GCP metadata service.

When you deploy ESP to a non-GCP environment, such as your local desktop, an on-premises Kubernetes cluster, or another cloud provider, you must provide ESP with a service account JSON file that contains a private key. ESP uses the service account to generate access tokens to call the services that it needs to manage your API.

You can use either the GCP Console or the gcloud command-line tool to create the service account and private key file and to assign the service account the following roles:

Console

  1. Open the Service Accounts page in the GCP Console.

    Go to the Service Accounts page

  2. Click Select a project.

  3. Select your project and click Open.

  4. Click Create Service Account.
  5. In the Service account name field, enter the name for your service account.

  6. Click Role and select:

    • Service Management -> Service Controller
    • Cloud Trace -> Cloud Trace Agent
  7. Click Furnish a new private key.

  8. For the Key type, use the default type, JSON.
  9. Click Save.

This creates the service account and downloads its private key to a JSON file.

gcloud

  1. Enter the following to display the project IDs for your Cloud projects:

    gcloud projects list
    
  2. Replace PROJECT_ID in the following command to set the default project to the one that your API is in:

    gcloud config set project PROJECT_ID
    
  3. Make sure that Cloud SDK (gcloud) is authorized to access your data and services on GCP:

    gcloud auth login
    

    If you have more than one account, make sure to choose the account that is in the GCP project that the API is in. If you run gcloud auth list, the account that you selected is shown as the active account for the project.

  4. To create a service account, run the following command and replace SERVICE_ACCOUNT_NAME and My Service Account with the name and display name that you want to use:

    gcloud iam service-accounts create SERVICE_ACCOUNT_NAME \
      --display-name "My Service Account"
    

    The command assigns an email address for the service account in the following format:

    SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com
    

    This email address is required in the subsequent commands.

  5. Create a service account key file:

    gcloud iam service-accounts keys create ~/service-account-creds.json \
      --iam-account SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com
    
  6. Add the Service Controller role:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member serviceAccount:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com \
        --role roles/servicemanagement.serviceController
    
  7. Add the Cloud Trace Agent role to enable Stackdriver Trace:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member serviceAccount:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com \
        --role roles/cloudtrace.agent
    

See gcloud iam service-accounts for more information about the commands.
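Before handing the key file to ESP, you can sanity-check that it is valid JSON and contains the fields a service account key normally has. The sketch below is self-contained: it writes a stand-in key file with hypothetical values so it can run anywhere; with a real key, point the check at ~/service-account-creds.json instead.

```shell
# Stand-in key file so this sketch is self-contained; your real file is
# the one created by `gcloud iam service-accounts keys create` above.
cat > service-account-creds.json <<'EOF'
{"type": "service_account",
 "client_email": "my-sa@example-project-12345.iam.gserviceaccount.com",
 "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"}
EOF

# Fail fast if the file is not JSON or is missing the expected fields.
python3 - <<'EOF'
import json, sys
with open('service-account-creds.json') as f:
    creds = json.load(f)
for field in ('type', 'client_email', 'private_key'):
    if field not in creds:
        sys.exit('missing field: ' + field)
print('key file OK for', creds['client_email'])
EOF
```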

Running ESP in a container

You can run ESP in a Docker container locally or on another platform using the docker run command. You can also run ESP in a container on a Kubernetes cluster.

Make sure to rename the JSON file to service-account-creds.json and copy it to $HOME/Downloads/ if it was downloaded to a different directory. This way, the full path name matches the options below.

Running ESP in a Docker container locally or on another platform

Linux

sudo docker run \
    --detach \
    --name="esp" \
    --net="host" \
    --volume=$HOME/Downloads:/esp \
    --publish=8082 \
    gcr.io/endpoints-release/endpoints-runtime:1 \
    --service=[YOUR_SERVICE_NAME] \
    --rollout_strategy=managed \
    --http_port=8082 \
    --backend=localhost:8080 \
    --service_account_key=/esp/service-account-creds.json
  

macOS

The Docker --net="host" option does not work on macOS. Instead, you must do explicit port mapping from the host to the container, replacing --net="host" with --publish=8082:8082. You also need to replace localhost with the special Mac-only DNS name docker.for.mac.localhost. See Use cases and workarounds in the Docker documentation for more information.

sudo docker run \
    --detach \
    --name="esp" \
    --publish=8082:8082 \
    --volume=$HOME/Downloads:/esp \
    gcr.io/endpoints-release/endpoints-runtime:1 \
    --service=[YOUR_SERVICE_NAME] \
    --rollout_strategy=managed \
    --http_port=8082 \
    --backend=docker.for.mac.localhost:8080 \
    --service_account_key=/esp/service-account-creds.json
  

Another platform

sudo docker run \
    --detach \
    --name="esp" \
    --net="host" \
    --volume=$HOME/Downloads:/esp \
    --publish=8082 \
    gcr.io/endpoints-release/endpoints-runtime:1 \
    --service=[YOUR_SERVICE_NAME] \
    --rollout_strategy=managed \
    --http_port=8082 \
    --backend=[IP_Address]:[PORT] \
    --service_account_key=/esp/service-account-creds.json
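The three docker run commands differ only in networking and in the backend address; the ESP arguments themselves can be assembled once and reused. A minimal sketch using hypothetical values (the service name and backend address below are examples, not real endpoints):

```shell
# Hypothetical values -- substitute your own service name and backend address.
SERVICE_NAME="echo-api.endpoints.example-project-12345.cloud.goog"
BACKEND="10.0.0.5:8080"

# Assemble the ESP arguments once; the same list works in any of the
# docker run commands on this page.
ESP_ARGS="--service=${SERVICE_NAME} \
--rollout_strategy=managed \
--http_port=8082 \
--backend=${BACKEND} \
--service_account_key=/esp/service-account-creds.json"

echo "$ESP_ARGS"
```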
  

The following table describes the Docker options used in the above commands. For information about the ESP options used in the example, see ESP options.

Option Description
--detach This Docker option starts the container in detached mode, so it runs in the background.
--name="esp" This Docker option provides an easy-to-access name for the container. For example, to see logs from the container, you could run docker logs esp.
--net="host" This Docker option indicates that the Docker container should use the same network configuration as the host machine, allowing it to make calls to localhost on the host machine. This option does not work for running ESP locally on macOS.
--publish=8082:8082 For macOS, when you want to run ESP locally, use this Docker option instead of --net="host" to do explicit port mapping from the host to the container.
--volume=$HOME/Downloads:/esp This Docker option maps your local $HOME/Downloads directory to the /esp directory in the container. This mapping is used by the --service_account_key argument, explained in ESP options.

Running ESP in a container on a Kubernetes cluster

You deploy an API service by deploying ESP as a Docker container in the same Kubernetes Pod as the application container.

Notice that the pods running the proxy and the application are grouped under a Kubernetes service by using a label selector, such as app: my-api. The Kubernetes service specifies the access policy to load balance the client requests to the proxy port.

ESP, which will be running inside a container, needs access to the credentials stored locally in the service-account-creds.json file in the $HOME/Downloads/ directory. To provide ESP with access to the credentials, you create a Kubernetes secret and mount the secret as a Kubernetes volume. Run the following command to create the secret:

kubectl create secret generic service-account-creds \
  --from-file=$HOME/Downloads/service-account-creds.json

On success, you see the message: secret "service-account-creds" created

To deploy Extensible Service Proxy, start it by using the command-line options described on the ESP Startup Options page.

In your Kubernetes configuration file, add the following, replacing [YOUR_APP_NAME] with the name of your API and [YOUR_SERVICE_NAME] with the name of your Cloud Endpoints service:

spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: "[YOUR_APP_NAME]"
    spec:
      volumes:
        - name: service-account-creds
          secret:
            secretName: service-account-creds
      containers:
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:1
          args: [
            "--http_port=8082",
            "--backend=127.0.0.1:8081",
            "--service=[YOUR_SERVICE_NAME]",
            "--rollout_strategy=managed",
            "--service_account_key=/etc/nginx/creds/service-account-creds.json"
          ]
          ports:
            - containerPort: 8082
          volumeMounts:
            - mountPath: /etc/nginx/creds
              name: service-account-creds
              readOnly: true
ESP options

The following table describes the ESP options used in the above examples. See Specifying Startup Options for ESP for the complete list of ESP options.

Option Description
--service=[YOUR_SERVICE_NAME] This ESP option sets the name of the Endpoints service. Replace [YOUR_SERVICE_NAME] with the name of your service.
--rollout_strategy=managed This option configures ESP to use the latest deployed service configuration. When you specify this option, within a minute after you deploy a new service configuration, ESP detects the change and automatically begins using it. We recommend that you specify this option instead of a specific configuration ID for ESP to use.
--http_port=8082 This ESP option sets ESP to receive HTTP/1.x requests on port 8082.
--backend=host:port This ESP option sets the address of your HTTP/1.x application backend server. You can use one of the following values:
  • localhost:8080 Indicates that your API's backend server receives requests at localhost on port 8080. This value works only on Linux; it does not work on macOS or Windows.
  • docker.for.mac.localhost:8080 Use this value instead of localhost:8080 to run your API locally on macOS.
  • [IP_Address]:[PORT] If your application backend server is running in another container or on another machine, specify its IP address and port.
--service_account_key=/esp/service-account-creds.json This ESP option specifies where the private key file is located. The /esp directory matches the mapping from the --volume argument above, and the file name must match the name of the private key file that you downloaded when you created the service account.

Sending requests

To confirm that the service account file is correct and that the ports are mapped correctly, send some requests to your API and make sure that the requests are going through ESP. You can see the ESP logs by running:

sudo docker logs esp

The following examples send requests to the sample API. If you are not using the sample API, we recommend that you run similar tests.

You have configured the Docker container to receive requests on port 8082. If you send a request directly to the server at http://localhost:8080, the request bypasses ESP. For example:

Request:

    curl --request POST \
        --header "content-type:application/json" \
        --data '{"message":"hello world"}' \
        http://localhost:8080/echo

Response:

{
  "message": "hello world"
}

When you send a request to http://localhost:8082, which passes through ESP, and you do not send an API key, ESP rejects the request. For example:

Request:

    curl --request POST \
        --header "content-type:application/json" \
        --data '{"message":"hello world"}' \
        http://localhost:8082/echo

Response:

{
 "code": 16,
 "message": "Method doesn't allow unregistered callers (callers without
  established identity). Please use API Key or other form of API consumer
  identity to call this API.",
 "details": [
  {
   "@type": "type.googleapis.com/google.rpc.DebugInfo",
   "stackEntries": [],
   "detail": "service_control"
  }
 ]
}
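The two checks above can be scripted by comparing HTTP status codes instead of inspecting response bodies. A minimal sketch: the helper below wraps the same curl request and prints only the status code; the two usage lines assume the sample backend and ESP are running on the ports used on this page.

```shell
# Print only the HTTP status code of the echo POST request to the given URL.
http_status() {
  curl -s -o /dev/null -w '%{http_code}' \
      --request POST \
      --header "content-type:application/json" \
      --data '{"message":"hello world"}' \
      "$1"
}

# Usage (with the servers from this page running):
#   http_status http://localhost:8080/echo   # direct to the backend
#   http_status http://localhost:8082/echo   # through ESP, without an API key
```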

To test the API with an API Key:

  1. Create an API key in the API credentials page.

    Create an API key

  2. Click Create credentials, then select API key.

  3. Copy the key, then paste it into the following environment variable statement:

    export KEY=AIza...

  4. Send a request with the key:

    curl --request POST \
        --header "content-type:application/json" \
        --data '{"message":"hello world"}' \
        http://localhost:8082/echo?key=$KEY
    

    You should see a successful response:

    {
      "message": "hello world"
    }
    

Cleaning up

Shut down and remove the esp Docker container using the docker tool:

sudo docker stop esp
sudo docker rm esp

If you want to clean up the deployed service configuration, see Deleting an API and API Instances.

What's next
