Getting Started with gRPC on Kubernetes

This tutorial shows you how to deploy a simple example gRPC service with the Extensible Service Proxy (ESP) to a Kubernetes cluster that is not running on Google Cloud Platform (GCP). The tutorial uses the Python version of the bookstore-grpc sample. See the What's next section for gRPC samples in other languages.

The tutorial uses prebuilt container images of the sample code and ESP, which are stored in Google Container Registry. If you are unfamiliar with containers, see the Kubernetes and Docker documentation for more information.

For an overview of Cloud Endpoints, see About Cloud Endpoints and Cloud Endpoints Architecture.


This tutorial assumes that you already have Minikube or a Kubernetes cluster set up. For more information, see the Kubernetes Documentation.

Task List

Use the following high-level task list as you work through the tutorial. All tasks are required to successfully send requests to the API.

  1. Set up a Cloud Platform project, and download required software. See Before you begin.
  2. Copy and configure files from the bookstore-grpc sample. See Configuring Endpoints.
  3. Deploy the Endpoints configuration to create an Endpoints service. See Deploying the Endpoints configuration.
  4. Create credentials for your Cloud Endpoints service. See Creating credentials for your service.
  5. Create a backend to serve the API and deploy the API. See Deploying the API backend.
  6. Get the service's external IP address. See Getting the service's external IP address.
  7. Send a request to the API. See Sending a request to the API.
  8. Avoid incurring charges to your GCP account. See Clean up.

Before you begin

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. Select or create a GCP project.

    Go to the Manage resources page

  3. Make sure that billing is enabled for your project.

    Learn how to enable billing

  4. Note the project ID, because you'll need it later.
  5. Install and initialize the Cloud SDK.
  6. Update the Cloud SDK and install the Endpoints components.
    gcloud components update
  7. Make sure that Cloud SDK (gcloud) is authorized to access your data and services on Google Cloud Platform:
    gcloud auth login
    A new browser tab opens and you are prompted to choose an account.
  8. Set the default project to your project ID.
    gcloud config set project [YOUR_PROJECT_ID]

    Replace [YOUR_PROJECT_ID] with your project ID. Do not include the square brackets.

    If you have other Cloud Platform projects, and you want to use gcloud to manage them, see Managing Cloud SDK Configurations.

  9. Install kubectl:
    gcloud components install kubectl
  10. Acquire new user credentials to use for Application Default Credentials. The user credentials are needed to authorize kubectl.
    gcloud auth application-default login
    A new browser tab opens and you are prompted to choose an account.
  11. Follow the steps in the gRPC Python Quickstart to install gRPC and the gRPC tools.

Configuring Endpoints

The bookstore-grpc sample contains the files that you need to copy locally and configure.

  1. Create a self-contained protobuf descriptor file from your service .proto file:
    1. Save a copy of bookstore.proto from the example repo. This file defines the Bookstore service's API.
    2. Create the following directory: mkdir generated_pb2
    3. Create the descriptor file, api_descriptor.pb, using the protoc protocol buffers compiler. Run the following command in the directory where you saved bookstore.proto:
      python -m grpc_tools.protoc \
          --include_imports \
          --include_source_info \
          --proto_path=. \
          --descriptor_set_out=api_descriptor.pb \
          --python_out=generated_pb2 \
          --grpc_python_out=generated_pb2 \
          bookstore.proto

      In the above command, --proto_path is set to the current working directory. In your gRPC build environment, if you use a different directory for .proto input files, change --proto_path so the compiler searches the directory where you saved bookstore.proto.

  2. Create a gRPC API Configuration YAML file:
    1. Save a copy of api_config.yaml. This file defines the gRPC API configuration for the Bookstore service.
    2. Replace <MY_PROJECT_ID> in your api_config.yaml file with your GCP project ID. For example:

      # Name of the service configuration.
      name: bookstore.endpoints.<MY_PROJECT_ID>.cloud.goog

      Note that the apis.name field value in this file must exactly match the fully-qualified API name from the .proto file; otherwise deployment won't work. The Bookstore service is defined in bookstore.proto inside package endpoints.examples.bookstore. Its fully-qualified API name is endpoints.examples.bookstore.Bookstore, just as it appears in api_config.yaml:

      apis:
        - name: endpoints.examples.bookstore.Bookstore

See Configuring Endpoints for more information.
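The matching rule above can be sketched in a few lines of Python. The package and service names below are taken from the sample's bookstore.proto; the helper function is mine, for illustration only:

```python
# The fully-qualified API name is the proto package joined to the service name.
PROTO_PACKAGE = "endpoints.examples.bookstore"
SERVICE = "Bookstore"

def fully_qualified_api_name(package: str, service: str) -> str:
    """Builds the name that must appear under apis.name in api_config.yaml."""
    return f"{package}.{service}"

api_name = fully_qualified_api_name(PROTO_PACKAGE, SERVICE)
print(api_name)  # endpoints.examples.bookstore.Bookstore
```

If this computed value and the apis.name entry in api_config.yaml differ in any way, deployment fails.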

Deploying the Endpoints Configuration

To deploy the Endpoints configuration, you use the gcloud endpoints services deploy command. This command uses Service Infrastructure, Google's foundational services platform, which Endpoints and other services use to create and manage APIs and services.

  1. Make sure you are in the directory where api_descriptor.pb and api_config.yaml are located.
  2. Deploy the proto descriptor file and the configuration file using the gcloud command-line tool:
    gcloud endpoints services deploy api_descriptor.pb api_config.yaml

    As it is creating and configuring the service, Service Management outputs a great deal of information to the terminal. On successful completion, you will see a line like the following that displays the service configuration ID and the service name:

    Service Configuration [2017-02-13r0] uploaded for service [SERVICE_NAME]

    In the above example, 2017-02-13r0 is the service configuration ID and SERVICE_NAME is the service name. The service configuration ID consists of a date stamp followed by a revision number. If you deploy the Endpoints configuration again on the same day, the revision number in the service configuration ID is incremented.

If you get an error message, see Troubleshooting Endpoints Configuration Deployment.

See Deploying the Endpoints Configuration for additional information.
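As a quick illustration of the configuration ID format described above, the following sketch parses an ID such as 2017-02-13r0 into its date stamp and revision number. The helper name is mine and not part of any Google tool:

```python
import re
from datetime import date

def parse_config_id(config_id: str) -> tuple:
    """Splits a service configuration ID into (date stamp, revision number)."""
    m = re.fullmatch(r"(\d{4})-(\d{2})-(\d{2})r(\d+)", config_id)
    if m is None:
        raise ValueError(f"unexpected config ID: {config_id!r}")
    year, month, day, rev = m.groups()
    return date(int(year), int(month), int(day)), int(rev)

stamp, revision = parse_config_id("2017-02-13r0")
print(stamp, revision)  # 2017-02-13 0
```

A second deployment on the same day would produce 2017-02-13r1, and so on.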

Creating credentials for your service

To provide management for your API, ESP requires the services in Service Infrastructure. To call these services, ESP must use access tokens. When you deploy ESP to GCP platforms such as GKE or Compute Engine, ESP obtains access tokens for you through the GCP metadata service.

When you deploy ESP to a non-GCP environment, such as your local desktop, an on-premises Kubernetes cluster, or another cloud provider, you must provide ESP with a service account JSON file that contains a private key. ESP uses the service account to generate access tokens to call the services that it needs to manage your API.

You can use either the GCP Console or the gcloud command-line tool to create the service account and private key file and to assign the service account the following roles:

  - Service Management Service Controller (roles/servicemanagement.serviceController)
  - Cloud Trace Agent (roles/cloudtrace.agent)

To create the service account and key by using the GCP Console:
  1. Open the Service Accounts page in the GCP Console.

    Go to the Service Accounts page

  2. Click Select a project.
  3. Select the project that your API was created in and click Open.
  4. Click + Create Service Account.
  5. In the Service account name box, enter the name for your service account.
  6. Click Create.
  7. Click Select a role and select Service Management > Service Controller.
  8. Click + Add another role.
  9. Click Select a role and select Cloud Trace > Cloud Trace Agent.
  10. Click Continue.
  11. Click + Create key. The right-side panel opens.
  12. For the Key type, use the default type, JSON.
  13. Click Create.
  14. Click Done.

This creates the service account and downloads its private key to a JSON file.

To create the service account and key by using the gcloud command-line tool instead:


  1. Enter the following to display the project IDs for your Cloud projects:

    gcloud projects list
  2. Replace PROJECT_ID in the following command to set the default project to the one that your API is in:

    gcloud config set project PROJECT_ID
  3. Make sure that Cloud SDK (gcloud) is authorized to access your data and services on GCP:

    gcloud auth login

    If you have more than one account, make sure to choose the account that has access to the GCP project that the API is in. If you run gcloud auth list, the account that you selected is shown as the active account for the project.

  4. To create a service account, run the following command and replace SERVICE_ACCOUNT_NAME and My Service Account with the name and display name that you want to use:

    gcloud iam service-accounts create SERVICE_ACCOUNT_NAME \
      --display-name "My Service Account"

    The command assigns an email address for the service account in the following format:

    SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com

    This email address is required in the subsequent commands.

  5. Create a service account key file:

    gcloud iam service-accounts keys create ~/service-account-creds.json \
        --iam-account SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com
  6. Add the Service Controller role:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member serviceAccount:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com \
        --role roles/servicemanagement.serviceController
  7. Add the Cloud Trace Agent role to enable Stackdriver Trace:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member serviceAccount:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com \
        --role roles/cloudtrace.agent

See gcloud iam service-accounts for more information about the commands.
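For reference, the service account's email address and the --member flag value both follow a fixed pattern. The sketch below assembles them from a name and project ID (the example values are hypothetical; the formats are the standard IAM ones):

```python
def service_account_email(name: str, project_id: str) -> str:
    """Service account emails always use the iam.gserviceaccount.com domain."""
    return f"{name}@{project_id}.iam.gserviceaccount.com"

def iam_member(name: str, project_id: str) -> str:
    """IAM policy bindings reference service accounts with a serviceAccount: prefix."""
    return "serviceAccount:" + service_account_email(name, project_id)

print(service_account_email("my-esp-sa", "example-project"))
# my-esp-sa@example-project.iam.gserviceaccount.com
print(iam_member("my-esp-sa", "example-project"))
# serviceAccount:my-esp-sa@example-project.iam.gserviceaccount.com
```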

Deploying the API backend

So far you have deployed the service configuration to Service Management, but you have not yet deployed the code that will serve the API backend. This section walks you through deploying prebuilt containers for the sample API and ESP to Kubernetes.

Providing ESP with the service credentials

ESP, which will be running inside a container, needs access to the credentials stored locally in the service-account-creds.json file. To provide ESP with access to the credentials, you create a Kubernetes secret and mount the Kubernetes secret as a Kubernetes volume.

To create the Kubernetes secret and mount the volume:

  1. If you used the GCP Console to create the service account, rename the JSON file to service-account-creds.json. Move it to the same directory where api_descriptor.pb and api_config.yaml are located.

  2. Create a Kubernetes secret with the service account credentials:

    kubectl create secret generic service-account-creds \
        --from-file=service-account-creds.json

    On success, you see the message: secret "service-account-creds" created

The deployment manifest file that you will use to deploy the API and ESP to Kubernetes already contains the secret volume, as shown in the following two sections of the file:

  volumes:
    - name: service-account-creds
      secret:
        secretName: service-account-creds

  volumeMounts:
    - mountPath: /etc/nginx/creds
      name: service-account-creds
      readOnly: true
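The link between the two fragments is the shared volume name: Kubernetes matches a volumeMount to a volume by name, and the container then sees the secret's file under the mount path. A minimal sketch of that matching, using the same values as the manifest fragments above:

```python
# Sketch of the two manifest fragments as Python dicts.
volume = {
    "name": "service-account-creds",
    "secret": {"secretName": "service-account-creds"},
}
volume_mount = {
    "mountPath": "/etc/nginx/creds",
    "name": "service-account-creds",
    "readOnly": True,
}

# Kubernetes pairs a volumeMount with a volume by the shared name;
# if these differ, the pod fails to start.
assert volume_mount["name"] == volume["name"]
print("credentials mounted under", volume_mount["mountPath"])
```

Inside the container, ESP reads the key at the mount path plus the file name inside the secret, i.e. /etc/nginx/creds/service-account-creds.json.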

Configuring the service name and starting the service

ESP needs to know the name of your service to find the configuration that you deployed previously (via the gcloud endpoints services deploy command).

To configure the service name and start the service:

  1. Save a copy of the deployment manifest file, k8s-grpc-bookstore.yaml, to the same directory as service-account-creds.json.

  2. Open k8s-grpc-bookstore.yaml and replace SERVICE_NAME with the name of your Endpoints service. This is the same name that you configured in the name field in the api_config.yaml file.

      - name: esp
        args: [
          "--service=SERVICE_NAME",
          "--rollout_strategy=managed",
        ]
    The --rollout_strategy=managed option configures ESP to use the latest deployed service configuration. When you specify this option, within a minute after you deploy a new service configuration, ESP detects the change and automatically begins using it. We recommend that you specify this option instead of a specific configuration ID for ESP to use. For more details on the ESP arguments, see ESP Startup Options.

  3. Start the service to deploy the service on Kubernetes:

    kubectl create -f k8s-grpc-bookstore.yaml

    If you see an error message similar to the following:

    The connection to the server localhost:8080 was refused - did you specify the right host or port?

    then kubectl is not properly configured. See Configure kubectl for more information.

Getting the service's external IP address

You'll need the service's external IP address to send requests to the sample API. It can take a few minutes after you start your service in the container before the external IP address is ready.

To view the external IP address:

  1. Invoke the command:

    kubectl get service
  2. Note the value for EXTERNAL-IP and save it in a SERVER_IP environment variable, because it is used when sending requests to the sample API.
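If you prefer to capture the address programmatically, kubectl's default output is a whitespace-separated table. The sketch below pulls the EXTERNAL-IP column out of such output; the sample output and service name are illustrative, and the simple split assumes no column is empty:

```python
def external_ip(kubectl_output: str, service: str) -> str:
    """Extracts the EXTERNAL-IP column for a service from `kubectl get service` output."""
    lines = kubectl_output.strip().splitlines()
    header = lines[0].split()
    ip_col = header.index("EXTERNAL-IP")
    for line in lines[1:]:
        fields = line.split()
        if fields[0] == service:
            return fields[ip_col]
    raise LookupError(f"service {service!r} not found")

# Illustrative output; your NAME, IPs, and ports will differ.
sample = """\
NAME       TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
bookstore  LoadBalancer   10.0.0.12     203.0.113.10   80:31234/TCP   2m
"""
print(external_ip(sample, "bookstore"))  # 203.0.113.10
```

Until the load balancer is provisioned, the EXTERNAL-IP column shows <pending>, so you may need to re-run kubectl get service after a few minutes.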


Sending a request to the API

To send requests to the sample API, you can use a sample gRPC client written in Python.

  1. Clone the git repo where the gRPC client code is hosted:

    git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
  2. Change your working directory:

    cd python-docs-samples/endpoints/bookstore-grpc/
  3. Install dependencies:

    pip install virtualenv
    virtualenv env
    source env/bin/activate
    python -m pip install -r requirements.txt
  4. Send a request to the sample API:

    python bookstore_client.py --host $SERVER_IP --port 80
  5. Look at the activity graphs for your API in the Endpoints page.
    View Endpoints activity graphs
    It may take a few moments for the request to be reflected in the graphs.

  6. Look at the request logs for your API in the Logs Viewer page.
    View Endpoints request logs

If you do not get a successful response, see Troubleshooting Response Errors.

You just deployed and tested an API in Endpoints!

Clean up

To avoid incurring charges to your GCP account for the resources used in this quickstart:

  1. Delete the API:

    gcloud endpoints services delete [SERVICE_NAME]

    Replace [SERVICE_NAME] with the name of your API. Do not include the square brackets.

What's next
