Getting Started with gRPC on Kubernetes Engine

This page shows you how to deploy a simple example gRPC service with the Google Cloud Endpoints Extensible Server Proxy (ESP) on Kubernetes Engine.

This page uses the Python version of the bookstore-grpc sample. See the What's next section for gRPC samples in other languages.

For an overview of Cloud Endpoints, see About Cloud Endpoints and Cloud Endpoints Architecture.

Task list

Use the following high-level task list as you work through the tutorial. All tasks are required to successfully send requests to the API.

  1. Set up a Cloud Platform project, and download required software. See Before you begin.
  2. Copy and configure files from the bookstore-grpc sample. See Configuring Endpoints.
  3. Deploy the Endpoints configuration to create a Cloud Endpoints service. See Deploying the Endpoints configuration.
  4. Create a backend to serve the API and deploy the API. See Deploying the API backend.
  5. Get the service's external IP address. See Getting the service's external IP address.
  6. Send a request to the API. See Sending a request to the API.
  7. Avoid incurring charges to your Google Cloud Platform account. See Clean up.

Before you begin

  1. Sign in to your Google account.

    If you don't already have one, sign up for a new account.

  2. Select or create a Cloud Platform project.

    Go to the Manage resources page

  3. Enable billing for your project.

    Enable billing

  4. Note the project ID, because you'll need it later.
  5. Install and initialize the Cloud SDK.
  6. Update the Cloud SDK and install the Endpoints components.
    gcloud components update
  7. Make sure that Cloud SDK (gcloud) is authorized to access your data and services on Google Cloud Platform:
    gcloud auth login
    A new browser tab opens and you are prompted to choose an account.
  8. Set the default project to your project ID.
    gcloud config set project [YOUR_PROJECT_ID]

    Replace [YOUR_PROJECT_ID] with your project ID. Do not include the square brackets.

    If you have other Cloud Platform projects, and you want to use gcloud to manage them, see Managing Cloud SDK Configurations.

  9. Install kubectl:
    gcloud components install kubectl
  10. Acquire new user credentials to use for Application Default Credentials. The user credentials are needed to authorize kubectl.
    gcloud auth application-default login
    A new browser tab opens and you are prompted to choose an account.
  11. Follow the steps in the gRPC Python Quickstart to install gRPC and the gRPC tools.

Configuring Endpoints

The bookstore-grpc sample contains the files that you need to copy locally and configure.

  1. Create a self-contained protobuf descriptor file from your service .proto file:
    1. Save a copy of bookstore.proto from the example repo. This file defines the Bookstore service's API.
    2. Create the following directory: mkdir generated_pb2
    3. Create the descriptor file, api_descriptor.pb, using the protoc protocol buffers compiler. Run the following command in the directory where you saved bookstore.proto:
      python -m grpc_tools.protoc \
          --include_imports \
          --include_source_info \
          --proto_path=. \
          --descriptor_set_out=api_descriptor.pb \
          --python_out=generated_pb2 \
          --grpc_python_out=generated_pb2 \
          bookstore.proto

      In the above command, --proto_path is set to the current working directory. In your gRPC build environment, if you use a different directory for .proto input files, change --proto_path so the compiler searches the directory where you saved bookstore.proto.

  2. Create a gRPC API Configuration YAML file:
    1. Save a copy of api_config.yaml. This file defines the gRPC API configuration for the Bookstore service.
    2. Replace <MY_PROJECT_ID> in your api_config.yaml file with your GCP project ID. For example:
      # Name of the service configuration.
      name: bookstore.endpoints.<MY_PROJECT_ID>.cloud.goog

      Note that the apis: name value in this file must exactly match the fully-qualified API name from the .proto file; otherwise deployment won't work. The Bookstore service is defined in bookstore.proto inside package endpoints.examples.bookstore. Its fully-qualified API name is endpoints.examples.bookstore.Bookstore, just as it appears in api_config.yaml:

        apis:
          - name: endpoints.examples.bookstore.Bookstore
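For reference, the complete api_config.yaml in the sample is only a few lines long. The following is a sketch recalled from the public sample and may differ slightly from your copy; treat every field value other than the apis: name as illustrative:

```yaml
type: google.api.Service
config_version: 3

# Name of the service configuration.
name: bookstore.endpoints.<MY_PROJECT_ID>.cloud.goog

# API title to appear in the user interface (Google Cloud Platform Console).
title: Bookstore gRPC API
apis:
  - name: endpoints.examples.bookstore.Bookstore
```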

Deploying the Endpoints configuration

To deploy the Endpoints configuration, you use Google Service Management, an infrastructure service of Google Cloud Platform that manages other APIs and services, including services created using Cloud Endpoints.

  1. Make sure you are in the directory where api_descriptor.pb and api_config.yaml are located.
  2. Deploy the proto descriptor file and the configuration file using the gcloud command-line tool:
    gcloud endpoints services deploy api_descriptor.pb api_config.yaml

    As it is creating and configuring the service, Service Management outputs a great deal of information to the terminal. On successful completion, you will see a line like the following that displays the service configuration ID and the service name:

    Service Configuration [2017-02-13-r2] uploaded for service []

    In the above example, 2017-02-13-r2 is the service configuration ID, and the second bracketed value is the service name. If you get an error message, see Troubleshooting configuration deployment errors.

    See gcloud endpoints services deploy in the Cloud SDK Reference documentation for more information.
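If you script the deployment, both values can be pulled out of that output line with standard shell tools. A minimal sketch, assuming the line format shown above; the service name used here is a made-up placeholder, not a value from this tutorial:

```shell
# A captured deploy log line; the service name is a made-up example value.
DEPLOY_LINE='Service Configuration [2017-02-13-r2] uploaded for service [bookstore.endpoints.example-project.cloud.goog]'

# Extract the text inside each pair of square brackets.
CONFIG_ID=$(printf '%s\n' "$DEPLOY_LINE" | sed -n 's/.*Configuration \[\([^]]*\)\].*/\1/p')
SERVICE_NAME=$(printf '%s\n' "$DEPLOY_LINE" | sed -n 's/.*service \[\([^]]*\)\].*/\1/p')

echo "$CONFIG_ID"      # 2017-02-13-r2
echo "$SERVICE_NAME"   # bookstore.endpoints.example-project.cloud.goog
```

You'll need both values again when you edit the Kubernetes configuration file later in this tutorial.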

Deploying the API backend

So far you have deployed the API configuration to Service Management, but you have not yet deployed the code that will serve the API backend. This section walks you through creating a Kubernetes Engine cluster to host the API backend and deploying the API.

Creating a container cluster

To create a container cluster for our example:

  1. Go to the container clusters page in the Google Cloud Platform Console.
  2. Click Create cluster.
  3. Accept the default settings and click Create. Note the cluster name and zone, as you'll need them later in this tutorial.

Authenticating kubectl to the container cluster

To use kubectl to create and manage cluster resources, you need to get cluster credentials and make them available to kubectl. To do this, invoke the following command, replacing [NAME] with your new cluster name and [ZONE] with its zone. Do not include the square brackets.

gcloud container clusters get-credentials [NAME] --zone [ZONE]

Deploying the sample API and ESP to the cluster

To deploy our sample gRPC service to the cluster so that clients can use it:

  1. Get the service name and service configuration ID for the sample API. These are the same values returned when you deployed the API configuration.
  2. Save and edit a copy of the Kubernetes configuration file, replacing SERVICE_NAME and SERVICE_CONFIG_ID with the values for the sample API, as shown in the following snippet (the args values shown here are a sketch based on the options described below):
      - name: esp
        args: [
          "--http2_port=9000",
          "--service=SERVICE_NAME",
          "--version=SERVICE_CONFIG_ID",
          "--backend=grpc://127.0.0.1:8000"
        ]
        ports:
          - containerPort: 9000
      - name: bookstore
        ports:
          - containerPort: 8000
    Note: The configuration sample displays only the lines that need to be edited. To run Cloud Endpoints, the complete configuration file is required.

    In this configuration file, the following arguments specify how you want to run the Extensible Service Proxy container:

    • --service: specifies the name of your Endpoints service
    • --version: specifies the service config ID of the Endpoints service
    • --http2_port: specifies the port that accepts HTTP2 connections
    • --backend: specifies the application backend to which the ESP proxies requests. In this example, the grpc:// prefix indicates that the backend accepts gRPC traffic.
    For more details on the ESP arguments, see Proxy Startup Options.

    For example, with the service configuration ID shown earlier and an illustrative (made-up) service name:

          - name: esp
            args: [
              "--http2_port=9000",
              "--service=bookstore.endpoints.example-project.cloud.goog",
              "--version=2017-02-13-r2",
              "--backend=grpc://127.0.0.1:8000"
            ]
  3. Start the service:
    kubectl create -f grpc-bookstore.yaml
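External clients reach the sample through a Kubernetes Service of type LoadBalancer that forwards an external port to ESP's HTTP/2 port. A minimal sketch of such a Service, assuming the name esp-grpc-bookstore, external port 80 (the port used when sending requests later in this tutorial), and ESP's port 9000; your copy of grpc-bookstore.yaml may use different names and labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: esp-grpc-bookstore
spec:
  type: LoadBalancer
  ports:
    # External port 80 forwards to ESP's HTTP/2 port 9000.
    - port: 80
      targetPort: 9000
      protocol: TCP
  selector:
    app: esp-grpc-bookstore
```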

Getting the service's external IP address

You'll need the service's external IP address to send requests to the sample API. It can take a few minutes after you start your service in the cluster before the external IP address is ready.

To view the external IP address:

  1. Invoke the command:

    kubectl get service

  2. Note the value for EXTERNAL-IP and save it into a SERVER_IP environment variable. We’ll use it when sending requests to the sample API.
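If you prefer to script this step, the EXTERNAL-IP column can be extracted from the kubectl output. A minimal sketch; the table below is a made-up example of that output, and the service name esp-grpc-bookstore is an assumption:

```shell
# Made-up example of `kubectl get service` output for the sample service.
KUBECTL_OUTPUT='NAME                 TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
esp-grpc-bookstore   LoadBalancer   10.3.245.10   203.0.113.10   80:31918/TCP   1m'

# Keep the fourth column of any row where it looks like an IP address,
# skipping the EXTERNAL-IP header row.
SERVER_IP=$(printf '%s\n' "$KUBECTL_OUTPUT" | awk '$4 ~ /^[0-9]/ {print $4}')

echo "$SERVER_IP"   # 203.0.113.10
```

Against a live cluster, you can also ask kubectl for the field directly, for example with kubectl get service esp-grpc-bookstore -o jsonpath='{.status.loadBalancer.ingress[0].ip}' (again assuming that service name).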


Sending a request to the API

To send requests to the sample API, you can use a sample gRPC client written in Python.

  1. Clone the git repo where the gRPC client code is hosted:

    git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
  2. Change your working directory:

    cd python-docs-samples/endpoints/bookstore-grpc/
  3. Install dependencies:

    pip install virtualenv
    virtualenv env
    source env/bin/activate
    python -m pip install -r requirements.txt
  4. Send a request to the sample API:

    python bookstore_client.py --host $SERVER_IP --port 80
  5. Look at the activity graphs for your API in the Endpoints page.
    View Endpoints activity graphs
    It may take a few moments for the request to be reflected in the graphs.

  6. Look at the request logs for your API in the Logs Viewer page.
    View Endpoints request logs

You just deployed and tested an API in Cloud Endpoints!

Clean up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this quickstart:

  1. Delete the API:

    gcloud endpoints services delete [SERVICE_NAME]

    Replace [SERVICE_NAME] with the name of your API. Do not include the square brackets.

  2. Delete the Kubernetes cluster:

    gcloud container clusters delete [NAME] --zone [ZONE]

    Replace [NAME] and [ZONE] with the name and zone of your Kubernetes cluster. Do not include the square brackets.

What's next
