Getting started with Endpoints for GKE with ESP

This tutorial shows you how to deploy a simple example gRPC service with the Extensible Service Proxy (ESP) on Google Kubernetes Engine (GKE). This tutorial uses the Python version of the bookstore-grpc sample. See the What's next section for gRPC samples in other languages.

The tutorial uses prebuilt container images of the sample code and ESP, which are stored in Container Registry. If you are unfamiliar with containers, see the Kubernetes and Container Registry documentation for more information.

For an overview of Cloud Endpoints, see About Endpoints and Endpoints architecture.

Objectives

Use the following high-level task list as you work through the tutorial. All tasks are required to successfully send requests to the API.

  1. Set up a Google Cloud project, and download the required software. See Before you begin.
  2. Copy and configure files from the bookstore-grpc sample. See Configuring Endpoints.
  3. Deploy the Endpoints configuration to create an Endpoints service. See Deploying the Endpoints configuration.
  4. Create a backend to serve the API and deploy the API. See Deploying the API backend.
  5. Get the service's external IP address. See Getting the service's external IP address.
  6. Send a request to the API. See Sending a request to the API.
  7. Avoid incurring charges to your Google Cloud account. See Clean up.

Costs

This tutorial uses billable components of Google Cloud, including Google Kubernetes Engine (GKE).

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up.

Before you begin

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.

    Go to the project selector page

  3. Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.

  4. Make a note of the Google Cloud project ID because it is needed later.
  5. Install and initialize the Cloud SDK.
  6. Update the Cloud SDK and install the Endpoints components.
    gcloud components update
  7. Make sure that the Cloud SDK (gcloud) is authorized to access your data and services on Google Cloud:
    gcloud auth login
    A new browser tab opens and you are prompted to choose an account.
  8. Set the default project to your project ID.
    gcloud config set project YOUR_PROJECT_ID

    Replace YOUR_PROJECT_ID with your project ID.

    If you have other Google Cloud projects, and you want to use gcloud to manage them, see Managing Cloud SDK configurations.

  9. Install kubectl:
    gcloud components install kubectl
  10. Acquire new user credentials to use as the application's default credentials. The user credentials are needed to authorize kubectl.
    gcloud auth application-default login
    In the new browser tab that opens, choose an account.
  11. Follow the steps in the gRPC Python quickstart to install gRPC and the gRPC tools.
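
For reference, a minimal sketch of that install, assuming the quickstart's pip-based setup (run it inside a virtual environment if you prefer):

python -m pip install grpcio grpcio-tools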

Configuring Endpoints

The bookstore-grpc sample contains the files that you need to copy locally and configure.
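
If you prefer to fetch the files with git, one option is to clone the samples repository that this tutorial also uses later for the gRPC client; the bookstore.proto and api_config.yaml files referenced below are assumed to live in the bookstore-grpc directory:

git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
cd python-docs-samples/endpoints/bookstore-grpc/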

  1. Create a self-contained protobuf descriptor file from your service .proto file:
    1. Save a copy of bookstore.proto from the example repository. This file defines the Bookstore service's API.
    2. Create the following directory: mkdir generated_pb2
    3. Create the descriptor file, api_descriptor.pb, by using the protoc protocol buffers compiler. Run the following command in the directory where you saved bookstore.proto:
      python -m grpc_tools.protoc \
          --include_imports \
          --include_source_info \
          --proto_path=. \
          --descriptor_set_out=api_descriptor.pb \
          --python_out=generated_pb2 \
          --grpc_python_out=generated_pb2 \
          bookstore.proto
      

      In the preceding command, --proto_path is set to the current working directory. In your gRPC build environment, if you use a different directory for .proto input files, change --proto_path so the compiler searches the directory where you saved bookstore.proto.

  2. Create a gRPC API configuration YAML file:
    1. Save a copy of the api_config.yaml file. This file defines the gRPC API configuration for the Bookstore service.
    2. Replace MY_PROJECT_ID in your api_config.yaml file with your Google Cloud project ID. For example:
      #
      # Name of the service configuration.
      #
      name: bookstore.endpoints.example-project-12345.cloud.goog
      

      Note that the apis.name field value in this file must exactly match the fully-qualified API name from the .proto file; otherwise deployment doesn't work. The Bookstore service is defined in bookstore.proto inside the package endpoints.examples.bookstore. Its fully-qualified API name is endpoints.examples.bookstore.Bookstore, just as it appears in the api_config.yaml file.

      apis:
        - name: endpoints.examples.bookstore.Bookstore
      

See Configuring Endpoints for more information.

Deploying the Endpoints configuration

To deploy the Endpoints configuration, you use the gcloud endpoints services deploy command. This command uses Service Management to create a managed service.

  1. Make sure you are in the directory where the api_descriptor.pb and api_config.yaml files are located.
  2. Confirm that the default project that the gcloud command-line tool is currently using is the Google Cloud project that you want to deploy the Endpoints configuration to. Validate the project ID returned from the following command to make sure that the service doesn't get created in the wrong project.
    gcloud config list project
    

    If you need to change the default project, run the following command:

    gcloud config set project YOUR_PROJECT_ID
    
  3. Deploy the proto descriptor file and the configuration file by using the gcloud command-line tool:
    gcloud endpoints services deploy api_descriptor.pb api_config.yaml
    

    As it is creating and configuring the service, Service Management outputs information to the terminal. When the deployment completes, a message similar to the following is displayed:

    Service Configuration [CONFIG_ID] uploaded for service [bookstore.endpoints.example-project.cloud.goog]

    CONFIG_ID is the unique Endpoints service configuration ID created by the deployment. For example:

    Service Configuration [2017-02-13r0] uploaded for service [bookstore.endpoints.example-project.cloud.goog]
    

    In the previous example, 2017-02-13r0 is the service configuration ID and bookstore.endpoints.example-project.cloud.goog is the service name. The service configuration ID consists of a date stamp followed by a revision number. If you deploy the Endpoints configuration again on the same day, the revision number is incremented in the service configuration ID.
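
If you need the configuration ID again later, you can list the configurations deployed for your service; the service name shown here follows the bookstore example, so replace it with your own:

gcloud endpoints configs list --service=bookstore.endpoints.YOUR_PROJECT_ID.cloud.goog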

Checking required services

At a minimum, Endpoints and ESP require the following Google services to be enabled:
Name                              Title
servicemanagement.googleapis.com  Service Management API
servicecontrol.googleapis.com     Service Control API
endpoints.googleapis.com          Google Cloud Endpoints

In most cases, the gcloud endpoints services deploy command enables these required services. However, the gcloud command completes successfully without enabling them in the following circumstances:

  • You used a third-party application such as Terraform, and you didn't include these services.

  • You deployed the Endpoints configuration to an existing Google Cloud project in which these services were explicitly disabled.

Use the following command to confirm that the required services are enabled:

gcloud services list

If you do not see the required services listed, enable them:

gcloud services enable servicemanagement.googleapis.com
gcloud services enable servicecontrol.googleapis.com
gcloud services enable endpoints.googleapis.com
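
Alternatively, you can pass all three services to a single enable command:

gcloud services enable servicemanagement.googleapis.com servicecontrol.googleapis.com endpoints.googleapis.com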

Also enable your Endpoints service:

gcloud services enable ENDPOINTS_SERVICE_NAME

To determine the ENDPOINTS_SERVICE_NAME, you can do either of the following:

  • After deploying the Endpoints configuration, go to the Endpoints page in the Cloud Console. The possible values for ENDPOINTS_SERVICE_NAME are shown in the Service name column.

  • For OpenAPI, the ENDPOINTS_SERVICE_NAME is what you specified in the host field of your OpenAPI spec. For gRPC, the ENDPOINTS_SERVICE_NAME is what you specified in the name field of your gRPC Endpoints configuration.
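
Alternatively, you can list the Endpoints services deployed in your project from the command line; for this tutorial the output should include a service name of the form bookstore.endpoints.YOUR_PROJECT_ID.cloud.goog:

gcloud endpoints services list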

For more information about the gcloud commands, see gcloud services.

If you get an error message, see Troubleshooting Endpoints configuration deployment.

See Deploying the Endpoints configuration for additional information.

Deploying the API backend

So far you have deployed the service configuration to Service Management, but you haven't yet deployed the code that serves the API backend. This section walks you through creating a GKE cluster to host the API backend and deploying the API.

Creating a container cluster

To create a container cluster for this example:

  1. In the Google Cloud Console, go to the Kubernetes clusters page.

    Go to the Kubernetes clusters page

  2. Click Create cluster.
  3. Accept the default settings and click Create. Make a note of the cluster name and zone, as they are needed later in this tutorial.
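
If you prefer the command line, a minimal sketch of creating a cluster with default settings (CLUSTER_NAME and ZONE are placeholders you choose):

gcloud container clusters create CLUSTER_NAME --zone ZONE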

Authenticating kubectl to the container cluster

To use kubectl to create and manage cluster resources, you need to get cluster credentials and make them available to kubectl. To do this, run the following command, replacing NAME with your new cluster name and ZONE with its zone.

gcloud container clusters get-credentials NAME --zone ZONE
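
To confirm that kubectl now points at the new cluster, you can list its nodes:

kubectl get nodes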

Checking required permissions

ESP and ESPv2 call the Service Control API and Cloud Trace, so additional IAM roles are required for the node service account that runs them. Follow the permission recommendation to choose an appropriate node service account as the identity for calling Google services.

If the node service account is not the Compute Engine default service account, follow the next steps to add the required IAM roles.

Add the required IAM roles:

The following IAM roles are required for the service account used for ESP and ESPv2.

To add the Service Controller and Cloud Trace Agent IAM roles to the service account:

Console

  1. In the Cloud Console, select the project where your service account was created.
  2. Open the IAM page.

    Go to the IAM page

    The page lists all IAM members, including all service accounts.
  3. Select your service account and click Edit on the right.
  4. The Edit permissions panel opens.
  5. Click + Add another role.
  6. Click Select a role and select Service Management > Service Controller.
  7. Click + Add another role.
  8. Click Select a role and select Cloud Trace > Cloud Trace Agent.
  9. Click Save.
  10. You should now see the Service Controller and Cloud Trace Agent roles in the role column for your service account on the IAM page.

gcloud

  1. Add the Service Controller role:

    gcloud projects add-iam-policy-binding PROJECT_ID \
            --member serviceAccount:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com \
            --role roles/servicemanagement.serviceController
  2. Add the Cloud Trace Agent role to enable Cloud Trace:

    gcloud projects add-iam-policy-binding PROJECT_ID \
            --member serviceAccount:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com \
            --role roles/cloudtrace.agent
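
To verify the bindings, one option is to filter the project's IAM policy for the service account; this sketch uses standard gcloud list-formatting flags:

gcloud projects get-iam-policy PROJECT_ID \
    --flatten="bindings[].members" \
    --format="table(bindings.role)" \
    --filter="bindings.members:SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com"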

For more information, see What are roles and permissions?

Workload Identity:

If you use Workload Identity, a service account other than the node service account can be used to talk to Google services. Create a Kubernetes service account for the pod that runs ESP or ESPv2, create a Google service account, and associate the Kubernetes service account with the Google service account.

Follow these steps to associate a Kubernetes service account with a Google service account.

The Google service account must have the required IAM roles listed above. If it doesn't, follow the Add the required IAM roles steps to add them.
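
A minimal sketch of that association, assuming a Kubernetes service account KSA_NAME in namespace NAMESPACE and a Google service account GSA_NAME (all placeholders you choose); the pod that runs ESP must then use KSA_NAME as its Kubernetes service account:

gcloud iam service-accounts add-iam-policy-binding \
    GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

kubectl annotate serviceaccount KSA_NAME \
    --namespace NAMESPACE \
    iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com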

Deploying the sample API and ESP to the cluster

To deploy the sample gRPC service to the cluster so that clients can use it:

  1. Save and open for editing a copy of the grpc-bookstore.yaml deployment manifest file.
  2. Replace SERVICE_NAME with the name of your Endpoints service. This is the same name that you configured in the name field in the api_config.yaml file.
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http2_port=9000",
          "--service=SERVICE_NAME",
          "--rollout_strategy=managed",
          "--backend=grpc://127.0.0.1:8000"
        ]
        ports:
          - containerPort: 9000
      - name: bookstore
        image: gcr.io/endpointsv2/python-grpc-bookstore-server:1
        ports:
          - containerPort: 8000

    The --rollout_strategy=managed option configures ESP to use the latest deployed service configuration. When you specify this option, up to 5 minutes after you deploy a new service configuration, ESP detects the change and automatically begins using it. We recommend that you specify this option instead of a specific configuration ID for ESP to use. For more details on the ESP arguments, see ESP startup options.

    For example:

        spec:
          containers:
          - name: esp
            image: gcr.io/endpoints-release/endpoints-runtime:1
            args: [
              "--http2_port=9000",
              "--service=bookstore.endpoints.example-project-12345.cloud.goog",
              "--rollout_strategy=managed",
              "--backend=grpc://127.0.0.1:8000"
            ]
    
  3. Start the service:
    kubectl create -f grpc-bookstore.yaml
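
To check that the deployment came up, you can list the pods and wait until both containers (esp and bookstore) are ready:

kubectl get pods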
    

If you get an error message, see Troubleshooting Endpoints in GKE.

Getting the service's external IP address

You need the service's external IP address to send requests to the sample API. It can take a few minutes after you start your service in the container for the external IP address to be ready.

  1. View the external IP address:

    kubectl get service
  2. Make a note of the value for EXTERNAL-IP and save it in a SERVER_IP environment variable. The external IP address is used to send requests to the sample API.

    export SERVER_IP=YOUR_EXTERNAL_IP
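
If you prefer to capture the address programmatically, here is a sketch using kubectl's jsonpath output; the service name esp-grpc-bookstore is an assumption based on the sample manifest, so adjust it to match the name shown by kubectl get service:

export SERVER_IP=$(kubectl get service esp-grpc-bookstore \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')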
    

Sending a request to the API

To send requests to the sample API, you can use a sample gRPC client written in Python.

  1. Clone the git repo where the gRPC client code is hosted:

    git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
       

  2. Change your working directory:

    cd python-docs-samples/endpoints/bookstore-grpc/
      

  3. Install dependencies:

    pip install virtualenv
    virtualenv env
    source env/bin/activate
    python -m pip install -r requirements.txt
    

  4. Send a request to the sample API:

    python bookstore_client.py --host SERVER_IP --port 80
    

If you don't get a successful response, see Troubleshooting response errors.

You just deployed and tested an API in Endpoints!

Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial:

  1. Delete the API:

    gcloud endpoints services delete SERVICE_NAME
    

    Replace SERVICE_NAME with the name of your API.

  2. Delete the GKE cluster:

    gcloud container clusters delete NAME --zone ZONE
    

What's next