This tutorial shows you how to deploy a simple example gRPC service with the Extensible Service Proxy (ESP) on Google Kubernetes Engine (GKE). It uses the Python version of the bookstore-grpc sample. See the What's next section for gRPC samples in other languages.
The tutorial uses prebuilt container images of the sample code and ESP, which are stored in Artifact Registry. If you are unfamiliar with containers, see the Kubernetes and GKE documentation for more information.
For an overview of Cloud Endpoints, see About Endpoints and Endpoints architecture.
Objectives
Use the following high-level task list as you work through the tutorial. All tasks are required to successfully send requests to the API.
- Set up a Google Cloud project, and download the required software. See Before you begin.
- Copy and configure files from the bookstore-grpc sample. See Configuring Endpoints.
- Deploy the Endpoints configuration to create an Endpoints service. See Deploying the Endpoints configuration.
- Create a backend to serve the API and deploy the API. See Deploying the API backend.
- Get the service's external IP address. See Getting the service's external IP address.
- Send a request to the API. See Sending a request to the API.
- Avoid incurring charges to your Google Cloud account. See Clean up.
Costs
In this document, you use billable components of Google Cloud.
To generate a cost estimate based on your projected usage,
use the pricing calculator.
When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Make a note of the Google Cloud project ID because it is needed later.
- Install and initialize the Google Cloud CLI.
- Update the gcloud CLI and install the Endpoints components:
gcloud components update
- Make sure that the Google Cloud CLI (gcloud) is authorized to access your data and services on Google Cloud:
gcloud auth login
A new browser tab opens and you are prompted to choose an account.
- Set the default project to your project ID.
gcloud config set project YOUR_PROJECT_ID
Replace YOUR_PROJECT_ID with your project ID.
If you have other Google Cloud projects, and you want to use gcloud to manage them, see Managing gcloud CLI configurations.
- Install kubectl:
gcloud components install kubectl
- Acquire new user credentials to use as the application's default credentials. The user credentials are needed to authorize kubectl:
gcloud auth application-default login
In the new browser tab that opens, choose an account.
- Follow the steps in the gRPC Python quickstart to install gRPC and the gRPC tools.
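Optionally, you can confirm that the gRPC runtime and tools installed by the quickstart are available before moving on. The following is a minimal sketch; it only assumes that the quickstart's pip packages (grpcio and grpcio-tools) are installed.

# Sanity check: confirm the gRPC runtime and the code-generation tools import.
import grpc
import grpc_tools.protoc  # imported only to confirm the protoc plugin package is present

print("grpcio version:", grpc.__version__)
print("grpc_tools.protoc is available")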
Configuring Endpoints
The bookstore-grpc sample contains the files that you need to copy locally and configure.
- Create a self-contained protobuf descriptor file from your service .proto file:
  - Save a copy of bookstore.proto from the example repository. This file defines the Bookstore service's API.
  - Create the following directory:
    mkdir generated_pb2
  - Create the descriptor file, api_descriptor.pb, by using the protoc protocol buffers compiler. Run the following command in the directory where you saved bookstore.proto:
    python -m grpc_tools.protoc \
        --include_imports \
        --include_source_info \
        --proto_path=. \
        --descriptor_set_out=api_descriptor.pb \
        --python_out=generated_pb2 \
        --grpc_python_out=generated_pb2 \
        bookstore.proto
    In the preceding command, --proto_path is set to the current working directory. In your gRPC build environment, if you use a different directory for .proto input files, change --proto_path so the compiler searches the directory where you saved bookstore.proto.
- Create a gRPC API configuration YAML file:
  - Save a copy of the api_config.yaml file. This file defines the gRPC API configuration for the Bookstore service.
  - Replace MY_PROJECT_ID in your api_config.yaml file with your Google Cloud project ID. For example:
    #
    # Name of the service configuration.
    #
    name: bookstore.endpoints.example-project-12345.cloud.goog
    Note that the apis.name field value in this file must exactly match the fully qualified API name from the .proto file; otherwise, deployment won't work. The Bookstore service is defined in bookstore.proto inside the package endpoints.examples.bookstore. Its fully qualified API name is endpoints.examples.bookstore.Bookstore, just as it appears in the api_config.yaml file. (A quick way to verify this value is sketched at the end of this section.)
    apis:
      - name: endpoints.examples.bookstore.Bookstore
See Configuring Endpoints for more information.
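Before you deploy, you can optionally confirm that api_descriptor.pb contains the fully qualified service name that you set in apis.name. This is a minimal sketch using the protobuf runtime that is installed with the gRPC tools; it assumes only that you run it in the directory containing api_descriptor.pb.

# List every fully qualified service name contained in the descriptor set.
from google.protobuf import descriptor_pb2

with open("api_descriptor.pb", "rb") as f:
    fds = descriptor_pb2.FileDescriptorSet.FromString(f.read())

for file_proto in fds.file:
    for service in file_proto.service:
        print(f"{file_proto.package}.{service.name}")

# For the sample, the output includes: endpoints.examples.bookstore.Bookstore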
Deploying the Endpoints configuration
To deploy the Endpoints configuration, you use the gcloud endpoints services deploy command. This command uses Service Management to create a managed service.
- Make sure you are in the directory where the api_descriptor.pb and api_config.yaml files are located.
- Confirm that the default project that the gcloud command-line tool is currently using is the Google Cloud project that you want to deploy the Endpoints configuration to. Validate the project ID returned from the following command to make sure that the service doesn't get created in the wrong project:
gcloud config list project
If you need to change the default project, run the following command:
gcloud config set project YOUR_PROJECT_ID
- Deploy the proto descriptor file and the configuration file by using the Google Cloud CLI:
gcloud endpoints services deploy api_descriptor.pb api_config.yaml
As it is creating and configuring the service, Service Management outputs information to the terminal. When the deployment completes, a message similar to the following is displayed:
Service Configuration [CONFIG_ID] uploaded for service [bookstore.endpoints.example-project.cloud.goog]
CONFIG_ID is the unique Endpoints service configuration ID created by the deployment. For example:
Service Configuration [2017-02-13r0] uploaded for service [bookstore.endpoints.example-project.cloud.goog]
In the previous example, 2017-02-13r0 is the service configuration ID and bookstore.endpoints.example-project.cloud.goog is the service name. The service configuration ID consists of a date stamp followed by a revision number. If you deploy the Endpoints configuration again on the same day, the revision number is incremented in the service configuration ID.
Checking required services
At a minimum, Endpoints and ESP require the following Google services to be enabled:

| Name | Title |
|---|---|
| servicemanagement.googleapis.com | Service Management API |
| servicecontrol.googleapis.com | Service Control API |
| endpoints.googleapis.com | Google Cloud Endpoints |
In most cases, the gcloud endpoints services deploy command enables these required services. However, the gcloud command completes successfully but doesn't enable the required services in the following circumstances:

- You used a third-party application such as Terraform, and you didn't include these services.
- You deployed the Endpoints configuration to an existing Google Cloud project in which these services were explicitly disabled.
Use the following command to confirm that the required services are enabled:
gcloud services list
If you do not see the required services listed, enable them:
gcloud services enable servicemanagement.googleapis.com
gcloud services enable servicecontrol.googleapis.com
gcloud services enable endpoints.googleapis.com
Also enable your Endpoints service:
gcloud services enable ENDPOINTS_SERVICE_NAME
To determine the ENDPOINTS_SERVICE_NAME, you can either:

- After deploying the Endpoints configuration, go to the Endpoints page in the Cloud console. The list of possible ENDPOINTS_SERVICE_NAME values is shown under the Service name column.
- For OpenAPI, the ENDPOINTS_SERVICE_NAME is what you specified in the host field of your OpenAPI spec. For gRPC, the ENDPOINTS_SERVICE_NAME is what you specified in the name field of your gRPC Endpoints configuration.
For more information about the gcloud commands, see gcloud services.
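If you script your setup, you can wrap this check in a few lines of Python. The sketch below shells out to the same gcloud command shown above; the --format field path is an assumption, so verify it against your gcloud version.

# Check that the baseline services required by Endpoints and ESP are enabled.
import subprocess

REQUIRED = {
    "servicemanagement.googleapis.com",
    "servicecontrol.googleapis.com",
    "endpoints.googleapis.com",
}

out = subprocess.run(
    ["gcloud", "services", "list", "--enabled", "--format=value(config.name)"],
    check=True, capture_output=True, text=True,
).stdout

missing = REQUIRED - set(out.split())
print("Missing services:", ", ".join(sorted(missing)) if missing else "none")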
If you get an error message, see Troubleshooting Endpoints configuration deployment.
See Deploying the Endpoints configuration for additional information.
Deploying the API backend
So far you have deployed the service configuration to Service Management, but you haven't yet deployed the code that serves the API backend. This section walks you through creating a GKE cluster to host the API backend and deploying the API.
Creating a container cluster
To create a container cluster for our example:
- In the Google Cloud console, go to the Kubernetes clusters page.
- Click Create cluster.
- Accept the default settings and click Create. Make a note of the cluster name and zone, as they are needed later in this tutorial.
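If you prefer the command line over the console, you can create an equivalent cluster with the gcloud CLI. The following is a sketch only; the cluster name and zone are placeholders, and the gcloud defaults may not match the console defaults exactly, so review the flags for your own setup.

# Create a GKE cluster with default settings by shelling out to the gcloud CLI.
import subprocess

CLUSTER_NAME = "bookstore-cluster"  # placeholder: choose your own cluster name
ZONE = "us-central1-a"              # placeholder: choose your own zone

subprocess.run(
    ["gcloud", "container", "clusters", "create", CLUSTER_NAME, "--zone", ZONE],
    check=True,
)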
Authenticating kubectl to the container cluster
To use kubectl to create and manage cluster resources, you need to get cluster credentials and make them available to kubectl. To do this, run the following command, replacing NAME with your new cluster name and ZONE with its cluster zone.
gcloud container clusters get-credentials NAME --zone ZONE
Checking required permissions
ESP and ESPv2 call Google services that use IAM to verify whether the calling identity has enough permissions to access the required IAM resources. The calling identity is the service account attached to the deployment that runs ESP and ESPv2.
When deployed in a GKE pod, the attached service account is the node service account, which is usually the Compute Engine default service account. Follow this permission recommendation to choose a proper node service account.
If Workload Identity is used, a service account other than the node service account can be used to talk to Google services. You can create a Kubernetes service account for the pod that runs ESP and ESPv2, create a Google service account, and associate the Kubernetes service account with the Google service account.
Follow these steps to associate a Kubernetes service account with a Google service account. This Google service account is the attached service account.
If the attached service account is the Compute Engine default service account of the project and the endpoint service configuration is deployed in the same project, the service account should already have enough permissions to access the IAM resources, and you can skip the following IAM role setup. Otherwise, add the following IAM roles to the attached service account.
Add required IAM roles:
This section describes the IAM resources used by ESP and ESPv2 and the IAM roles required for the attached service account to access these resources.
Endpoint Service Configuration
ESP and ESPv2 call Service Control, which uses the endpoint service configuration. The endpoint service configuration is an IAM resource, and ESP and ESPv2 need the Service Controller role to access it.
The IAM role is on the endpoint service configuration, not on the project. A project may have multiple endpoint service configurations.
Use the following gcloud command to add the role to the attached service account for the endpoint service configuration.
gcloud endpoints services add-iam-policy-binding SERVICE_NAME \
  --member serviceAccount:SERVICE_ACCOUNT_NAME@DEPLOY_PROJECT_ID.iam.gserviceaccount.com \
  --role roles/servicemanagement.serviceController
Where:
* SERVICE_NAME is the endpoint service name.
* SERVICE_ACCOUNT_NAME@DEPLOY_PROJECT_ID.iam.gserviceaccount.com is the attached service account.
Cloud Trace
ESP and ESPv2 call the Cloud Trace service to export traces to a project, which is called the tracing project. In ESP, the tracing project and the project that owns the endpoint service configuration are the same. In ESPv2, the tracing project can be specified by the --tracing_project_id flag, and defaults to the deploying project.
ESP and ESPv2 require the Cloud Trace Agent role to enable Cloud Trace.
Use the following gcloud command to add the role to the attached service account:
gcloud projects add-iam-policy-binding TRACING_PROJECT_ID \
  --member serviceAccount:SERVICE_ACCOUNT_NAME@DEPLOY_PROJECT_ID.iam.gserviceaccount.com \
  --role roles/cloudtrace.agent
Where:
* TRACING_PROJECT_ID is the tracing project ID.
* SERVICE_ACCOUNT_NAME@DEPLOY_PROJECT_ID.iam.gserviceaccount.com is the attached service account.
For more information, see
What are roles and permissions?
Deploying the sample API and ESP to the cluster
To deploy the sample gRPC service to the cluster so that clients can use it:
- Save and open for editing a copy of the grpc-bookstore.yaml deployment manifest file.
- Replace SERVICE_NAME with the name of your Endpoints service. This is the same name that you configured in the name field in the api_config.yaml file.
  The --rollout_strategy=managed option configures ESP to use the latest deployed service configuration. When you specify this option, up to 5 minutes after you deploy a new service configuration, ESP detects the change and automatically begins using it. We recommend that you specify this option instead of a specific configuration ID for ESP to use. For more details on the ESP arguments, see ESP startup options.
  For example:
  spec:
    containers:
    - name: esp
      image: gcr.io/endpoints-release/endpoints-runtime:1
      args: [
        "--http2_port=9000",
        "--service=bookstore.endpoints.example-project-12345.cloud.goog",
        "--rollout_strategy=managed",
        "--backend=grpc://127.0.0.1:8000"
      ]
- Start the service:
kubectl create -f grpc-bookstore.yaml
If you get an error message, see Troubleshooting Endpoints in GKE.
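Before looking up the external IP address, you can wait for the bookstore pods to come up. The sketch below shells out to kubectl; the app=esp-grpc-bookstore label is an assumption based on the sample manifest, so adjust it to match the labels in your copy of grpc-bookstore.yaml.

# Poll the pods created by grpc-bookstore.yaml until they are all Running.
import subprocess
import time

LABEL = "app=esp-grpc-bookstore"  # assumption: check the labels in your manifest

for _ in range(60):
    phases = subprocess.run(
        ["kubectl", "get", "pods", "-l", LABEL,
         "-o", "jsonpath={.items[*].status.phase}"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    print("pod phases:", phases or "none yet")
    if phases and all(p == "Running" for p in phases):
        break
    time.sleep(10)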
Getting the service's external IP address
You need the service's external IP address to send requests to the sample API. It can take a few minutes after you start your service in the container before the external IP address is ready.
View the external IP address:
kubectl get service
Make a note of the value for EXTERNAL-IP and save it in a SERVER_IP environment variable. The external IP address is used to send requests to the sample API.
export SERVER_IP=YOUR_EXTERNAL_IP
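If you want to script this step, the following sketch polls the service until the load balancer assigns an external IP and then prints an export line that you can paste into your shell. The esp-grpc-bookstore service name is an assumption based on the sample manifest; adjust it if your copy of grpc-bookstore.yaml uses a different name.

# Wait for the external IP of the Kubernetes service and print an export line.
import subprocess
import time

SERVICE = "esp-grpc-bookstore"  # assumption: match the service name in your manifest

ip = ""
while not ip:
    ip = subprocess.run(
        ["kubectl", "get", "service", SERVICE,
         "-o", "jsonpath={.status.loadBalancer.ingress[0].ip}"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    if not ip:
        time.sleep(10)  # the load balancer can take a few minutes to provision

print(f"export SERVER_IP={ip}")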
Sending a request to the API
To send requests to the sample API, you can use a sample gRPC client written in Python.
Clone the git repo where the gRPC client code is hosted:
git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
Change your working directory:
cd python-docs-samples/endpoints/bookstore-grpc/
Install dependencies:
pip install virtualenv
virtualenv env
source env/bin/activate
python -m pip install -r requirements.txt
Send a request to the sample API:
python bookstore_client.py --host SERVER_IP --port 80
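For reference, the core of the sample client fits in a few lines. The sketch below is a stripped-down version, assuming you run it from the sample's endpoints/bookstore-grpc/ directory (or anywhere the generated bookstore_pb2_grpc module is importable) and that SERVER_IP is set in your environment; see bookstore_client.py for the full version with authentication options.

# Minimal gRPC call to the deployed Bookstore service through ESP on port 80.
import os

import grpc
from google.protobuf import empty_pb2

import bookstore_pb2_grpc  # generated from bookstore.proto by grpc_tools.protoc

channel = grpc.insecure_channel(f"{os.environ['SERVER_IP']}:80")
stub = bookstore_pb2_grpc.BookstoreStub(channel)

# ListShelves takes google.protobuf.Empty, as defined in the sample .proto file.
response = stub.ListShelves(empty_pb2.Empty())
print(response)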
Look at the activity graphs for your API in the Endpoints > Services page.
Go to the Endpoints Services page
It may take a few moments for the request to be reflected in the graphs.
Look at the request logs for your API in the Logs Explorer page.
If you don't get a successful response, see Troubleshooting response errors.
You just deployed and tested an API in Endpoints!
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the API:
gcloud endpoints services delete SERVICE_NAME
Replace SERVICE_NAME with the name of your API.
Delete the GKE cluster:
gcloud container clusters delete NAME --zone ZONE