Troubleshooting Cloud Endpoints in GKE

This document presents troubleshooting techniques for Cloud Endpoints deployments on Google Kubernetes Engine and Kubernetes.

Failed in kubectl create -f gke.yaml

If you see the Failed in kubectl create -f gke.yaml error message, take the following steps:

  1. Authorize gcloud:

    gcloud auth login
    gcloud auth application-default login
    
  2. Create a cluster. You can either use the following gcloud command or create one in the Google Cloud Console.

    gcloud container clusters create [CLUSTER_NAME]
    

    Replace [CLUSTER_NAME] with your cluster's name.

  3. Get credentials for your cluster and make them available to kubectl:

    gcloud container clusters get-credentials [CLUSTER_NAME]
    

    Replace [CLUSTER_NAME] with your cluster's name.
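
The three setup steps above can be sketched as a single script. This is a dry-run sketch: it prints each gcloud command instead of executing it, so the sequence can be reviewed first, and the cluster name is a placeholder.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the setup steps above: emits each gcloud command
# instead of running it, so the sequence can be reviewed before execution.
setup_commands() {
  local cluster="$1"   # placeholder for your cluster's name
  echo "gcloud auth login"
  echo "gcloud auth application-default login"
  echo "gcloud container clusters create ${cluster}"
  echo "gcloud container clusters get-credentials ${cluster}"
}

setup_commands "my-cluster"
```

To actually run the sequence, pipe the output to a shell, for example `setup_commands my-cluster | bash`.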

Cloud Endpoints metrics and logs aren't displayed

If you can successfully send requests to your API, but you do not see any metrics or logs on the Cloud Endpoints Services page in the Google Cloud Platform Console, check if the cluster has the required OAuth scopes, as follows:

  1. Go to the Kubernetes clusters page in the GCP Console.

    Go to the Kubernetes clusters page

  2. Select your cluster from the list.

  3. Click Permissions.
  4. Confirm that Service Control and Service Management have the following OAuth scopes:

    • Service Control: Enabled
    • Service Management: Read Only

If Service Control and Service Management do not have the required OAuth scopes, you need to create another cluster with the required scopes, or follow the steps in Updating Google Kubernetes Engine VM scopes with zero downtime.

For more information, see What are access scopes?.
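
As an alternative to checking the Permissions tab in the Console, the node OAuth scopes can be inspected from the command line. This is a sketch: the scope URLs are the standard ones for Service Control (enabled) and Service Management (read only), but CLUSTER_NAME and ZONE are placeholders and the grep-based check is illustrative rather than exhaustive.

```shell
# Sketch: check a cluster's node OAuth scopes for the two scopes that
# Cloud Endpoints needs (Service Control and Service Management).
check_scopes() {
  local scopes="$1"   # semicolon- or whitespace-separated list of scope URLs
  if echo "$scopes" | grep -q "auth/servicecontrol" &&
     echo "$scopes" | grep -q "auth/service.management"; then
    echo "required Endpoints scopes present"
  else
    echo "missing Service Control or Service Management scope"
  fi
}

# In practice (CLUSTER_NAME and ZONE are placeholders):
#   check_scopes "$(gcloud container clusters describe CLUSTER_NAME \
#       --zone ZONE --format='value(nodeConfig.oauthScopes)')"
check_scopes "https://www.googleapis.com/auth/servicecontrol;https://www.googleapis.com/auth/service.management.readonly"
```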

Accessing logs from Extensible Service Proxy

If you need to access the Extensible Service Proxy logs to diagnose problems, use kubectl as follows:

  1. Get the name of the pod:

      kubectl get pod
    
      NAME                       READY     STATUS    RESTARTS   AGE
      esp-echo-174578890-x09gl   2/2       Running   2          21s
    

    The pod name is esp-echo-174578890-x09gl and it has two containers: esp and echo.

  2. To view the logs in a pod, use kubectl logs:

      kubectl logs [POD_NAME] -c [CONTAINER_NAME]
    

    Where [POD_NAME] and [CONTAINER_NAME] are the values returned by the kubectl get pod command in the previous step. For example:

      kubectl logs esp-echo-174578890-x09gl -c esp
    
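
The two steps above can also be scripted, so the pod name does not have to be copied by hand. This is a sketch: the awk filter assumes pod names start with "esp-", which matches the example listing above but may need adjusting for your deployment's naming.

```shell
# Sketch: pull the first ESP pod name out of `kubectl get pod` output so the
# logs command can be scripted. Assumes pod names begin with "esp-".
pod_name_from_listing() {
  awk '$1 ~ /^esp-/ { print $1; exit }'
}

# Example with the listing shown above:
printf 'NAME                       READY     STATUS    RESTARTS   AGE\nesp-echo-174578890-x09gl   2/2       Running   2          21s\n' \
  | pod_name_from_listing
# → esp-echo-174578890-x09gl
```

In practice the filter would be fed live output, for example `kubectl logs "$(kubectl get pod | pod_name_from_listing)" -c esp`.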

Verifying the service name

If you see the error message Fetching service config failed, verify that the service name that you specified in the --service field in your Deployment manifest file (referred to as deployment.yaml below) matches the host name in the name property specified in your gRPC API configuration YAML file (referred to as api_config.yaml below).

If the incorrect name is in deployment.yaml:

  1. Open deployment.yaml and go to the section configured for the ESP container. For example:

      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http_port=8081",
          "--backend=127.0.0.1:8080",
          "--service=[SERVICE_NAME]",
          "--rollout_strategy=managed"
        ]
    
  2. Change [SERVICE_NAME] so that it matches the host name specified in the name property in api_config.yaml and save deployment.yaml.

  3. Start the Kubernetes service:

      kubectl create -f deployment.yaml
    

If the incorrect name is in api_config.yaml:

  1. Get the service name that Cloud Endpoints was configured to use.

  2. Delete the service:

      gcloud endpoints services delete [SERVICE_NAME]
    

    Replace [SERVICE_NAME] with the name from the previous step. It takes 30 days for the service to be deleted from GCP. You will not be able to reuse the service name during this time.

  3. Open api_config.yaml and correct the host name in the name property and save the file.

  4. Deploy the updated service configuration:

    gcloud endpoints services deploy api_descriptor.pb api_config.yaml api_config_http.yaml
    
  5. Wait for the service configuration to be successfully deployed.

  6. Start the Kubernetes service:

      kubectl create -f deployment.yaml
    
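
Before deploying, the two names can be compared mechanically. This is a sketch: the sed patterns assume the exact formats shown above (a quoted `"--service=..."` argument in deployment.yaml and a top-level `name:` property in api_config.yaml) and are not a general YAML parser.

```shell
# Sketch: compare the --service value in deployment.yaml against the name
# property in api_config.yaml. Patterns assume the file layouts shown above.
service_from_deployment() {
  sed -n 's/.*"--service=\([^"]*\)".*/\1/p' "$1" | head -n 1
}

service_from_api_config() {
  sed -n 's/^name: *//p' "$1" | head -n 1
}

check_service_names() {
  local a b
  a="$(service_from_deployment "$1")"
  b="$(service_from_api_config "$2")"
  if [ "$a" = "$b" ]; then
    echo "service names match: $a"
  else
    echo "mismatch: deployment=$a api_config=$b"
  fi
}
```

For example, `check_service_names deployment.yaml api_config.yaml` prints either a match line or both names for inspection.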

Checking configuration files

  1. Open an interactive shell in the container by using kubectl exec:

      kubectl exec -ti [POD_NAME] -c [CONTAINER_NAME] -- bash
    

    Replace [CONTAINER_NAME] with the name of your container and [POD_NAME] with the name of your pod.

  2. In the /etc/nginx/endpoints/ directory, check the following configuration files for errors:

    • nginx.conf - The nginx config file with ESP directives
    • service.json - The service configuration file
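
A quick first check on service.json is whether it is well-formed JSON at all. This is a sketch: it assumes python3 is available in the container; if it is not, copy the file out with kubectl cp and run the check locally.

```shell
# Sketch: sanity-check that a service configuration file parses as JSON.
# Assumes python3 is available where this runs.
validate_service_json() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "$1: valid JSON"
  else
    echo "$1: malformed JSON"
  fi
}

# Inside the ESP container:
#   validate_service_json /etc/nginx/endpoints/service.json
```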

Accessing the Endpoints status page

If you set rollout_strategy to managed when you started ESP, and you need to find out the configuration ID that an instance of ESP is using, the Endpoints status page has the information.

To access the Endpoints status page:

  1. Open an interactive shell in the container by using kubectl exec:

      kubectl exec -ti [POD_NAME] -c [CONTAINER_NAME] -- bash
    

    Replace [CONTAINER_NAME] with the name of your container and [POD_NAME] with the name of your pod.

  2. Install curl.

  3. Enter the following:

      curl http://localhost:8090/endpoints_status
    

You will see something like the following:

"serviceConfigRollouts": {
    "rolloutId": "2017-08-09r27",
    "percentages": {
         "2017-08-09r26": "100"
    }
}

The value of rolloutId is the service configuration ID that ESP is using. To confirm that ESP is using the same configuration as Cloud Endpoints, see Getting the Service Name and Configuration ID.
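
The rolloutId can also be extracted programmatically for comparison. This is a sketch: it assumes python3 is available for JSON parsing, and the example below wraps the fields shown above into a complete JSON object.

```shell
# Sketch: extract the rolloutId from an endpoints_status response body,
# assuming the body is (or is wrapped into) a JSON object.
rollout_id() {
  python3 -c 'import json,sys; print(json.load(sys.stdin)["serviceConfigRollouts"]["rolloutId"])'
}

echo '{"serviceConfigRollouts": {"rolloutId": "2017-08-09r27", "percentages": {"2017-08-09r26": "100"}}}' | rollout_id
# → 2017-08-09r27
```

In practice the function would be fed the live response, for example `curl -s http://localhost:8090/endpoints_status | rollout_id`.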
