Troubleshooting Cloud Endpoints in Kubernetes Engine

This document presents troubleshooting techniques for Cloud Endpoints deployments on Kubernetes Engine and Kubernetes.

Failed in kubectl create -f gke.yaml

If you see the Failed in kubectl create -f gke.yaml error message, take the following steps:

  1. Authorize gcloud:

    gcloud auth login
    gcloud auth application-default login
    
  2. Create a cluster. You can either use the following gcloud command, or create a cluster using Google Cloud Console.

    gcloud container clusters create [CLUSTER_NAME]
    

    Replace [CLUSTER_NAME] with your cluster's name.

  3. Get credentials for your cluster and make them available to kubectl:

    gcloud container clusters get-credentials [CLUSTER_NAME]
    

    Replace [CLUSTER_NAME] with your cluster's name.
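The steps above can be sketched as a single script. CLUSTER_NAME here is a hypothetical placeholder; the gcloud calls are guarded so the sketch is a no-op on machines without the Cloud SDK.

```shell
# Sketch of the recovery steps above, in order. CLUSTER_NAME is a
# placeholder; replace it with your cluster's name.
CLUSTER_NAME="my-cluster"

# Guarded so this is a no-op where the Cloud SDK is not installed.
if command -v gcloud >/dev/null 2>&1; then
  # Step 1: authorize gcloud.
  gcloud auth login
  gcloud auth application-default login
  # Step 2: create the cluster.
  gcloud container clusters create "$CLUSTER_NAME"
  # Step 3: make the cluster's credentials available to kubectl.
  gcloud container clusters get-credentials "$CLUSTER_NAME"
fi
```

After the last step, `kubectl get nodes` is a quick way to confirm that kubectl can reach the new cluster.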

Accessing Logs from the Extensible Service Proxy

If you need to access the Extensible Service Proxy (ESP) logs to diagnose problems, use kubectl as follows:

  1. Get the name of the pod:

      kubectl get pod
    
      NAME                       READY     STATUS    RESTARTS   AGE
      esp-echo-174578890-x09gl   2/2       Running   2          21s
    

    The pod name is esp-echo-174578890-x09gl, and it has two containers: esp and echo.

  2. To view the logs in a pod, use kubectl logs:

      kubectl logs [POD_NAME] -c [CONTAINER_NAME]
    

    Replace [POD_NAME] and [CONTAINER_NAME] with the values returned by the kubectl get pod command in the previous step. For example:

      kubectl logs esp-echo-174578890-x09gl -c esp
    
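Instead of copying the pod name by hand, you can look it up by label. This sketch assumes the pods carry the label app=esp-echo, as in the sample Kubernetes configuration in the next section; the kubectl calls are guarded so the sketch is a no-op where kubectl is absent.

```shell
# Label selector matching the ESP deployment (assumption: the pods are
# labeled app=esp-echo, as in the sample configuration below).
SELECTOR="app=esp-echo"

if command -v kubectl >/dev/null 2>&1; then
  # Resolve the first matching pod's name via JSONPath output.
  POD_NAME=$(kubectl get pod -l "$SELECTOR" \
    -o jsonpath='{.items[0].metadata.name}')
  # --tail limits output to the most recent lines; add -f to stream.
  kubectl logs "$POD_NAME" -c esp --tail=50
fi
```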

Verifying the service name and configuration ID

Cloud Endpoints uses the name property that you specify in your gRPC API configuration YAML file as the Endpoints service name. When you deploy your gRPC API configuration file, Cloud Endpoints creates a configuration ID for the service.

You specify both the service name and service configuration ID in your Kubernetes configuration file, for example:

template:
  metadata:
    labels:
      app: esp-echo
  spec:
    volumes:
    - name: nginx-ssl
      secret:
        secretName: nginx-ssl
    containers:
    - name: esp
      image: gcr.io/endpoints-release/endpoints-runtime:1
      args: [
        "--http_port", "8080",
        "--ssl_port", "443",
        "--backend", "127.0.0.1:8081",
        "--service", "SERVICE_NAME",
        "--version", "SERVICE_CONFIG_ID",
      ]
      ports:
        - containerPort: 8080
        - containerPort: 443
      volumeMounts:
      - mountPath: /etc/nginx/ssl
        name: nginx-ssl
        readOnly: true
    - name: echo
      image: gcr.io/endpoints-release/echo:latest
      ports:
        - containerPort: 8081

You deploy containers to a cluster by using the following command, where [YOUR_CONFIG].yaml is the name of your Kubernetes configuration file:

 kubectl create -f [YOUR_CONFIG].yaml

If you get an error from this command, get the service name and configuration ID, and verify that they match the values of the --service and --version flags in your Kubernetes configuration file.
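One way to look up the deployed values is with the gcloud endpoints commands. This is a sketch: SERVICE_NAME is a hypothetical placeholder, and the gcloud calls are guarded so the sketch is a no-op without the Cloud SDK.

```shell
# Placeholder Endpoints service name; replace with your own.
SERVICE_NAME="echo-api.endpoints.my-project.cloud.goog"

if command -v gcloud >/dev/null 2>&1; then
  # List all Endpoints services in the current project.
  gcloud endpoints services list
  # List the configuration IDs deployed for this service.
  gcloud endpoints configs list --service="$SERVICE_NAME"
fi
```

Compare the output against the --service and --version values in your Kubernetes configuration file.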

Checking Configuration Files

You can open an interactive shell in a container by using kubectl exec:

kubectl exec -it [POD_NAME] -c [CONTAINER_NAME] -- bash

Replace [POD_NAME] and [CONTAINER_NAME] with the values returned by the kubectl get pod command. After the shell opens, check the following configuration files for errors.

The /etc/nginx/endpoints/ directory contains two configuration files:

nginx.conf - the nginx configuration file with ESP directives
service.json - the service configuration file
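You can also read both files without opening an interactive shell. This sketch reuses the example pod name from the earlier kubectl get pod output (substitute your own); the kubectl calls are guarded so the sketch is a no-op where kubectl is absent.

```shell
# Example pod name from the earlier `kubectl get pod` output; replace
# with your own pod's name.
POD_NAME="esp-echo-174578890-x09gl"

if command -v kubectl >/dev/null 2>&1; then
  # Print each configuration file directly from the esp container.
  kubectl exec "$POD_NAME" -c esp -- cat /etc/nginx/endpoints/nginx.conf
  kubectl exec "$POD_NAME" -c esp -- cat /etc/nginx/endpoints/service.json
fi
```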
