This tutorial shows how a service developer can troubleshoot a broken Knative serving service using Google Cloud Observability tools for discovery and a local development workflow for investigation.
This step-by-step "case study" companion to the troubleshooting guide uses a sample project that results in runtime errors when deployed, which you troubleshoot to find and fix the problem.
Note that you cannot use this tutorial with Knative serving on VMware due to Google Cloud Observability support limitations.
Objectives
- Write, build, and deploy a service to Knative serving
- Use Cloud Logging to identify an error
- Retrieve the container image from Container Registry for a root cause analysis
- Fix the "production" service, then improve the service to mitigate future problems
Costs
In this document, you use the following billable components of Google Cloud: Cloud Build, Container Registry, and Google Kubernetes Engine.
To generate a cost estimate based on your projected usage, use the pricing calculator.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Enable the Knative serving API
- Install and initialize the Google Cloud CLI.
- Install the kubectl component:
  gcloud components install kubectl
- Update components:
  gcloud components update
- If you are using Knative serving, create a new cluster using the instructions in Setting up Knative serving.
- If you are using Knative serving, install curl to try out the service.
- Follow the instructions to install Docker locally.
Setting up gcloud defaults
To configure gcloud with defaults for your Knative serving service:
Set your default project:
gcloud config set project PROJECT_ID
Replace PROJECT_ID with the name of the project you use for this tutorial.
Configure gcloud for your cluster:
gcloud config set run/platform gke
gcloud config set run/cluster CLUSTER-NAME
gcloud config set run/cluster_location REGION
Replace:
- CLUSTER-NAME with the name you used for your cluster,
- REGION with the supported cluster location of your choice.
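For example, with a hypothetical project named my-sample-project and a cluster named hello-cluster in us-central1 (placeholder values, not from the tutorial), the resulting configuration would be:

gcloud config set project my-sample-project
gcloud config set run/platform gke
gcloud config set run/cluster hello-cluster
gcloud config set run/cluster_location us-central1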
Assembling the code
Build a new Knative serving greeter service step by step. As a reminder, this service intentionally produces a runtime error for the troubleshooting exercise.
Create a new project:
Node.js
Create a Node.js project by defining the service package, initial dependencies, and some common operations.

Create a new hello-service directory:

mkdir hello-service
cd hello-service

Create a new Node.js project by generating a package.json file:

npm init --yes
npm install --save express@4

Open the new package.json file in your editor and configure a start script to run node index.js. When you're done, the file looks like this:

{
  "name": "hello-service",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.17.1"
  }
}
If you continue to evolve this service beyond this tutorial, consider filling in the description and author fields and evaluating the license. For more details, read the package.json documentation.
Python
Create a new hello-service directory:

mkdir hello-service
cd hello-service
Create a requirements.txt file and copy your dependencies into it:
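The dependency list itself comes from the tutorial's sample project. As a rough sketch, assuming the Python variant is a Flask application served by gunicorn (an assumption, not the tutorial's confirmed list), requirements.txt might contain:

Flask
gunicorn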
Go
Create a new hello-service directory:

mkdir hello-service
cd hello-service
Create a Go project by initializing a new go module:
go mod init example.com/hello-service
You can change the module name as you wish; you should update it if the code is published to a web-reachable code repository.
Java
Create a new maven project:
mvn archetype:generate \
  -DgroupId=com.example \
  -DartifactId=hello-service \
  -DarchetypeArtifactId=maven-archetype-quickstart \
  -DinteractiveMode=false
Copy the dependencies into your pom.xml dependency list (between the <dependencies> elements):

Copy the build setting into your pom.xml (under the <dependencies> elements):
Create an HTTP service to handle incoming requests:
Node.js
Python
Go
Java
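The per-language service code is provided by the tutorial's sample project. As a rough sketch of the Node.js variant, the service reads a NAME environment variable and deliberately fails when it is missing, as discussed later in this tutorial; the exact logging calls and message wording below are assumptions:

const express = require('express');
const app = express();

app.get('/', (req, res) => {
  console.log('hello: received request.');

  // The hard dependency at the center of this tutorial: the service
  // fails when the NAME environment variable is not set.
  const { NAME } = process.env;
  if (!NAME) {
    // Log the error, then send an HTTP 500 response to the caller.
    console.error(new Error('Missing required server parameter'));
    return res.status(500).send('Internal Server Error');
  }
  res.send(`Hello ${NAME}!`);
});

const port = parseInt(process.env.PORT, 10) || 8080;
app.listen(port, () => {
  console.log(`hello: listening on port ${port}`);
});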
Create a Dockerfile to define the container image used to deploy the service:

Node.js
Python
Go
Java
This sample uses Jib to build Docker images using common Java tools. Jib optimizes container builds without the need for a Dockerfile or having Docker installed. Learn more about building Java containers with Jib.
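The Dockerfiles follow the usual patterns for each runtime. For the Node.js service, a minimal sketch might look like the following; the base image tag and npm flags here are assumptions, not the tutorial's exact file:

# Use an official lightweight Node.js image (the tag is an assumption).
FROM node:18-slim

# Create and switch to the app directory inside the container.
WORKDIR /usr/src/app

# Install production dependencies first to benefit from layer caching.
COPY package*.json ./
RUN npm install --omit=dev

# Copy the service code and launch it with the package.json start script.
COPY . ./
CMD [ "npm", "start" ]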
Shipping the code
Shipping code consists of three steps: building a container image with Cloud Build, uploading the container image to Container Registry, and deploying the container image to Knative serving.
To ship your code:
Build your container and publish on Container Registry:
Node.js

gcloud builds submit --tag gcr.io/PROJECT_ID/hello-service

Where PROJECT_ID is your GCP project ID. You can check your current project ID with gcloud config get-value project.

Upon success, you should see a SUCCESS message containing the ID, creation time, and image name. The image is stored in Container Registry and can be reused if desired.

Python

gcloud builds submit --tag gcr.io/PROJECT_ID/hello-service

Where PROJECT_ID is your GCP project ID. You can check your current project ID with gcloud config get-value project.

Upon success, you should see a SUCCESS message containing the ID, creation time, and image name. The image is stored in Container Registry and can be reused if desired.

Go

gcloud builds submit --tag gcr.io/PROJECT_ID/hello-service

Where PROJECT_ID is your GCP project ID. You can check your current project ID with gcloud config get-value project.

Upon success, you should see a SUCCESS message containing the ID, creation time, and image name. The image is stored in Container Registry and can be reused if desired.

Java

mvn compile jib:build -Dimage=gcr.io/PROJECT_ID/hello-service

Where PROJECT_ID is your GCP project ID. You can check your current project ID with gcloud config get-value project.

Upon success, you should see a BUILD SUCCESS message. The image is stored in Container Registry and can be reused if desired.
Run the following command to deploy your app:
gcloud run deploy hello-service --image gcr.io/PROJECT_ID/hello-service

Replace PROJECT_ID with your GCP project ID. hello-service is both the container image name and the name of the Knative serving service. Notice that the container image is deployed to the service and cluster that you configured previously under Setting up gcloud defaults.

Wait until the deployment is complete: this can take about half a minute. On success, the command line displays the service URL.
Trying it out
Try out the service to confirm you have successfully deployed it. Requests should fail with an HTTP 500 or HTTP 503 error (errors in the HTTP 5xx server error class). The tutorial walks through troubleshooting this error response.
If your cluster is configured with a routable default domain, skip the steps below and instead copy the URL into your web browser.
If you don't use automatic TLS certificates and domain mapping, you are not provided a navigable URL for your service. Instead, use the provided URL and the IP address of the service's ingress gateway to create a curl command that can make requests to your service:
To get the external IP for the Istio ingress gateway:

kubectl get svc istio-ingress -n gke-system

The resulting output looks something like this:

NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)
istio-ingress   LoadBalancer   XX.XX.XXX.XX   pending       80:32380/TCP,443:32390/TCP,32400:32400/TCP

The EXTERNAL-IP for the load balancer is the IP address you must use.

Run a curl command using this EXTERNAL-IP address in the URL:

curl -G -H "Host: SERVICE-DOMAIN" https://EXTERNAL-IP/

Replace SERVICE-DOMAIN with the default assigned domain of your service. You can obtain this by taking the default URL and removing the protocol http://.

See the HTTP 500 or HTTP 503 error message.
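For example, with a hypothetical external IP of 203.0.113.10 and a default service domain of hello-service.default.example.com (placeholder values), the command would be:

curl -G -H "Host: hello-service.default.example.com" https://203.0.113.10/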
Investigating the problem
Imagine that the HTTP 5xx error you encountered above in Trying it out is a production runtime error. This tutorial walks through a formal process for handling it. Although production error resolution processes vary widely, this tutorial presents a particular sequence of steps to show the application of useful tools and techniques.
To investigate this problem you will work through these phases:

- Collect more details on the reported error to support further investigation and set a mitigation strategy.
- Relieve user impact by deciding whether to push forward with a fix or roll back to a known-healthy version.
- Reproduce the error to confirm the correct details have been gathered and that the error is not a one-time glitch.
- Perform a root cause analysis on the bug to find the code, configuration, or process that created this error.
At the start of the investigation you have a URL, timestamp, and the message "Internal Server Error".
Gathering further details
Gather more information about the problem to understand what happened and determine next steps.
Use available tools to collect more details:
View logs for more details.
Use Cloud Logging to review the sequence of operations leading to the problem, including error messages.
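As a sketch, you can also pull recent error-level log entries from the command line; the filter below is deliberately broad, and you may need to narrow it to your service's log resource type:

# Show the ten most recent error-level entries from the last hour.
gcloud logging read 'severity>=ERROR' --limit=10 --freshness=1h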
Rolling back to a healthy version

If you have a revision that you know was working, you can roll back your service to use that revision. Note that you cannot perform a rollback on the new hello-service service that you deployed in this tutorial, because it contains only a single revision.

To locate a revision and roll back your service:
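As a sketch of what this might look like from the command line, assuming the gcloud defaults configured earlier in this tutorial:

# List the revisions of the service to find a known-healthy one.
gcloud run revisions list --service hello-service

# Route all traffic to that revision (REVISION-NAME is a placeholder).
gcloud run services update-traffic hello-service \
  --to-revisions REVISION-NAME=100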
Reproducing the error
Using the details you obtained previously, confirm the problem consistently occurs under test conditions.
Send the same HTTP request by trying it out again, and see if the same error and details are reported. It may take some time for error details to show up.
Because the sample service in this tutorial is read-only and doesn't trigger any complicating side effects, reproducing errors in production is safe. However, for many real services, this won't be the case: you may need to reproduce errors in a test environment or limit this step to local investigation.
Reproducing the error establishes the context for further work. For example, if developers cannot reproduce the error, further investigation may require additional instrumentation of the service.
Performing a root cause analysis
Root cause analysis is an important step in effective troubleshooting to ensure you fix the problem instead of a symptom.
Previously in this tutorial, you reproduced the problem on Knative serving, which confirms the problem is active when the service is hosted on Knative serving. Now reproduce the problem locally to determine whether it is isolated to the code or whether it only emerges in production hosting.
If you have not used the Docker CLI locally with Container Registry, authenticate it with gcloud:
gcloud auth configure-docker
For alternative approaches see Container Registry authentication methods.
If the most recently used container image name is not available, the service description includes the name of the most recently deployed container image:

gcloud run services describe hello-service

Find the container image name inside the spec object. A more targeted command can retrieve it directly:

gcloud run services describe hello-service \
  --format="value(spec.template.spec.containers.image)"
This command reveals a container image name such as gcr.io/PROJECT_ID/hello-service.

Pull the container image from Container Registry to your environment. This step might take several minutes as it downloads the container image:

docker pull gcr.io/PROJECT_ID/hello-service

Later updates to the container image that reuse this name can be retrieved with the same command. If you skip this step, the docker run command below pulls a container image if one is not present on the local machine.

Run locally to confirm the problem is not unique to Knative serving:
PORT=8080 && docker run --rm -e PORT=$PORT -p 9000:$PORT \
  gcr.io/PROJECT_ID/hello-service
Breaking down the elements of the command above:

- The PORT environment variable is used by the service to determine the port to listen on inside the container.
- The run command starts the container, defaulting to the entrypoint command defined in the Dockerfile or a parent container image.
- The --rm flag deletes the container instance on exit.
- The -e flag assigns a value to an environment variable. -e PORT=$PORT propagates the PORT variable from the local system into the container with the same variable name.
- The -p flag publishes the container as a service available on localhost at port 9000. Requests to localhost:9000 will be routed to the container on port 8080. This means output from the service about the port number in use will not match how the service is accessed.
- The final argument gcr.io/PROJECT_ID/hello-service is a repository path pointing to the latest version of the container image. If not available locally, docker attempts to retrieve the image from a remote registry.
In your browser, open http://localhost:9000. Check the terminal output for error messages that match those in Google Cloud Observability.
If the problem is not reproducible locally, it may be unique to the Knative serving environment. Review the Knative serving troubleshooting guide for specific areas to investigate.
In this case the error is reproduced locally.
Now that the error is doubly confirmed as persistent and caused by the service code rather than the hosting platform, it's time to investigate the code more closely.

For the purposes of this tutorial, it is safe to assume that the code inside the container and the code on the local system are identical.
Node.js

Find the source of the error message in the file index.js around the line number called out in the stack trace shown in the logs:

Python

Find the source of the error message in the file main.py around the line number called out in the stack trace shown in the logs:

Go

Find the source of the error message in the file main.go around the line number called out in the stack trace shown in the logs:

Java

Find the source of the error message in the file App.java around the line number called out in the stack trace shown in the logs:
Examining this code, the following actions are taken when the NAME environment variable is not set:

- An error is logged to Google Cloud Observability.
- An HTTP error response is sent.
The problem is caused by a missing variable, but the root cause is more specific: the code change adding the hard dependency on an environment variable did not include related changes to deployment scripts and runtime requirements documentation.
Fixing the root cause
Now that we have collected the code and identified the potential root cause, we can take steps to fix it.
Check whether the service works locally with the NAME environment variable in place:

Run the container locally with the environment variable added:

PORT=8080 && docker run --rm -e PORT=$PORT -p 9000:$PORT \
  -e NAME="Local World!" \
  gcr.io/PROJECT_ID/hello-service
Navigate your browser to http://localhost:9000.

See "Hello Local World!" appear on the page.
Modify the running Knative serving service environment to include this variable:
Run the services update command with the --update-env-vars parameter to add an environment variable:

gcloud run services update hello-service \
  --update-env-vars NAME=Override
Wait a few seconds while Knative serving creates a new revision based on the previous revision with the new environment variable added.
Confirm the service is now fixed:
- Navigate your browser to the Knative serving service URL.
- See "Hello Override!" appear on the page.
- Verify that no unexpected messages or errors appear in Cloud Logging.
Improving future troubleshooting speed
In this sample production problem, the error was related to operational configuration. There are code changes that will minimize the impact of this problem in the future.
- Improve the error log to include more specific details.
- Instead of returning an error, have the service fall back to a safe default. If using a default represents a change to normal functionality, use a warning message for monitoring purposes.
Let's step through removing the NAME environment variable as a hard dependency.
Remove the existing NAME-handling code:

Node.js

Python

Go

Java
Add new code that sets a fallback value (a sketch for the Node.js case follows the language tabs):
Node.js
Python
Go
Java
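A minimal sketch of the fallback for the Node.js variant, replacing the error block inside the request handler; the default value and warning text are assumptions:

// Fall back to a safe default instead of failing the request.
const { NAME = 'World' } = process.env;
if (!process.env.NAME) {
  // Use a warning so monitoring can still surface the missing configuration.
  console.warn('NAME not set, falling back to default: World');
}
res.send(`Hello ${NAME}!`);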
Test locally by rebuilding and running the container through the affected configuration cases:
Node.js
docker build --tag gcr.io/PROJECT_ID/hello-service .
Python
docker build --tag gcr.io/PROJECT_ID/hello-service .
Go
docker build --tag gcr.io/PROJECT_ID/hello-service .
Java
mvn compile jib:build
Confirm the NAME environment variable still works:

PORT=8080 && docker run --rm -e PORT=$PORT -p 9000:$PORT \
  -e NAME="Robust World" \
  gcr.io/PROJECT_ID/hello-service

Confirm the service works without the NAME variable:

PORT=8080 && docker run --rm -e PORT=$PORT -p 9000:$PORT \
  gcr.io/PROJECT_ID/hello-service
If the service does not return a result, confirm the removal of code in the first step did not remove extra lines, such as those used to write the response.
Deploy this by revisiting the Shipping the code section.
Each deployment to a service creates a new revision and automatically starts serving traffic when ready.
To clear the environment variables set earlier:
gcloud run services update hello-service --clear-env-vars
Add the new functionality for the default value to automated test coverage for the service.
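As a sketch of such a test for the Node.js service, assuming the greeting logic is extracted into a hypothetical greeting() helper and using Node's built-in test runner (run with node --test):

const assert = require('node:assert');
const { test } = require('node:test');

// Hypothetical helper mirroring the fallback behavior added above.
function greeting(name = process.env.NAME) {
  return `Hello ${name || 'World'}!`;
}

test('falls back to the default when NAME is unset', () => {
  delete process.env.NAME;
  assert.strictEqual(greeting(), 'Hello World!');
});

test('uses NAME when it is set', () => {
  assert.strictEqual(greeting('Robust World'), 'Hello Robust World!');
});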
Finding other issues in the logs
You may see other issues in the Log Viewer for this service. For example, an unsupported system call will appear in the logs as a "Container Sandbox Limitation".
For example, the Node.js service sometimes results in this log message:
Container Sandbox Limitation: Unsupported syscall statx(0xffffff9c,0x3e1ba8e86d88,0x0,0xfff,0x3e1ba8e86970,0x3e1ba8e86a90). Please, refer to https://gvisor.dev/c/linux/amd64/statx for more information.
In this case, the lack of support does not impact the hello-service sample service.
Clean up
If you created a new project for this tutorial, delete the project. If you used an existing project and wish to keep it without the changes added in this tutorial, delete resources created for the tutorial.
Deleting the project
The easiest way to eliminate billing is to delete the project that you created for the tutorial.
To delete the project:
- In the Google Cloud console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
Deleting tutorial resources
Delete the Knative serving service you deployed in this tutorial:
gcloud run services delete SERVICE-NAME
Where SERVICE-NAME is your chosen service name.
You can also delete Knative serving services from the Google Cloud console.
Remove the gcloud default configurations you added during the tutorial setup:
gcloud config unset run/platform
gcloud config unset run/cluster
gcloud config unset run/cluster_location
Remove the project configuration:
gcloud config unset project
Delete other Google Cloud resources created in this tutorial:

- Delete the container image named gcr.io/PROJECT_ID/hello-service from Container Registry.
- If you created a cluster for this tutorial, delete the cluster.
What's next
- Learn more about how to use Cloud Logging to gain insight into production behavior.
- For more information about Knative serving troubleshooting, see the troubleshooting guide at /anthos/run/archive/docs/troubleshooting#sandbox.
- Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center.