View latency of app requests

Learn how to collect and view latency data from your applications:

  1. Create a Google Kubernetes Engine (GKE) cluster by using the Google Cloud CLI.

  2. Download and deploy a sample application to your cluster.

  3. Create a trace by sending an HTTP request to the sample application.

  4. View the latency information of the trace you created.

  5. Clean up.


To follow step-by-step guidance for this task directly in the Google Cloud console, click Guide me:

Guide me


Before you begin

  1. Security constraints defined by your organization might prevent you from completing the following steps. For troubleshooting information, see Develop applications in a constrained Google Cloud environment.

  2. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  3. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  4. Make sure that billing is enabled for your Google Cloud project.

  5. Enable the Google Kubernetes Engine and Cloud Trace APIs.

    Enable the APIs


Create a GKE cluster

  1. In the toolbar, click Activate Cloud Shell, and then perform the following steps in the Cloud Shell.

  2. Create a cluster:

    gcloud container clusters create cloud-trace-demo --zone us-central1-c
    

    The previous command, which takes several minutes to complete, creates a standard cluster with the name cloud-trace-demo in the zone us-central1-c.

  3. Configure kubectl to automatically refresh its credentials to use the same identity as the Google Cloud CLI:

    gcloud container clusters get-credentials cloud-trace-demo --zone us-central1-c
    
  4. Verify access to your cluster:

    kubectl get nodes
    

    A sample output of this command is:

    NAME                                              STATUS   ROLES    AGE   VERSION
    gke-cloud-trace-demo-default-pool-063c0416-113s   Ready    <none>   78s   v1.22.12-gke.2300
    gke-cloud-trace-demo-default-pool-063c0416-1n27   Ready    <none>   79s   v1.22.12-gke.2300
    gke-cloud-trace-demo-default-pool-063c0416-frkd   Ready    <none>   78s   v1.22.12-gke.2300
    

Download and deploy an application

Download and deploy a Python application, which uses the Flask framework and OpenTelemetry packages. The application is described in the About the application section of this page.

In the Cloud Shell, do the following:

  1. Clone a Python app from GitHub:

    git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
    
  2. Run the following command to deploy the sample application:

    cd python-docs-samples/trace/cloud-trace-demo-app-opentelemetry && ./setup.sh
    

    The script setup.sh takes several minutes to complete.

    The script configures three services using a pre-built image and then waits for all resources to be provisioned. The workloads are named cloud-trace-demo-a, cloud-trace-demo-b, and cloud-trace-demo-c.

    A sample output of this command is:

    deployment.apps/cloud-trace-demo-a created
    service/cloud-trace-demo-a created
    deployment.apps/cloud-trace-demo-b created
    service/cloud-trace-demo-b created
    deployment.apps/cloud-trace-demo-c created
    service/cloud-trace-demo-c created
    
    Wait for load balancer initialization complete......
    Completed.
    

Create trace data

A trace describes the time it takes an application to complete a single operation.

To create a trace, in the Cloud Shell, run the following command:

curl $(kubectl get svc -o=jsonpath='{.items[?(@.metadata.name=="cloud-trace-demo-a")].status.loadBalancer.ingress[0].ip}')

The response of the previous command looks like the following:

Hello, I am service A
And I am service B
Hello, I am service C

You can execute the curl command multiple times to generate multiple traces.
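
If you prefer to script this step, the following minimal Python sketch sends several requests to service a so that multiple traces appear in the Trace explorer. It is a sketch only: it assumes that kubectl is already configured for the cluster (the get-credentials step earlier), that the requests package is installed, and the request count of 5 is arbitrary.

# Minimal sketch: generate several traces by calling cloud-trace-demo-a repeatedly.
# Assumes kubectl points at the cluster created earlier and that the requests
# package is installed; the count of 5 is arbitrary.
import subprocess
import requests

# Reuse the jsonpath query from the curl command above to look up the external IP.
JSONPATH = ('{.items[?(@.metadata.name=="cloud-trace-demo-a")]'
            '.status.loadBalancer.ingress[0].ip}')
ip = subprocess.run(
    ["kubectl", "get", "svc", "-o", f"jsonpath={JSONPATH}"],
    capture_output=True, text=True, check=True,
).stdout.strip()

for _ in range(5):  # each request produces one trace
    print(requests.get(f"http://{ip}").text)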

View latency data

  1. In the navigation panel of the Google Cloud console, select Trace, and then select Trace explorer:

    Go to Trace explorer

    Each trace is represented by a dot on the graph and a row in the table.

    The following screenshot shows multiple traces:

    Trace explorer window for the quickstart.

  2. To view a trace in detail, select a dot in the graph or a row in the table.

    The scatter plot refreshes: the dot you selected is highlighted with a circle, and the dots that represent all other traces are dimmed.

    A Gantt chart displays information about the selected trace. The first row in the Gantt chart is for the trace, and there is one row for each span in the trace. A span describes how long it takes to perform a sub-operation.

    Additional details about each span are shown in the details pane.

  3. To view detailed information about a span, select the span in the Gantt chart.

About the application

The sample application used in this quickstart is available in a GitHub repository. This repository contains information on how to use the application in environments other than the Cloud Shell. The sample application is written in Python, uses the Flask framework and OpenTelemetry packages, and executes on a GKE cluster.

Instrumentation

The file app.py in the GitHub repository contains the instrumentation necessary to capture and send trace data to your Google Cloud project (a minimal sketch that assembles these pieces follows the list):

  • The application imports several OpenTelemetry packages:

    from opentelemetry import trace
    from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter
    from opentelemetry.instrumentation.flask import FlaskInstrumentor
    from opentelemetry.instrumentation.requests import RequestsInstrumentor
    from opentelemetry.propagate import set_global_textmap
    from opentelemetry.propagators.cloud_trace_propagator import CloudTraceFormatPropagator
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    
  • The application instruments web requests with trace context and automatically traces Flask handlers and requests to other services:

    app = flask.Flask(__name__)
    FlaskInstrumentor().instrument_app(app)
    RequestsInstrumentor().instrument()
  • The application configures a tracer provider that uses the Cloud Trace exporter, and sets context propagation to use the Cloud Trace format:

    def configure_exporter(exporter):
        """Configures OpenTelemetry context propagation to use Cloud Trace context
    
        Args:
            exporter: exporter instance to be configured in the OpenTelemetry tracer provider
        """
        set_global_textmap(CloudTraceFormatPropagator())
        tracer_provider = TracerProvider()
        tracer_provider.add_span_processor(BatchSpanProcessor(exporter))
        trace.set_tracer_provider(tracer_provider)
    
    
    configure_exporter(CloudTraceSpanExporter())
    tracer = trace.get_tracer(__name__)
  • The following code snippet shows how to send requests in Python. OpenTelemetry implicitly propagates the trace context for you with your outgoing requests:

    if endpoint is not None and endpoint != "":
        data = {"body": keyword}
        response = requests.get(
            endpoint,
            params=data,
        )
        return keyword + "\n" + response.text
    else:
        return keyword, 200
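
Taken together, these pieces can be assembled into a minimal standalone service. The following sketch is illustrative only, not the repository's app.py; the route and span names are arbitrary. It shows the same wiring: configure the exporter and propagator, instrument Flask and outgoing requests, and optionally create a manual span nested under the automatic request span.

# Illustrative sketch only (not the repository's app.py): the OpenTelemetry
# wiring shown above, assembled into one minimal Flask service.
import flask
from opentelemetry import trace
from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry.instrumentation.requests import RequestsInstrumentor
from opentelemetry.propagate import set_global_textmap
from opentelemetry.propagators.cloud_trace_propagator import CloudTraceFormatPropagator
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Propagate trace context in the Cloud Trace format and export spans in batches.
set_global_textmap(CloudTraceFormatPropagator())
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(CloudTraceSpanExporter()))
trace.set_tracer_provider(provider)

app = flask.Flask(__name__)
FlaskInstrumentor().instrument_app(app)   # automatic spans for incoming requests
RequestsInstrumentor().instrument()       # automatic spans for outgoing requests
tracer = trace.get_tracer(__name__)

@app.route("/")
def index():
    # Optional manual span, nested under the automatic Flask request span.
    with tracer.start_as_current_span("handle-index"):
        return "ok", 200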
    
    

How the application works

For clarity, in this section, cloud-trace-demo is omitted from the service names. For example, the service cloud-trace-demo-c is referenced as c.

This application creates three services named a, b, and c. Service a is configured to call service b, and service b is configured to call service c. For details on the configuration of the services, see the YAML files in the GitHub repository. A condensed sketch of this relay logic appears after the numbered list below.

When you issued an HTTP request to service a in this quickstart, you used the following curl command:

curl $(kubectl get svc -o=jsonpath='{.items[?(@.metadata.name=="cloud-trace-demo-a")].status.loadBalancer.ingress[0].ip}')

The curl command works as follows:

  1. kubectl fetches the IP address of the service named cloud-trace-demo-a.
  2. The curl command then sends the HTTP request to service a.
  3. Service a receives the HTTP request and sends a request to service b.
  4. Service b receives the HTTP request and sends a request to service c.
  5. Service c receives the HTTP request from service b and returns the string Hello, I am service C to service b.
  6. Service b receives the response from service c, appends it to the string And I am service B, and returns the result to service a.
  7. Service a receives the response from service b and appends it to the string Hello, I am service A.
  8. The response from service a is printed in the Cloud Shell.
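
The following condensed sketch illustrates this relay pattern. It is not the repository's exact code: the environment variable names KEYWORD and ENDPOINT are hypothetical stand-ins for however each deployment's YAML supplies a service's greeting and the address of the next service in the chain.

# Hypothetical condensed view of the relay described above. KEYWORD and ENDPOINT
# are illustrative names, not the repository's exact configuration keys.
import os

import flask
import requests

app = flask.Flask(__name__)
KEYWORD = os.environ.get("KEYWORD", "Hello, I am service A")
ENDPOINT = os.environ.get("ENDPOINT", "")  # empty for service c, the end of the chain

@app.route("/")
def relay():
    if ENDPOINT:
        downstream = requests.get(ENDPOINT)       # a calls b, b calls c
        return KEYWORD + "\n" + downstream.text   # prepend this service's greeting
    return KEYWORD, 200                           # service c just returns its greeting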

Clean up

To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.

If you created a new project and you no longer need the project, then delete the project.

If you used an existing project, then do the following:

  1. To delete your cluster, in the Cloud Shell, run the following command:

    gcloud container clusters delete cloud-trace-demo --zone us-central1-c

What's next