Cloud Spanner end-to-end latency guide

This page:

  • Describes the high-level components involved in a Cloud Spanner API request.
  • Explains how to extract, capture, and visualize the latencies associated with these components so that you can identify the source of the latency.

Overview

Cloud Spanner architecture diagram

The high-level components that are used to make a Cloud Spanner API request include:

  1. Cloud Spanner client libraries. These libraries provide a layer of abstraction on top of gRPC, and handle the details of session management, running transactions, retries, and more.

  2. Google Front End. This infrastructure service is common to all Google Cloud services, including Cloud Spanner. Google Front End ensures that all TLS connections are terminated, and applies protections against Denial of Service attacks. To learn more about Google Front End, see Google Front End Service.

  3. Cloud Spanner API Front End, which performs various checks on the API request (including authentication, authorization, and quota checks), and maintains sessions and transaction states.

  4. Cloud Spanner database, which performs the execution of reads and writes to the database.

When you make a remote procedure call to Cloud Spanner, the client libraries prepare the API request, which then passes through both the Google Front End and the Cloud Spanner API Front End before reaching the Cloud Spanner database.

By measuring and comparing the request latencies at each of these components, you can identify which component is the source of the latency.

The latency for each component is explained in the sections that follow.

Client round-trip latency

Cloud Spanner architecture diagram for client round-trip latency

This latency is the length of time (in milliseconds) between the first byte of the Cloud Spanner API request that the client sends to the database (through both the Google Front End and the Cloud Spanner API Front End) and the last byte of the response that the client receives from the database.

Cloud Spanner client libraries provide statistics and traces through the OpenCensus instrumentation framework. This framework gives insight into the internals of the client and aids in troubleshooting end-to-end (round-trip) latency.

By default, the framework is disabled. To learn how to enable this framework, see Capture client round-trip latency.

The grpc.io/client/roundtrip_latency metric captures the time from when the first byte of the API request is sent to when the last byte of the response is received.

Capture client round-trip latency

You can capture client round-trip latency for the following languages:

Java

static void captureGrpcMetric(DatabaseClient dbClient) {
  // Register basic gRPC views.
  RpcViews.registerClientGrpcBasicViews();

  // Enable OpenCensus exporters to export metrics to Stackdriver Monitoring.
  // Exporters use Application Default Credentials to authenticate.
  // See https://developers.google.com/identity/protocols/application-default-credentials
  // for more details.
  try {
    StackdriverStatsExporter.createAndRegister();
  } catch (IOException | IllegalStateException e) {
    System.out.println("Error during StackdriverStatsExporter");
  }

  try (ResultSet resultSet =
      dbClient
          .singleUse() // Execute a single read or query against Cloud Spanner.
          .executeQuery(Statement.of("SELECT SingerId, AlbumId, AlbumTitle FROM Albums"))) {
    while (resultSet.next()) {
      System.out.printf(
          "%d %d %s", resultSet.getLong(0), resultSet.getLong(1), resultSet.getString(2));
    }
  }
}

Go


import (
	"context"
	"fmt"
	"io"
	"regexp"

	"cloud.google.com/go/spanner"
	"google.golang.org/api/iterator"

	"contrib.go.opencensus.io/exporter/stackdriver"
	"go.opencensus.io/plugin/ocgrpc"
	"go.opencensus.io/stats/view"
)

var validDatabasePattern = regexp.MustCompile("^projects/(?P<project>[^/]+)/instances/(?P<instance>[^/]+)/databases/(?P<database>[^/]+)$")

func queryWithGRPCMetric(w io.Writer, db string) error {
	projectID, _, _, err := parseDatabaseName(db)
	if err != nil {
		return err
	}

	ctx := context.Background()
	client, err := spanner.NewClient(ctx, db)
	if err != nil {
		return err
	}
	defer client.Close()

	// Register OpenCensus views.
	if err := view.Register(ocgrpc.DefaultClientViews...); err != nil {
		return err
	}

	// Create OpenCensus Stackdriver exporter.
	sd, err := stackdriver.NewExporter(stackdriver.Options{
		ProjectID: projectID,
	})
	if err != nil {
		return err
	}
	// It is imperative to invoke flush before your main function exits
	defer sd.Flush()

	// Start the metrics exporter
	sd.StartMetricsExporter()
	defer sd.StopMetricsExporter()

	stmt := spanner.Statement{SQL: `SELECT SingerId, AlbumId, AlbumTitle FROM Albums`}
	iter := client.Single().Query(ctx, stmt)
	defer iter.Stop()
	for {
		row, err := iter.Next()
		if err == iterator.Done {
			return nil
		}
		if err != nil {
			return err
		}
		var singerID, albumID int64
		var albumTitle string
		if err := row.Columns(&singerID, &albumID, &albumTitle); err != nil {
			return err
		}
		fmt.Fprintf(w, "%d %d %s\n", singerID, albumID, albumTitle)
	}
}

func parseDatabaseName(databaseUri string) (project, instance, database string, err error) {
	matches := validDatabasePattern.FindStringSubmatch(databaseUri)
	if len(matches) == 0 {
		return "", "", "", fmt.Errorf("failed to parse database name from %q according to pattern %q",
			databaseUri, validDatabasePattern.String())
	}
	return matches[1], matches[2], matches[3], nil
}
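
For reference, here's a minimal, hypothetical sketch of how you might call the function above from your own program; the project, instance, and database names are placeholders:

package main

import (
	"log"
	"os"
)

func main() {
	// Placeholder database URI; substitute your own project, instance,
	// and database names.
	db := "projects/my-project/instances/my-instance/databases/my-db"
	if err := queryWithGRPCMetric(os.Stdout, db); err != nil {
		log.Fatal(err)
	}
}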

Visualize client round-trip latency

After retrieving the metrics, you can visualize client round-trip latency in Cloud Monitoring.

Here's an example of a graph that illustrates the client round-trip latency metric. The program creates an OpenCensus view called roundtrip_latency. This string becomes part of the name of the metric when it's exported to Cloud Monitoring.
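
When you look for this metric in Metrics Explorer, keep in mind that the OpenCensus Stackdriver exporter typically writes view data under the custom.googleapis.com/opencensus/ prefix, so the exported metric type is likely to look similar to the following (the exact name depends on your exporter configuration):

custom.googleapis.com/opencensus/grpc.io/client/roundtrip_latency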

Cloud Monitoring client round-trip latency

Figure 1. Client round-trip latency graph in Cloud Monitoring

Google Front End latency

Cloud Spanner architecture diagram for Google Front End latency

This latency is the length of time (in milliseconds) between when Google's network receives the remote procedure call from the client and when the Google Front End receives the first byte of the response. This latency doesn't include the TCP/SSL handshake.

Every response from Cloud Spanner, whether REST or gRPC, includes a header that contains the total time between the Google Front End and the back end (the Cloud Spanner service) for both the request and the response. This header makes it easier to determine whether latency originates on the client side or inside Google's network.
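
The header has the form server-timing: gfet4t7; dur=[GFE latency in ms]. The following is a minimal sketch of extracting that value; parseGFELatency is a hypothetical helper and the header value in main is made up:

package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// serverTimingPattern matches the dur value in a server-timing header
// value such as "gfet4t7; dur=42".
var serverTimingPattern = regexp.MustCompile(`dur=(\d+)`)

// parseGFELatency is a hypothetical helper that extracts the GFE latency
// in milliseconds from a server-timing header value.
func parseGFELatency(headerValue string) (int64, error) {
	m := serverTimingPattern.FindStringSubmatch(headerValue)
	if m == nil {
		return 0, fmt.Errorf("no dur value in server-timing header %q", headerValue)
	}
	return strconv.ParseInt(m[1], 10, 64)
}

func main() {
	ms, err := parseGFELatency("gfet4t7; dur=42") // made-up example value
	if err != nil {
		panic(err)
	}
	fmt.Printf("GFE latency: %d ms\n", ms)
}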

The cloud.google.com/[language]/spanner/gfe_latency metric captures and exposes Google Front End latency for Cloud Spanner requests.

Capture Google Front End latency

You can capture Google Front End latency for the following languages:

Java

private static final String MILLISECOND = "ms";
private static final TagKey key = TagKey.create("grpc_client_method");

// GFE t4t7 latency extracted from server-timing header.
public static final MeasureLong SPANNER_GFE_LATENCY =
    MeasureLong.create(
        "cloud.google.com/java/spanner/gfe_latency",
        "Latency between Google's network receives an RPC and reads back the first byte of the"
            + " response",
        MILLISECOND);

static final Aggregation AGGREGATION_WITH_MILLIS_HISTOGRAM =
    Distribution.create(BucketBoundaries.create(Arrays.asList(
        0.0, 0.01, 0.05, 0.1, 0.3, 0.6, 0.8, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0, 13.0,
        16.0, 20.0, 25.0, 30.0, 40.0, 50.0, 65.0, 80.0, 100.0, 130.0, 160.0, 200.0, 250.0,
        300.0, 400.0, 500.0, 650.0, 800.0, 1000.0, 2000.0, 5000.0, 10000.0, 20000.0, 50000.0,
        100000.0)));
static final View GFE_LATENCY_VIEW = View
    .create(Name.create("cloud.google.com/java/spanner/gfe_latency"),
        "Latency between Google's network receives an RPC and reads back the first byte of the"
            + " response",
        SPANNER_GFE_LATENCY,
        AGGREGATION_WITH_MILLIS_HISTOGRAM,
        Collections.singletonList(key));

static ViewManager manager = Stats.getViewManager();

private static final Tagger tagger = Tags.getTagger();
private static final StatsRecorder STATS_RECORDER = Stats.getStatsRecorder();

static void captureGfeMetric(DatabaseClient dbClient) {
  // Register GFE view.
  manager.registerView(GFE_LATENCY_VIEW);

  // Enable OpenCensus exporters to export metrics to Stackdriver Monitoring.
  // Exporters use Application Default Credentials to authenticate.
  // See https://developers.google.com/identity/protocols/application-default-credentials
  // for more details.
  try {
    StackdriverStatsExporter.createAndRegister();
  } catch (IOException | IllegalStateException e) {
    System.out.println("Error during StackdriverStatsExporter");
  }

  try (ResultSet resultSet =
      dbClient
          .singleUse() // Execute a single read or query against Cloud Spanner.
          .executeQuery(Statement.of("SELECT SingerId, AlbumId, AlbumTitle FROM Albums"))) {
    while (resultSet.next()) {
      System.out.printf(
          "%d %d %s", resultSet.getLong(0), resultSet.getLong(1), resultSet.getString(2));
    }
  }
}

private static final HeaderClientInterceptor interceptor = new HeaderClientInterceptor();
private static final Metadata.Key<String> SERVER_TIMING_HEADER_KEY =
    Metadata.Key.of("server-timing", Metadata.ASCII_STRING_MARSHALLER);
// Every response from Cloud Spanner includes an additional header that contains the total
// elapsed time on GFE. The format is "server-timing: gfet4t7; dur=[GFE latency in ms]".
private static final Pattern SERVER_TIMING_HEADER_PATTERN = Pattern.compile(".*dur=(?<dur>\\d+)");

// ClientInterceptor to intercept the outgoing RPCs in order to retrieve the GFE header.
private static class HeaderClientInterceptor implements ClientInterceptor {

  @Override
  public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(MethodDescriptor<ReqT, RespT> method,
      CallOptions callOptions, Channel next) {
    return new SimpleForwardingClientCall<ReqT, RespT>(next.newCall(method, callOptions)) {

      @Override
      public void start(Listener<RespT> responseListener, Metadata headers) {
        super.start(new SimpleForwardingClientCallListener<RespT>(responseListener) {
          @Override
          public void onHeaders(Metadata metadata) {
            processHeader(metadata, method.getFullMethodName());
            super.onHeaders(metadata);
          }
        }, headers);
      }
    };
  }

  // Process header, extract duration value and record it using OpenCensus.
  private static void processHeader(Metadata metadata, String method) {
    if (metadata.get(SERVER_TIMING_HEADER_KEY) != null) {
      String serverTiming = metadata.get(SERVER_TIMING_HEADER_KEY);
      Matcher matcher = SERVER_TIMING_HEADER_PATTERN.matcher(serverTiming);
      if (matcher.find()) {
        long latency = Long.parseLong(matcher.group("dur"));

        TagContext tctx = tagger.emptyBuilder().put(key, TagValue.create(method)).build();
        try (Scope ss = tagger.withTagContext(tctx)) {
          STATS_RECORDER.newMeasureMap()
              .put(SPANNER_GFE_LATENCY, latency)
              .record();
        }
      }
    }
  }
}

Go


import (
	"context"
	"fmt"
	"io"
	"strconv"
	"strings"

	spanner "cloud.google.com/go/spanner/apiv1"
	gax "github.com/googleapis/gax-go/v2"
	sppb "google.golang.org/genproto/googleapis/spanner/v1"
	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"

	"contrib.go.opencensus.io/exporter/stackdriver"
	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/tag"
)

// OpenCensus Tag, Measure and View.
var (
	KeyMethod    = tag.MustNewKey("grpc_client_method")
	GFELatencyMs = stats.Int64("cloud.google.com/go/spanner/gfe_latency",
		"Latency between Google's network receives an RPC and reads back the first byte of the response", "ms")
	GFELatencyView = view.View{
		Name:        "cloud.google.com/go/spanner/gfe_latency",
		Measure:     GFELatencyMs,
		Description: "Latency between Google's network receives an RPC and reads back the first byte of the response",
		Aggregation: view.Distribution(0.0, 0.01, 0.05, 0.1, 0.3, 0.6, 0.8, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0, 13.0,
			16.0, 20.0, 25.0, 30.0, 40.0, 50.0, 65.0, 80.0, 100.0, 130.0, 160.0, 200.0, 250.0,
			300.0, 400.0, 500.0, 650.0, 800.0, 1000.0, 2000.0, 5000.0, 10000.0, 20000.0, 50000.0,
			100000.0),
		TagKeys: []tag.Key{KeyMethod}}
)

func queryWithGFELatency(w io.Writer, db string) error {
	projectID, _, _, err := parseDatabaseName(db)
	if err != nil {
		return err
	}

	ctx := context.Background()
	client, err := spanner.NewClient(ctx)
	if err != nil {
		return err
	}
	defer client.Close()

	// Register OpenCensus views.
	err = view.Register(&GFELatencyView)
	if err != nil {
		return err
	}

	// Create OpenCensus Stackdriver exporter.
	sd, err := stackdriver.NewExporter(stackdriver.Options{
		ProjectID: projectID,
	})
	if err != nil {
		return err
	}
	// It is imperative to invoke flush before your main function exits
	defer sd.Flush()

	// Start the metrics exporter
	sd.StartMetricsExporter()
	defer sd.StopMetricsExporter()

	// Create a session.
	req := &sppb.CreateSessionRequest{Database: db}
	session, err := client.CreateSession(ctx, req)
	if err != nil {
		return err
	}

	// Execute a SQL query and retrieve the GFE server-timing header in gRPC metadata.
	req2 := &sppb.ExecuteSqlRequest{
		Session: session.Name,
		Sql:     `SELECT SingerId, AlbumId, AlbumTitle FROM Albums`,
	}
	var md metadata.MD
	resultSet, err := client.ExecuteSql(ctx, req2, gax.WithGRPCOptions(grpc.Header(&md)))
	if err != nil {
		return err
	}
	for _, row := range resultSet.GetRows() {
		for _, value := range row.GetValues() {
			fmt.Fprintf(w, "%s ", value.GetStringValue())
		}
		fmt.Fprintf(w, "\n")
	}

	// The format is: "server-timing: gfet4t7; dur=[GFE latency in ms]"
	// Guard against a missing server-timing header before indexing into it.
	srvTimingValues := md.Get("server-timing")
	if len(srvTimingValues) == 0 {
		return fmt.Errorf("server-timing header is missing from the response")
	}
	srvTiming := srvTimingValues[0]
	gfeLtcy, err := strconv.Atoi(strings.TrimPrefix(srvTiming, "gfet4t7; dur="))
	if err != nil {
		return err
	}
	// Record GFE t4t7 latency with OpenCensus.
	ctx, err = tag.New(ctx, tag.Insert(KeyMethod, "ExecuteSql"))
	if err != nil {
		return err
	}
	stats.Record(ctx, GFELatencyMs.M(int64(gfeLtcy)))

	return nil
}

Visualize Google Front End latency

After retrieving the metrics, you can visualize Google Front End latency in Cloud Monitoring.

Here's an example of a graph that illustrates the Google Front End latency metric. The program creates an OpenCensus view called gfe_latency. This string becomes part of the name of the metric when it's exported to Cloud Monitoring.

Cloud Monitoring Google Front End latency

Figure 2. Google Front End latency graph in Cloud Monitoring

Cloud Spanner API request latency

Cloud Spanner architecture diagram for Cloud Spanner API request latency

This latency is the length of time (in seconds) between the first byte of the request that the Cloud Spanner API Front End receives and the last byte of the response that the Cloud Spanner API Front End sends. The latency includes the time needed to process the API request in both the Cloud Spanner back end and the API layer. However, it doesn't include network or reverse-proxy overhead between Cloud Spanner clients and servers.

The spanner.googleapis.com/api/request_latencies metric captures and exposes Cloud Spanner API Front End latency for Cloud Spanner requests.

Capture Cloud Spanner API request latency

By default, this latency is available as part of Cloud Monitoring metrics. You don't have to do anything to capture and export it.

Visualize Cloud Spanner API request latency

You can use the Metrics Explorer charting tool to visualize the graph for the spanner.googleapis.com/api/request_latencies metric in Cloud Monitoring.
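
For example, if you build the chart with a Monitoring filter, the filter might look similar to the following sketch; the resource labels are placeholders for your own project and instance:

metric.type="spanner.googleapis.com/api/request_latencies"
resource.type="spanner_instance"
resource.labels.instance_id="my-instance"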

Here's an example of a graph that illustrates the Cloud Spanner API request latency metric.

Cloud Monitoring API request latency

Figure 3. Cloud Spanner API request latency graph in Cloud Monitoring

Query latency

Cloud Spanner architecture diagram for query latency

This latency is the length of time (in milliseconds) that it takes to run SQL queries in the Cloud Spanner database.

Query latency is available for both the executeSql and the executeStreamingSql APIs.

If you set the QueryMode parameter to PROFILE, the response includes Cloud Spanner's ResultSetStats in addition to the results. ResultSetStats contains the query plan and execution statistics, including the elapsed time for running the query in the Cloud Spanner database.
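
For example, the query statistics in ResultSetStats contain entries similar to the following sketch; elapsed_time is the field that the code samples below read, and the other fields and all values shown are illustrative:

"queryStats": {
  "elapsed_time": "1.21 msecs",
  "cpu_time": "1.05 msecs",
  "rows_returned": "3",
  "rows_scanned": "3"
}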

Capture query latency

You can capture query latency for the following languages:

Java

private static final String MILLISECOND = "ms";
static final List<Double> RPC_MILLIS_BUCKET_BOUNDARIES =
    Collections.unmodifiableList(
        Arrays.asList(
            0.0, 0.01, 0.05, 0.1, 0.3, 0.6, 0.8, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0, 13.0,
            16.0, 20.0, 25.0, 30.0, 40.0, 50.0, 65.0, 80.0, 100.0, 130.0, 160.0, 200.0, 250.0,
            300.0, 400.0, 500.0, 650.0, 800.0, 1000.0, 2000.0, 5000.0, 10000.0, 20000.0, 50000.0,
            100000.0));
static final Aggregation AGGREGATION_WITH_MILLIS_HISTOGRAM =
    Distribution.create(BucketBoundaries.create(RPC_MILLIS_BUCKET_BOUNDARIES));

static MeasureDouble QUERY_STATS_ELAPSED =
    MeasureDouble.create(
        "cloud.google.com/java/spanner/query_stats_elapsed",
        "The execution of the query",
        MILLISECOND);

// Register the view. It is imperative that this step exists,
// otherwise recorded metrics will be dropped and never exported.
static View QUERY_STATS_LATENCY_VIEW = View
    .create(Name.create("cloud.google.com/java/spanner/query_stats_elapsed"),
        "The execution of the query",
        QUERY_STATS_ELAPSED,
        AGGREGATION_WITH_MILLIS_HISTOGRAM,
        Collections.emptyList());

static ViewManager manager = Stats.getViewManager();
private static final StatsRecorder STATS_RECORDER = Stats.getStatsRecorder();

static void captureQueryStatsMetric(DatabaseClient dbClient) {
  manager.registerView(QUERY_STATS_LATENCY_VIEW);

  // Enable OpenCensus exporters to export metrics to Cloud Monitoring.
  // Exporters use Application Default Credentials to authenticate.
  // See https://developers.google.com/identity/protocols/application-default-credentials
  // for more details.
  try {
    StackdriverStatsExporter.createAndRegister();
  } catch (IOException | IllegalStateException e) {
    System.out.println("Error during StackdriverStatsExporter");
  }

  try (ResultSet resultSet = dbClient.singleUse()
      .analyzeQuery(Statement.of("SELECT SingerId, AlbumId, AlbumTitle FROM Albums"),
          QueryAnalyzeMode.PROFILE)) {

    while (resultSet.next()) {
      System.out.printf(
          "%d %d %s", resultSet.getLong(0), resultSet.getLong(1), resultSet.getString(2));
    }
    Value value = resultSet.getStats().getQueryStats()
        .getFieldsOrDefault("elapsed_time", Value.newBuilder().setStringValue("0 msecs").build());
    double elapsedTime = Double.parseDouble(value.getStringValue().replaceAll(" msecs", ""));
    STATS_RECORDER.newMeasureMap()
        .put(QUERY_STATS_ELAPSED, elapsedTime)
        .record();
  }
}

Go


import (
	"context"
	"fmt"
	"io"
	"strconv"
	"strings"

	"cloud.google.com/go/spanner"
	"google.golang.org/api/iterator"

	"contrib.go.opencensus.io/exporter/stackdriver"
	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/tag"
)

// OpenCensus Tag, Measure and View.
var (
	QueryStatsElapsed = stats.Float64("cloud.google.com/go/spanner/query_stats_elapsed",
		"The execution of the query", "ms")
	QueryStatsLatencyView = view.View{
		Name:        "cloud.google.com/go/spanner/query_stats_elapsed",
		Measure:     QueryStatsElapsed,
		Description: "The execution of the query",
		Aggregation: view.Distribution(0.0, 0.01, 0.05, 0.1, 0.3, 0.6, 0.8, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0, 13.0,
			16.0, 20.0, 25.0, 30.0, 40.0, 50.0, 65.0, 80.0, 100.0, 130.0, 160.0, 200.0, 250.0,
			300.0, 400.0, 500.0, 650.0, 800.0, 1000.0, 2000.0, 5000.0, 10000.0, 20000.0, 50000.0,
			100000.0),
		TagKeys: []tag.Key{}}
)

func queryWithQueryStats(w io.Writer, db string) error {
	projectID, _, _, err := parseDatabaseName(db)
	if err != nil {
		return err
	}

	ctx := context.Background()
	client, err := spanner.NewClient(ctx, db)
	if err != nil {
		return err
	}
	defer client.Close()

	// Register OpenCensus views.
	err = view.Register(&QueryStatsLatencyView)
	if err != nil {
		return err
	}

	// Create OpenCensus Stackdriver exporter.
	sd, err := stackdriver.NewExporter(stackdriver.Options{
		ProjectID: projectID,
	})
	if err != nil {
		return err
	}
	// It is imperative to invoke flush before your main function exits
	defer sd.Flush()

	// Start the metrics exporter
	sd.StartMetricsExporter()
	defer sd.StopMetricsExporter()

	// Execute a SQL query and get the query stats.
	stmt := spanner.Statement{SQL: `SELECT SingerId, AlbumId, AlbumTitle FROM Albums`}
	iter := client.Single().QueryWithStats(ctx, stmt)
	defer iter.Stop()
	for {
		row, err := iter.Next()
		if err == iterator.Done {
			// Record query execution time with OpenCensus.
			elapsedTime := iter.QueryStats["elapsed_time"].(string)
			elapsedTimeMs, err := strconv.ParseFloat(strings.TrimSuffix(elapsedTime, " msecs"), 64)
			if err != nil {
				return err
			}
			stats.Record(ctx, QueryStatsElapsed.M(elapsedTimeMs))
			return nil
		}
		if err != nil {
			return err
		}
		var singerID, albumID int64
		var albumTitle string
		if err := row.Columns(&singerID, &albumID, &albumTitle); err != nil {
			return err
		}
		fmt.Fprintf(w, "%d %d %s\n", singerID, albumID, albumTitle)
	}
}

Visualize query latency

After retrieving the metrics, you can visualize query latency in Cloud Monitoring.

Here's an example of a graph that illustrates the query latency metric. The program creates an OpenCensus view called query_stats_elapsed. This string becomes part of the name of the metric when it's exported to Cloud Monitoring.

Cloud Monitoring query latency

Figure 4. Query latency graph in Cloud Monitoring

Troubleshoot latency issues

By using Cloud Monitoring to capture and visualize the client round-trip, Google Front End, Cloud Spanner API request, and query latencies, you can compare them side by side to identify the source of the latency.

The following examples describe some common latency issues and their likely causes:

  • You have high client round-trip latency, but low Google Front End latency and low Cloud Spanner API request latency. There might be an issue in the application code, or a networking issue between the client and the regional Google Front End. If your application has a performance issue that causes some code paths to be slow, then the client round-trip latency for each API request might increase. This latency can also be caused by issues in the computing infrastructure on the client side (for example, VM, CPU, or memory utilization, connections, or file descriptors).

  • You have high client round-trip latency and high Cloud Spanner API request latency. Possible causes include the following:
      1. Some of your queries scan and fetch a large amount of data, which causes higher latencies. Analyze the query profile so that you can optimize your queries.
      2. If the query-per-second rate is low, the latency might come from outliers. Tune the deadline and retry settings to reduce the impact of the outliers.

  • You have high Google Front End latency, but low Cloud Spanner API request latency. Possible causes include the following:
      1. Accessing a database from another region can lead to high Google Front End latency and lower Cloud Spanner API request latency. For example, traffic from a client in the us-east1 region to an instance in the us-central1 region has a high Google Front End latency but a lower Cloud Spanner API request latency.
      2. Check the Google Cloud Status Dashboard to see whether there are any ongoing networking issues in your region. If there aren't any, open a support case and include this information so that support engineers can help troubleshoot the Google Front End.

  • You have high Cloud Spanner API request latency, but low query latency. Check the Google Cloud Status Dashboard to see whether there are any ongoing networking issues in your region. If there aren't any, open a support case and include this information so that support engineers can help troubleshoot the Cloud Spanner API Front End.

  • You have high query latency. To view metrics for your query and to troubleshoot query latency issues, see Query statistics. For better performance, optimize your schema, indexing, and queries. For more information, see SQL best practices and Troubleshooting performance regressions.

What's next