Incoming requests to Cloud Run services
automatically generate traces that you can view in Cloud Trace.
You can use these traces to identify sources of latency in your
implementation without adding any instrumentation to your service yourself.
The standard W3C trace context propagation header
traceparent is automatically populated for Cloud Run requests.
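As a sketch of what that header carries, here is a minimal parser for the W3C traceparent format. The field names follow the W3C Trace Context specification; the helper function itself is hypothetical, not part of any Cloud Run API.

```python
# Hypothetical helper: split a W3C traceparent header into its four fields.
# Format: version-trace_id-parent_id-trace_flags (lowercase hex, dash-separated).
def parse_traceparent(header: str) -> dict:
    version, trace_id, parent_id, flags = header.split("-")
    return {
        "version": version,      # 1 byte, currently "00"
        "trace_id": trace_id,    # 16-byte trace identifier
        "parent_id": parent_id,  # 8-byte span id of the caller
        "sampled": int(flags, 16) & 0x01 == 1,  # bit 0 of trace-flags
    }

ctx = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
```

The `sampled` flag is what tells downstream components whether this request's trace is being recorded.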
However, if you add additional instrumentation, you can also use Cloud Trace to measure the time it takes for a request to propagate through each layer of your implementation: for example, the time to complete a database query, receive results from an API request, or run some complex business logic. Each of these layer-specific time measurements is a "span".

You can view the traces in Cloud Trace as waterfall graphs that reflect the latency values.
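To make the idea of a span concrete, the sketch below times one named layer of work using only the standard library. This only illustrates the timing data a span captures; real instrumentation would use a tracing library such as OpenTelemetry rather than this hypothetical `span` helper.

```python
import time
from contextlib import contextmanager

# Illustrative stand-in for a custom span: record how long a named
# layer of work takes, which is the core data a real span carries.
@contextmanager
def span(name: str, collected: list):
    start = time.perf_counter()
    try:
        yield
    finally:
        collected.append((name, time.perf_counter() - start))

spans = []
with span("database-query", spans):
    time.sleep(0.01)  # stand-in for a real database query
```

Each `(name, duration)` pair here corresponds to one bar in the waterfall graph that Cloud Trace renders for a traced request.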
Automatically generated traces in Cloud Run, whether sampled or forced, do not result in billing charges. However, if you use Cloud Trace libraries and add your own spans by correlating them with Cloud Run-provided spans, you are charged by Cloud Trace.
Trace sampling rate
Cloud Run doesn't sample a trace for every request. Requests are sampled at a maximum rate of 0.1 requests per second for each container instance. You can also force a particular request to be traced.
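One way to request tracing for a specific call is to send a trace context header with the sampled flag set. The sketch below builds such a traceparent value with the standard library; the helper name is illustrative, and whether a given service honors the flag is an assumption you should verify for your setup.

```python
import secrets

def make_sampled_traceparent() -> str:
    """Build a W3C traceparent header with the sampled flag (01) set,
    asking the receiving service to record this request's trace."""
    trace_id = secrets.token_hex(16)  # random 16-byte trace id
    span_id = secrets.token_hex(8)    # random 8-byte parent span id
    return f"00-{trace_id}-{span_id}-01"

header = make_sampled_traceparent()
# Example use on a request to your service:
#   curl -H "traceparent: <header>" https://SERVICE_URL
```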
Cloud Run does not support configuring this sampling rate.
When to add instrumentation
Traces are automatically generated without any instrumentation required in your service. However, in some cases, you may want to add instrumentation code to your service to take full advantage of the Cloud Trace feature. For example, you need to add instrumentation if you want to:
- Create custom trace spans, for example, to get timing data for how long it takes your service to receive results back from the Cloud Translation API.
- Propagate trace context so Cloud Trace shows the request flow across multiple services as a single request.
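The second item above, trace context propagation, amounts to forwarding the incoming trace id on outbound calls while substituting your own span id as the new parent, so that every hop lands in the same trace. The sketch below shows the header transformation only; the `propagate` helper is hypothetical, and a real service would let a tracing library such as OpenTelemetry inject the header for it.

```python
import secrets

def propagate(incoming_traceparent: str) -> str:
    """Keep the trace id from the incoming request, but replace the
    parent span id with this service's own span id before calling out."""
    version, trace_id, _old_parent, flags = incoming_traceparent.split("-")
    new_span_id = secrets.token_hex(8)  # this service's span becomes the parent
    return f"{version}-{trace_id}-{new_span_id}-{flags}"

outbound = propagate("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
```

Because the trace id is preserved end to end, Cloud Trace can stitch the spans from each service into a single request view.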
To learn more, refer to the documentation on viewing traces.