Concurrency

In Knative serving, each revision is automatically scaled to the number of container instances needed to handle all incoming requests.

When more container instances are processing requests, more CPU and memory are used, resulting in higher costs. When new container instances need to be started, requests might take longer to process, decreasing the performance of your service.

To give you more control, Knative serving provides a concurrency setting that specifies the maximum number of requests that can be processed simultaneously by a given container instance.

Concurrency values

By default, Knative serving container instances can receive many requests at the same time (up to a maximum of 80). In comparison, Functions-as-a-Service (FaaS) solutions such as Cloud Functions have a fixed concurrency of 1.

You should generally use the default concurrency value, but you can lower the maximum concurrency if needed. For example, if your code cannot process parallel requests, set concurrency to 1.

The specified concurrency value is a maximum; Knative serving might send fewer requests to a given container instance if the instance's CPU is already highly utilized.

The following diagram shows how the concurrency setting affects the number of container instances needed to handle incoming concurrent requests:

Concurrency diagram
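
As a rough rule of thumb, the number of container instances needed is the number of simultaneous requests divided by the concurrency value, rounded up; actual autoscaling behavior also depends on factors such as CPU utilization and startup time. The following sketch only illustrates this arithmetic; it is an illustrative model, not Knative code, and the function name and values are made up for the example.

    package main

    import "fmt"

    // instancesNeeded estimates how many container instances are required to
    // serve a given number of simultaneous requests when each instance accepts
    // at most `concurrency` requests at a time. This is a rough model only;
    // the real autoscaler also accounts for CPU utilization and startup time.
    func instancesNeeded(simultaneousRequests, concurrency int) int {
        return (simultaneousRequests + concurrency - 1) / concurrency // ceiling division
    }

    func main() {
        fmt.Println(instancesNeeded(400, 80)) // 5 instances with the default maximum of 80
        fmt.Println(instancesNeeded(400, 1))  // 400 instances with concurrency set to 1
    }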

When to limit concurrency to one request at a time

You can limit concurrency so that only one request at a time will be sent to each running container instance. You should consider doing this in cases where:

  • Each request uses most of the available CPU or memory.
  • Your container image is not designed to handle multiple requests at the same time, for example, if your container relies on global state that two requests cannot share (see the sketch after this list).
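
For example, the following handler keeps per-request data in a package-level variable, so two requests processed concurrently by the same container instance could overwrite each other's state. This is a hypothetical sketch, not code from any Knative sample; a container like this should run with concurrency set to 1, or be rewritten to avoid shared mutable state.

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    // currentUser is package-level (global) state. If two requests are handled
    // at the same time by this container instance, the second request can
    // overwrite the value before the first request has finished using it.
    var currentUser string

    func handler(w http.ResponseWriter, r *http.Request) {
        currentUser = r.URL.Query().Get("user")   // request A stores its value...
        // ...a concurrent request B could overwrite currentUser here...
        fmt.Fprintf(w, "hello %s\n", currentUser) // ...and request A replies with B's value
    }

    func main() {
        http.HandleFunc("/", handler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }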

Note that a concurrency of 1 is likely to negatively affect scaling performance, because many container instances will have to start up to handle a spike in incoming requests.

Case study

The following metrics show a use case where 400 clients make 3 requests per second to a Knative serving service that is set to a maximum concurrency of 1. The green top line shows the requests over time, and the bottom blue line shows the number of container instances started to handle them.

Concurrency set to one

The following metrics show 400 clients making 3 requests per second to a Knative serving service that is set to a maximum concurrency of 80. The green top line shows the requests over time, and the bottom blue line shows the number of container instances started to handle them. Notice that far fewer instances are needed to handle the same request volume.

Concurrency set to 80
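
The difference in instance counts follows from the arithmetic of the load: 400 clients each sending 3 requests per second produce about 1,200 requests per second, and the number of requests in flight at any moment also depends on how long each request takes. The sketch below works through this under an assumed request latency of 100 ms; that latency is an assumption for illustration and is not taken from the metrics above.

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        const (
            clients        = 400
            requestsPerSec = 3.0 // per client
            latencySeconds = 0.1 // assumed 100 ms per request (not from the metrics above)
        )

        totalRPS := clients * requestsPerSec  // about 1200 requests per second in total
        inFlight := totalRPS * latencySeconds // about 120 requests in flight at any moment

        for _, concurrency := range []float64{1, 80} {
            instances := math.Ceil(inFlight / concurrency)
            fmt.Printf("concurrency %.0f: about %.0f instances\n", concurrency, instances)
        }
    }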

What's next

  • To manage the concurrency of your Knative serving services, see Setting concurrency.
  • To optimize your concurrency setting, see development tips for tuning concurrency.