# Maximum concurrent requests for services

Last updated (UTC): 2025-09-04.

For Cloud Run services, each [revision](/run/docs/resource-model#revisions)
is automatically scaled to the number of instances needed to handle
all incoming requests.

When more instances are processing requests, more CPU and memory are
used, resulting in higher costs.

To give you more control, Cloud Run provides a *maximum concurrent
requests per instance* setting that specifies the maximum number of requests
that can be processed simultaneously by a given instance.

Maximum concurrent requests per instance
----------------------------------------

You can configure the maximum concurrent requests per instance.
By default, each Cloud Run instance can receive up to 80 requests at the
same time; you can increase this to a maximum of 1000.

Although you should use the default value, if needed you can
[lower the maximum concurrency](/run/docs/configuring/concurrency). For example,
if your code cannot process parallel requests,
[set concurrency to `1`](#concurrency-1).

The specified concurrency value is a maximum limit. If the CPU of the instance
is already highly utilized, Cloud Run might not send as many requests to a
given instance. In these cases, the instance might show that the maximum
concurrency is not being utilized.
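The setting can be applied when deploying or updating a service. A minimal sketch using the gcloud CLI (the service name, image, and region below are placeholder values):

```shell
# Deploy with an explicit per-instance concurrency limit
# (my-service, the sample image, and us-central1 are placeholders).
gcloud run deploy my-service \
  --image us-docker.pkg.dev/cloudrun/container/hello \
  --region us-central1 \
  --concurrency 80

# Lower the limit on an existing service without redeploying the image.
gcloud run services update my-service \
  --region us-central1 \
  --concurrency 1
```

Whether the configured maximum is actually reached on a given instance still depends on its CPU utilization, as described above.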
For example, if the high CPU usage is sustained, the number of instances
might scale up instead.

The following diagram shows how the maximum concurrent requests per instance
setting affects the number of instances needed to handle incoming
concurrent requests:

Tuning concurrency for autoscaling and resource utilization
-----------------------------------------------------------

Adjusting the maximum concurrency per instance significantly influences how
your service scales and utilizes resources.

- **Lower concurrency**: Forces Cloud Run to use more instances for the
  same request volume, because each instance handles fewer requests. This can
  improve responsiveness for applications that are not optimized for high
  internal parallelism, or for applications you want to scale more quickly
  based on request load.
- **Higher concurrency**: Allows each instance to handle more requests,
  potentially leading to fewer active instances and reducing cost. This is
  suitable for applications that are efficient at parallel I/O-bound tasks,
  or for applications that can truly utilize multiple vCPUs for concurrent
  request processing.

Start with the default concurrency (80), monitor the performance and
utilization of your application closely, and adjust as needed.

### Concurrency with multi-vCPU instances

Tuning concurrency is especially critical if your service uses multiple vCPUs
but your application is single-threaded or effectively single-threaded
(CPU-bound).

- **vCPU hotspots**: A single-threaded application on a multi-vCPU instance
  may max out one vCPU while the others idle. The Cloud Run CPU autoscaler
  measures average CPU utilization across all vCPUs, so the average can remain
  deceptively low in this scenario, preventing effective CPU-based scaling.
- **Using concurrency to drive scaling**: If CPU-based autoscaling is
  ineffective due to vCPU hotspots, lowering the maximum concurrency becomes
  an important tool.
  vCPU hotspots often occur when a multi-vCPU instance is chosen for a
  single-threaded application because of high memory needs. Lowering the
  maximum concurrency instead forces scaling based on request throughput,
  which ensures that more instances are started to handle the load, reducing
  per-instance queuing and latency.

When to limit maximum concurrency to one request at a time
----------------------------------------------------------

You can limit concurrency so that only one request at a time is sent to
each running instance. Consider doing this in cases where:

- Each request uses most of the available CPU or memory.
- Your container image is not designed for handling multiple requests at the
  same time, for example, if your container relies on global state that two
  requests cannot share.

Note that a concurrency of `1` is likely to negatively affect scaling
performance, because many instances have to start up to handle a spike in
incoming requests. See
[Throughput versus latency versus cost tradeoffs](/run/docs/tips/general#throughput-latency-cost-tradeoff)
for more considerations.

Case study
----------

The following metrics show a use case where 400 clients are making 3 requests
per second to a Cloud Run service that is set to a maximum of 1 concurrent
request per instance. The green top line shows the requests over time; the
blue bottom line shows the number of instances started to handle the requests.

The following metrics show 400 clients making 3 requests per second to a
Cloud Run service that is set to a maximum of 80 concurrent requests per
instance. The green top line shows the requests over time; the blue bottom
line shows the number of instances started to handle the requests. Notice
that far fewer instances are needed to handle the same request volume.

Concurrency for source code deployments
---------------------------------------

When concurrency is enabled, Cloud Run does not provide isolation between
concurrent requests processed by the same instance. In such cases, you must
ensure that your code is safe to execute concurrently. If needed, you can
change the concurrency by
[setting a different concurrency value](/run/docs/configuring/concurrency). We
recommend starting with a lower concurrency, such as 8, and then increasing it
gradually. Starting with a concurrency that is too high can lead to unintended
behavior due to resource constraints (such as memory or CPU).

Language runtimes can also impact concurrency. Some of these language-specific
impacts are shown in the following list:

- Node.js is inherently single-threaded. To take advantage of concurrency, use
  JavaScript's asynchronous code style, which is idiomatic in Node.js. See
  [Asynchronous flow control](https://nodejs.org/en/learn/asynchronous-work/asynchronous-flow-control)
  in the official Node.js documentation for details.

- For Python 3.8 and later, supporting high concurrency per instance requires
  enough threads to handle the concurrency. We recommend that you
  [set a runtime environment variable](/run/docs/configuring/services/environment-variables#setting)
  so that the threads value is equal to the concurrency value, for example:
  `THREADS=8`.

What's next
-----------

To manage the maximum concurrent requests per instance of your
Cloud Run services, see
[Setting maximum concurrent requests per instance](/run/docs/configuring/concurrency).

To optimize your maximum concurrent requests per instance setting, see
[development tips for tuning concurrency](/run/docs/tips#tuning-concurrency).
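As a final illustration of the thread-safety concern raised above: when one instance serves several requests concurrently, unsynchronized shared state can lose updates. A minimal standalone Python sketch (not Cloud Run-specific; `handle_request` is a hypothetical handler) showing how a lock protects global state:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Global mutable state shared by all "requests" served by this instance.
request_count = 0
count_lock = threading.Lock()

def handle_request(_):
    """A hypothetical request handler that updates shared state.

    Without the lock, the read-modify-write on request_count could
    interleave between threads and silently lose increments.
    """
    global request_count
    with count_lock:
        request_count += 1

# Simulate 1000 requests handled by 8 concurrent workers,
# as with a concurrency setting of 8.
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(handle_request, range(1000)))

print(request_count)  # 1000: every update survives because of the lock
```

If your handlers cannot be made safe in this way, that is exactly the case where setting concurrency to `1` is appropriate.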