[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-09-04 UTC。"],[],[],null,["# Concurrency\n\nIn Knative serving, each [revision](/kubernetes-engine/enterprise/knative-serving/docs/resource-model#revisions)\nis automatically scaled to the number of container instances needed to handle\nall incoming requests.\n\nWhen more container instances are processing requests, more CPU and memory will\nbe used, resulting in higher costs.\nWhen new container instances need to be started, requests might take more time\nto be processed, decreasing the performances of your service.\n\nTo give you more control, Knative serving provides a *concurrency* setting\nthat specifies the maximum number of requests that can be processed\nsimultaneously by a given container instance.\n\nConcurrency values\n------------------\n\nBy default Knative serving container instances can receive many requests at\nthe same time (up to a maximum of 80).\nNote that in comparison, Functions-as-a-Service (FaaS) solutions like\nCloud Run functions have a fixed concurrency of 1.\n\nAlthough you should use the default concurrency value, if needed you can\n[lower the maximum concurrency](/kubernetes-engine/enterprise/knative-serving/docs/configuring/concurrency). For example,\nif your code cannot process parallel requests,\n[set concurrency to `1`](#concurrency-1).\n\nThe specified concurrency value is a maximum and Knative serving might not\nsend as many requests to a given container instance if the CPU of the instance\nis already highly utilized.\n\nThe following diagram shows how the concurrency setting affects the number of\ncontainer instances needed to handle incoming concurrent requests:\n\nWhen to limit concurrency to one request at a time\n--------------------------------------------------\n\nYou can limit concurrency so that only one request at a time will be sent to\neach running container instance. You should consider doing this in cases where:\n\n- Each request uses most of the available CPU or memory.\n- Your container image is not designed for handling multiple requests at the same time, for example, if your container relies on global state that two requests cannot share.\n\nNote that a concurrency of `1` is likely to negatively affect scaling\nperformance, because many container instances will have to start up to handle a\nspike in incoming requests.\n\nCase study\n----------\n\nThe following metrics show a use case where 400 clients are making 3 requests\nper second to a Knative serving service that is set to a maximum concurrency\nof 1. The green top line shows the requests over time, the bottom blue line\nshows the number of container instances started to handle the requests.\n\nThe following metrics show 400 clients making 3 requests per second to a\nKnative serving service that is set to a maximum concurrency of 80. The green\ntop line shows the requests over time, the bottom blue line shows the number of\ncontainer instances started to handle the requests. 
Case study
----------

The following metrics show a use case where 400 clients are making 3 requests per second to a Knative serving service that is set to a maximum concurrency of 1. The green top line shows the requests over time, and the bottom blue line shows the number of container instances started to handle the requests.

The following metrics show 400 clients making 3 requests per second to a Knative serving service that is set to a maximum concurrency of 80. The green top line shows the requests over time, and the bottom blue line shows the number of container instances started to handle the requests. Notice that far fewer instances are needed to handle the same request volume.

What's next
-----------

To manage the concurrency of your Knative serving services, see [Setting concurrency](/kubernetes-engine/enterprise/knative-serving/docs/configuring/concurrency).

To optimize your concurrency setting, see [development tips for tuning concurrency](/kubernetes-engine/enterprise/knative-serving/docs/tips/general#tuning-concurrency).