[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-04。"],[],[],null,["# Calculate Provisioned Throughput requirements\n\nThis section explains the concepts of generative AI scale unit (GSU) and\nburndown rates. Provisioned Throughput is calculated and priced\nusing generative AI scale units (GSUs) and burndown rates.\n\nGSU and burndown rate\n---------------------\n\nA *Generative AI Scale Unit (GSU)* is a measure of throughput for your prompts\nand responses. This amount specifies how much throughput to provision a model\nwith.\n\nA *burndown rate* is a ratio that converts the input and output units (such as\ntokens, characters, or images) to input tokens per second, input characters per\nsecond, or input images per second, respectively. This ratio represents the\nthroughput and is used to produce a standard unit across models.\n\nDifferent models use different amounts of throughput. For information about the\nminimum GSU purchase amount and increments for each model, see [Supported models\nand burndown rates](/vertex-ai/generative-ai/docs/supported-models) in this document.\n\nThis equation demonstrates how throughput is calculated: \n\n inputs_per_query = inputs_across_modalities_converted_using_burndown_rates\n outputs_per_query = outputs_across_modalities_converted_using_burndown_rates\n\n throughput_per_second = (inputs_per_query + outputs_per_query) * queries_per_second\n\nThe calculated throughput per second determines how many GSUs that you need for\nyour use case.\n\nImportant Considerations\n------------------------\n\nTo help you plan for your Provisioned Throughput needs, review the\nfollowing important considerations:\n\n- **Requests are prioritized.**\n\n Provisioned Throughput customers are prioritized and serviced\n first before on-demand requests.\n- **Throughput doesn't accumulate.**\n\n Unused throughput doesn't accumulate or carry over to the next\n month.\n- **Provisioned Throughput is measured in tokens per second, characters per second, or images per second.**\n\n Provisioned Throughput isn't measured solely based on queries per minute\n (QPM). It's measured based on the query size for your use case, the response\n size, and the QPM.\n- **Provisioned Throughput is specific to a project, region, model, and version.**\n\n Provisioned Throughput is assigned to a specific\n project-region-model-version combination. 
Important considerations
------------------------

To help you plan for your Provisioned Throughput needs, review the
following important considerations:

- **Requests are prioritized.**

  Provisioned Throughput customers are prioritized and served before
  on-demand requests.

- **Throughput doesn't accumulate.**

  Unused throughput doesn't accumulate or carry over to the next month.

- **Provisioned Throughput is measured in tokens per second, characters per second, or images per second.**

  Provisioned Throughput isn't measured solely based on queries per minute
  (QPM). It's measured based on the query size for your use case, the
  response size, and the QPM.

- **Provisioned Throughput is specific to a project, region, model, and version.**

  Provisioned Throughput is assigned to a specific
  project-region-model-version combination. The same model called from a
  different region won't count against your Provisioned Throughput quota
  and won't be prioritized over on-demand requests.

### Context caching

| **Preview**
|
| This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section
| of the [Service Specific Terms](/terms/service-terms#1).
| Pre-GA features are available "as is" and might have limited support.
| For more information, see the
| [launch stage descriptions](/products#product-launch-stages).

Provisioned Throughput supports default
[context caching](/vertex-ai/generative-ai/docs/context-cache/context-cache-overview).
However, Provisioned Throughput doesn't support Vertex AI API caching
requests that retrieve information about a context cache.

By default, Google automatically caches inputs to reduce cost and latency.
For the Gemini 2.5 Flash and Gemini 2.5 Pro models, cached tokens are
charged at a [75% discount](/vertex-ai/generative-ai/pricing) relative to
standard input tokens when a cache hit occurs. For
Provisioned Throughput, the discount is applied through a reduced burndown
rate.

For example, Gemini 2.5 Pro has the following burndown rates for input text
tokens and cached tokens:

- 1 input text token = 1 token

- 1 input cached text token = 0.25 tokens

Sending 1,000 input tokens to this model burns down your
Provisioned Throughput by 1,000 input tokens per second. However, sending
1,000 cached tokens to Gemini 2.5 Pro burns down your
Provisioned Throughput by only 250 tokens per second. Because cache hits
consume less of your provisioned quota, they can yield higher effective
throughput than similar queries whose tokens aren't cached and don't
receive the discount.

To view the burndown rates for models supported in Provisioned Throughput,
see [Supported models and burndown rates](/vertex-ai/generative-ai/docs/supported-models).
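To make the discount concrete, here's a minimal Python sketch that applies
the Gemini 2.5 Pro rates quoted above. The `input_burndown` helper is
hypothetical, not part of any Vertex AI SDK.

    INPUT_TEXT_RATE = 1.0    # 1 input text token = 1 token
    CACHED_TEXT_RATE = 0.25  # 1 input cached text token = 0.25 tokens

    def input_burndown(fresh_tokens: int, cached_tokens: int) -> float:
        """Burndown-adjusted input tokens for one second of traffic."""
        return fresh_tokens * INPUT_TEXT_RATE + cached_tokens * CACHED_TEXT_RATE

    print(input_burndown(fresh_tokens=1_000, cached_tokens=0))  # 1000.0
    print(input_burndown(fresh_tokens=0, cached_tokens=1_000))  # 250.0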
Understand the burndown for Live API
------------------------------------

| **Request access:** For information about access to this release, see the [access request page](https://docs.google.com/forms/d/e/1FAIpQLScxBeD4UJ8GbUfX4SXjj5a1XJ1K7Urwvb0iSGdGccNcFRBrpQ/viewform).

Provisioned Throughput supports Gemini 2.5 Flash with the Live API. To
understand how to calculate the burndown while using the Live API, see
[Calculate throughput for Live API](/vertex-ai/generative-ai/docs/provisioned-throughput/live-api#calculate).

For more information about using Provisioned Throughput
for Gemini 2.5 Flash with the Live API, see
[Provisioned Throughput for Live API](/vertex-ai/generative-ai/docs/provisioned-throughput/live-api).

Example of estimating your Provisioned Throughput needs
-------------------------------------------------------

To estimate your Provisioned Throughput needs, use the
[estimation tool in the Google Cloud console](/vertex-ai/generative-ai/docs/purchase-provisioned-throughput#estimate-provisioned-throughput).
The following example illustrates the process of estimating the amount of
Provisioned Throughput for your model. The region isn't considered
in the estimation calculations.

This table provides the burndown rates for `gemini-2.0-flash` that you can
use to follow the example:

| Unit | Burndown rate |
|---|---|
| 1 input text token | 1 token |
| 1 input audio token | 7 tokens |
| 1 output text token | 4 tokens |
| Per-second throughput per GSU | 3,360 tokens |

1. Gather your requirements.

   1. In this example, your requirement is to support 10 queries per second
      (QPS), where each query has an input of 1,000 text tokens and 500
      audio tokens and returns an output of 300 text tokens, using
      `gemini-2.0-flash`.

      This step means that you understand your use case, because you have
      identified your model, the QPS, and the size of your inputs and
      outputs.
   2. To calculate your throughput, refer to the
      [burndown rates](/vertex-ai/generative-ai/docs/supported-models#google-models)
      for your selected model.

2. Calculate your throughput.

   1. Multiply your inputs by the burndown rates to arrive at total input
      tokens:

      1,000 × (1 token per input text token) + 500 × (7 tokens per input
      audio token) = 4,500 burndown-adjusted input tokens per query.
   2. Multiply your outputs by the burndown rates to arrive at total output
      tokens:

      300 × (4 tokens per output text token) = 1,200 burndown-adjusted
      output tokens per query.
   3. Add your totals together:

      4,500 burndown-adjusted input tokens + 1,200 burndown-adjusted output
      tokens = 5,700 total tokens per query.
   4. Multiply the total number of tokens by the QPS to arrive at total
      throughput per second:

      5,700 total tokens per query × 10 QPS = 57,000 total tokens per
      second.

3. Calculate your GSUs.

   1. Divide the total tokens per second by the per-second throughput per
      GSU from the burndown table:

      57,000 total tokens per second ÷ 3,360 per-second throughput per GSU
      = 16.96 GSUs.
   2. The minimum GSU purchase increment for `gemini-2.0-flash` is 1, so
      you need 17 GSUs to cover your workload (see the sketch at the end of
      this page, which reproduces this arithmetic).

What's next
-----------

- [Purchase Provisioned Throughput](/vertex-ai/generative-ai/docs/purchase-provisioned-throughput)
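To sanity-check the estimation example on this page, the following Python
sketch reproduces its arithmetic. It hard-codes the example's burndown rates
and isn't a substitute for the estimation tool in the Google Cloud console.

    import math

    # Burndown rates and per-GSU throughput from the gemini-2.0-flash
    # example above; this sketch only reproduces the arithmetic.
    INPUT_TEXT_RATE = 1    # tokens per input text token
    INPUT_AUDIO_RATE = 7   # tokens per input audio token
    OUTPUT_TEXT_RATE = 4   # tokens per output text token
    TOKENS_PER_SECOND_PER_GSU = 3_360
    QPS = 10

    inputs_per_query = 1_000 * INPUT_TEXT_RATE + 500 * INPUT_AUDIO_RATE   # 4,500
    outputs_per_query = 300 * OUTPUT_TEXT_RATE                            # 1,200
    throughput = (inputs_per_query + outputs_per_query) * QPS             # 57,000

    gsus = throughput / TOKENS_PER_SECOND_PER_GSU                         # ~16.96
    purchase = math.ceil(gsus)  # minimum purchase increment is 1 GSU
    print(f"{gsus:.2f} GSUs -> purchase {purchase} GSUs")
    # 16.96 GSUs -> purchase 17 GSUs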