This section explains the concepts of generative AI scale units (GSUs) and burndown rates. Provisioned Throughput is calculated and priced using GSUs and burndown rates.

A generative AI scale unit (GSU) is a measure of throughput for your prompts and responses. This amount specifies how much throughput to provision a model with.

A burndown rate is a ratio that converts input and output units (such as tokens, characters, or images) to input tokens per second, input characters per second, or input images per second, respectively. This ratio represents the throughput and produces a standard unit across models.

Different models use different amounts of throughput. For information about the minimum GSU purchase amount and increments for each model, see Supported models and burndown rates in this document.

This equation demonstrates how throughput is calculated:

inputs_per_query = inputs_across_modalities_converted_using_burndown_rates
outputs_per_query = outputs_across_modalities_converted_using_burndown_rates
throughput_per_second = (inputs_per_query + outputs_per_query) * queries_per_second

The calculated throughput per second determines how many GSUs you need for your use case.
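The equation can be read as a small function. The following is a minimal sketch in Python; the unit names and the rates argument are illustrative placeholders, not part of any Google Cloud API, and real burndown rates come from Supported models and burndown rates in this document.

```python
# A minimal sketch of the throughput equation above. The unit names and
# the rates mapping are illustrative; real burndown rates come from
# "Supported models and burndown rates".

def throughput_per_second(inputs, outputs, queries_per_second, rates):
    """Burndown-adjusted tokens per second for a workload.

    inputs and outputs map a unit name (such as "input_audio_token") to
    the count of that unit per query; rates maps the same unit names to
    their burndown rates.
    """
    inputs_per_query = sum(count * rates[unit] for unit, count in inputs.items())
    outputs_per_query = sum(count * rates[unit] for unit, count in outputs.items())
    return (inputs_per_query + outputs_per_query) * queries_per_second
```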
Important considerations

To help you plan for your Provisioned Throughput needs, review the following important considerations:

- Requests are prioritized. Provisioned Throughput customers are prioritized and serviced before on-demand requests.
- Throughput doesn't accumulate. Unused throughput doesn't accumulate or carry over to the next month.
- Provisioned Throughput is measured in tokens per second, characters per second, or images per second. Provisioned Throughput isn't measured solely based on queries per minute (QPM). It's measured based on the query size for your use case, the response size, and the QPM.
- Provisioned Throughput is specific to a project, region, model, and version. Provisioned Throughput is assigned to a specific project-region-model-version combination. The same model called from a different region won't count against your Provisioned Throughput quota and won't be prioritized over on-demand requests.

Context caching

Provisioned Throughput supports default
context caching.
However, Provisioned Throughput doesn't support caching requests
using the Vertex AI API that include retrieving information about a context
cache.

By default, Google automatically caches inputs to reduce cost and latency.
For the Gemini 2.5 Flash and Gemini 2.5 Pro models, cached
tokens are charged at a 75% discount
relative to standard input tokens when a cache hit occurs. For
Provisioned Throughput, the discount is applied through a
reduced burndown rate. For example, Gemini 2.5 Pro has the following burndown rates for input text tokens and cached tokens:

- 1 input text token = 1 token
- 1 input cached text token = 0.25 tokens

Sending 1,000 input tokens to this model results in a burndown of your Provisioned Throughput by 1,000 tokens per second. However, if you send 1,000 cached tokens to Gemini 2.5 Pro, your Provisioned Throughput burns down by only 250 tokens per second. This can result in higher effective throughput compared to similar queries where the tokens aren't cached and the cache discount isn't applied.
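As a quick check of that arithmetic, the following sketch applies those two rates; the constant and function names are hypothetical, not part of any API.

```python
# A sketch of the cache discount above, using the Gemini 2.5 Pro rates
# from this section: 1 token per input text token, 0.25 tokens per
# cached input text token.

INPUT_TEXT_RATE = 1.0
CACHED_TEXT_RATE = 0.25

def input_burndown(fresh_tokens, cached_tokens):
    """Burndown-adjusted input tokens for a mix of fresh and cached tokens."""
    return fresh_tokens * INPUT_TEXT_RATE + cached_tokens * CACHED_TEXT_RATE

print(input_burndown(1_000, 0))  # 1000.0 tokens per second burned down
print(input_burndown(0, 1_000))  # 250.0 tokens per second burned down
```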
To view the burndown rates for models supported in Provisioned Throughput, see Supported models and burndown rates.

Understand the burndown for Live API

Provisioned Throughput supports Gemini 2.5 Flash with
Live API. To understand how to calculate the burndown while using
the Live API, see
Calculate throughput for Live API. For more information about using Provisioned Throughput
for Gemini 2.5 Flash with Live API, see
Provisioned Throughput for Live API.

Example of estimating your Provisioned Throughput needs

To estimate your Provisioned Throughput needs, use the
estimation tool in the Google Cloud console.
The following example illustrates the process of estimating the amount of
Provisioned Throughput for your model. The region isn't considered
in the estimation calculations.

This table provides the burndown rates for gemini-2.0-flash that you can use to follow the example:

Model: Gemini 2.0 Flash
Throughput per GSU: 3,360
Units: Tokens
Minimum GSU purchase increment: 1
Burndown rates:
- 1 input text token = 1 token
- 1 input image token = 1 token
- 1 input video token = 1 token
- 1 input audio token = 7 tokens
- 1 output text token = 4 tokens

1. Gather your requirements.

   In this example, your requirement is to verify that you can support 10 queries per second (QPS) of a query with an input of 1,000 text tokens and 500 audio tokens, receiving an output of 300 text tokens, using gemini-2.0-flash.

   This step means that you understand your use case, because you have identified your model, the QPS, and the size of your inputs and outputs. To calculate your throughput, refer to the burndown rates for your selected model.

2. Calculate your throughput.

   Multiply your inputs by the burndown rates to arrive at total input tokens:

   1,000 * (1 token per input text token) + 500 * (7 tokens per input audio token) = 4,500 burndown-adjusted input tokens per query

   Multiply your outputs by the burndown rates to arrive at total output tokens:

   300 * (4 tokens per output text token) = 1,200 burndown-adjusted output tokens per query

   Add your totals together:

   4,500 burndown-adjusted input tokens + 1,200 burndown-adjusted output tokens = 5,700 total tokens per query

   Multiply the total tokens per query by the QPS to arrive at total throughput per second:

   5,700 total tokens per query * 10 QPS = 57,000 total tokens per second

3. Calculate your GSUs.

   The number of GSUs is the total tokens per second divided by the per-second throughput per GSU from the burndown table:

   57,000 total tokens per second ÷ 3,360 tokens per second per GSU = 16.96 GSUs

   The minimum GSU purchase increment for gemini-2.0-flash is 1, so you'll need 17 GSUs to assure your workload.
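To check the arithmetic, the following sketch reproduces the three steps in Python. The rates and the per-GSU throughput come from the table above; the dictionary keys and variable names are illustrative, not part of any API.

```python
import math

# Reproduces the three estimation steps above for gemini-2.0-flash.
BURNDOWN_RATES = {
    "input_text_token": 1,   # 1 input text token = 1 token
    "input_audio_token": 7,  # 1 input audio token = 7 tokens
    "output_text_token": 4,  # 1 output text token = 4 tokens
}
THROUGHPUT_PER_GSU = 3_360  # tokens per second per GSU
MIN_GSU_INCREMENT = 1

# Step 1: Gather your requirements.
qps = 10
inputs = {"input_text_token": 1_000, "input_audio_token": 500}
outputs = {"output_text_token": 300}

# Step 2: Calculate your throughput.
inputs_per_query = sum(n * BURNDOWN_RATES[u] for u, n in inputs.items())    # 4,500
outputs_per_query = sum(n * BURNDOWN_RATES[u] for u, n in outputs.items())  # 1,200
tokens_per_second = (inputs_per_query + outputs_per_query) * qps            # 57,000

# Step 3: Calculate your GSUs, rounding up to the purchase increment.
raw_gsus = tokens_per_second / THROUGHPUT_PER_GSU                   # ~16.96
gsus = math.ceil(raw_gsus / MIN_GSU_INCREMENT) * MIN_GSU_INCREMENT  # 17

print(f"{tokens_per_second:,} tokens/sec -> {raw_gsus:.2f} GSUs -> purchase {gsus} GSUs")
```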
What's next
Calculate Provisioned Throughput requirements