Google Cloud uses quotas to restrict how much of a particular shared Google Cloud resource you can use. Each quota represents a specific countable resource, such as API calls to a particular service, the number of bytes sent to a particular service, or the number of streaming connections used concurrently by your project.
Many services also have limits that are unrelated to the quota system. These are fixed constraints, such as maximum message sizes or the number of Pub/Sub resources you can create in a project, which cannot be increased or decreased.
View and manage quotas
For a given project, you can use the IAM & admin quotas dashboard to view current quota limits and usage. You can also use this dashboard to do the following:
- Reduce your quota limits
- Initiate a process to apply for higher quota limits
For more information about monitoring and alerting on your quota usage, see Monitoring.
Quota usage attribution
For push subscriber throughput, quota usage is charged against the project that contains the push subscription. This is the project that appears in the name of the subscription.
For all other quotas, usage is charged against the project associated with the credentials specified in the request. The quota usage is not charged against the project that contains the requested resource.
For example, if a service account in project A sends a request to publish to a topic in project B, the quota is charged to project A.
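To make the attribution concrete, here is a minimal Python sketch, assuming the google-cloud-pubsub library and a hypothetical key file sa-key-project-a.json for a service account in project A. The topic name carries project B, but the quota follows the credentials:

```python
from google.cloud import pubsub_v1

# Credentials come from project A (hypothetical key file), so publisher
# throughput quota is charged to project A, not to project B.
publisher = pubsub_v1.PublisherClient.from_service_account_file(
    "sa-key-project-a.json"
)

# The topic lives in project B; the resource project does not affect quota.
topic_path = publisher.topic_path("project-b", "my-topic")
future = publisher.publish(topic_path, b"hello")
print(future.result())  # prints the message ID once the publish succeeds
```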
In some cases, you might want quota usage to be charged against another project. You can use the X-Goog-User-Project system parameter to change the project for quota attribution. For more information about X-Goog-User-Project, see System parameters.
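As an illustration, you can set the header directly on a REST request. The following sketch assumes the Python requests library; ACCESS_TOKEN, RESOURCE_PROJECT, TOPIC, and QUOTA_PROJECT are placeholders:

```python
import base64
import requests

url = (
    "https://pubsub.googleapis.com/v1/"
    "projects/RESOURCE_PROJECT/topics/TOPIC:publish"
)
headers = {
    "Authorization": "Bearer ACCESS_TOKEN",  # e.g. from gcloud auth print-access-token
    "X-Goog-User-Project": "QUOTA_PROJECT",  # quota is attributed to this project
}
# Pub/Sub's REST API expects message data to be base64-encoded.
body = {"messages": [{"data": base64.b64encode(b"hello").decode("ascii")}]}

response = requests.post(url, json=body, headers=headers)
print(response.status_code, response.json())
```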
You can use the gcloud CLI to set the project for quota attribution for a specific request. The gcloud CLI sends the X-Goog-User-Project request header. You must have the roles/serviceusage.serviceUsageConsumer role, or a custom role with the serviceusage.services.use permission, on the project that you are going to use for quota attribution.
The following example shows how to get a list of subscriptions in the project RESOURCE_PROJECT while charging the Administrator operations quota against the project QUOTA_PROJECT. Run the following command in your Google Cloud CLI terminal:
gcloud pubsub subscriptions list --project=RESOURCE_PROJECT --billing-project=QUOTA_PROJECT
Replace RESOURCE_PROJECT with the ID of the project that contains the subscriptions, and QUOTA_PROJECT with the ID of the Google Cloud project against which you want to charge quota.
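The client libraries expose the same behavior. The following sketch assumes the google-cloud-pubsub library, whose clients accept a quota_project_id client option (from google-api-core's ClientOptions) that sets the X-Goog-User-Project header on each request:

```python
from google.cloud import pubsub_v1

# quota_project_id asks the client to send X-Goog-User-Project,
# so quota is charged to QUOTA_PROJECT rather than the credentials project.
subscriber = pubsub_v1.SubscriberClient(
    client_options={"quota_project_id": "QUOTA_PROJECT"}
)

# List subscriptions owned by RESOURCE_PROJECT (placeholder project ID).
for subscription in subscriber.list_subscriptions(
    request={"project": "projects/RESOURCE_PROJECT"}
):
    print(subscription.name)
```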
Note that in Pub/Sub, the billed project is always the one that contains the resource. You can change the project only for quota attribution.
Pub/Sub quotas
The quotas listed in the following table can be viewed and edited on a per-project basis in the APIs and services quotas dashboard.
Regional quotas are divided into 3 types:
- Large regions: europe-west1, europe-west4, us-central1, us-east1, us-east4, us-west1, us-west2
- Medium regions: asia-east1, asia-northeast1, asia-southeast1, europe-west2, europe-west3
- Small regions: all other regions
Exactly-once delivery quotas are region specific. Check the details for each region in the following table.
Quota | Default quota limit | Description |
---|---|---|
Publisher throughput per region | Varies by region type | Quota usage is based on the size of the published messages. Note that multiple messages can be included in a single publish request, and there is no additional quota charge per message. |
Pull subscriber throughput per region | Varies by region type | Quota usage is based on the size of the returned messages. |
Acknowledger throughput per region | Varies by region type | Quota usage is based on the size of the acknowledgment requests. |
Push subscriptions throughput per region | Varies by region type | For push delivery requests made to the push endpoint, quota usage is based on the size of the delivered messages. |
BigQuery subscriptions throughput per region | Varies by region type | For requests made to BigQuery, quota usage is based on the size of the messages written. |
Cloud Storage subscriptions throughput per region | Varies by region type | For requests made to Cloud Storage, quota usage is based on the size of the messages written. |
StreamingPull subscriber throughput per region | Varies by region type | Quota usage is based on the size of the messages sent on StreamingPull streams. Note that client libraries use StreamingPull operations where possible. |
Number of open StreamingPull connections per region | Varies by region type | The number of open StreamingPull connections at any given time. See StreamingPull. |
Administrator operations | 6,000 per minute (100 ops/s) | Each administrator operation, such as GetTopicRequest, charges one unit against this quota. |
Number of messages consumed from subscriptions with exactly-once delivery enabled per region | Varies by region | Quota usage is based on the number of messages consumed. |
Number of messages acknowledged or whose deadline is extended when using subscriptions with exactly-once delivery enabled per region | Varies by region | Quota usage is based on the number of acknowledgment IDs in Acknowledge and ModifyAckDeadline requests. |
Throughput quota units
Throughput quota usage is measured in 1 kB units, where 1 kB is 1000 bytes. For example, in a PublishRequest with 105 messages of 50 bytes each, the user data size is 105 * 50 bytes = 5250 bytes, so the quota usage is max(1 kB, ceil(5250 bytes/1000)) = 6 kB.
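The rounding rule is easy to check in code. A minimal Python sketch (the helper name is ours, not part of any API):

```python
import math

def throughput_quota_kb(message_sizes_bytes):
    """Quota units for one request: the total payload in bytes, rounded up
    to 1 kB (1000-byte) units, with a 1 kB minimum per request."""
    total_bytes = sum(message_sizes_bytes)
    return max(1, math.ceil(total_bytes / 1000))

# The worked example above: 105 messages of 50 bytes each in one request.
print(throughput_quota_kb([50] * 105))  # -> 6 (kB)
```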
Resource limits
Resource | Limits |
---|---|
Project | 10,000 topics; 10,000 attached or detached subscriptions; 5,000 snapshots; 10,000 schemas |
Topic | 10,000 attached subscriptions; 5,000 attached snapshots. If topic message retention is configured, messages published to a topic can be retained in persistent storage for up to 31 days from the time of publication. |
Subscription | By default, retains unacknowledged messages in persistent storage for 7 days from the time of publication. There is no limit on the number of retained messages. If subscribers don't use a subscription, the subscription expires. The default expiration period is 31 days. |
Schema | Schema size (the definition field): 300 KB; Revisions per schema: 20 |
Publish request | 10 MB (total size); 1,000 messages |
Message | Message size (the data field): 10 MB; Attributes per message: 100; Attribute key size: 256 bytes; Attribute value size: 1,024 bytes |
StreamingPull streams | 10 MBps per open stream |
Unary Pull response | Maximum number of messages in a Pull response: 1,000; Maximum size of a Pull response: 10 MB |
Pull/StreamingPull messages | The service might impose limits on the total number of outstanding StreamingPull messages per connection. If you run into such limits, increase the rate at which you acknowledge messages and the number of connections you use. |
Acknowledge and ModifyAckDeadline requests | 512 KB (total size) |
Ordering keys | If messages have ordering keys, the maximum publisher throughput is 1 MBps for each ordering key. |
Cloud Storage bucket objects | When using Cloud Storage import topics, the limit for the number of objects in a bucket is 50 million. |
Use a service account for higher quotas
If you use the Google Cloud CLI with a normal user account (that is, a non-service account), Pub/Sub operations are limited to a rate suitable for manual operations. Rates in excess of this limit result in a RESOURCE_EXHAUSTED error. To avoid this, make sure that you are using service account credentials. If you want to use credentials from the gcloud CLI for automation, activate a service account for your Pub/Sub operations.
Use locational endpoints to route requests
If you have additional quota in particular regions, you can route requests to these regions using locational Pub/Sub endpoints. When you publish messages to a global endpoint, the Pub/Sub service might route traffic to a region that does not have sufficient quota.
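For example, with the google-cloud-pubsub library you can point a client at a locational endpoint through the api_endpoint client option. A minimal sketch, assuming the documented REGION-pubsub.googleapis.com endpoint pattern and placeholder project and topic names:

```python
from google.cloud import pubsub_v1

# Route requests through the us-central1 locational endpoint so they are
# served in a region where this project holds publisher throughput quota.
publisher = pubsub_v1.PublisherClient(
    client_options={"api_endpoint": "us-central1-pubsub.googleapis.com"}
)

topic_path = publisher.topic_path("my-project", "my-topic")
publisher.publish(topic_path, b"stays in us-central1").result()
```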
Quota mismatches
Quota mismatches can happen when published or received messages are smaller than 1000 bytes. For example:
- If you publish 10 500-byte messages in separate requests, your publisher quota usage is 10,000 bytes, because messages smaller than 1000 bytes are rounded up to the next 1000-byte increment.
- If you receive those 10 messages in a single pull response, your subscriber quota usage might be only 5 kB, because the actual sizes of the messages are combined to determine the overall quota usage.
The inverse is also true. The subscriber quota usage might be greater than the publisher quota usage if you publish multiple messages in a single publish request or receive the messages in separate Pull requests.
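Reusing the rounding rule from the Throughput quota units section, a quick Python sketch of both cases:

```python
import math

def quota_kb(message_sizes_bytes):
    # Same rule as above: round the request total up to 1 kB units, minimum 1 kB.
    return max(1, math.ceil(sum(message_sizes_bytes) / 1000))

# Publisher side: 10 messages of 500 bytes, each in its own publish request.
publisher_usage = sum(quota_kb([500]) for _ in range(10))  # 10 kB

# Subscriber side: the same 10 messages returned in a single pull response.
subscriber_usage = quota_kb([500] * 10)  # 5 kB

print(publisher_usage, subscriber_usage)  # -> 10 5
```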