This document provides information about quotas and resource limits for Pub/Sub.
Viewing quota usage and managing quota limits
For a given project, you can use the IAM & admin quotas dashboard to view current quota limits and usage. You can also use this dashboard to:
- Reduce your quota limits
- Initiate a process to apply for higher quota limits
For more information about monitoring and alerting on your quota usage, see Monitoring.
Quota usage attribution
For push subscriber throughput, quota usage is charged against the project that contains the push subscription.
For the following quotas, usage is charged against the project associated with the caller's credentials, not against the project that contains the requested resource (that is, the project that appears in the topic or subscription name):
- Publisher throughput
- Subscriber throughput
- Administrator operations
For example, if a service account in project A sends a publish request to a topic in project B, the quota is charged to project A. Every request contains credentials that include a project ID.
In some cases, you might want quota usage to be charged against another project. You can use the X-Goog-User-Project system parameter to change the project for quota attribution. For more information, see System parameters.
You can use the gcloud CLI to set the project for quota attribution for a specific request. The gcloud CLI sends the X-Goog-User-Project request header. You must have the Service Usage Consumer role (roles/serviceusage.serviceUsageConsumer), or a custom role with the serviceusage.services.use permission, on the project that you want to use for quota attribution.
The following example shows how to get a list of subscriptions in the project RESOURCE_PROJECT while charging the Administrator operations quota against the project QUOTA_PROJECT. Run the following command in your Google Cloud CLI terminal:
gcloud pubsub subscriptions list --project=RESOURCE_PROJECT --billing-project=QUOTA_PROJECT
Replace the following:
- RESOURCE_PROJECT: the ID of the project that contains the subscriptions.
- QUOTA_PROJECT: the ID of the Google Cloud project against which you want to charge quota.
Note that in Pub/Sub, the billed project is always the one that contains the resource. You can change the project only for quota attribution.
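Outside of the gcloud CLI, the same effect comes from setting the X-Goog-User-Project header on the REST request yourself. The following is a minimal Python sketch that only constructs such a request without sending it; the subscriptions-list URL follows the documented v1 REST pattern, and ACCESS_TOKEN, RESOURCE_PROJECT, and QUOTA_PROJECT are placeholders:

```python
import urllib.request

# Attach the X-Goog-User-Project header to a raw Pub/Sub REST call so
# that quota is charged against QUOTA_PROJECT instead of the project
# tied to the caller's credentials. The request is only built here,
# not sent; replace the placeholders before actually calling the API.
req = urllib.request.Request(
    "https://pubsub.googleapis.com/v1/projects/RESOURCE_PROJECT/subscriptions",
    headers={
        "Authorization": "Bearer ACCESS_TOKEN",
        "X-Goog-User-Project": "QUOTA_PROJECT",
    },
)

# urllib stores header keys in capitalized form.
print(req.get_header("X-goog-user-project"))  # QUOTA_PROJECT
```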
The quotas listed in the following table can be viewed and edited on a per-project basis in the APIs and services quotas dashboard.
Regional quotas are divided into 2 types:
- Large regions:
- Small regions: all other regions
| Quota | Default quota limit | Description |
| --- | --- | --- |
| Publisher throughput per region | | Quota usage is based on the size of the published messages. Note that multiple messages can be included in a single publish request, and there is no additional quota charge per message. |
| Pull subscriber throughput per region | | Quota usage is based on the size of the returned messages. |
| Acknowledger throughput per region | | Quota usage is based on the size of the acknowledged messages. |
| Push subscriber throughput per region | | For push delivery requests made to the push endpoint, quota usage is based on the size of the delivered messages. |
| StreamingPull subscriber throughput per region | | Quota usage is based on the size of the messages returned on the stream. Note that client libraries use StreamingPull operations where possible. |
| Number of open StreamingPull connections per region | | The number of open StreamingPull connections at any given time. See StreamingPull. |
| Administrator operations | 6,000 per minute (100 ops/s) | Each administrator operation, such as GetTopicRequest, charges one unit against this quota. |
Throughput quota units
Throughput quota usage is measured in 1kB units, where 1 kB is 1000 bytes. For example, a PublishRequest with 105 messages of 50 bytes each contains 105 * 50 bytes = 5250 bytes of user data, so the quota usage is max(1kB, ceil(5250 bytes/1000)) = 6kB.
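The rounding rule above can be sketched in a few lines of Python; throughput_quota_kb is a hypothetical helper for illustration, not part of any Pub/Sub library:

```python
import math

def throughput_quota_kb(message_sizes_bytes):
    """Quota charge in kB (1 kB = 1000 bytes) for a single request.

    All message sizes in the request are summed, divided by 1000, and
    rounded up, with a minimum charge of 1 kB per request.
    """
    total = sum(message_sizes_bytes)
    return max(1, math.ceil(total / 1000))

# The worked example above: 105 messages of 50 bytes each.
print(throughput_quota_kb([50] * 105))  # 6
```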
- 10,000 attached or detached subscriptions
- 10,000 attached subscriptions
- 5,000 attached snapshots
If topic message retention is configured, messages published to a topic can be retained in persistent storage for up to 31 days from the time of publication.
By default, Pub/Sub retains unacknowledged messages in persistent storage for 7 days from the time of publication. There is no limit on the number of retained messages.
If subscribers don't use a subscription, the subscription expires. The default expiration period is 31 days.
| Resource | Limit |
| --- | --- |
| Schema | Schema size (the definition field) |
| Message | Message size (the data field): 10MB (total size); Attributes per message: 100; Attribute key size: 256 bytes; Attribute value size: 1024 bytes |
| Push outstanding messages | |
| StreamingPull streams | 10 MB/s per open stream |
| Pull/StreamingPull messages | The service might impose limits on the total number of outstanding StreamingPull messages per connection. If you run into such limits, increase the rate at which you acknowledge messages and the number of connections you use. |
| Ordering keys | If messages have ordering keys, the maximum publisher throughput is 1 MBps for each ordering key. |
Tips and caveats
If you use the Google Cloud CLI gcloud tool with a normal user account (that is, a non-service account), Pub/Sub operations are limited to a rate suitable for manual operations. Rates in excess of this limit will result in the RESOURCE_EXHAUSTED error. The solution is to make sure that you are using service account credentials. If you wish to use credentials from the gcloud CLI for automation, activate a service account for your Pub/Sub operations.
If you have additional quota in particular regions, you can route requests to these regions using regional Pub/Sub endpoints. When you publish messages to a global endpoint, the Pub/Sub service might route traffic to a region that does not have sufficient quota.
Quota mismatches can happen when published or received messages are smaller than 1000 bytes. For example:
- If you publish 10 500-byte messages in separate requests, your publisher quota usage is 10,000 bytes, because each message smaller than 1000 bytes is rounded up to the next 1000-byte increment.
- If you receive those 10 messages in a single pull response, your subscriber quota usage might be only 5 kB, because the actual sizes of the messages are combined before rounding.
The inverse is also true. The subscriber quota usage might be greater than the publisher quota usage if you publish multiple messages in a single publish request or receive the messages in separate Pull requests.
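The mismatch in the example above follows directly from per-request rounding, which can be reproduced in Python; request_quota_kb is a hypothetical helper for illustration:

```python
import math

def request_quota_kb(message_sizes_bytes):
    # Per-request charge: total message bytes rounded up to the next
    # 1000-byte unit, with a 1 kB minimum per request.
    return max(1, math.ceil(sum(message_sizes_bytes) / 1000))

sizes = [500] * 10  # ten 500-byte messages

# Published in ten separate requests: each request rounds up to 1 kB.
publish_kb = sum(request_quota_kb([s]) for s in sizes)

# Received in one pull response: the sizes combine before rounding.
pull_kb = request_quota_kb(sizes)

print(publish_kb, pull_kb)  # 10 5
```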