Quotas & limits


This page describes quotas and request limits for Cloud Storage. You can request increases to quotas, but limits cannot be adjusted.

Quotas and limits are subject to change.

Buckets

| Limit | Value | Notes |
|---|---|---|
| Maximum bucket name size | 63 characters | If the name contains a dot (`.`), the limit is 222 characters. |
| Maximum bucket creation and deletion rate per project | Approximately 1 request every 2 seconds | Plan on fewer buckets and more objects in most cases. For example, a common design choice is one bucket per user of your project; however, if your system adds many users per second, design for many users in one bucket (with appropriate permissions) so that the bucket creation rate limit doesn't become a bottleneck. Highly available applications should not depend on bucket creation or deletion in their critical path: bucket names are part of a centralized, global namespace, so any dependency on this namespace creates a single point of failure. For this reason, and because of the creation/deletion rate limit, the recommended practice for highly available services on Cloud Storage is to pre-create all the buckets they need. |
| Maximum rate of bucket metadata updates per bucket | One update per second | Rapid updates to a single bucket (for example, changing the CORS configuration) might result in throttling errors. |
| Maximum number of principals that can be granted IAM roles per bucket | 1,500 principals for all IAM roles; 100 principals for legacy IAM roles | Examples of principals include individual users, groups, and domains. See Principal types for more information. |
| Maximum number of Pub/Sub notification configurations per bucket | 100 notification configurations | |
| Maximum number of Pub/Sub notification configurations set to trigger for a specific event | 10 notification configurations | |
| Maximum number of custom attributes in a Pub/Sub notification configuration | 10 custom attributes | |
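As a quick illustration of the name-length rule above, here is a minimal Python sketch. It checks length only; real bucket names have additional character and format rules not covered here.

```python
def bucket_name_length_ok(name: str) -> bool:
    """Illustrative check of the bucket-name length limit only:
    63 characters normally, 222 if the name contains a dot.
    Real bucket names have further character/format rules."""
    limit = 222 if "." in name else 63
    return len(name) <= limit
```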

Objects

| Limit | Value | Notes |
|---|---|---|
| Maximum object size | 5 TiB | The maximum size of a single upload request is also 5 TiB. Resumable uploads are the recommended method for uploading large objects. |
| Maximum combined size of all custom metadata keys and values per object | 8 KiB | |
| Maximum object name size | 1,024 bytes (UTF-8 encoded) | |
| Maximum rate of writes to the same object name | One write per second | Writing to the same object name at a rate above the limit might result in throttling errors. For more information, see Object immutability. |
| Maximum rate of object metadata updates to a single object | One update per second | Updating object metadata at a rate above the limit might result in throttling errors. |
| Maximum rate of object writes in a bucket | Unlimited | Includes uploading, updating, and deleting objects. Buckets initially support roughly 1,000 writes per second and then scale as needed. |
| Maximum rate of object reads in a bucket | Unlimited | Includes reading object data, reading object metadata, and listing objects. Buckets initially support roughly 5,000 object reads per second and then scale as needed. However, note that there are bandwidth limits. |
| Maximum number of access control list (ACL) entries per object | 100 ACLs | ACLs can apply to individual users, groups, or domains. See ACL scopes. |
| Maximum number of source objects in an object composition | 32 objects | Applies to a single compose request. |
| Maximum number of components that make up a composite object | Unlimited | While there is no limit to the number of components, the `componentCount` metadata associated with a composite object saturates at 2,147,483,647. |
| Maximum composite object size | 5 TiB | |
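Because a single compose request accepts at most 32 source objects, merging more sources requires multiple requests over intermediate composites. The following hypothetical helper counts the requests needed for a simple tree of merges; it is pure arithmetic, not a client-library call.

```python
import math

def compose_rounds(num_sources: int, max_per_request: int = 32) -> int:
    """Count compose requests needed to merge num_sources objects into
    one, merging up to 32 sources per request in a tree of intermediate
    composites. Illustrative math only; not a client-library call."""
    if num_sources <= 1:
        return 0
    requests = 0
    remaining = num_sources
    while remaining > 1:
        groups = math.ceil(remaining / max_per_request)
        requests += groups   # one compose request per group
        remaining = groups   # each group yields one intermediate composite
    return requests
```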

JSON API requests

| Limit | Value | Notes |
|---|---|---|
| Maximum total request payload of a batch request | Less than 10 MiB | Do not include more than 100 calls in a single batch request. |
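To stay under the 100-call limit, a client can split its calls into multiple batch requests. Here is a minimal, hypothetical sketch: it only counts calls, so the sub-10 MiB payload limit must still be checked separately.

```python
def chunk_batch_calls(calls, max_calls: int = 100):
    """Split a list of JSON API calls into batches of at most 100 calls,
    the per-batch limit. The <10 MiB payload limit is not checked here.
    Illustrative sketch only."""
    return [calls[i:i + max_calls] for i in range(0, len(calls), max_calls)]
```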

XML API requests

| Limit | Value | Notes |
|---|---|---|
| Maximum combined size of request URL and HTTP headers | 16 KiB | |
| Maximum number of buckets that can be returned when listing buckets | 1,000 buckets | The XML API returns buckets lexicographically by name. |
| Maximum number of parts in a multipart upload | 10,000 parts | |
| Maximum size of an individual part in a multipart upload | 5 GiB | |
| Minimum size of an individual part in a multipart upload | 5 MiB | There is no minimum size limit on the last part of a multipart upload. |
| Maximum size of an object assembled from a multipart upload | 5 TiB | |
| Maximum length of time a multipart upload and its uploaded parts can remain unfinished or idle in a bucket | Unlimited | |
| Maximum number of different multipart uploads that can simultaneously occur for an object | Unlimited | |
| Maximum length of time to complete a resumable upload session | 7 days | The length of time is measured from when the resumable upload is initiated. |
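The part-count and part-size limits together constrain how you size parts for a large object. The following hypothetical helper picks a part size that satisfies the limits above (binary units assumed; this is arithmetic only, not a client-library call).

```python
import math

MIB = 1024 ** 2
GIB = 1024 ** 3
MAX_PARTS = 10_000
MIN_PART = 5 * MIB   # no minimum applies to the last part
MAX_PART = 5 * GIB

def pick_part_size(object_size: int) -> int:
    """Choose a part size (in bytes) that keeps a multipart upload within
    the XML API limits: at most 10,000 parts, each part between 5 MiB and
    5 GiB (the last part may be smaller). Illustrative sketch only."""
    part_size = max(MIN_PART, math.ceil(object_size / MAX_PARTS))
    if part_size > MAX_PART:
        raise ValueError("object too large for a multipart upload")
    return part_size
```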

HMAC keys for service accounts

Each service account is limited to at most 5 HMAC keys. Deleted keys don't count towards this limit.

Bandwidth

| Quota | Value | Notes |
|---|---|---|
| Maximum bandwidth for each region or dual-region that has data egress from Cloud Storage to Google services | 200 Gbps per project, per region | Egress to Cloud CDN and Media CDN is exempt from this quota. Data egress from Cloud Storage dual-regions to Google services counts towards the per-project quota of one of the regions that make up the dual-region. For example, if a Compute Engine instance in us-central1 reads data from a bucket in the nam4 dual-region, the bandwidth usage counts against the overall quota for the us-central1 region. You can request a quota increase for regions on a per-project basis. If you want a quota increase for a dual-region, make the increase request for one or both of the regions that make up the dual-region. |
| Maximum egress bandwidth for Google services accessing data from buckets in a given multi-region | 50 Gbps per project, per region | Egress to Cloud CDN and Media CDN is exempt from this quota. For example, say the project my-project has several Compute Engine instances in us-east1 and several in us-west1. The us-east1 instances have a combined 50 Gbps bandwidth quota when reading data from buckets in the us multi-region, and the us-west1 instances have a separate 50 Gbps quota when reading from buckets in the us multi-region. We strongly recommend that you use buckets located in regions or dual-regions for workloads with high egress rates to Google services; for existing multi-region buckets that run large workloads in Google services, you can use Storage Transfer Service to move your data to a region or dual-region bucket. Except in exceptional cases, increase requests for multi-region bandwidth quotas are unlikely to be approved. To request a quota increase, contact Google Cloud Support. |

When a project's bandwidth exceeds quota in a location, requests to affected buckets can be rejected with a retryable 429 error or can be throttled. See Bandwidth usage for information about monitoring your bandwidth.
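One common way to handle a retryable 429 is truncated exponential backoff with jitter. Below is a minimal sketch, where `request_fn` and `RateLimitError` are stand-ins for your own request wrapper and 429 detection; the Cloud Storage client libraries also ship their own retry behavior.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 response; hypothetical, for illustration."""

def call_with_backoff(request_fn, max_attempts: int = 5):
    """Retry a request that may fail with a retryable 429 when bandwidth
    quota is exceeded, using truncated exponential backoff with jitter.
    Illustrative sketch, not the client library's built-in retry."""
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            # Sleep 2^attempt seconds plus jitter, capped at 32 s.
            time.sleep(min(2 ** attempt, 32) + random.random())
```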