Quotas & limits

This page describes quotas and request limits for Cloud Storage. You can request increases to quotas, but limits cannot be adjusted.

Quotas and limits are subject to change.

Buckets

Maximum bucket name size: 63 characters. If the name contains a dot (.), the limit is 222 characters.

Maximum bucket creation and deletion rate per project: approximately 1 request every 2 seconds.

Plan on fewer buckets and more objects in most cases. For example, a common design choice is to use one bucket per user of your project. However, if you're designing a system that adds many users per second, design for many users in one bucket (with appropriate permissions) so that the bucket creation rate limit doesn't become a bottleneck.

Highly available applications shouldn't depend on bucket creation or deletion in the critical path of their application. Bucket names are part of a centralized, global namespace: any dependency on this namespace creates a single point of failure for your application. Because of this and the bucket creation/deletion limit, the recommended practice for highly available services on Cloud Storage is to pre-create all the buckets they need, as sketched in the example below.

Maximum rate of bucket metadata updates per bucket: one update per second. Rapid updates to a single bucket (for example, changing the CORS configuration) might result in throttling errors.

Maximum number of principals that can be granted IAM roles per bucket: 1,500 principals for all IAM roles; 100 principals for legacy IAM roles. See Principal types for more information.

Maximum number of Pub/Sub notification configurations per bucket: 100 notification configurations.

Maximum number of Pub/Sub notification configurations set to trigger for a specific event: 10 notification configurations.

Maximum number of custom attributes in a Pub/Sub notification configuration: 10 custom attributes.

Maximum retention period that can be set for Bucket Lock: 3,155,760,000 seconds (100 years).

Maximum soft delete retention duration: 90 days.
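
For the bucket pre-creation guidance above, provisioning can run as a one-time setup step rather than in the request path. The following is a minimal sketch, assuming the Python client library (google-cloud-storage); the project ID, bucket names, and location are placeholders, and the fixed 2-second pause is simply one conservative way to stay under the roughly one-creation-request-every-2-seconds rate.

```python
# Sketch: pre-create the buckets a service needs, outside the request path.
# The project ID, bucket names, and location below are placeholders.
import time

from google.api_core.exceptions import Conflict
from google.cloud import storage

client = storage.Client(project="my-project")

# Placeholder names; bucket names live in a global namespace, so real names
# must be globally unique.
bucket_names = [f"my-service-shard-{i:02d}" for i in range(10)]

for name in bucket_names:
    try:
        client.create_bucket(name, location="us-east1")
        print(f"Created bucket {name}")
    except Conflict:
        # The bucket already exists, so there is nothing to do.
        print(f"Bucket {name} already exists")
    # Stay under the ~1 bucket creation request every 2 seconds per project.
    time.sleep(2)
```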

Objects

Maximum object size: 5 TiB. This limit applies regardless of write method, including object composition, resumable uploads, and multipart uploads.

Maximum combined size of all custom metadata keys and values per object: 8 KiB.

Maximum object name size for objects in a flat namespace bucket: 1,024 bytes (UTF-8 encoded).

Maximum object name size for objects in a bucket with hierarchical namespace enabled: 512 bytes (UTF-8 encoded) for the folder name and 512 bytes (UTF-8 encoded) for the base name.

Maximum rate of writes to the same object name: one write per second. Writing to the same object name at a higher rate might result in throttling errors. For more information, see Object immutability.

Maximum rate of object metadata updates to a single object: one update per second. Updating object metadata at a higher rate might result in throttling errors.

Maximum rate of object writes in a bucket: unlimited. Includes uploading, updating, and deleting objects. Buckets initially support roughly 1,000 writes per second and then scale as needed.

Maximum rate of object reads in a bucket: unlimited. Includes reading object data, reading object metadata, and listing objects. Buckets initially support roughly 5,000 object reads per second and then scale as needed. However, note that bandwidth limits apply.

Maximum number of access control list (ACL) entries per object: 100 ACLs. For more information, see ACL scopes.

Maximum number of source objects in an object composition: 32 objects in a single compose request (see the compose sketch below).

Maximum number of components that make up a composite object: unlimited. While there is no limit to the number of components that make up a composite object, the componentCount metadata associated with a composite object saturates at 2,147,483,647, and the final composite object must adhere to the 5 TiB size limit that applies to all objects in Cloud Storage.

Maximum retention time that can be set for Object Retention Lock: 3,155,760,000 seconds (100 years) from the current date and time.

Maximum initial queries per second (QPS) for reading and writing objects in buckets with hierarchical namespace enabled: up to 8 times higher QPS than for buckets without hierarchical namespace enabled. For information about optimizing performance when working with folders, see Folder management.
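
As a rough illustration of the 32-source limit on composition, the sketch below assumes the Python client library; the bucket and object names are placeholders. Larger sets of objects can still be combined by composing in stages, as long as each individual compose request uses at most 32 sources and the final object stays within the 5 TiB object size limit.

```python
# Sketch: compose up to 32 source objects into a single destination object.
# Bucket and object names are placeholders.
from google.cloud import storage

MAX_SOURCES_PER_COMPOSE = 32  # limit on source objects per compose request

client = storage.Client()
bucket = client.bucket("my-bucket")

# Placeholder source objects, assumed to already exist in the bucket.
sources = [bucket.blob(f"parts/part-{i:04d}") for i in range(32)]
if len(sources) > MAX_SOURCES_PER_COMPOSE:
    raise ValueError("A single compose request accepts at most 32 source objects")

destination = bucket.blob("combined/output")
destination.compose(sources)  # server-side composition; no object data is downloaded
```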

Managed folders

Maximum managed folder name size: 1,024 bytes (UTF-8 encoded).

Maximum managed folder nesting limit: 15 levels.

Maximum rate of IAM policy updates per managed folder: one update per second.

JSON API requests

Maximum total request payload of a batch request: less than 10 MiB. Don't include more than 100 calls in a single batch request (see the batch sketch below).

Maximum size of an object listing glob pattern: 1,024 bytes (UTF-8 encoded).
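
As an illustration of the batch guidance above, the Python client library exposes JSON API batching as a context manager. The sketch below is a minimal example under that assumption; the bucket and object names are placeholders, and the loop deliberately stays well under 100 calls per batch.

```python
# Sketch: group many small metadata updates into one JSON API batch request,
# keeping the call count under the 100-call guidance. Names are placeholders.
from google.cloud import storage

MAX_CALLS_PER_BATCH = 100

client = storage.Client()
bucket = client.bucket("my-bucket")

blob_names = [f"logs/2024/file-{i:03d}.txt" for i in range(50)]
assert len(blob_names) <= MAX_CALLS_PER_BATCH

with client.batch():
    for name in blob_names:
        blob = bucket.blob(name)
        blob.metadata = {"processed": "true"}
        blob.patch()  # queued and sent as part of the single batch request
```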

XML API requests

Maximum combined size of request URL and HTTP headers: 16 KiB.

Maximum number of buckets that can be returned when listing buckets: 1,000 buckets. The XML API returns buckets lexicographically by name.

Maximum number of parts in a multipart upload: 10,000 parts. The object assembled from these parts must adhere to the 5 TiB size limit that applies to all objects in Cloud Storage (see the part-sizing sketch below).

Maximum size of an individual part in a multipart upload: 5 GiB.

Minimum size of an individual part in a multipart upload: 5 MiB. There is no minimum size for the last part of a multipart upload. As a result, this limit isn't enforced when a part is uploaded; it's enforced when you attempt to complete the upload.

Maximum length of time a multipart upload and its uploaded parts can remain unfinished or idle in a bucket: unlimited.

Maximum number of multipart uploads that can occur simultaneously for the same object: unlimited.

Maximum length of time to complete a resumable upload session: 7 days, measured from when the resumable upload is initiated.
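
The part limits above constrain how a large upload can be split when using the XML API multipart upload. The following is a small planning sketch in plain Python (no API calls); the 64 MiB preferred part size is an arbitrary placeholder, and the function simply picks a part size that respects the 10,000-part maximum, the 5 MiB minimum for non-final parts, and the 5 GiB per-part maximum.

```python
# Sketch: choose a part size for an XML API multipart upload that respects
# the 10,000-part maximum, the 5 MiB minimum (non-final parts), the 5 GiB
# per-part maximum, and the overall 5 TiB object size limit.
import math

MAX_PARTS = 10_000
MIN_PART_SIZE = 5 * 1024 ** 2    # 5 MiB; not enforced for the last part
MAX_PART_SIZE = 5 * 1024 ** 3    # 5 GiB
MAX_OBJECT_SIZE = 5 * 1024 ** 4  # 5 TiB

def plan_parts(object_size: int, preferred_part_size: int = 64 * 1024 ** 2):
    """Return (part_size, part_count) in bytes for an object of object_size bytes."""
    if object_size > MAX_OBJECT_SIZE:
        raise ValueError("Object would exceed the 5 TiB object size limit")
    # Start from the preferred size, then grow it until the object fits in
    # at most 10,000 parts.
    part_size = max(preferred_part_size, MIN_PART_SIZE)
    part_size = max(part_size, math.ceil(object_size / MAX_PARTS))
    if part_size > MAX_PART_SIZE:
        raise ValueError("No valid part size: individual parts would exceed 5 GiB")
    part_count = math.ceil(object_size / part_size) if object_size else 1
    return part_size, part_count

# A 1 TiB object at the 64 MiB preference would need 16,384 parts, so the
# planner raises the part size until the upload fits in 10,000 parts.
print(plan_parts(1024 ** 4))
```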

HMAC keys for service accounts

Each service account can have at most 10 HMAC keys. Deleted keys don't count toward this limit.
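
As one way to work within this limit, a provisioning script could check the number of existing keys before creating another. The sketch below assumes the Python client library; the service account email is a placeholder.

```python
# Sketch: check the existing key count before creating an HMAC key, since a
# service account can have at most 10 HMAC keys (deleted keys don't count).
# The service account email is a placeholder.
from google.cloud import storage

MAX_HMAC_KEYS_PER_SERVICE_ACCOUNT = 10

client = storage.Client()
service_account_email = "my-sa@my-project.iam.gserviceaccount.com"

existing = list(client.list_hmac_keys(service_account_email=service_account_email))
if len(existing) >= MAX_HMAC_KEYS_PER_SERVICE_ACCOUNT:
    raise RuntimeError("This service account already has the maximum of 10 HMAC keys")

metadata, secret = client.create_hmac_key(service_account_email=service_account_email)
print("Access ID:", metadata.access_id)
# Store the secret securely; it can't be retrieved again after creation.
```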

Inventory reports

Each source bucket can have at most 100 inventory report configurations.

Bandwidth

Maximum bandwidth for each region that has data egress from Cloud Storage to Google services: 200 Gbps per region. This is the default quota for most projects, but it might be lower based on your project's billing account history.

Egress to Cloud CDN and Media CDN is exempt from this quota.

You can request quota increases on a per-project basis.

To learn how to view the Google egress limits for a project, see View and manage quotas.

To learn how to view Google egress usage for a project, see Bandwidth monitoring.

Maximum bandwidth for each dual-region that has data egress from Cloud Storage to Google services: 200 Gbps for each region within the dual-region. This is the default quota for most projects, but it might be lower based on your project's billing account history.

Egress to Cloud CDN and Media CDN is exempt from this quota.

You can request a quota increase on a per-project basis.

To learn how to view the Google egress limits for a project, see View and manage quotas.

To learn how to view Google egress usage for a project, see Bandwidth monitoring.

Maximum bandwidth for each multi-region that has data egress from Cloud Storage to Google services: 200 Gbps per region. This is the default quota for most projects, but it might be lower based on your project's billing account history.

Egress to Cloud CDN and Media CDN is exempt from this quota.

Each region within the given multi-region has a separate quota. For example, say the project my-project has a multi-region Google egress bandwidth quota of 200 Gbps for all regions within it. In this scenario, the us-east1 region has 200 Gbps of bandwidth when supporting data egress to Google services from buckets in the us multi-region, and the us-west1 region has 200 Gbps of bandwidth when supporting data egress to Google services from buckets in the us multi-region.

To learn how to view the Google egress limits for a project, see View and manage quotas.

To learn how to view Google egress usage for a project, see Bandwidth monitoring.

You can request a quota increase on a per-project basis. Note that generally you should use buckets located in regions or dual-regions for workloads with high egress rates to Google services. For existing buckets in multi-regions that run large workloads in Google services, you can use Storage Transfer Service to move your data to a bucket in a region or dual-region.

Maximum egress bandwidth for Internet requests accessing data from buckets in a region: 200 Gbps per region. This is the default quota for most projects, but it might be lower based on your project's billing account history.

Egress to Cloud CDN and Media CDN due to cache misses is included in this quota.

To learn how to view the Internet egress limits for a project, see View and manage quotas.

To learn how to view Internet egress usage for a project, see Bandwidth monitoring.

You can request quota increases on a per-project basis.

Maximum egress bandwidth for Internet requests accessing data from buckets in a dual-region: 200 Gbps for each region within the dual-region. This is the default quota for most projects, but it might be lower based on your project's billing account history.

Egress to Cloud CDN and Media CDN due to cache misses is included in this quota.

To learn how to view the Internet egress limits for a project, see View and manage quotas.

To learn how to view Internet egress usage for a project by region, see Bandwidth monitoring.

You can request a quota increase on a per-project basis.

Maximum egress bandwidth for Internet requests accessing data from buckets in a given multi-region: 200 Gbps per region. This is the default quota for most projects, but it might be lower based on your project's billing account history.

Egress to Cloud CDN and Media CDN due to cache misses is included in this quota.

Regions within the multi-region have separate multi-region Internet egress quotas. For example, say my-project sends data from a bucket located in the us multi-region to customers around the world. In this scenario, different regions within the us multi-region use their own Internet egress quota as data is sent from the bucket to different parts of the world. Typically, Internet egress quota is counted against the region that's geographically closest to the data's destination.

To learn how to view the Internet egress limits for a project, see View and manage quotas.

To learn how to view Internet egress usage for a project by region, see Bandwidth monitoring.

You can request a quota increase on a per-project basis.

When a project's bandwidth exceeds a quota, requests to affected buckets can be throttled or rejected with a retryable 429 rateLimitExceeded error that includes details of the exceeded quota. See Bandwidth usage for information about monitoring your bandwidth.
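
Because the 429 rateLimitExceeded error is retryable, callers are generally expected to back off and retry rather than fail immediately. The sketch below shows one way to do that with the Python client library and exponential backoff; the bucket, object, and file names are placeholders, and the client library's own built-in retry settings may already cover many of these cases.

```python
# Sketch: retry a throttled request with exponential backoff and jitter.
# Bucket, object, and file names are placeholders.
import random
import time

from google.api_core.exceptions import TooManyRequests
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")
blob = bucket.blob("reports/daily.csv")

max_attempts = 5
for attempt in range(max_attempts):
    try:
        blob.upload_from_filename("daily.csv")
        break
    except TooManyRequests:
        if attempt == max_attempts - 1:
            raise  # give up after the final attempt
        # Exponential backoff with jitter: roughly 1s, 2s, 4s, 8s plus up to 1s.
        time.sleep(2 ** attempt + random.random())
```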