Quotas & limits

This page describes quotas and request limits for Cloud Storage.

Buckets

  • There is a per-project rate limit on bucket creation and deletion of approximately 1 request every 2 seconds, so plan for fewer buckets and more objects in most cases. For example, a common design choice is one bucket per user of your project. However, if you're designing a system that adds many users per second, design for many users in a single bucket (with appropriate permissions) so that the bucket creation rate limit doesn't become a bottleneck; see the sketch after this list.

  • Highly available applications should not depend on bucket creation or deletion in their critical path. Bucket names are part of a centralized, global namespace: any dependency on this namespace creates a single point of failure for your application. Because of this and the rate limit described above, the recommended practice for highly available services on Cloud Storage is to pre-create all the buckets they need.

  • There is an update limit on each bucket of once per second, so rapid updates to a single bucket (for example, changing the CORS configuration) won't scale.

  • There is a limit of 100 members holding legacy IAM roles per bucket and a limit of 1500 members holding all IAM roles per bucket. Examples of members include individual users, groups, and domains. See IAM identities.

  • For buckets with Pub/Sub notifications:

    • The bucket can have up to 100 total notification configurations.

    • The bucket can have up to 10 notification configurations set to trigger for a specific event.

    • Each notification configuration can have up to 10 custom attributes.
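
A minimal sketch of the shared-bucket design suggested above, using the Python client library (google-cloud-storage). The bucket name my-app-user-data and the users/ prefix are hypothetical; the point is that per-user isolation comes from object name prefixes in one pre-created bucket, so user signups never touch the bucket creation rate limit.

    from google.cloud import storage

    client = storage.Client()
    # Pre-created bucket shared by all users, per the guidance above.
    bucket = client.bucket("my-app-user-data")

    def upload_user_file(user_id: str, local_path: str, filename: str) -> storage.Blob:
        # Namespace each user's objects under a per-user prefix instead of
        # creating one bucket per user.
        blob = bucket.blob(f"users/{user_id}/{filename}")
        blob.upload_from_filename(local_path)
        return blob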

Objects

  • There is a maximum size limit of 5 TiB for individual objects stored in Cloud Storage.

    • The maximum size of a single upload request is also 5 TiB. For uploads that would take a long time over your connection, consider using resumable uploads in order to recover from intermediate failures. See Resumable uploads for more information.
  • There is a maximum combined size limit for all custom metadata keys and values of 8 KiB per object.

  • There is a write limit to the same object name of once per second, so rapid writes to the same object name won't scale. For more information, see Object immutability.

  • There is no limit to the number of writes across an entire bucket, which includes uploading, updating, and deleting objects. Buckets initially support roughly 1000 writes per second and then scale as needed.

  • There is no limit to the number of reads for objects in a bucket, which includes reading object data, reading object metadata, and listing objects. Buckets initially support roughly 5000 object reads per second and then scale as needed. Note, however, that bandwidth limits apply.

  • There is a limit of 100 access control list entries (ACLs) per object. Members can be individual users, groups, or domains. See ACL scopes.

  • For object composition:

    • Up to 32 objects can be composed in a single composition request; see the batching sketch after this list.

    • While there is no limit to the number of components that make up a composite object, the componentCount metadata associated with a composite object saturates at 2,147,483,647.

    • Composite objects are subject to the overall 5 TiB size limit for objects stored in Cloud Storage.
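
The batching sketch referenced above: because a single compose request accepts at most 32 sources, building a composite from more objects takes multiple requests. A minimal sketch assuming the Python client library; the helper name compose_many and all object names are hypothetical:

    from google.cloud import storage

    def compose_many(bucket: storage.Bucket, source_names: list[str], target_name: str) -> storage.Blob:
        # Chain compose requests: up to 32 sources in the first call, then the
        # running composite plus up to 31 more sources per subsequent call.
        target = bucket.blob(target_name)
        sources = [bucket.blob(name) for name in source_names]
        target.compose(sources[:32])
        for i in range(32, len(sources), 31):
            target.compose([target] + sources[i:i + 31])
        return target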

JSON API requests

  • For batch requests:

    • The total request payload must be less than 10 MB.

    • Do not include more than 100 calls in a single request; see the sketch after this list.
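
A minimal sketch of staying within the 100-call batch limit, assuming the Python client library, where client.batch() sends the queued calls as a single JSON API batch request. The bucket name and tmp/ prefix are hypothetical:

    from google.cloud import storage

    client = storage.Client()
    blobs = list(client.list_blobs("my-bucket", prefix="tmp/"))

    # Issue deletes in chunks of at most 100 calls, the per-batch limit.
    for i in range(0, len(blobs), 100):
        with client.batch():
            for blob in blobs[i:i + 100]:
                blob.delete()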

XML API requests

  • When sending requests through the XML API, there is a limit on the combined size of the request URL and HTTP headers of 16 KB.

  • When listing resources with the XML API, at most 1,000 items are returned per request.

  • When performing an XML API multipart upload, the following limits apply:

    • A multipart upload can have up to 10,000 parts; see the part-size sketch after this list.
    • An individual part has a maximum size limit of 5 GiB.
    • An individual part has a minimum size limit of 5 MiB, unless it's the last part, which has no minimum size limit.
    • Objects assembled from a multipart upload are subject to the overall 5 TiB size limit for objects stored in Cloud Storage.
    • There is no limit to how long a multipart upload and its uploaded parts can remain unfinished or idle in a bucket.
    • There is no limit to the number of multipart uploads that can be in progress simultaneously for a given object.
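
The part-size sketch referenced above: given the limits on part count and part size, a helper can pick the smallest part size that keeps an upload within 10,000 parts. This is plain arithmetic, not a specific API; the helper name is hypothetical:

    MIB = 1024 ** 2
    GIB = 1024 ** 3
    MIN_PART, MAX_PART, MAX_PARTS = 5 * MIB, 5 * GIB, 10_000

    def choose_part_size(object_size: int) -> int:
        # Smallest part size that fits within 10,000 parts, clamped to the
        # 5 MiB minimum; the 5 GiB maximum is unreachable for valid objects
        # (5 TiB / 10,000 parts is well under 5 GiB) but checked anyway.
        size = max(MIN_PART, -(-object_size // MAX_PARTS))  # ceiling division
        if size > MAX_PART:
            raise ValueError("object exceeds the multipart upload limits")
        return size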

HMAC keys for service accounts

  • There is a limit of 5 HMAC keys per service account. Deleted keys do not count towards this limit.
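
A minimal sketch of creating an HMAC key while respecting the 5-key limit, assuming the Python client library; the service account email is hypothetical. The secret is returned only once, at creation time:

    from google.cloud import storage

    client = storage.Client()
    sa_email = "my-sa@my-project.iam.gserviceaccount.com"

    # At most 5 keys per service account; deleted keys do not count.
    existing = list(client.list_hmac_keys(service_account_email=sa_email))
    if len(existing) >= 5:
        raise RuntimeError("HMAC key limit reached; delete an unused key first")

    metadata, secret = client.create_hmac_key(service_account_email=sa_email)
    print(metadata.access_id)  # persist `secret` securely; it is not retrievable later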

Bandwidth

  • There is a default bandwidth quota for data egress from Cloud Storage to Google services of 200 Gbps per project per region. Egress to Cloud CDN is exempt from this limit.

    • Data egress from Cloud Storage dual-regions to Google services counts towards the quota of one of the regions that make up the dual-region. For example, if a Compute Engine instance in us-central1 reads data from a bucket in the nam4 dual-region, the bandwidth usage is counted as part of the overall quota for the us-central1 region.

    • You can request a quota increase for regions on a per-project basis. If you want a quota increase for a dual-region, your increase request should be made for one or both of the regions that make up the dual-region.

      • For requests below 1 Tbps:

        1. Make sure you have enabled the Cloud Storage service storage.googleapis.com.

        2. In the Google Cloud Console, go to the IAM Quotas page.

        3. Search for the metric storage.googleapis.com/google_egress_bandwidth, and select the resource.

        4. Toggle the checkbox for the desired location.

        5. Click Edit Quotas.

        6. Enter your requested quota and business justification.

        7. Click Submit Request.

      • For requests at or above 1 Tbps, please contact your Technical Account Manager or Google representative to file a Cloud Capacity Advisor request on your behalf.

  • There is a default bandwidth quota for data egress from Cloud Storage to Google services of 50 Gbps per project per multi-region. Egress to Cloud CDN is exempt from this limit.

    • We strongly recommend that you use buckets located in regions or dual-regions for workloads with high egress rates to Google services. If that's not possible and you need more than the default quota for multi-regions, contact Google Cloud Support.
  • When a project's egress exceeds its quota in a location, requests to affected buckets can be rejected with a retryable 429 error or can be throttled; a retry sketch follows this list.
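
A minimal retry sketch for the 429 case, assuming the Python client library. Its DEFAULT_RETRY predicate already treats 429 responses as retryable with exponential backoff; extending the deadline gives throttled requests more time to drain. The bucket and object names are hypothetical:

    from google.cloud import storage
    from google.cloud.storage.retry import DEFAULT_RETRY

    client = storage.Client()
    blob = client.bucket("my-bucket").blob("large-object")

    # Allow up to 5 minutes of backoff-and-retry before giving up.
    patient_retry = DEFAULT_RETRY.with_deadline(300.0)
    blob.download_to_filename("/tmp/large-object", retry=patient_retry)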

Bandwidth monitoring

Cloud Storage provides bandwidth monitoring for you to track Google egress bandwidth usage of your project's buckets. Bandwidth monitoring is aggregated by region and tracks usage for the last 30 days. Bandwidth monitoring is not available for multi-regions.

In order to be tracked by bandwidth monitoring:

  • You must have the Cloud Storage service storage.googleapis.com enabled for your project.

  • The usage must be by Google Cloud resources other than Cloud Storage buckets.

  • If the bucket is located in a region, the usage must be by resources located in the same region.

  • If the bucket is located in a dual-region, the usage must be by resources located in either of the regions that make up the dual-region.

  • The usage must be either from the JSON API GET Object method or the XML API GET Object method.

To view bandwidth monitoring:

Open Cloud Storage bandwidth monitoring

Multi-region monitoring

While bandwidth monitoring for multi-regions is not available, you can use the network/sent_bytes_count metric as an approximation; a query sketch follows this list. When doing so, keep in mind the following:

  • network/sent_bytes_count measures bytes, whereas networking metrics typically use bits. For example, a measurement of 1 GiB/s for network/sent_bytes_count is equivalent to 8 Gibit/s.

  • network/sent_bytes_count tracks all the traffic sent over the network, not only the egress to Compute Engine VMs.

  • network/sent_bytes_count metrics are sampled every 60 seconds. If the traffic is spiky, it's possible that requests are throttled for a short period of time even though the average egress over 60 seconds is below the limit.
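
The query sketch referenced above: a minimal example of reading network/sent_bytes_count with the Cloud Monitoring client library for Python (google-cloud-monitoring). The project ID is hypothetical, and remember that the returned values are bytes, not bits:

    import time
    from google.cloud import monitoring_v3

    client = monitoring_v3.MetricServiceClient()
    now = int(time.time())
    interval = monitoring_v3.TimeInterval(
        {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
    )
    results = client.list_time_series(
        request={
            "name": "projects/my-project",
            "filter": 'metric.type = "storage.googleapis.com/network/sent_bytes_count"',
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )
    for series in results:
        for point in series.points:
            # Multiply byte counts by 8 to compare against bit-based limits.
            print(point.interval.end_time, point.value.int64_value * 8)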