Quotas & Limits

This page describes quotas and request limits for Cloud Storage.

Buckets

  • There is a per-project rate limit on bucket creation and deletion of approximately 1 operation every 2 seconds, so plan on fewer buckets and more objects in most cases. For example, a common design choice is to use one bucket per user of your project to make permission management straightforward when using end user credentials to create and access objects. However, if you're designing a system that adds many users per second or where objects are created using robot credentials, then design for many users in one bucket (with appropriate ACLs) so that the bucket creation rate limit doesn't become a bottleneck.

  • Highly available applications should not depend on bucket creation or deletion in their critical path. Bucket names are part of a centralized, global namespace: any dependency on this namespace creates a single point of failure for your application. Because of this and the 1-operation-every-2-seconds limit mentioned above, the recommended practice for highly available services on Cloud Storage is to pre-create all the buckets they need; a minimal sketch of this pattern follows this list.
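
As an illustration, here is a minimal sketch using the google-cloud-storage Python client; the bucket name, the save_user_object helper, and the per-user prefix scheme are assumptions for the example, and the bucket is created once outside the request path.

    from google.cloud import storage

    client = storage.Client()
    BUCKET_NAME = "my-app-data"  # hypothetical bucket name

    # Pre-create one shared bucket (for example, at deployment time) so the
    # roughly 1-operation-every-2-seconds bucket limit never sits in the
    # request path.
    bucket = client.bucket(BUCKET_NAME)
    if not bucket.exists():
        bucket = client.create_bucket(BUCKET_NAME)

    def save_user_object(user_id, object_name, data):
        # Many users share one bucket; per-user object prefixes (plus ACLs or
        # IAM) keep access separated without creating a bucket per user.
        blob = bucket.blob(f"users/{user_id}/{object_name}")
        blob.upload_from_string(data)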

Objects

  • There is a maximum size limit of 5 TB for individual objects stored in Cloud Storage.

  • There is an update limit of once per second on each individual object, so rapid writes to a single object won't scale. For more information, see Object immutability in Key Terms.

  • There is no limit to writes across multiple objects. Buckets initially support roughly 1000 writes per second and then scale as needed.

  • There is no limit to reads of an object. Buckets initially support roughly 5000 reads per second and then scale as needed.

  • Performance is much better for publicly cacheable objects. If you have an object that is used to control many clients and you therefore want to disable caching so that they always receive the latest data:

    • Consider instead setting the object's Cache-Control metadata to public with a max-age of 15-60 seconds, as shown in the first sketch after this list. Most applications can tolerate up to a minute of staleness, and the higher cache hit rate will improve performance drastically.

    • Proxy data transfers through a Google App Engine application in the same location as your bucket.

    • Use Cache-Control: no-cache for an object to indicate that edge caches must not serve it for subsequent requests without first revalidating it with Cloud Storage.

    For more information about Cache-Control directives, see RFC 7234: Cache-Control.

  • There is a per-project rate limit on object composition of approximately 200 components composed per second, so plan your use of composition accordingly; a composition sketch appears after this list.
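
For the caching guidance above, here is a minimal sketch, assuming the google-cloud-storage Python client and hypothetical bucket and object names; the object is assumed to already exist.

    from google.cloud import storage

    client = storage.Client()
    blob = client.bucket("my-app-data").blob("config/latest.json")  # hypothetical names

    # Let caches serve the object for up to 60 seconds instead of disabling
    # caching entirely; clients see data that is at most a minute old, and
    # the higher cache hit rate improves read performance.
    blob.cache_control = "public, max-age=60"
    blob.patch()

And a sketch of object composition under the same assumptions, combining a few hypothetical source objects into one destination object:

    from google.cloud import storage

    bucket = storage.Client().bucket("my-app-data")  # hypothetical bucket name

    # Compose three source objects into a single destination object; the
    # per-project composition rate is limited to roughly 200 components
    # per second.
    sources = [bucket.blob(f"logs/part-{i}") for i in range(3)]  # hypothetical names
    bucket.blob("logs/combined").compose(sources)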

XML API requests

  • When sending requests through the XML API, the combined size of the request URL and HTTP headers is limited to 16 KB.

Google Storage Transfer API requests

This section describes the limits for Google Storage Transfer API read and write requests.

Read requests

The read operations for both transferJobs and transferOperations are get and list. The limits for read requests are as follows:

  • Maximum requests per 100 seconds per user: 514
  • Maximum requests per 100 seconds per project: 2500
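
As a concrete example of a call that counts against these read limits, here is a minimal sketch, assuming the google-api-python-client discovery client for storagetransfer v1, Application Default Credentials, and a hypothetical project ID.

    from googleapiclient import discovery

    client = discovery.build("storagetransfer", "v1")

    # transferJobs.list is a read operation, so each call counts against the
    # per-user and per-project read-request limits above.
    response = client.transferJobs().list(
        filter='{"project_id": "my-project"}'  # hypothetical project ID
    ).execute()
    for job in response.get("transferJobs", []):
        print(job["name"])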

Create requests for transferJobs operations

The limits for transferJobs create requests are as follows:

  • Maximum requests per user per day: 200
  • Maximum requests per 100 seconds per user: 100
  • Maximum requests per 100 seconds per project: 1000

Patch requests for transferJobs operations

The limits for transferJobs patch requests are as follows:

  • Maximum requests per 100 seconds per user: 100
  • Maximum requests per 100 seconds per project: 1000

Write requests for transferOperations operations

The transferOperations write operations are pause, resume, and cancel. The limits for transferOperations write requests are as follows:

  • Maximum requests per 100 seconds per user: 100
  • Maximum requests per 100 seconds per project: 1500
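
For example, pausing a running transfer operation is one such write request. A minimal sketch, again assuming the discovery client, Application Default Credentials, and a hypothetical operation name:

    from googleapiclient import discovery

    client = discovery.build("storagetransfer", "v1")

    # transferOperations.pause is a write operation, so it counts against the
    # per-user and per-project write-request limits above.
    client.transferOperations().pause(
        name="transferOperations/EXAMPLE_OPERATION",  # hypothetical operation name
        body={},
    ).execute()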
