Quotas and limits

This page provides information about Cloud SQL quotas and limits. Quotas are applied per project; limits are applied to the instance or to the project, depending on the limit.


A quota restricts how much of a Google Cloud resource your Google Cloud project can use. Cloud SQL is an example of this type of resource.

For Cloud SQL, quotas are part of a system that does the following:

  • Monitor your use or consumption of Cloud SQL instances
  • Restrict your consumption of these instances for reasons including ensuring fairness and reducing spikes in usage
  • Maintain configurations that enforce prescribed restrictions automatically
  • Provide a means to make or request changes to the quota

When a quota is exceeded, in most cases, the system blocks access to the relevant instance immediately, and the task that you're trying to perform fails. Quotas apply to each Google Cloud project and are shared across all instances that use that project.

Permissions to check and increase your quotas

To check and increase your quotas, you need specific Identity and Access Management (IAM) permissions.

By default, these permissions are included in the basic IAM roles of Editor and Owner and in the predefined Quota Administrator role. If you need additional permissions, then contact your quota administrator.

Check your quotas

To check the current quotas for resources in your project, go to the Quotas page in the Google Cloud console and filter for Cloud SQL Admin API. These quotas apply only to API calls; they don't include database queries.

Increase your quotas

As your use of Google Cloud expands over time, your quotas can increase accordingly. If you expect a notable upcoming increase in usage, then make your request a few days in advance to ensure your quotas are adequately sized.

There's no charge for requesting a quota increase. Your costs increase only if you use more resources.

To increase your quotas, follow these steps:

  1. In the Google Cloud console, go to the Quotas page.

  2. Filter for the Cloud SQL Admin API service.

    If you don't see this service, then enable the Cloud SQL Admin API.

  3. Select the checkboxes next to the quotas that you want to change, and then click Edit quotas.

  4. For each quota that you selected, in the New limit field, enter the value for the desired limit.

  5. In the Reason description field, enter a reason for your quota increase request, and then click Done.

  6. Click Next.

  7. Fill out your name, email, and phone number, and then click Submit request.

    If you have trouble increasing your quotas, then file a support case.

How resource quotas are replenished

Daily quotas are replenished at midnight Pacific Time.

Quotas and resource availability

Resource quotas are the maximum amount of resources you can create for that resource type if those resources are available. Quotas do not guarantee that resources are available at all times. If a resource isn't physically available for your region, then you can't create new resources of that type, even if you still have remaining quota in your project.

Rate quotas

Cloud SQL supports rate quotas, which are also known as rate limits or API quotas. Rate quotas define how many requests you can make to the Cloud SQL Admin API.

Each rate quota corresponds to all requests for a category of one or more Cloud SQL Admin API methods. Rate quotas reset after a time interval that's specific to Cloud SQL (for example, the number of API requests per minute).

When you use the gcloud CLI or the Google Cloud console, you're making requests to the Cloud SQL Admin API and these requests count toward your rate quotas. If you use service accounts to access the API, then these requests also count toward your rate quotas.

Cloud SQL enforces and refills rate quotas automatically over 60-second intervals. If your project reaches a rate quota's limit any time within 60 seconds, then you must wait for that quota to refill before making more requests in that category. If your project exceeds this limit, then you receive an HTTP 429 status code with the reason of rateLimitExceeded.
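
When a request fails with rateLimitExceeded, a common client-side pattern is to retry with exponential backoff so that the 60-second window has time to refill. The following is a minimal sketch; `RateLimitError` is a hypothetical stand-in for whatever exception your client library raises on an HTTP 429 response:

```python
import random
import time

class RateLimitError(Exception):
    """Hypothetical stand-in for an HTTP 429 rateLimitExceeded response."""

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry a callable that raises RateLimitError when rate limited.

    Delays grow exponentially between attempts, giving the 60-second
    quota window time to refill before the next request.
    """
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            # base_delay, 2*base_delay, 4*base_delay, ... plus jitter so
            # that many clients don't retry in lockstep.
            time.sleep(base_delay * 2 ** attempt + random.random() * base_delay)
```

In practice, you would replace `request_fn` with the actual Cloud SQL Admin API call made by your client library.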

The Cloud SQL Admin API is divided into the following categories:

  • Connect: look up values that are required to connect to a Cloud SQL database.
  • Get: retrieve information about a resource (for example, an instance, an operation, or a backup).
  • List: list resources.
  • Mutate: create, modify, and delete resources.
  • Default per region: interact with a Cloud SQL instance without connecting to, retrieving, listing, or mutating it.
  • Default: list database flags and machine types (tiers) for Cloud SQL instances. The APIs in this category are global.

Cloud SQL imposes rate quotas for each category per minute, per user, and per region. For each unique combination of these attributes, Cloud SQL imposes a separate rate limit.
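
The per-combination behavior can be pictured as a counter keyed by (category, user, region); each key has its own budget. The limit values in this sketch are illustrative, not Cloud SQL's actual defaults:

```python
from collections import defaultdict

class RateQuotaTracker:
    """Track per-minute usage per (category, user, region) combination.

    Each unique combination gets its own counter, mirroring how Cloud SQL
    enforces a separate rate limit for each combination.
    """

    def __init__(self, limits):
        self.limits = limits          # category -> requests per minute (illustrative)
        self.used = defaultdict(int)  # (category, user, region) -> count this minute

    def allow(self, category, user, region):
        key = (category, user, region)
        if self.used[key] >= self.limits[category]:
            return False  # this combination's per-minute limit is exhausted
        self.used[key] += 1
        return True
```

Note that exhausting the quota for one user or region leaves every other combination unaffected.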

The Cloud SQL Admin API produces detailed metrics that can help you track your usage of the API, monitor performance of your Cloud SQL instance and the API, and discover problems between your instance and the API. For more information, see Monitoring API usage.

The following table shows the metric for each category. The specific APIs and the default limit for each category appear on the Quotas page for your project.

Category            Metric
Connect             The number of requests made per minute, per user, per region, to use the APIs in this category.
Get                 The number of requests made per minute, per user, per region, to use the APIs in this category.
List                The number of requests made per minute, per user, per region, to use the APIs in this category.
Mutate              The number of requests made per minute, per user, per region, to use the APIs in this category.
Default per region  The number of default regional requests made per minute, per user, per region, to use the APIs in this category.
Default             The number of default requests made per minute, per user, to use the APIs in this category. The APIs in this category are global.

Limits

There are restrictions on some Cloud SQL resources that aren't replenished periodically and aren't shown on the Quotas page in the Google Cloud console. Some limits can be increased, while others can't.

Configurable limits

Instances per project

By default, you can have up to 1,000 instances per project. In some cases, the limit might be 100 instead. File a support case to request an increase. Read replicas count as instances.

We recommend that you distribute your instances across multiple projects to reduce your reliance on quota increase requests and avoid hitting the per-project limit.

Maximum concurrent connections


MySQL

You can use the max_connections flag to configure a connections limit. MySQL allows a maximum of 32,000 connections. You can find the connections limit for your instance by connecting to your database and running this command: SHOW VARIABLES LIKE 'max_connections';


PostgreSQL

You can use the max_connections flag to configure connections limits. When you create a Cloud SQL for PostgreSQL instance, the machine type configuration settings automatically adjust the range of memory sizes available, based on the number of cores that you select. This also determines the initial default connection limits for the instance.

You can find the connection limits for your instance by connecting to your database and running this command: SELECT * FROM pg_settings WHERE name = 'max_connections';

The value on replicas must be greater than or equal to the value on the primary. Changes on the primary propagate to replicas that have a value that's lower than the new value on the primary, or that haven't been changed from the default value.

If the value on the primary is default, then the value for the replicas can't be changed. To change the value for the replicas, first, set the value on the primary to an integer.
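
The propagation rule can be modeled in a few lines. The following is a simplified sketch, in which None stands for a replica that's still at the default value:

```python
def propagate_max_connections(primary_value, replica_values):
    """Simplified model of how a new max_connections value on the primary
    propagates to its read replicas.

    Replicas whose value is lower than the new primary value, or that are
    still at the default (modeled as None), are raised to match the primary;
    replicas already at a higher value keep their own setting.
    """
    return [
        primary_value if value is None or value < primary_value else value
        for value in replica_values
    ]
```

For example, raising the primary to 500 updates a default replica and a replica set to 300, but leaves a replica set to 800 unchanged.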

SQL Server

The actual number of user connections allowed depends on your version of SQL Server, the limits of your applications, and your hardware. SQL Server allows a maximum of 32,767 user connections.

For information about configuring user connections in SQL Server, see the reference documentation.


Quota usage for Cloud SQL Connectors

The Cloud SQL Auth Proxy and other Cloud SQL Connectors use the Cloud SQL Admin API's quota. The connectors run a refresh operation approximately every hour. Each refresh makes two API calls: one retrieves the instance metadata, and the other retrieves an ephemeral certificate.

The quota usage is calculated as:

Quota usage = connector processes running * instances * 2 API calls per hour

For example, if you have three processes running a connector and the connector is configured to connect to two Cloud SQL instances, then your hourly quota consumption is 12 (3 processes * 2 instances * 2 API calls).
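
The same calculation as a helper function (the function name is illustrative):

```python
def connector_quota_usage(processes, instances, api_calls_per_refresh=2):
    """Estimate hourly Cloud SQL Admin API quota usage for connectors.

    Each connector process refreshes about once per hour, and each refresh
    makes api_calls_per_refresh API calls (instance metadata plus an
    ephemeral certificate) for every instance it's configured to reach.
    """
    return processes * instances * api_calls_per_refresh
```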

If you're getting started with Cloud SQL, then keep the preceding formula in mind and consider the following:

  • How quickly you scale up new database clients

  • How quickly you add more instances

  • Whether to use a different service account for each application

Cloud SQL IAM database authentication

There's a per-minute login quota for each instance, which includes both successful and unsuccessful logins. When the quota is exceeded, logins are temporarily unavailable. We recommend that you avoid frequent logins and that you restrict access by using authorized networks. The quota for login authorization is 12,000 logins per minute, per instance.

Forwarding rule quota

Each Cloud SQL instance consists of a forwarding rule and a load balancer. The forwarding rule is subject to quota limits that depend on the kind of load balancer it points to. There are multiple quotas for each kind of forwarding rule: per project, per network, and per peering group. For Cloud SQL, there's also an override rule that ties the per-network and per-peering-group quotas together: when the per-network quota for a producer network is increased, the per-peering-group quota is increased to the same value.

The Cloud SQL producer VPC network is peered with your VPC network, so the quotas most often reached are the per-network quota for the Cloud SQL producer network and the per-peering-group quota for your VPC network.

When a quota is reached, the following operations can fail:

  • Create operations: Creating a new instance requires a new forwarding rule.

  • Update operations: Switching an instance to a different network requires a new forwarding rule in the new network.

  • Maintenance operations: Forwarding rules are recreated during maintenance.

To avoid these issues, consider limiting the total number of instances per network to fewer than 500.

If you experience an issue, then file a support case to request an increase to the relevant quotas.

Fixed limits


IOPS is the number of input/output (read/write) operations that your disk can process per second.

Cloud SQL uses Compute Engine virtual machines (VMs) with persistent storage disks. For details on specific VM performance characteristics, see the maximum sustained IOPS table on the persistent disk performance page.

Table limit

Cloud SQL for MySQL has a limit of 50,000 tables by default, or 500,000 tables per instance if you meet the minimum hardware requirements of at least 32 cores and at least 200 GB of memory. For optimal performance, we recommend that the number of tables in a single database not exceed 50,000.

Instances that exceed these limits aren't covered by the SLA. When a table reaches 16 TB, the maximum size for Linux partitions, no additional data can be added to it.

The memory needed by your instance depends on various factors. To learn more about how Cloud SQL for MySQL uses memory, see How MySQL uses memory.

If your active table count is larger than the Cloud SQL defaults, or if the overall cache size is very large, then adjust your instance. To maintain optimal performance, you can:

  • Upgrade to the Cloud SQL Enterprise Plus edition for higher memory options.
  • Upgrade your Cloud SQL machine to add memory to your instance.
  • Reduce the value of the innodb_buffer_pool_size database flag.

The table_open_cache and table_definition_cache flags can be used to modify the Cloud SQL for MySQL table cache. You can use the performance schema to get an estimate of the table cache size for your instance.

If the number of active tables is significantly larger than both the Cloud SQL table defaults and the open tables recommendation by MySQL, then Cloud SQL recommends configuring the table_open_cache and table_definition_cache database flags with your instance's active table count. For more information, see How MySQL Opens and Closes Tables.
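
As a rough sketch of that guidance, assuming you already know your instance's active table count and the caches' current sizes (the function name and the max-based rule are illustrative, not a Cloud SQL API):

```python
def table_cache_flags(active_tables, current_open_cache, current_definition_cache):
    """Suggest values for the table_open_cache and table_definition_cache flags.

    Following the guidance above: when the active table count exceeds a
    cache's current size, set that flag to the active table count;
    otherwise keep the current value.
    """
    return {
        "table_open_cache": max(active_tables, current_open_cache),
        "table_definition_cache": max(active_tables, current_definition_cache),
    }
```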

Operations limit

Micro and small tier machine types limit the number of concurrent operations. Exceeding these limits causes a Too many operations error.

The db-custom-1-3840 (single CPU) machine type limit is 50 concurrent operations.

Metrics collection limit

PostgreSQL metrics are collected for up to 500 databases. If an instance has more than 500 databases, then only the 500 databases with the highest transaction counts are included for a given metric.
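
The selection rule can be sketched as follows (the function name is illustrative):

```python
def metric_databases(transaction_counts, cap=500):
    """Return the database names included in a given metric.

    transaction_counts maps database name -> transaction count. When an
    instance has more than `cap` databases, only the `cap` databases with
    the highest transaction counts are reported.
    """
    ranked = sorted(transaction_counts, key=transaction_counts.get, reverse=True)
    return ranked[:cap]
```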

Cloud SQL storage limits

  • Dedicated core: Up to 64 TB.

  • Shared core: Up to 3 TB.

    See Instance pricing for more information.

Cloud SQL storage options

To configure a storage option for best performance, it's important to understand your workload and choose the appropriate disk type and size. For more information on the available choices for Cloud SQL, see instance settings.

App Engine limits

Each App Engine instance running in the standard environment can have no more than 100 concurrent connections to a Cloud SQL instance. For PHP 5.5 apps, the limit is 60 concurrent connections.

App Engine applications are subject to request time limits depending on usage and environment. For more information, see how instances are managed in the App Engine standard and flexible environments.

App Engine applications are also subject to additional App Engine quotas and limits as discussed on the App Engine Quotas page.

Cloud Run limits

If you use the built-in Cloud SQL connection on Cloud Run, then Cloud Run container instances are limited to 100 connections per Cloud SQL database.

Each instance of a Cloud Run service or job can have 100 connections to the database, and as this service or job scales, the total number of connections per deployment can grow.

This limit doesn't apply when using other connection methods such as the Cloud SQL Auth Proxy in a sidecar, the Cloud SQL Language Connectors, or when connecting directly to the IP address of the Cloud SQL instance.
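
To estimate the worst-case demand against your database's own connection limit, multiply the per-instance limit by the maximum instance count. The following is a small illustrative helper:

```python
def peak_cloud_run_connections(max_instances, per_instance_limit=100):
    """Worst-case connection demand from a Cloud Run deployment that uses
    the built-in Cloud SQL connection.

    Each container instance can hold up to per_instance_limit connections,
    so total demand grows linearly as the service scales out. Compare the
    result against the database's connection limit (max_connections).
    """
    return max_instances * per_instance_limit
```

For example, a service allowed to scale to 8 instances could demand up to 800 connections, which may require raising max_connections or capping the instance count.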

Cloud Functions limits

Cloud Functions (1st gen) limits concurrent executions to one per instance, so a single 1st gen function instance never processes two requests at the same time. In most situations, only a single database connection is needed.

Cloud Functions (2nd gen) is based on Cloud Run and has a limit of 100 database connections per instance.