This page provides information about the settings available for Cloud SQL instances.
| Setting | Modifiable after creation? | Possible values |
|---|---|---|
| Instance ID | N | Composed of lowercase letters, numbers, and hyphens; must start with a letter. |
| Zone | Y | The possible values depend on the region. |
| Database version | | Supported PostgreSQL versions, listed by console string and API enum string (for example, PostgreSQL 14). |
| Private IP | After it is configured, it cannot be disabled. | Configured or not. |
| Public IP | Y | Enabled or disabled. |
| Authorized networks | Y | If Public IP is enabled, the IP addresses authorized to connect to the instance. You can also specify this value as an IP address range, in CIDR notation. |
| Machine type | Y | Select from Shared core, Lightweight, Standard (most common), or High memory. Select the Custom radio button to create a flexible machine type. Learn more |
| vCPUs | Partial for shared vCPU | 1 to 96 (must be either 1 or an even number). |
| Memory | Y | 0.9 to 6.5 GB per vCPU (must be a multiple of 256 MB and at least 3.75 GB). |
| Storage type | | SSD (default value) or HDD. |
| Storage capacity | Y (can only be increased) | Instances with at least one unshared vCPU can have up to 64 TB. Instances with a shared vCPU can have up to 3054 GB. Note that creating or increasing storage capacity to 64 TB might increase the latency of common operations, such as backups, depending on your workload. |
| Enable automatic storage increases | Y | On (default value) or Off. |
| Automatic storage increase limit | Y | In GBs. 0 (the default) means there is no limit. |
| Automated backups | Y | On (default value) or Off. |
| Location options | Y | Multi-region (default value) or Region. A drop-down menu lists multi-regions when you select Multi-region, or regions when you select Region. |
| Enable point-in-time recovery | Y | On (default value) or Off. |
| Availability: Single zone | Y | On (default value). |
| High availability (regional) | Y | Off (default value). |
| Maintenance: Preferred window | Y | Any (default value), or a day of the week. |
| Maintenance: Order of update | Y | Any (default value). |
| Database flags | Y | See Configuring Database Flags. |
- Instance ID
The instance ID is the name of the instance. It is used to uniquely identify your instance within the project. Choose an instance name that is aligned with the purpose of the instance when possible.
You do not need to include the project ID in the instance name; this is done automatically where appropriate (for example, in the log files).
The total length of project-ID:instance-ID must be 98 characters or less.
You cannot reuse an instance name for up to a week after you have deleted the instance. For more information, see the naming guidelines.
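As a rough illustration of these naming rules, a client-side check might look like the following sketch. The function name and regex are illustrative only, based solely on the constraints above; Cloud SQL performs its own validation.

```python
import re

# Instance IDs: lowercase letters, numbers, and hyphens; must start with a letter.
_INSTANCE_ID_RE = re.compile(r"^[a-z][a-z0-9-]*$")

def is_valid_instance_name(project_id: str, instance_id: str) -> bool:
    """Check the documented naming constraints (illustrative only)."""
    if not _INSTANCE_ID_RE.match(instance_id):
        return False
    # The total length of project-ID:instance-ID must be 98 characters or less.
    return len(f"{project_id}:{instance_id}") <= 98

print(is_valid_instance_name("my-project", "orders-db"))  # True
print(is_valid_instance_name("my-project", "1-orders"))   # False: starts with a digit
```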
- Region
The Google Cloud region where your instance is located. You can only set the region during instance creation. To improve performance, keep your data close to the services that need it. For more information, see Instance Locations.
- Zone
The Google Cloud zone where your instance is located. If you are connecting from a Compute Engine instance, select the zone where the Compute Engine instance is located. Otherwise, accept the default zone. You can edit the instance later to change the zone, if needed. For more information, see Instance Locations.
- Machine Type
Determines memory and virtual cores available to your Cloud SQL instance.
For performance-sensitive workloads, such as online transaction processing (OLTP), make sure that your instance has enough memory to contain the entire working set. However, there are other factors that can impact memory requirements, such as number of active connections, and internal overhead processes. Perform load testing to avoid performance issues in production.
When you configure your instance, select enough memory and vCPUs to handle your workload, and upgrade as your workload increases. A machine configuration with insufficient vCPUs could lose its SLA coverage. Learn more.
You can also create flexible instance configurations using the `gcloud sql instances create` command. Flexible instance configurations let you select the amount of memory and CPUs that your instance needs. This flexibility lets you choose the appropriate VM shape for your workload. Machine type names use the format db-custom-_CPU_-_RAM_, where _CPU_ is the number of CPUs in the machine, and _RAM_ is the amount of memory in the machine. When selecting the number of CPUs and amount of memory, there are some restrictions on the configuration you choose:
- vCPUs must be either 1 or an even number between 2 and 96.
- Memory must be:
- 0.9 to 6.5 GB per vCPU
- A multiple of 256 MB
- At least 3.75 GB (3840 MB)
The db-f1-micro and db-g1-small machine types are configured to use a shared-core CPU.
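The constraints above can be sketched as a small validation routine. This is an illustration of the documented limits only (the function name is hypothetical, and Cloud SQL performs its own validation server-side):

```python
def validate_custom_machine_type(vcpus: int, ram_mb: int) -> list[str]:
    """Check a db-custom-<vCPUs>-<RAM> shape against the documented limits.

    Illustrative sketch only; not part of any Cloud SQL API.
    """
    problems = []
    # vCPUs must be either 1 or an even number between 2 and 96.
    if not (vcpus == 1 or (2 <= vcpus <= 96 and vcpus % 2 == 0)):
        problems.append("vCPUs must be 1 or an even number between 2 and 96")
    # Memory must be a multiple of 256 MB and at least 3.75 GB (3840 MB).
    if ram_mb % 256 != 0 or ram_mb < 3840:
        problems.append("memory must be a multiple of 256 MB and at least 3840 MB")
    # Memory must be 0.9 to 6.5 GB per vCPU.
    per_vcpu_gb = ram_mb / 1024 / vcpus
    if not (0.9 <= per_vcpu_gb <= 6.5):
        problems.append("memory must be 0.9 to 6.5 GB per vCPU")
    return problems

print(validate_custom_machine_type(4, 15360))  # [] -> db-custom-4-15360 is valid
print(validate_custom_machine_type(3, 15360))  # odd vCPU count other than 1 is rejected
```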
- vCPUs
The number of CPUs for your instance. You can also choose to create an instance with less than one vCPU (a shared-core instance, or shared vCPU).
- Memory
The amount of memory available for your instance. For performance-sensitive workloads such as online transaction processing (OLTP), make sure that your instance has enough memory to contain the entire working set. However, there are other factors that can impact memory requirements, such as the number of active connections. Perform load testing to avoid performance issues in production.
- Database version
- Unless you need a capability provided only by a specific version, accept the default database version (PostgreSQL 12). For `gcloud` command and REST API usage, see the reference documentation.
- Storage type
- Choosing SSD, the default value, provides your instance with SSD storage. SSDs provide lower latency and higher data throughput. If you do not need high-performance access to your data, for example for long-term storage or rarely accessed data, you can reduce your costs by choosing HDD.
- Storage capacity
Choose a capacity to fit your database size. After you have created your instance, you can manually increase the storage capacity by editing your instance configuration, but you cannot decrease it. Increasing the storage capacity does not cause downtime.
The amount of storage capacity allocated for your instance affects the cost of your instance. For more information, see Storage and Networking Pricing.
For read replicas, the storage capacity must always be at least as high as the storage capacity of the primary instance. When a primary instance is resized, all read replicas are resized, if needed, so that they have at least as much storage capacity as the updated primary instance.
- Enable automatic storage increases
If you enable this setting, Cloud SQL checks your available storage every 30 seconds. If the available storage falls below a threshold size, Cloud SQL automatically adds additional storage capacity. If the available storage repeatedly falls below the threshold size, Cloud SQL continues to add storage until it reaches the maximum of 64 TB.
The automatic storage increase setting of a primary instance automatically applies to any read replicas of that instance. The automatic storage increase setting cannot be independently set for read replicas.
The threshold size depends on the amount of storage currently provisioned for your instance; it cannot be larger than 25 GB.
For instances provisioned with 500 GB of storage (or more), the threshold is always 25 GB.
For instances provisioned with less than 500 GB of storage, this formula is used to calculate the threshold:
5 + (provisioned storage)/25
The result of the division is rounded down to the nearest whole number.
Threshold calculation for an instance with 66 GB storage capacity:
5 + (66/25) = 5 + 2.64, rounded down to 5 + 2 = 7 GB
Threshold calculation for an instance with 1000 GB storage capacity:
5 + (1000/25) = 5 + 40 = 45, capped at the maximum value of 25 GB
Amount of storage added
The amount of storage added to the instance is equal to the threshold size, which cannot be larger than 25 GB.
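The threshold formula above can be sketched as a small function. This is an illustration of the documented calculation, not an official API, and actual Cloud SQL behavior is authoritative:

```python
def storage_increase_threshold_gb(provisioned_gb: int) -> int:
    """Approximate the automatic storage increase threshold (illustrative)."""
    if provisioned_gb >= 500:
        # For instances provisioned with 500 GB or more, the threshold is always 25 GB.
        return 25
    # 5 + (provisioned storage)/25, division rounded down, capped at 25 GB.
    return min(25, 5 + provisioned_gb // 25)

print(storage_increase_threshold_gb(66))    # 7
print(storage_increase_threshold_gb(1000))  # 25
```

Because the amount of storage added equals the threshold, the same function also gives the size of each automatic increase.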
Considerations for large disks
When automatic storage increase is enabled and the disk is large (for example, greater than 1 TB), the disk remains at 99% capacity all of the time. The disk might appear to be full, but it is not.
Before an operation that rapidly grows disk space usage, such as a large import or a query that requires a large temporary table, manually resize the disk rather than depending on automatic growth to keep up.
- Automatic storage increase limit
If you enable the automatic storage increase setting, you can provide a specific limit on how large the storage for your instance can automatically grow. You cannot decrease storage size, so this limit can prevent your instance size from growing too large (due to a temporary increase in traffic). Keep in mind that when an instance becomes unable to add storage that it needs, the instance likely stops accepting incoming connections and could go offline.
Setting this limit to zero, the default value, means that there is no limit (other than the maximum available storage for the instance tier).
To set the limit when you create the instance, use the `--storage-auto-increase-limit=INTEGER_VALUE_OF_GB` parameter, as described on the create instance page. To set the limit on an existing instance, use the same parameter with the `gcloud beta sql instances patch` command.
The automatic storage increase limit setting of a primary instance automatically applies to any read replicas of that instance. The automatic storage increase limit setting cannot be independently set for read replicas.
- Automated backups and point-in-time recovery
These settings determine whether automated backups are performed and
whether write-ahead logging is enabled.
Both options add a small performance cost and use additional storage, but
are required for the creation of replicas and clones, and for point-in-time
recovery. When you select this option, you can also select a timeframe when
automated backups occur. Automated backups happen daily, during the time
window you choose. At the end of seven days, the oldest backup is deleted.
For information about point-in-time recovery, see Overview of point-in-time recovery.
- Retention settings for automated backups
- The default value for the number of retained backups is 7, but you can change it to any value in the range 1 to 365.
See Automated backup and transaction log retention for more information.
- Location options
You can choose to store backups in multiple or single regions. Multi-region is the default, and the recommended choice. Backups are stored in regions that are closest to the instance.
You also have the option of selecting a custom location for your backup. Only use this option if required by regulation or if an organization policy requires your backups to be in specific multiple or single regions. See Custom locations for more information.
- Enable point-in-time recovery
Point-in-time recovery lets you recover from a backup, starting from a specific point in time.
For information about point-in-time recovery, see Overview of point-in-time recovery.
- Availability: Zonal
Puts your instance and backups in a single zone. When you select this option, there is no failover in the event of an outage.
- High availability (regional)
When you select High availability (regional), if there is an outage, your instance fails over to another zone in the region where your instance is located, as long as the failover zone is not having an outage. It is recommended that you select High availability (regional) for instances in your production environment.
- Maintenance window
The day and hour when disruptive updates (updates that require an instance restart) to this Cloud SQL instance can be made. If the maintenance window is set for an instance, Cloud SQL does not initiate a disruptive update to that instance outside of the window. The update is not guaranteed to complete before the end of the maintenance window, but restarts typically complete within a couple of minutes.
Read replicas do not support the maintenance window setting; they can experience a disruptive upgrade at any time.
Failover events do not occur during a maintenance window.
- Maintenance timing
This setting lets you provide a preference about the relative timing of instance updates that require a restart. Receiving updates earlier lets you test your application with an update before your instances that get the update later.
The relative timing of updates is not observed between projects; if you have instances with an earlier timing setting in a different project than your instances with a later timing setting, Cloud SQL makes no attempt to update the instances with the earlier timing setting first.
If you do not set the Maintenance timing setting, Cloud SQL chooses the timing of updates to your instance (within its Maintenance window, if applicable).
The Maintenance timing setting does not affect the software version Cloud SQL applies to your instance.
- Private IP
- Configures your instance to use private IP. Learn more.
- Public IP
- If enabled, your instance is allocated a public IPv4 address. When you disable Public IP, that address is released; you can reenable Public IP later, but you receive a different IPv4 address. By default, the public IP address is blocked for all addresses. Use Authorized networks to enable access.
- Authorized networks
You can add specific IP addresses, or ranges of addresses, to open your instance to those addresses.
For information about configuring IP addresses, see Configuring IP connectivity.
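To see how a CIDR range in Authorized networks maps to individual client addresses, Python's standard `ipaddress` module can model the membership check. This is purely illustrative (the addresses below are documentation examples); the real enforcement happens inside Cloud SQL:

```python
import ipaddress

def is_authorized(client_ip: str, authorized_networks: list[str]) -> bool:
    """Return True if client_ip falls inside any authorized CIDR range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(net) for net in authorized_networks)

# Example entries: a /24 range and a single address expressed as /32.
networks = ["203.0.113.0/24", "198.51.100.42/32"]
print(is_authorized("203.0.113.7", networks))  # True: inside the /24 range
print(is_authorized("192.0.2.1", networks))    # False: not in any authorized range
```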
- Activation policy
- You change the activation policy by starting and stopping the instance. Stopping the instance prevents further instance charges.
- Database flags
You can set specific database flags on the Cloud SQL instance.
For a complete list of the database flags you can set, see Configuring Database Flags.
Impact of changing instance settings
For most instance settings, Cloud SQL applies the change immediately, and connectivity to the instance is unaffected. Changing the number of CPUs, memory size, or the zone of the instance results in the instance going offline for several minutes. Plan to make these kinds of changes when your application can handle an outage of this length.
- Learn how to edit your instance.
- Learn more about database flags.
- Learn how to authorize IP access for your instance.
- Learn more about replication options.
- See pricing for your instance.
- Learn more about options for connecting to your instance.
- Learn how to configure an IP address for your instance.