This page describes how to create a Cloud SQL for PostgreSQL instance.
For detailed information about all instance settings, see Instance settings.
A newly created instance has a postgres database.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Install the Google Cloud CLI.
- To initialize the gcloud CLI, run the following command:
  gcloud init
-
Make sure you have the Cloud SQL Admin and Compute Viewer roles on
your user account.
Learn more about roles and permissions.
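If you prefer to set this up from the command line, the following sketch shows one way to enable the Cloud SQL Admin API and grant the two roles; it assumes you have permission to modify IAM policy on the project, and PROJECT_ID and USER_EMAIL are placeholders:

# Enable the Cloud SQL Admin API, which is required to create and manage instances.
gcloud services enable sqladmin.googleapis.com --project=PROJECT_ID

# Grant the Cloud SQL Admin and Compute Viewer roles to a user account.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="roles/cloudsql.admin"
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="roles/compute.viewer"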
Create a PostgreSQL instance
Console
- In the Google Cloud console, go to the Cloud SQL Instances page.
- Click Create instance.
- On the Choose your database engine panel of the Create an instance page, click Choose PostgreSQL.
- In the Instance ID field of the Instance info pane, enter
an ID for your instance.
You do not need to include the project ID in the instance name. This is done automatically where appropriate (for example, in the log files).
- Enter a password for the postgres user. You can either enter the password manually or click Generate to have Cloud SQL create a password for you automatically. To see the password in clear text, click the Show password icon.
  Optional: Configure a password policy for the instance as follows:
- Select the Enable password policies checkbox.
- Click the Set password policy button, set one or more of
the following options, and click Save.
- Minimum length: Specifies the minimum number of characters that the password must have.
- Password complexity: Checks if the password is a combination of lowercase, uppercase, numeric, and non-alphanumeric characters.
- Restrict password reuse: Specifies the number of previous passwords that you can't reuse.
- Disallow username: Prevents the use of the username in the password.
- Set password change interval: Specifies the minimum number of hours after which you can change the password.
- Select the database version for your instance: PostgreSQL 17, PostgreSQL 16 (default), PostgreSQL 15, PostgreSQL 14, PostgreSQL
13, PostgreSQL 12, PostgreSQL 11, PostgreSQL 10, or PostgreSQL
9.6.
The database version can't be edited after the instance has been created.
- Select the Cloud SQL edition for your instance: Enterprise or Enterprise Plus. For more information about Cloud SQL editions, see Introduction to Cloud SQL editions.
- In the Choose region and zonal availability section, select the region and zone for your instance.
Region availability might be different based on your
Cloud SQL edition. For more information, see About instance settings.
Place your instance in the same region as the resources that access it. The region you select can't be modified in the future. In most cases, you don't need to specify a zone.
If you are configuring your instance for high availability, you can select both a primary and secondary zone.
The following conditions apply when the secondary zone is used during instance creation:
- The zones default to Any for the primary zone and Any (different from primary) for the secondary zone.
- If both the primary and secondary zones are specified, they must be distinct zones.
- In the Customize your instance section, update settings for your
instance.
Begin by clicking SHOW CONFIGURATION OPTIONS to display the groups
of settings. Then, expand desired groups to review and customize settings.
A Summary of all the options you select is shown on the right.
Customizing these instance settings is optional. Defaults are assigned in
every case where no customizations are made.
The following table is a quick reference to instance settings. For more details about each setting, see the instance settings page.
Setting | Notes |
---|---|
Machine type | |
Machine type | Select from Shared core or Dedicated core. For Shared core, each machine type is classified by the number of CPUs (cores) and amount of memory for your instance. |
Cores | The number of vCPUs for your instance. Learn more. |
Memory | The amount of memory for your instance, in GBs. Learn more. |
Custom | For the Dedicated core machine type, instead of selecting a predefined configuration, select the Custom button to create an instance with a custom configuration. When you select this option, you need to select the number of cores and amount of memory for your instance. Learn more. |
Storage | |
Storage type | Determines whether your instance uses SSD or HDD storage. Learn more. |
Storage capacity | The amount of storage provisioned for the instance. Learn more. |
Enable automatic storage increases | Determines whether Cloud SQL automatically provides more storage for your instance when free space runs low. Learn more. |
Encryption | |
Google-managed encryption | The default option. |
Customer-managed encryption key (CMEK) | Select to use your key with Google Cloud Key Management Service. Learn more. |
Connections | |
Private IP | Adds a private IP address for your instance. To enable connecting to the instance, additional configuration is required. Optionally, you can specify an allocated IP range for your instances to use for connections: expand Show allocated IP range option and select an IP range from the drop-down menu. Your instance can have both a public and a private IP address. Learn more about using private IP and about allocated IP address ranges. |
Public IP | Adds a public IP address for your instance. You can then add authorized networks to connect to the instance. Your instance can have both a public and a private IP address. Learn more about using public IP. |
Authorized networks | Add the name for the new network and the Network address. Learn more. |
Private path for Google Cloud services | By selecting this checkbox, you allow other Google Cloud services, such as BigQuery, to access data in Cloud SQL and make queries against this data over a private connection. |
Data protection | |
Automate backups | The window of time when you would like backups to start. Learn more. |
Choose where to store your backups | Select Multi-region for most use cases. If you need to store backups in a specific region, for example, if there are regulatory reasons to do so, select Region and select your region from the Location drop-down menu. |
Choose how many automated backups to store | The number of automated backups you would like to retain (from 1 to 365 days). Learn more. |
Enable point-in-time recovery | Enables point-in-time recovery and write-ahead logging. Learn more. |
Enable deletion protection | Determines whether to protect an instance against accidental deletion. Learn more. |
Choose how many days of logs to retain | Configure write-ahead log retention from 1 to 7 days. The default setting is 7 days. Learn more. |
Maintenance | |
Preferred window | Determines a one-hour window when Cloud SQL can perform disruptive maintenance on your instance. If you do not set the window, then disruptive maintenance can be done at any time. Learn more. |
Order of updates | Your preferred timing for instance updates, relative to other instances in the same project. Learn more. |
Flags | |
ADD FLAG | You can use database flags to control settings and parameters for your instance. Learn more. |
Labels | |
ADD LABEL | Add a key and value for each label that you add. You use labels to help organize your instances. |
Data cache | |
Enable data cache (optional) | Enables data cache for Cloud SQL for PostgreSQL Enterprise Plus edition instances. For more information about data cache, see data cache. |
- Click Create Instance.
Note: It might take a few minutes to create your instance. However, you can view information about the instance while it's being created.
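After the instance finishes creating, you can optionally confirm its state and connection details from the command line. This is a sketch that assumes the gcloud CLI is installed and initialized, and that INSTANCE_NAME is the instance ID you entered:

# Show the instance details, including state, region, and IP addresses.
gcloud sql instances describe INSTANCE_NAME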
gcloud
For information about installing and getting started with the gcloud CLI, see Installing gcloud CLI. For information about starting Cloud Shell, see the Cloud Shell documentation.
- Use the gcloud sql instances create command to create the instance:
For Cloud SQL Enterprise Plus edition instances:
gcloud sql instances create INSTANCE_NAME \
--database-version=DATABASE_VERSION \
--region=REGION \
--tier=TIER \
--edition=ENTERPRISE_PLUS
For Cloud SQL Enterprise edition instances:
gcloud sql instances create INSTANCE_NAME \
--database-version=DATABASE_VERSION \
--region=REGION \
--cpu=NUMBER_CPUS \
--memory=MEMORY_SIZE \
--edition=ENTERPRISE
Alternatively, you can use the --tier flag if you choose db-f1-micro or db-g1-small as the machine type:
gcloud sql instances create INSTANCE_NAME \
--tier=API_TIER_STRING \
--region=REGION
There are restrictions on the values for vCPUs and memory size:
- vCPUs must be either 1 or an even number between 2 and 96.
- Memory must be:
  - 0.9 to 6.5 GB per vCPU
  - A multiple of 256 MB
  - At least 3.75 GB (3840 MB)
For example, the following command creates a Cloud SQL Enterprise edition instance with two vCPUs and 7,680 MB of memory:
gcloud sql instances create myinstance \
--database-version=POSTGRES_16 \
--cpu=2 \
--memory=7680MB \
--region=us-central1
The following command creates a Cloud SQL Enterprise Plus edition instance with four cores:
gcloud sql instances create myinstance \
--database-version=POSTGRES_16 \
--tier=db-perf-optimized-N-4 \
--edition=ENTERPRISE_PLUS \
--region=us-central1
See Custom instance configuration for more information about how to size --cpu and --memory.
The default value for REGION is us-central1.
Do not include sensitive or personally identifiable information
in your instance name; it is externally visible.
You do not need to include the project ID in the instance name. This is done automatically where
appropriate (for example, in the log files).
If you are creating an instance for high availability, you can specify both the primary and secondary zones, using the --zone and --secondary-zone parameters. The following conditions apply when the secondary zone is used during instance creation or edit:
- The zones must be valid zones.
- If the secondary zone is specified, the primary zone must also be specified.
- If the primary and secondary zones are specified, they must be distinct zones.
- If the primary and secondary zones are specified, they must belong to the same region.
You can add more parameters to determine other instance settings:
Setting | Parameter | Notes |
---|---|---|
Required parameters | ||
Database version | --database-version |
The database version, which is based on your Cloud SQL edition. |
Region | --region |
See valid values. |
Set password policy | ||
Enable password policy | --enable-password-policy |
Enables the password policy when used. By default, the password policy is disabled.
When disabled using the --clear-password-policy parameter, the other password policy parameters are reset.
|
Minimum length | --password-policy-min-length |
Specifies the minimum number of characters that the password must have. |
Password complexity | --password-policy-complexity |
Enables the password complexity check to ensure that the password
contains one of each of these types of characters: lowercase, uppercase,
numeric, and non-alphanumeric. Set the value to
COMPLEXITY_DEFAULT . |
Restrict password reuse | --password-policy-reuse-interval |
Specifies the number of previous passwords that you can't reuse. |
Disallow username | --password-policy-disallow-username-substring |
Prevents the use of the username in the password. Use
the --no-password-policy-disallow-username-substring
parameter to disable the check. |
Set password change interval | --password-policy-password-change-interval |
Specifies the minimum duration after which you can change the password, for example, 2m for 2 minutes. |
Connectivity | ||
Private IP | --network, --no-assign-ip |
--network: Specifies the name of the VPC network you want to use for this instance. Private services access must already be configured for the network. Available only for the beta command (gcloud beta sql instances create). This parameter is valid only if: you use the --no-assign-ip parameter, and you use the --network parameter to specify the name of the VPC network that you want to use to create a private connection. |
Public IP | --authorized-networks |
For public IP connections, only connections from authorized networks can connect to your instance. Learn more. |
SSL Enforcement | --ssl-mode |
Sets the SSL/TLS enforcement mode for connections to the instance. Learn more. |
Server CA mode | --server-ca-mode |
Specifies the certificate authority (CA) hierarchy that signs the server certificate for the instance. Using the GOOGLE_MANAGED_INTERNAL_CA value (the default), an internal CA dedicated to each Cloud SQL instance signs the server certificate for that instance. Using the GOOGLE_MANAGED_CAS_CA value, a CA hierarchy consisting of a root CA and subordinate server CAs managed by Cloud SQL and hosted on Google Cloud Certificate Authority Service (CA Service) is used; the subordinate server CAs in a region sign the server certificates and are shared across instances in the region. |
Machine type and storage | ||
Machine type | --tier |
Used to specify a shared-core instance
(db-f1-micro
or db-g1-small ).
For a custom instance configuration, use the --cpu or
--memory parameters instead. See Custom
instance configuration.
|
Storage type | --storage-type |
Determines whether your instance uses SSD or HDD storage. Learn more. |
Storage capacity | --storage-size |
The amount of storage provisioned for the instance, in GB. Learn more. |
Automatic storage increase | --storage-auto-increase |
Determines whether Cloud SQL automatically provides more storage for your instance when free space runs low. Learn more. |
Automatic storage increase limit | --storage-auto-increase-limit |
Determines how large Cloud SQL can automatically grow storage.
Available only for the beta command
(gcloud beta sql instances create ).
Learn more.
|
Data cache (optional) | --enable-data-cache |
Enables or deactivates the data cache for instances. For more information, see data cache. |
Automatic backups and high availability | ||
High availability | --availability-type |
For a highly-available instance, set to REGIONAL .
Learn more.
|
Secondary zone | --secondary-zone |
If you're creating an instance for high availability, you can specify both the primary and secondary zones using the --zone and --secondary-zone parameters. The following restrictions apply when the secondary zone is used during instance creation or edit:
The zones must be valid zones. If the secondary zone is specified, the primary zone must also be specified. If the primary and secondary zones are specified, they must be distinct zones. If the primary and secondary zones are specified, they must belong to the same region. |
Automatic backups | --backup-start-time |
The window of time when you would like backups to start. Learn more. |
Retention settings for automated backups | --retained-backups-count |
The number of automated backups to retain. Learn more. |
Point-in-time recovery | --enable-point-in-time-recovery |
Enables point-in-time recovery and write-ahead logging. Learn more. |
Retention settings for binary logging | --retained-transaction-log-days |
The number of days to retain write-ahead logs for point-in-time recovery. Learn more. |
Add database flags | ||
Database flags | --database-flags |
You can use database flags to control settings and parameters for your instance. Learn more about database flags. |
Maintenance schedule | ||
Maintenance window | --maintenance-window-day ,
--maintenance-window-hour |
Determines a one-hour window when Cloud SQL can perform disruptive maintenance on your instance. If you do not set the window, then disruptive maintenance can be done at any time. Learn more. |
Maintenance timing | --maintenance-release-channel |
Your preferred timing for instance updates, relative to other
instances in the same project. Use preview for earlier
updates, and production for later updates.
Learn more.
|
Integration with Vertex AI | ||
 | --enable-google-ml-integration |
Enables Cloud SQL instances to connect to Vertex AI to pass requests for real-time predictions and insights to the AI. |
 | --database-flags cloudsql.enable_google_ml_integration=on |
By turning this flag on, Cloud SQL can integrate with Vertex AI. |
- Note the automatically assigned IP address.
  If you are not using the Cloud SQL Auth Proxy, you will use this address as the host address that your applications or tools use to connect to the instance.
- Set the password for the postgres user:
  gcloud sql users set-password postgres \
  --instance=INSTANCE_NAME \
  --password=PASSWORD
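For reference, the following sketch shows how several of the optional parameters from the preceding table can be combined in a single create command. All values (zones, backup time, storage size, maintenance window) are placeholders to adapt to your environment, not recommended settings:

gcloud sql instances create myinstance \
  --database-version=POSTGRES_16 \
  --edition=ENTERPRISE \
  --cpu=2 \
  --memory=7680MB \
  --region=us-central1 \
  --zone=us-central1-a \
  --secondary-zone=us-central1-b \
  --availability-type=REGIONAL \
  --storage-type=SSD \
  --storage-size=100 \
  --storage-auto-increase \
  --backup-start-time=23:00 \
  --retained-backups-count=7 \
  --enable-point-in-time-recovery \
  --maintenance-window-day=SUN \
  --maintenance-window-hour=2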
Terraform
To create an instance, use a Terraform resource.
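The following is a minimal sketch of such a resource, assuming the Google provider is already configured. The instance name, database version, region, and tier shown here are placeholder choices, and the google_sql_database_instance resource supports many more settings than this sketch includes:

resource "google_sql_database_instance" "postgres_instance" {
  name             = "my-postgres-instance" # placeholder instance ID
  database_version = "POSTGRES_16"
  region           = "us-central1"

  settings {
    tier = "db-custom-2-7680" # 2 vCPUs, 7,680 MB of memory
  }

  # Keep deletion protection on; set this to false before running terraform destroy.
  deletion_protection = true
}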
Apply the changes
To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.
Prepare Cloud Shell
- Launch Cloud Shell.
- Set the default Google Cloud project where you want to apply your Terraform configurations.
You only need to run this command once per project, and you can run it in any directory.
export GOOGLE_CLOUD_PROJECT=PROJECT_ID
Environment variables are overridden if you set explicit values in the Terraform configuration file.
Prepare the directory
Each Terraform configuration file must have its own directory (also called a root module).
- In Cloud Shell, create a directory and a new file within that directory. The filename must have the .tf extension—for example, main.tf. In this tutorial, the file is referred to as main.tf.
  mkdir DIRECTORY && cd DIRECTORY && touch main.tf
- If you are following a tutorial, you can copy the sample code in each section or step.
  Copy the sample code into the newly created main.tf.
  Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.
- Review and modify the sample parameters to apply to your environment.
- Save your changes.
- Initialize Terraform. You only need to do this once per directory.
  terraform init
  Optionally, to use the latest Google provider version, include the -upgrade option:
  terraform init -upgrade
Apply the changes
- Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:
  terraform plan
Make corrections to the configuration as necessary.
- Apply the Terraform configuration by running the following command and entering yes at the prompt:
  terraform apply
Wait until Terraform displays the "Apply complete!" message.
- Open your Google Cloud project to view the results. In the Google Cloud console, navigate to your resources in the UI to make sure that Terraform has created or updated them.
Delete the changes
To delete your changes, do the following:
- To disable deletion protection, in your Terraform configuration file, set the deletion_protection argument to false.
  deletion_protection = "false"
- Apply the updated Terraform configuration by running the following command and entering yes at the prompt:
  terraform apply
- Remove resources previously applied with your Terraform configuration by running the following command and entering yes at the prompt:
  terraform destroy
REST v1
Create the instance
This example creates an instance. Some optional parameters, such as backups and binary logging, are also included. For a complete list of parameters for this call, see the Instances:insert page. For information about instance settings, including valid values for region, see Instance settings.
Do not include sensitive or personally identifiable information
in your instance ID; it is externally visible.
You do not need to include the project ID in the instance name. This is done automatically where
appropriate (for example, in the log files).
Before using any of the request data, make the following replacements:
- PROJECT_ID: your project ID
- INSTANCE_ID: your instance ID
- REGION: the region
- DATABASE_VERSION: enum string of the database version (for example, POSTGRES_16)
- PASSWORD: the password for the root user
- MACHINE_TYPE: enum string of the machine (tier) type, as: db-custom-[CPUS]-[MEMORY_MBS]
- EDITION_TYPE: your Cloud SQL edition
- DATA_CACHE_ENABLED: (optional) set to true to enable data cache for your instance
- PRIVATE_NETWORK: specify the name of the Virtual Private Cloud (VPC) network that you want to use for this instance. Private services access must already be configured for the network.
- AUTHORIZED_NETWORKS: for public IP connections, specify the connections from authorized networks that can connect to your instance.
- CA_MODE: specify a certificate authority hierarchy for the instance, either GOOGLE_MANAGED_INTERNAL_CA or GOOGLE_MANAGED_CAS_CA. If you don't specify serverCaMode, then the default configuration is GOOGLE_MANAGED_INTERNAL_CA. This feature is in Preview.
To set a password policy while creating an instance, include the passwordValidationPolicy object in the request. Set the following parameters, as required:
- enablePasswordPolicy: Enables the password policy when set to true.
  To remove the password policy, you can use a PATCH request with null as the value for enablePasswordPolicy. In this case, the other password policy parameters are reset.
- minLength: Specifies the minimum number of characters that the password must have.
- complexity: Checks if the password is a combination of lowercase, uppercase, numeric, and non-alphanumeric characters. The default value is COMPLEXITY_DEFAULT.
- reuseInterval: Specifies the number of previous passwords that you can't reuse.
- disallowUsernameSubstring: Prevents the use of the username in the password when set to true.
- passwordChangeInterval: Specifies the minimum duration after which you can change the password. The value is in seconds with up to nine fractional digits, terminated by s. For example, 3.5s.
To create the instance so that it can integrate with Vertex AI, include the enableGoogleMlIntegration object in the request. This integration lets you apply large language models (LLMs), which are hosted in Vertex AI, to a Cloud SQL for PostgreSQL database.
Set the following parameters, as required:
- enableGoogleMlIntegration: when this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI
- cloudsql.enable_google_ml_integration: when this parameter is set to on, Cloud SQL can integrate with Vertex AI
HTTP method and URL:
POST https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances
Request JSON body:
{ "name": "INSTANCE_ID", "region": "REGION", "databaseVersion": "DATABASE_VERSION", "rootPassword": "PASSWORD", "settings": { "tier": "MACHINE_TYPE", "edition": "EDITION_TYPE", "enableGoogleMlIntegration": "true" | "false" "databaseFlags": [ { "name": "cloudsql.enable_google_ml_integration", "value": "on" | "off" } ] "dataCacheConfig" = { "dataCacheEnabled": DATA_CACHE_ENABLED }, "backupConfiguration": { "enabled": true }, "passwordValidationPolicy": { "enablePasswordPolicy": true "minLength": "MIN_LENGTH", "complexity": COMPLEXITY_DEFAULT, "reuseInterval": "REUSE_INTERVAL", "disallowUsernameSubstring": "DISALLOW_USERNAME_SUBSTRING", "passwordChangeInterval": "PASSWORD_CHANGE_INTERVAL" } "ipConfiguration": { "privateNetwork": "PRIVATE_NETWORK", "authorizedNetworks": [AUTHORIZED_NETWORKS], "ipv4Enabled": false, "enablePrivatePathForGoogleCloudServices": true, "serverCaMode": "CA_MODE" } } }
To send your request, use a command-line tool such as curl.
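For example, the following curl sketch sends the request; it assumes you saved the JSON body above to a file named request.json and that you're authenticated with the gcloud CLI:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances"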
You should receive a JSON response similar to the following:
{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID", "status": "PENDING", "user": "user@example.com", "insertTime": "2019-09-25T22:19:33.735Z", "operationType": "CREATE", "name": "OPERATION_ID", "targetId": "INSTANCE_ID", "selfLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/operations/OPERATION_ID", "targetProject": "PROJECT_ID" }
The response is a long-running operation, which might take a few minutes to complete.
Retrieve the IPv4 address
Retrieve the automatically assigned IPv4 address for the new instance:
Before using any of the request data, make the following replacements:
- project-id: your project ID
- instance-id: instance ID created in prior step
HTTP method and URL:
GET https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id
To send your request, use a command-line tool such as curl.
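For example, the following curl sketch sends the request; it assumes you're authenticated with the gcloud CLI:

curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id"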
You should receive a JSON response similar to the following:
{ "kind": "sql#instance", "state": "RUNNABLE", "databaseVersion": "MYSQL_8_0_18", "settings": { "authorizedGaeApplications": [], "tier": "db-f1-micro", "kind": "sql#settings", "pricingPlan": "PER_USE", "replicationType": "SYNCHRONOUS", "activationPolicy": "ALWAYS", "ipConfiguration": { "authorizedNetworks": [], "ipv4Enabled": true }, "locationPreference": { "zone": "us-west1-a", "kind": "sql#locationPreference" }, "dataDiskType": "PD_SSD", "backupConfiguration": { "startTime": "18:00", "kind": "sql#backupConfiguration", "enabled": true, "binaryLogEnabled": true }, "settingsVersion": "1", "storageAutoResizeLimit": "0", "storageAutoResize": true, "dataDiskSizeGb": "10" }, "etag": "--redacted--", "ipAddresses": [ { "type": "PRIMARY", "ipAddress": "10.0.0.1" } ], "serverCaCert": { ... }, "instanceType": "CLOUD_SQL_INSTANCE", "project": "project-id", "serviceAccountEmailAddress": "redacted@gcp-sa-cloud-sql.iam.gserviceaccount.com", "backendType": "SECOND_GEN", "selfLink": "https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id", "connectionName": "project-id:region:instance-id", "name": "instance-id", "region": "us-west1", "gceZone": "us-west1-a" }
Look for the ipAddress
field in the response.
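If you prefer not to read the JSON by hand, you can also query the same information through the gcloud CLI. This is a sketch; the format expression assumes you want the first address in the list:

# Print the first IP address of the instance.
gcloud sql instances describe instance-id \
  --format="value(ipAddresses[0].ipAddress)"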
REST v1beta4
Create the instance
This example creates an instance. Some optional parameters, such as backups and binary logging, are also included. For a complete list of parameters for this call, see the instances:insert page. For information about instance settings, including valid values for region, see Instance settings.
Do not include sensitive or personally identifiable information
in your instance ID; it is externally visible.
You do not need to include the project ID in the instance name. This is done automatically where
appropriate (for example, in the log files).
Before using any of the request data, make the following replacements:
- PROJECT_ID: your project ID
- INSTANCE_ID: your instance ID
- REGION: the region
- DATABASE_VERSION: enum string of the database version (for example, POSTGRES_16)
- PASSWORD: the password for the root user
- MACHINE_TYPE: enum string of the machine (tier) type, as: db-custom-[CPUS]-[MEMORY_MBS]
- EDITION_TYPE: your Cloud SQL edition
- DATA_CACHE_ENABLED: (optional) set to true to enable data cache for your instance
- PRIVATE_NETWORK: specify the name of the Virtual Private Cloud (VPC) network that you want to use for this instance. Private services access must already be configured for the network.
- AUTHORIZED_NETWORKS: For public IP connections, specify the connections from authorized networks that can connect to your instance.
- CA_MODE: specify a certificate authority hierarchy for the instance, either GOOGLE_MANAGED_INTERNAL_CA or GOOGLE_MANAGED_CAS_CA. If you don't specify serverCaMode, then the default configuration is GOOGLE_MANAGED_INTERNAL_CA. This feature is in Preview.
To set a password policy while creating an instance, include the passwordValidationPolicy object in the request. Set the following parameters, as required:
- enablePasswordPolicy: Enables the password policy when set to true.
  To remove the password policy, you can use a PATCH request with null as the value for enablePasswordPolicy. In this case, the other password policy parameters are reset.
- minLength: Specifies the minimum number of characters that the password must have.
- complexity: Checks if the password is a combination of lowercase, uppercase, numeric, and non-alphanumeric characters. The default value is COMPLEXITY_DEFAULT.
- reuseInterval: Specifies the number of previous passwords that you can't reuse.
- disallowUsernameSubstring: Prevents the use of the username in the password when set to true.
- passwordChangeInterval: Specifies the minimum duration after which you can change the password. The value is in seconds with up to nine fractional digits, terminated by s. For example, 3.5s.
To create the instance so that it can integrate with Vertex AI, include the enableGoogleMlIntegration object in the request. This integration lets you apply large language models (LLMs), which are hosted in Vertex AI, to a Cloud SQL for PostgreSQL database.
Set the following parameters, as required:
- enableGoogleMlIntegration: when this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI
- cloudsql.enable_google_ml_integration: when this parameter is set to on, Cloud SQL can integrate with Vertex AI
HTTP method and URL:
POST https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances
Request JSON body:
{ "name": "INSTANCE_ID", "region": "REGION", "databaseVersion": "DATABASE_VERSION", "rootPassword": "PASSWORD", "settings": { "tier": "MACHINE_TYPE", "edition": "EDITION_TYPE", "enableGoogleMlIntegration": "true" | "false" "databaseFlags": [ { "name": "cloudsql.enable_google_ml_integration", "value": "on" | "off" } ] "dataCacheConfig" = { "dataCacheEnabled": DATA_CACHE_ENABLED }, "backupConfiguration": { "enabled": true }, "passwordValidationPolicy": { "enablePasswordPolicy": true "minLength": "MIN_LENGTH", "complexity": COMPLEXITY_DEFAULT, "reuseInterval": "REUSE_INTERVAL", "disallowUsernameSubstring": "DISALLOW_USERNAME_SUBSTRING", "passwordChangeInterval": "PASSWORD_CHANGE_INTERVAL" } "ipConfiguration": { "privateNetwork": "PRIVATE_NETWORK", "authorizedNetworks": [AUTHORIZED_NETWORKS], "ipv4Enabled": false, "enablePrivatePathForGoogleCloudServices": true, "serverCaMode": "CA_MODE" } } }
To send your request, use a command-line tool such as curl.
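For example, the following curl sketch sends the request; it assumes you saved the JSON body above to a file named request.json and that you're authenticated with the gcloud CLI:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances"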
You should receive a JSON response similar to the following:
{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-01T19:13:21.834Z", "operationType": "CREATE", "name": "OPERATION_ID", "targetId": "INSTANCE_ID", "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/operations/OPERATION_ID", "targetProject": "PROJECT_ID" }
The response is a long-running operation, which might take a few minutes to complete.
Retrieve the IPv4 address
Retrieve the automatically assigned IPv4 address for the new instance:
Before using any of the request data, make the following replacements:
- project-id: your project ID
- instance-id: instance ID created in prior step
HTTP method and URL:
GET https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id
To send your request, use a command-line tool such as curl.
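For example, the following curl sketch sends the request; it assumes you're authenticated with the gcloud CLI:

curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id"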
You should receive a JSON response similar to the following:
{ "kind": "sql#instance", "state": "RUNNABLE", "databaseVersion": "MYSQL_8_0_18", "settings": { "authorizedGaeApplications": [], "tier": "db-f1-micro", "kind": "sql#settings", "pricingPlan": "PER_USE", "replicationType": "SYNCHRONOUS", "activationPolicy": "ALWAYS", "ipConfiguration": { "authorizedNetworks": [], "ipv4Enabled": true }, "locationPreference": { "zone": "us-west1-a", "kind": "sql#locationPreference" }, "dataDiskType": "PD_SSD", "backupConfiguration": { "startTime": "18:00", "kind": "sql#backupConfiguration", "enabled": true, "binaryLogEnabled": true }, "settingsVersion": "1", "storageAutoResizeLimit": "0", "storageAutoResize": true, "dataDiskSizeGb": "10" }, "etag": "--redacted--", "ipAddresses": [ { "type": "PRIMARY", "ipAddress": "10.0.0.1" } ], "serverCaCert": { ... }, "instanceType": "CLOUD_SQL_INSTANCE", "project": "project-id", "serviceAccountEmailAddress": "redacted@gcp-sa-cloud-sql.iam.gserviceaccount.com", "backendType": "SECOND_GEN", "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id", "connectionName": "project-id:region:instance-id", "name": "instance-id", "region": "us-west1", "gceZone": "us-west1-a" }
Look for the ipAddress
field in the response.
Generate the write endpoint
If you plan to create a Cloud SQL Enterprise Plus edition instance, and you want Cloud SQL to generate a write endpoint automatically for the instance, then enable the Cloud DNS API for your Google Cloud project.
If you already have a Cloud SQL Enterprise Plus edition instance and you want Cloud SQL to generate a write endpoint automatically, then create a replica that's enabled for advanced disaster recovery.
A write endpoint is a global domain name service (DNS) name that resolves to the IP address of the current primary instance automatically. This endpoint redirects incoming connections to the new primary instance automatically in case of a replica failover or switchover operation. You can use the write endpoint in a SQL connection string instead of an IP address. By using a write endpoint, you can avoid having to make application connection changes when a regional outage occurs.
For more information about obtaining the write endpoint for the instance, see View instance information. For more information about using the write endpoint to connect to the instance, see Connect by using a write endpoint.
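As a sketch of how this fits together, the following commands enable the Cloud DNS API and then use a write endpoint in a psql connection string. WRITE_ENDPOINT stands for the DNS name you obtain from the instance details, and PROJECT_ID, the user, and the database names are placeholders:

# Enable the Cloud DNS API so that Cloud SQL can generate the write endpoint.
gcloud services enable dns.googleapis.com --project=PROJECT_ID

# Connect by using the write endpoint instead of an IP address.
psql "host=WRITE_ENDPOINT user=postgres dbname=postgres sslmode=require"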
Custom instance configurations
The machine type determines the memory and virtual cores available to your Cloud SQL instance. Machine type availability is determined by your Cloud SQL edition.
For workloads that require real-time processing, such as online transaction processing (OLTP), make sure that your instance has enough memory to contain the entire working set. However, there are other factors that can impact memory requirements, such as number of active connections, and internal overhead processes. You should perform load testing to avoid performance issues in your production environment.
When you configure your instance, select enough memory and vCPUs to handle your workload, and upgrade as your workload increases. A machine configuration with insufficient vCPUs could lose its SLA coverage. For more information, see Operational guidelines.
Machine types for Cloud SQL Enterprise Plus edition instances
For Cloud SQL Enterprise Plus edition instances, machine types are predefined as follows:
Enterprise plus machine type | vCPUs | Memory (GB) | Local SSD |
---|---|---|---|
db-perf-optimized-N-2 | 2 | 16 | 375 |
db-perf-optimized-N-4 | 4 | 32 | 375 |
db-perf-optimized-N-8 | 8 | 64 | 375 |
db-perf-optimized-N-16 | 16 | 128 | 750 |
db-perf-optimized-N-32 | 32 | 256 | 1500 |
db-perf-optimized-N-48 | 48 | 384 | 3000 |
db-perf-optimized-N-64 | 64 | 512 | 6000 |
db-perf-optimized-N-80 | 80 | 640 | 6000 |
db-perf-optimized-N-96 | 96 | 768 | 6000 |
db-perf-optimized-N-128 | 128 | 864 | 9000 |
Machine types for Cloud SQL Enterprise edition instances
For Cloud SQL Enterprise edition instances, you can
also create custom instance configurations
using the gcloud sql
instances create
command.
Custom instance configurations let you select the amount of memory and CPUs
that your instance needs. This flexibility lets you choose the
appropriate VM shape for your workload.
Machine type names use the following format:
db-custom-#-#
Replace the first # placeholder with the number of CPUs in the machine, and the second # placeholder with the amount of memory in the machine.
For example, if your machine has 1 CPU and 3840 MB of RAM, then the machine type string is db-custom-1-3840.
When selecting the number of CPUs and amount of memory, there are some restrictions on the configuration you choose:
- vCPUs must be either 1 or an even number between 2 and 96.
- Memory must be:
- 0.9 to 6.5 GB per vCPU
- A multiple of 256 MB
- At least 3.75 GB (3840 MB)
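For example, a configuration within these limits is 4 vCPUs with 15,360 MB of memory. The following sketch creates an instance by passing the corresponding machine type string directly; the instance name and region are placeholders:

gcloud sql instances create myinstance \
  --database-version=POSTGRES_16 \
  --tier=db-custom-4-15360 \
  --region=us-central1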
The following table lists the names of each legacy machine type, the number of CPUs and RAM for each machine type, and the resultant string for the machine type.
You can create the equivalent machine type by specifying the equivalent CPU and
RAM in the Google Cloud console, by using the gcloud CLI, or by
specifying db-custom-#-#
in the API.
Legacy machine type | vCPUs | Memory (MBs) | db-custom-CPU-RAM string (API tier string) |
---|---|---|---|
db-n1-standard-1 | 1 | 3840 | db-custom-1-3840 |
db-n1-standard-2 | 2 | 7680 | db-custom-2-7680 |
db-n1-standard-4 | 4 | 15360 | db-custom-4-15360 |
db-n1-standard-8 | 8 | 30720 | db-custom-8-30720 |
db-n1-standard-16 | 16 | 61440 | db-custom-16-61440 |
db-n1-standard-32 | 32 | 122880 | db-custom-32-122880 |
db-n1-standard-64 | 64 | 245760 | db-custom-64-245760 |
db-n1-standard-96 | 96 | 368640 | db-custom-96-368640 |
db-n1-highmem-2 | 2 | 13312 | db-custom-2-13312 |
db-n1-highmem-4 | 4 | 26624 | db-custom-4-26624 |
db-n1-highmem-8 | 8 | 53248 | db-custom-8-53248 |
db-n1-highmem-16 | 16 | 106496 | db-custom-16-106496 |
db-n1-highmem-32 | 32 | 212992 | db-custom-32-212992 |
db-n1-highmem-64 | 64 | 425984 | db-custom-64-425984 |
db-n1-highmem-96 | 96 | 638976 | db-custom-96-638976 |
Troubleshoot
Issue | Troubleshooting |
---|---|
Error message: Failed to create subnetwork. Couldn't
find free blocks in allocated IP ranges. Please allocate new ranges for
this service provider . |
There are no more available addresses in the allocated IP range. This can happen in several scenarios.
To resolve this issue, you can either expand the existing allocated IP range or allocate an additional IP range to the private service connection. For more information, see Allocate an IP address range.
If you're allocating a new range, take care that the allocation doesn't overlap with any existing allocations. After creating a new IP range, update the VPC peering with the following command:
gcloud services vpc-peerings update \
--service=servicenetworking.googleapis.com \
--ranges=OLD_RESERVED_RANGE_NAME,NEW_RESERVED_RANGE_NAME \
--network=VPC_NETWORK \
--project=PROJECT_ID \
--force
If you're expanding an existing allocation, take care to increase only the allocation range and not decrease it. For example, if the original allocation was 10.0.10.0/24, then make the new allocation at least 10.0.10.0/23.
In general, if starting from a /24 allocation, decrementing the /mask by 1 for each condition (additional instance type group, additional region) is a good rule of thumb. For example, if trying to create both instance type groups on the same allocation, going from /24 to /23 is enough.
After expanding an existing IP range, update the VPC peering with the following command:
gcloud services vpc-peerings update \
--service=servicenetworking.googleapis.com \
--ranges=RESERVED_RANGE_NAME \
--network=VPC_NETWORK \
--project=PROJECT_ID |
Error message: Failed to create subnetwork. Router status is
temporarily unavailable. Please try again later. Help Token:
[token-ID] . |
Try to create the Cloud SQL instance again. |
Error message: Failed to create subnetwork. Required
'compute.projects.get' permission for PROJECT_ID . |
When you create an instance with a private IP address, a service account is created just-in-time using the Service Networking API. If you have only recently enabled the Service Networking API, then the service account might not get created and the instance creation fails. In this case, you must wait for the service account to propagate throughout the system or manually add it with the required permissions. |
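If you choose to add the service account manually, a sketch like the following grants the Service Networking service agent its default role. The service account address shown here is an assumption based on the standard service agent naming pattern, so confirm it for your project; PROJECT_ID and PROJECT_NUMBER are placeholders:

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:service-PROJECT_NUMBER@service-networking.iam.gserviceaccount.com" \
  --role="roles/servicenetworking.serviceAgent"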
What's next
- Create a PostgreSQL database on the instance.
- Create PostgreSQL users on the instance.
- Secure and control access to the instance.
- Connect to the instance with a PostgreSQL client.
- Import data into the database.