This document explains how to set up Cloud SQL for PostgreSQL for production use. Cloud SQL for PostgreSQL is a fully managed RDBMS service that is integrated with the Google Cloud ecosystem. Cloud SQL for PostgreSQL has a number of key features and capabilities:
- Serves multiple applications and users globally by using Google Cloud's managed security capabilities, including VPC networking and automatic encryption of data at rest and in transit.
- Supports high-availability architecture using primary and standby instances and automatic failover between them.
- Supports distributed database workloads by enabling read/write separation between the primary node and read-replicas under the same database cluster.
- Supports automatic backups integrated with Cloud Storage and automatic database maintenance.
- Supports a variety of online transactional processing (OLTP) workloads.
Deploying a Cloud SQL for PostgreSQL instance
You can set up a Cloud SQL for PostgreSQL instance in a few steps by using the Google Cloud console or the Google Cloud CLI. Both methods are described here.
Console
In the Google Cloud console, go to the SQL>Instances page.
Click Create instance, and then click Choose PostgreSQL.
In the Create a PostgreSQL instance page, provide the following details:
- Instance ID: Enter a name for the instance. The instance name is permanent and cannot be changed later.
- Default user password: Set a password for the postgres user, which is the default administrator account. (You can create additional users after the PostgreSQL instance is deployed.)
- Region and Zone: Select a region and zone. It's a best practice to deploy the PostgreSQL instance in the same region as the associated Google Cloud services (for example, applications) or in geographic proximity to the users in order to reduce latency for data processing. Once you select a region, you can't modify it later.
- Database version: Choose the latest version unless you have a requirement for a specific earlier version.
Click Create to deploy the PostgreSQL instance. Or click Show configuration options to set additional configurations.
Additional configuration options:
Connectivity: Typically, you connect the PostgreSQL instance to the network through a public IP address or a private IP address. For public IP connections, you can also define authorized networks, which are IP addresses or ranges that are allowed to establish a remote connection to the instance (for example, approving a connection from a specific client IP address).
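If you manage connectivity with the gcloud CLI, a minimal sketch for adding an authorized network to an existing instance looks like the following (the instance name and CIDR range are placeholders, and the flag replaces the instance's current list of authorized networks):
gcloud sql instances patch postgresql01 \
    --authorized-networks=203.0.113.0/24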
Machine type and storage: Choose the machine type by allocated resources (vCPUs and RAM), the storage type (SSD or HDD), and the storage capacity. Increasing storage capacity also increases the supported disk throughput (MB/s) and the read and write IOPS for your database. Size the storage capacity based on your expected disk throughput and IOPS requirements.
Auto backups and high availability: You use the automated backups feature, which is enabled by default, to define the time frame in which to perform automated backups. Additionally, the point-in-time recovery option, which uses write-ahead logs, is required for creating a read replica. These logs update regularly, and they use storage space. To avoid unexpected storage issues, we recommend enabling automatic storage increases when using point-in-time recovery. High availability (multi-zone) is disabled by default; to enable automatic failover, select the High availability (regional) option.
Flags: This setting specifies the Cloud SQL method for controlling settings and parameters for your instance; it is equivalent to the postgresql.conf file of an unmanaged PostgreSQL instance. For a complete listing, see the product documentation. Changing the value of a flag or setting a new flag might require an instance restart.
Maintenance: This section specifies your preferred time window for performing maintenance tasks, including bug fixes and minor version upgrades. Note that maintenance operations generally require an instance restart and might cause a short service disruption. You can sign up to be notified by email of upcoming maintenance events.
Labels: In this section, you define key/value pairs to categorize your PostgreSQL instance—for example:
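- environment: production
- team: database-admins
(These key/value pairs are only illustrative; choose labels that fit your own naming conventions.)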
gcloud
Create the PostgreSQL instance:
gcloud sql instances create postgresql01 \
    --cpu=2 \
    --memory=7680MB \
    --region=us-central1 \
    --zone=us-central1-a
Assign a password for the PostgreSQL default user (syntax example):
gcloud sql users set-password postgres \
    --instance INSTANCE_NAME \
    --password PASSWORD
You can specify these additional options:
- Database version: One of the supported PostgreSQL versions.
- Storage type: Either SSD or HDD as the storage type.
- Storage capacity: The initial storage settings for the instance.
- Automatic storage increase: Cloud SQL automation for adding additional storage when free space runs low.
- High-availability: Cloud SQL high-availability.
- Automatic backups: The start-time window for backups.
- Point-in-time recovery: Point-in-time recovery and write-ahead logging.
- Maintenance window: A one-hour window when Cloud SQL can perform disruptive maintenance.
- Maintenance timing: The preferred timing for performing updates on the PostgreSQL instance. You can specify preview for earlier updates or production for later updates.
- Database flags: The PostgreSQL database flags for controlling settings and parameters.
The following gcloud command creates a Cloud SQL for PostgreSQL instance with some additional options:
gcloud sql instances create postgresql01 \
    --cpu=2 \
    --memory=7680MB \
    --region=us-central1 \
    --zone=us-central1-a \
    --database-version=POSTGRES_12 \
    --storage-type=SSD \
    --storage-size=100 \
    --storage-auto-increase \
    --availability-type=regional \
    --backup-start-time=23:30 \
    --enable-point-in-time-recovery \
    --maintenance-window-day=sun \
    --maintenance-window-hour=11 \
    --maintenance-release-channel=production \
    --database-flags max_connections=100
For more information, see Creating instances.
Instance selection
Instance selection or sizing involves selecting a machine type that can support your Oracle® workload on Cloud SQL for PostgreSQL. The instance types are divided into two main groups:
- Shared-core machines: Cost effective.
- Dedicated-core instances: Support multiple vCPUs and memory ratios.
For more information about instance types, see Cloud SQL pricing.
In order to size your instance, start by analyzing the resources allocated to
and used by your source database. You can get the Oracle database resource
settings from the
V$OSSTAT
system view or from an Oracle AWR report (see the following examples):
Physical memory (total number of bytes of physical memory in the database server):
SQL> SELECT ROUND(MAX(VALUE)/1024/1024/1024) AS MEM_SIZE_GB
FROM V$OSSTAT
WHERE STAT_NAME = 'PHYSICAL_MEMORY_BYTES';
Allocated memory:
SQL> SELECT NAME, VALUE, DISPLAY_VALUE FROM V$PARAMETER
WHERE NAME LIKE '%sga%' OR NAME LIKE '%memory%';
CPU cores (number of CPU cores available):
SQL> SELECT VALUE FROM V$OSSTAT
WHERE STAT_NAME = 'NUM_CPU_CORES';
CPU cores (identified by an Oracle instance using the V$LICENSE
view):
SQL> SELECT CPU_CORE_COUNT_CURRENT FROM V$LICENSE;
An Oracle AWR report can also provide additional insights about a specific Oracle instance's workload characteristics.
When you have the resource information of your source database, we recommend choosing the closest matching Cloud SQL instance type and running some benchmarks. The results from your benchmarks help you finalize your instance selection.
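For illustration only, suppose the preceding queries report 8 CPU cores and 52 GB of physical memory on the Oracle server. A hedged first guess at a matching dedicated-core instance might look like the following (the instance name, region, and zone are placeholders, and benchmarking should still drive the final choice):
gcloud sql instances create postgresql01 \
    --cpu=8 \
    --memory=52GB \
    --region=us-central1 \
    --zone=us-central1-a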
High-availability configuration
To implement a disaster recovery solution similar to Oracle Data Guard, Cloud SQL for PostgreSQL offers high-availability capabilities that provide automatic failover from the cluster's primary instance to its standby instance. The standby instance is located in a different zone in the same region as the primary instance. The standby instance is kept in sync through synchronous replication between the persistent disks of the primary and standby instances. This method ensures that all data modifications to the primary are also applied to the standby.
In the event of a primary failure, such as an unresponsive instance or zone-level failure, Cloud SQL performs an automatic failover. The primary instance is monitored by heartbeats, which occur in 1-second intervals, with a failover activating after approximately 60 seconds of no heartbeats received from the primary instance. At this point, the primary instance fails over to the standby, providing data access to applications or clients transparently, while the existing read-replicas remain operational. Note that unlike Active Data Guard, the standby instance is not open for reads while acting as a standby; with Cloud SQL, only read-replicas can be used to offload reads from the primary.
You can enable the Cloud SQL for PostgreSQL high-availability (HA) feature when you create the instance or for an existing PostgreSQL instance. Here are the steps:
Console
- On the instance creation page, click Show configuration options>Auto backups and high availability>Availability, and then select the High availability (regional) option.
- For an existing PostgreSQL instance, edit the PostgreSQL instance by following the preceding step. This requires a database reboot.
To initiate a failover for testing purposes, go to the Cloud SQL page and click Failover.
You can initiate a failback in the same manner.
gcloud
Enable HA by setting the availability-type parameter to regional:
gcloud sql instances create postgresql01 \
    --cpu=2 \
    --memory=7680MB \
    --region=us-central1 \
    --zone=us-central1-a \
    --availability-type=regional
Check if an existing PostgreSQL instance has HA configured:
gcloud sql instances describe INSTANCE_NAME
If the output from this command includes availabilityType: REGIONAL, then HA is already enabled. If the output includes availabilityType: ZONAL, then HA is not configured and can be enabled using the patch command:
gcloud sql instances patch INSTANCE_NAME --availability-type REGIONAL
Initiate a failover test from the primary to the standby:
gcloud sql instances failover PRIMARY_INSTANCE_NAME
To fail back, run the same failover command on the new primary.
Admin users and accounts
Two default PostgreSQL user accounts come with any Cloud SQL for PostgreSQL
installation. These accounts are postgres
and cloudsqlimportexport
.
postgres account
The postgres account is the administrator account and is the closest equivalent to the Oracle SYS or SYSTEM users in this managed (PaaS) environment. Because Cloud SQL for PostgreSQL is a managed service, the postgres user, unlike the Oracle SYS or SYSTEM users, is restricted from accessing certain system procedures and tables that require advanced privileges.
The postgres
user is part of the cloudsqlsuperuser
role, and has the
following attributes (privileges): CREATEROLE
, CREATEDB
, and LOGIN
. It
does not have the SUPERUSER
or REPLICATION
attributes.
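To verify these attributes from a PostgreSQL client, you can query the pg_roles catalog view (a minimal sketch; the columns map to the CREATEROLE, CREATEDB, LOGIN, SUPERUSER, and REPLICATION attributes):
postgres=> select rolname, rolcreaterole, rolcreatedb, rolcanlogin, rolsuper, rolreplication from pg_roles where rolname = 'postgres';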
cloudsqlimportexport account
The cloudsqlimportexport
account is created with the minimal set of privileges
needed for CSV import/export operations. You have the option to create your own
users to perform these operations, but if you don't, the default
cloudsqlimportexport
user is used. The cloudsqlimportexport
user is a system
user, and you cannot directly use it.
Account management (add, delete, or change password)
Account management entails creating new user accounts, modifying the password
of an existing account, and deleting an account that is no longer needed. You
can perform these account operations through the Google Cloud console, the
gcloud
tool, or the PostgreSQL client.
Console
List the existing accounts from the Google Cloud console: Go to Cloud SQL Console>Select PostgreSQL Instance>Users.
Modify the account's password or delete the account entirely by clicking the three dots (More) icon next to the account.
From this screen, you can click Create user account to create a new PostgreSQL user.
gcloud
List the existing user accounts:
gcloud sql users list --instance=postgresql01
The output is similar to the following:
NAME      HOST
postgres
Create the appuser user account, set the password, and then delete appuser:
gcloud sql users create appuser \
    --instance=postgresql01 --password=PASSWORD

gcloud sql users set-password appuser \
    --host=% --instance=postgresql01 --prompt-for-password

gcloud sql users delete appuser --instance=postgresql01
PostgreSQL
Perform these same actions directly from a standard PostgreSQL client—for example:
postgres=> create user appuser with login password 'my_password';
postgres=> alter user appuser password 'my_password';
postgres=> drop user appuser;
You can configure PostgreSQL database-level permissions (for example, reading from a specific table or view) by using the
GRANT
/REVOKE
commands through the PostgreSQL client.
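For example, a minimal sketch that assumes a table named orders and the appuser account shown earlier:
postgres=> grant select, insert on table orders to appuser;
postgres=> revoke insert on table orders from appuser;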
Monitoring and alerting
Cloud Logging is the main logging tool on Google Cloud. It is used for collecting and viewing a variety of monitoring logs for resources such as Cloud SQL for PostgreSQL.
Cloud Logging lets you view logs for Cloud SQL for PostgreSQL filtered by event level (for example, Critical, Error, or Warning), by event timeframe, and by free-text search.
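You can also query the same logs from the command line. The following is a hedged sketch using the gcloud CLI; the filter and limit values are only illustrative:
gcloud logging read 'resource.type="cloudsql_database" AND severity>=WARNING' \
    --limit=10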
PostgreSQL database instance monitoring
Oracle's main monitoring tools are Enterprise Manager and Grid/Cloud Control. These tools let you do real-time database instance monitoring at a database session and SQL statement level.
Cloud SQL for PostgreSQL provides comparable monitoring capabilities through the Google Cloud console. From there, you can get a summary view of your database instances, including CPU utilization, storage usage, memory usage, read/write operations, active connections, transactions per second, and ingress/egress bytes. Note that Google Cloud Observability provides additional monitoring metrics for Cloud SQL for PostgreSQL, such as auto-failover requests and replication lag between the primary and read-replicas.
For example, you can graph transactions per second for the last 6 hours.
Monitoring read-replicas
You can monitor read-replicas through the Google Cloud console the same way that you monitor the primary instance. There are specific metrics for checking the replication status between the primary and read-replica instances. These metrics are used to populate the read-replica instance overview page in the Google Cloud console.
Alternatively, you can check the replication status from the command line:
gcloud sql instances describe REPLICA_NAME
A third option is to check the replication status through a PostgreSQL client. The following PostgreSQL command checks the read-replica status:
postgres=> \x on
Expanded display is on.
postgres=> select * from pg_stat_replication;
-[ RECORD 1 ]----+-------------------------------------------
pid              | 74733
usesysid         | 16388
usename          | cloudsqlreplica
application_name | PROJECT_ID:REPLICA_NAME
client_addr      | REPLICA_IP
client_hostname  |
client_port      | 41660
backend_start    | 2020-09-28 06:59:38.783981+00
backend_xmin     |
state            | streaming
sent_lsn         | 0/2939FFA8
write_lsn        | 0/2939FFA8
flush_lsn        | 0/2939FFA8
replay_lsn       | 0/2939FFA8
write_lag        |
flush_lag        |
replay_lag       |
sync_priority    | 0
sync_state       | async
reply_time       | 2020-09-28 07:17:52.714969+00
postgres=>
PostgreSQL database monitoring
This section describes some additional monitoring tasks that are considered routine for a PostgreSQL DBA.
Session monitoring
Oracle sessions are monitored by querying the dynamic performance views known
as the "V$" views. The V$SESSION
and the V$PROCESS
views are commonly used
to gain real-time insights about current database activity through SQL
statements. You can monitor session activity in PostgreSQL in a similar manner,
both through PostgreSQL commands and SQL statements.
The PostgreSQL
pg_stat_activity
dynamic view provides detailed information on current database session
activity:
postgres=> \x on
postgres=> select * from pg_stat_activity where backend_type = 'client backend' and usename != 'cloudsqladmin';
-[ RECORD 1 ]----+-----------------------------------------------------------------------------------------------------
datid            | 14052
datname          | postgres
pid              | 74750
usesysid         | 16389
usename          | postgres
application_name | psql
client_addr      | CLIENT_IP
client_hostname  |
client_port      | 51904
backend_start    | 2020-09-28 07:01:30.214099+00
xact_start       | 2020-09-28 07:28:48.982115+00
query_start      | 2020-09-28 07:28:48.982115+00
state_change     | 2020-09-28 07:28:48.982117+00
wait_event_type  |
wait_event       |
state            | active
backend_xid      |
backend_xmin     | 88513
query            | select * from pg_stat_activity where backend_type = 'client backend' and usename != 'cloudsqladmin';
backend_type     | client backend
postgres=>
Long transaction monitoring
In order to identify long-running transactions that might lead to performance issues, query the pg_stat_activity dynamic view. You can identify long-running queries by applying appropriate filters on columns such as query_start and state.
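For example, the following query (a sketch; the 5-minute threshold is only an illustrative value) lists non-idle client sessions whose current statement has been running longer than that threshold:
select pid, usename, state, now() - query_start as query_duration, query
from pg_stat_activity
where state != 'idle'
  and now() - query_start > interval '5 minutes'
order by query_duration desc;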
Locks monitoring
You can monitor database locks through the
pg_locks
dynamic view, which provides real-time information about any lock contention
that can lead to performance issues.
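For example, a minimal sketch that joins pg_locks with pg_stat_activity to list sessions waiting on a lock that has not yet been granted:
select l.pid, l.locktype, l.mode, l.granted, a.usename, a.query
from pg_locks l
join pg_stat_activity a on a.pid = l.pid
where not l.granted;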
Alerting
In addition to monitoring and logging, you can use Cloud Monitoring to create alerting policies that notify you when a condition that you define is met (for example, sustained high CPU utilization or growing replication lag).
Scaling
Cloud SQL for PostgreSQL supports both vertical and horizontal scaling options.
You scale vertically by adding more resources to the Cloud SQL instance, such as increasing the instance-assigned number of CPUs and memory. Network throughput of your instance depends on the values you choose for CPU and memory.
Cloud SQL supports up to 30 TB of storage space. Adding storage capacity generally increases the throughput and disk IOPS of an instance. Note that the network throughput of a Cloud SQL instance includes reads and writes of your data (disk throughput) as well as the content of queries, calculations, and other data not stored in your database. It's important to consider these factors when vertically scaling your Cloud SQL instance.
You scale horizontally by creating read replicas. Read replicas let you scale your read workloads onto separate Cloud SQL instances without affecting the performance and availability of the primary instance.
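The following gcloud commands are a hedged sketch of both approaches; the instance and replica names are placeholders, and changing CPU or memory restarts the instance:
# Vertical scaling: increase CPU and memory on an existing instance.
gcloud sql instances patch postgresql01 \
    --cpu=4 \
    --memory=15360MB

# Horizontal scaling: add a read replica to offload read traffic.
gcloud sql instances create postgresql01-replica01 \
    --master-instance-name=postgresql01 \
    --region=us-central1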
Backup and recovery
There are two database backup methods for Cloud SQL for PostgreSQL: on-demand and automated. You can take on-demand backups at any time, and they persist until you delete them. Automated backups use a 4-hour backup window and are retained for 7 days.
You can restore Cloud SQL for PostgreSQL database backups to the same instance, overwriting the existing data, or to a new instance. In addition, Cloud SQL for PostgreSQL lets you restore a PostgreSQL database to a specific point in time as long as point-in-time recovery is turned on and the automated backup option is enabled.
Cloud SQL for PostgreSQL provides database cloning capabilities. The clone must be created from the primary instance (that is, it cannot be created from a replica). You can run database backups, restores, and clones from either the Google Cloud console or the gcloud CLI.
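The following gcloud commands are a hedged sketch of these operations; the instance names are placeholders, and BACKUP_ID comes from the output of the list command:
# Take an on-demand backup and list available backups.
gcloud sql backups create --instance=postgresql01
gcloud sql backups list --instance=postgresql01

# Restore a backup into the same instance, overwriting its data.
gcloud sql backups restore BACKUP_ID --restore-instance=postgresql01

# Clone the primary instance into a new instance.
gcloud sql instances clone postgresql01 postgresql01-clone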
Automation
You can use the Cloud SQL Admin API to completely automate administering a Cloud SQL for PostgreSQL instance. The Cloud SQL Admin API is a REST API for controlling different types of resources such as Instances, Databases, Users, Flags, Operations, SslCerts, Tiers, and BackupRuns. For more information, see the API documentation.
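For example, a minimal sketch that lists the Cloud SQL instances in a project through the REST API, assuming that you are authenticated with the gcloud CLI and that PROJECT_ID is replaced with your project ID:
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances"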
What's next
- Explore more about Cloud SQL for PostgreSQL user accounts.
- Learn more about Cloud SQL for PostgreSQL for Oracle users:
- Migrating Oracle users to Cloud SQL for PostgreSQL: Terminology and functionality
- Migrating Oracle users to Cloud SQL for PostgreSQL: Data types, users, and tables
- Migrating Oracle users to Cloud SQL for PostgreSQL: Queries, stored procedures, functions, and triggers
- Migrating Oracle users to Cloud SQL for PostgreSQL: Security, operations, monitoring, and logging
- Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center.