Generally, connection issues fall into one of the following three areas:
- Connecting - are you able to reach your instance over the network?
- Authorizing - are you authorized to connect to the instance?
- Authenticating - does the database accept your database credentials?
Each of those can be further broken down into different paths for investigation. The following section includes examples of questions you can ask yourself to help further narrow down the issue:
Connection issues checklist
- Private IP
- Have you enabled the Service Networking API for your project?
- Are you using a Shared VPC?
- Does your user or service account have the required IAM permissions to manage a private services access connection?
- Is private services access connection configured for your project?
- Did you allocate an IP address range for the private connection?
- Do your allocated IP address ranges contain at least a /24 space for every region where you plan to create PostgreSQL instances?
- If you are specifying an allocated IP address range for your PostgreSQL instances, does the range contain at least a /24 space for every region where you plan to create instances in this range?
- Is the private connection created?
- If the private connection was changed, were the VPC peerings updated?
- Do the VPC logs indicate any errors?
- Is your source machine's IP a non-RFC 1918 address?
- Public IP
- Cloud SQL Auth proxy
- Is the Cloud SQL Auth proxy up to date?
- Is the Cloud SQL Auth proxy running?
- Is the instance connection name formed correctly in the Cloud SQL Auth proxy connection command?
- Have you checked the Cloud SQL Auth proxy output? Pipe the output to a file, or watch the terminal where you started the Cloud SQL Auth proxy.
- Does your user or service account have the required IAM permissions to connect to a Cloud SQL instance?
- Have you enabled the Cloud SQL Admin API for your project?
- If you have an outbound firewall policy, make sure it allows connections to port 3307 on the target Cloud SQL instance.
- If you are connecting using UNIX domain sockets, confirm that the sockets were created by listing the directory you specified with the -dir flag when you started the Cloud SQL Auth proxy.
- Cloud SQL connectors and language-specific code
- Is the connection string formed correctly?
- Have you compared your code with the sample code for your programming language?
- Are you using a runtime or framework for which we don't have sample code?
- If so, have you looked to the community for relevant reference material?
- Self-managed SSL/TLS certificates
- Is the client certificate installed on the source machine?
- Is the client certificate spelled correctly in the connection arguments?
- Is the client certificate still valid?
- Are you getting errors when connecting using SSL?
- Is the server certificate still valid?
- Authorized networks
- Is the source IP address included?
- Are you using a non-RFC 1918 IP address?
- Are you using an unsupported IP address?
- Connection failures
- Native database authentication (username/password)
- IAM database authentication
- Have you enabled the cloudsql.iam_authentication flag on your instance?
- Did you add a policy binding for the account?
- Are you using the Cloud SQL Auth proxy with the -enable_iam_login flag, or an OAuth 2.0 token as the database password?
- If using a service account, are you using the shortened email name?
- Learn more about IAM database authentication in PostgreSQL.
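Several of the Private IP checks above can be verified from the command line. The following is a sketch: the gcloud commands are standard, but NETWORK and PROJECT_ID are placeholders, and you should verify the filter syntax against your gcloud version.

```shell
# Confirm the required APIs are enabled for the project
gcloud services list --enabled \
    --filter="config.name:(servicenetworking.googleapis.com OR sqladmin.googleapis.com)"

# Inspect the private services access peering for the VPC network
gcloud services vpc-peerings list --network=NETWORK

# List the IP ranges allocated for private services access
gcloud compute addresses list --global \
    --filter="purpose=VPC_PEERING" --project=PROJECT_ID
```

If the peering is missing or the allocated range is smaller than a /24 per region, revisit the private services access configuration before debugging further.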
For specific API error messages, see the Error messages reference page.
Additional connectivity troubleshooting
For other issues, see the Connectivity section in the troubleshooting page.
Common connection issues
Verify that your application is closing connections properly
If you see errors containing "Aborted connection nnnn to db:", it usually indicates that your application is not closing connections properly. Network issues can also cause this error. The error does not mean that there are problems with your Cloud SQL instance. To track down the source of the problem, consider running tcpdump and inspecting the packets.
For examples of best practices for connection management, see Managing database connections.
Verify that your certificates have not expired
If your instance is configured to use SSL, go to the Cloud SQL Instances page in the Cloud Console and open the instance. Open its Connections page and make sure that your server certificate is valid. If it has expired, you must add a new certificate and rotate to it.
Verify that you are authorized to connect
If your connections are failing, check that you are authorized to connect:
- If you are having trouble connecting using an IP address, for example,
you are connecting from your on-premises environment with the psql
client, then make sure that the IP address you are connecting from
is authorized to connect
to the Cloud SQL instance.
Connections to a Cloud SQL instance using a private IP address are automatically authorized for RFC 1918 address ranges. This way, all private clients can access the database without going through the Cloud SQL Auth proxy. Non-RFC 1918 address ranges must be configured as authorized networks.
Cloud SQL doesn't learn Non-RFC 1918 subnet routes from your VPC by default. You need to update the network peering to Cloud SQL to export any Non-RFC 1918 routes. For example:
gcloud compute networks peerings update cloudsql-postgres-googleapis-com \
    --network=NETWORK \
    --export-subnet-routes-with-public-ip \
    --project=PROJECT_ID
- Try the gcloud sql connect command to connect to your instance. This command authorizes your IP address for a short time. You can run it in any environment with the Cloud SDK and the psql client installed. You can also run it in Cloud Shell, which is available in the Google Cloud Console and has the Cloud SDK and the psql client pre-installed. Cloud Shell provides a Compute Engine instance that you can use to connect to Cloud SQL.
- Temporarily allow all IP addresses to connect to an instance by authorizing 0.0.0.0/0.
Verify how you connect
If you get an error message like FATAL: database `user` does not exist, it is because the gcloud sql connect --user command only works with the default postgres user. The workaround is to connect using the default user, then use the "\c" psql command to reconnect as the different user.
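In practice, the workaround looks like the following sketch, where INSTANCE_NAME, DATABASE_NAME, and ANOTHER_USER are placeholders:

```shell
# Connect as the default postgres user; gcloud authorizes your IP and prompts for the password
gcloud sql connect INSTANCE_NAME --user=postgres

# Then, inside psql, reconnect to the target database as the intended user:
#   \c DATABASE_NAME ANOTHER_USER
```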
Determining how connections are being initiated
You can see information about your current connections by connecting to your database and running the following command:
SELECT * FROM pg_stat_activity;
Connections that show an IP address, such as 1.2.3.4, are connecting using IP. Connections that show cloudsqlproxy~1.2.3.4 are using the Cloud SQL Auth proxy, or else they originated from App Engine. Connections from localhost may be used by some internal Cloud SQL processes.
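To see those origins directly, you can select just the relevant columns from pg_stat_activity. A sketch, runnable from any client that can reach the database:

```shell
# client_addr is the source of each connection; it is NULL for Unix-socket connections
psql -c "SELECT usename, client_addr, application_name, state FROM pg_stat_activity;"
```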
Understanding connection limits
There are no QPS limits for Cloud SQL instances. However, there are connection, size, and App Engine specific limits in place. See Quotas and Limits.
Database connections consume resources on the server and the connecting application. Always use good connection management practices to minimize your application's footprint and reduce the likelihood of exceeding Cloud SQL connection limits. For more information, see Managing database connections.
Show connections and threads
To see the processes that are running on your database, query the pg_stat_activity view:
SELECT * FROM pg_stat_activity;
Connections timeout (from Compute Engine)
Connections from a Compute Engine instance time out after 10 minutes of inactivity, which can affect long-lived unused connections between your Compute Engine instance and your Cloud SQL instance. For more information, see Networking and Firewalls in the Compute Engine documentation.
To keep long-lived unused connections alive, you can set the TCP keepalive. The following commands set the TCP keepalive value to one minute and make the configuration permanent across instance reboots.
Display the current tcp_keepalive_time value.
cat /proc/sys/net/ipv4/tcp_keepalive_time
Set tcp_keepalive_time to 60 seconds and make it permanent across reboots.
echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
Apply the change.
sudo /sbin/sysctl --load=/etc/sysctl.conf
Display the tcp_keepalive_time value again to verify that the change was applied.
cat /proc/sys/net/ipv4/tcp_keepalive_time
Tools for debugging connectivity
tcpdump is a tool for capturing packets. When you are debugging connectivity problems, run tcpdump to capture and inspect the packets between your host and the Cloud SQL instance.
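For example, a capture limited to the ports Cloud SQL uses might look like the following; the interface choice and port filter are assumptions, so adjust them for your setup:

```shell
# Capture traffic on the PostgreSQL port (5432) and the Cloud SQL Auth proxy port (3307)
sudo tcpdump -i any -nn 'port 5432 or port 3307'
```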
Locate your local IP address
If you don't know the local address of your host, then run the
ip -br address show command. On Linux, this shows the network interface,
the status of the interface, the local IP, and MAC addresses. For example:
eth0 UP 10.128.0.7/32 fe80::4001:aff:fe80:7/64.
Alternatively, you can run
ifconfig to see
the status of your network interfaces.
Testing with Connectivity Test
Connectivity Test is a diagnostics tool that lets you check connectivity between endpoints in your network. It analyzes your configuration and, in some cases, performs run-time verification. Connectivity Test supports Cloud SQL. Follow these instructions to run tests against your Cloud SQL instances.
Testing your connection
You can use the psql client to test your ability to connect from your local environment. For more information, see Connecting the psql client using IP addresses and Connecting the psql client using the Cloud SQL Auth proxy.
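As a sketch, a direct psql test over IP might look like the following; the host, user, and sslmode values are placeholders and assumptions, so substitute your instance's address and configured SSL mode:

```shell
psql "host=INSTANCE_IP port=5432 user=postgres dbname=postgres sslmode=require"
```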
Determining the IP address for your application
To determine the IP address of a computer running your application so you can authorize access to your Cloud SQL instance from that address, use one of the following options:
- If the computer is not behind a proxy or firewall, log in to the computer and use the What is my IP? site to determine its IP address.
- If the computer is behind a proxy or firewall, log in to the computer and use a tool or service like whatismyipaddress.com to determine its true IP address.
Open local ports
To verify that your host is listening on the ports you expect, run the
ss -tunlp4 command. This tells you which ports are open and which processes are listening on them.
For example, if a PostgreSQL database is running, then port 5432 should be
up and listening. For SSH, you should see port 22.
All local port activity
You can use the netstat command to see all the local port activity. For example,
netstat -lt shows all currently listening TCP ports.
Connect to your Cloud SQL instance using telnet
To verify that you can connect to your Cloud SQL instance, use the
telnet command. Telnet attempts to connect to the IP address and
port you give it. For example, where INSTANCE_IP is your instance's IP address:
telnet INSTANCE_IP 5432
On success, you see the following:
Trying INSTANCE_IP...
Connected to INSTANCE_IP.
On failure, telnet hangs at Trying INSTANCE_IP... until you force-close the attempt.
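If telnet is not installed, netcat performs the same reachability check; INSTANCE_IP is a placeholder for your instance's IP address:

```shell
# -v verbose, -z scan only (no data sent); exit status 0 means the port is reachable
nc -vz INSTANCE_IP 5432
```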
Client authentication is controlled by a configuration file named
pg_hba.conf (HBA stands for host-based authentication).
Make sure the replication connections section of the
file on the source database is updated to accept connections from the
Cloud SQL VPC's IP address range.
You can view logs for Cloud SQL instances and other Google Cloud products, such as Cloud VPN or Compute Engine instances. To view log entries for your Cloud SQL instance:
In the Google Cloud Console, go to the Cloud Logging page.
- Select an existing Cloud SQL project at the top of the page.
- In the Query builder, add the following:
- Resource: Select Cloud SQL Database. In the dialog, select a Cloud SQL instance.
- Log names: Scroll to the Cloud SQL section and select the
appropriate log files for your instance, for example, postgres.log.
- Severity: Select a log level.
- Time range: Select a preset or create a custom range.
You can also read log entries with the gcloud CLI. For example:
gcloud logging read "projects/PROJECT_ID/logs/cloudsql.googleapis.com/postgres.log" \
    --limit=10
Private IP addresses
Connections to a Cloud SQL instance using a private IP address are automatically authorized for RFC 1918 address ranges. Non-RFC 1918 address ranges must be configured in Cloud SQL as authorized networks. You also need to update the network peering to Cloud SQL to export any Non-RFC 1918 routes. For example:
gcloud compute networks peerings update cloudsql-postgres-googleapis-com \
    --network=NETWORK \
    --export-subnet-routes-with-public-ip \
    --project=PROJECT_ID
The IP range 172.17.0.0/16 is reserved for the Docker bridge network. Any Cloud SQL instances created with an IP address in that range will be unreachable. Connections from any IP address within that range to Cloud SQL instances using private IP address will fail.
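To check whether an address range you plan to use overlaps the reserved Docker bridge range, Python's standard ipaddress module can help; the 10.10.0.0/24 candidate range below is only an example:

```shell
# Prints True if the candidate range overlaps 172.17.0.0/16, False otherwise
python3 -c "import ipaddress; print(ipaddress.ip_network('172.17.0.0/16').overlaps(ipaddress.ip_network('10.10.0.0/24')))"
```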
See the Cloud VPN troubleshooting page.