Debugging connectivity

You've set up connectivity between your source and destination databases, but how do you know they're connected? When your communications fail between them, how can you find out what's gone wrong and where?

The most basic tools are ping and traceroute.


Ping

Ping performs a basic test to determine if the destination ("remote host") is available from the source. Ping sends an ICMP Echo Request packet to the remote host and expects an ICMP Echo Reply in return. If ping doesn't succeed, then there's no route from the source to the destination. Success, however, doesn't mean that your packets can get through, only that the remote host can generally be reached.

While ping can tell if a host is alive and responding, it's not guaranteed to be reliable. Some network providers block ICMP as a security precaution, which can make connectivity debugging more difficult.


Traceroute

Traceroute tests the complete route that network packets take from one host to another. It shows all the steps ("hops") that a packet takes along the way, and how long each step takes. If the packet doesn't go all the way through to the destination, traceroute doesn't complete, but ends with a series of asterisks. In this case, look for the last IP address that was successfully reached. That's where connectivity broke down.

Traceroute can time out. It can also fail to complete if a gateway along the way isn't configured correctly to pass the packet along to the next hop.

When traceroute fails to complete, you might still be able to figure out where it stopped. Find the last IP address listed in the traceroute output, and do a browser search for "who owns [IP_ADDRESS]". The results may or may not show the owner of the address, but it's worth a try.


mtr

The mtr tool is a form of traceroute that remains live and updates continuously, similar to how the top command works for local processes.

Locate your local IP address

If you don't know the local address of your host, then run the ip -br address show command. On Linux, this shows each network interface, its status, and its IP addresses. For example: eth0 UP fe80::4001:aff:fe80:7/64.

Alternatively, you can run ipconfig or ifconfig to see the status of your network interfaces.
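If neither tool is available, a short script can give the same answer. The following is a minimal sketch: connecting a UDP socket sends no packets, but it asks the kernel which local address it would route outbound traffic from. The 192.0.2.1 address is an assumption (a TEST-NET documentation address); any routable address works.

```python
import socket

def local_outbound_ip() -> str:
    """Best-effort guess of the local IP used for outbound traffic."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        try:
            # UDP connect sends nothing; it only selects a route
            s.connect(("192.0.2.1", 80))
            return s.getsockname()[0]
        except OSError:
            # No route at all (for example, the host is offline)
            return "127.0.0.1"

print(local_outbound_ip())
```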

Locate the outgoing IP address

If you don't know the IP address that the source and destination databases use to communicate with each other (the outgoing IP address), then complete the following steps:

  1. Go to the SQL Instances page in the Google Cloud Console.

  2. Click the name of the instance that's associated with the migration job that you're debugging.

  3. Scroll down to the Connect to this instance pane, where the outgoing IP address appears.

Open local ports

To verify that your host is listening on the ports you think it is, run the ss -tunlp4 command. This shows which ports are open and listening. For example, if you have a PostgreSQL database running, then port 5432 should be listed as listening. For SSH, you should see port 22.

All local port activity

Use the netstat command to see all the local port activity. For example, netstat -lt shows all the currently listening TCP ports.
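The same check can be scripted when ss or netstat isn't available. This is a minimal sketch, not a replacement for those tools: it only tests whether a TCP connection to a given local port succeeds.

```python
import socket

def is_listening(port: int, host: str = "127.0.0.1", timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns an errno instead of raising; 0 means success
        return s.connect_ex((host, port)) == 0

# For example, check the default PostgreSQL and SSH ports
for port in (5432, 22):
    print(port, "listening:", is_listening(port))
```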

Connect to the remote host using telnet

To verify that you can connect to the remote host using TCP, run the telnet command. Telnet attempts to connect to the IP address and port you give it.

If your remote host is running a PostgreSQL database, for example, then you should be able to telnet to it on port 5432: telnet [remote_host_ip] 5432.

On success, you see output similar to the following:


Trying [remote_host_ip]...
Connected to [remote_host_ip].

On failure, telnet stops responding until you force-close the attempt (for example, with Control+C):


Trying [remote_host_ip]...
^C
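If telnet isn't installed, a small script can run the same test and also distinguish the two failure modes: an immediate refusal (host reachable, port closed) versus a hang (packets silently dropped, often by a firewall). A sketch:

```python
import socket

def probe(host: str, port: int, timeout: float = 5.0) -> str:
    """TCP probe similar to the telnet test.

    Returns "open", "refused" (host reachable, port closed), "filtered"
    (no reply before the timeout, often a firewall drop), or
    "unreachable" (no route to the host).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "refused"
    except socket.timeout:
        return "filtered"
    except OSError:
        return "unreachable"

# For example, test the default PostgreSQL port on this machine
print(probe("127.0.0.1", 5432, timeout=2.0))
```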

Client authentication

Client authentication is controlled by a configuration file, which is named pg_hba.conf (HBA stands for host-based authentication).

Make sure the replication connections section of the pg_hba.conf file on the source database is updated to accept connections from the Cloud SQL VPC's IP address range.
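For example, entries of the following form allow replication and migration connections from a private range. The user names and the 10.100.0.0/24 range are placeholders; substitute the range allocated for your VPC's private service connection.

```
# TYPE  DATABASE     USER              ADDRESS          METHOD
host    replication  replication_user  10.100.0.0/24    md5
host    all          migration_user    10.100.0.0/24    md5
```

After editing pg_hba.conf, reload the server configuration (for example, with SELECT pg_reload_conf();) for the change to take effect.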

Cloud Logging

Database Migration Service and Cloud SQL use Cloud Logging. See the Cloud Logging documentation for complete information and review the Cloud SQL sample queries.

View logs

You can view logs for Cloud SQL instances and other Google Cloud products, such as Cloud VPN or Compute Engine instances. To view log entries for your Cloud SQL instance:


  1. Go to the Logs Viewer
  2. Select an existing Cloud SQL project at the top of the page.
  3. In the Query builder, add the following:
    • Resource: Select Cloud SQL Database. In the dialog, select a Cloud SQL instance.
    • Log names: Scroll to the Cloud SQL section and select the appropriate log files for your instance.
    • Severity: Select a log level.
    • Time range: Select a preset or create a custom range.


Use the gcloud logging command to view log entries. In the following example, replace [PROJECT_ID] with your project ID. The --limit flag is an optional parameter that indicates the maximum number of entries to return.

gcloud logging read "projects/[PROJECT_ID]/logs/" --limit=10

Private IP addresses

Connections to a Cloud SQL instance that use a private IP address are automatically authorized for RFC 1918 address ranges. Non-RFC 1918 address ranges must be configured in Cloud SQL as authorized networks. You also need to update the network peering to Cloud SQL to export any non-RFC 1918 routes. For example:

gcloud compute networks peerings update cloudsql-postgres-googleapis-com --network=NETWORK --export-subnet-routes-with-public-ip --project=PROJECT
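To check whether a given address falls inside the RFC 1918 ranges, and therefore whether it needs to be configured as an authorized network, a short script can help. This sketch simply hard-codes the three private blocks that RFC 1918 defines:

```python
import ipaddress

# The three private address blocks defined by RFC 1918
RFC1918_NETWORKS = [
    ipaddress.ip_network(n)
    for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
]

def is_rfc1918(address: str) -> bool:
    """Return True if the IPv4 address is in an RFC 1918 range."""
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in RFC1918_NETWORKS)

print(is_rfc1918("10.20.30.40"))   # → True
print(is_rfc1918("203.0.113.9"))   # → False
```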

The Docker bridge network reserves an IP address range (172.17.0.0/16 by default). Any Cloud SQL instance created with an IP address in that range is unreachable. Connections from any IP address within that range to Cloud SQL instances using private IP will also fail.

VPN troubleshooting

See the Google Cloud Cloud VPN troubleshooting page.

Troubleshooting reverse SSH tunnel issues

SSH tunneling is a method of forwarding communication on top of an SSH connection. With reverse SSH tunneling, it's the destination network that initiates the tunnel connection. This is useful when you don't want to open a port in your own network for security reasons.

What you're trying to achieve is to set up the following:

Cloud SQL DB ---> Compute Engine VM bastion ---> tunnel ---> source network bastion ---> source DB

It's assumed that:

  • The Compute Engine VM bastion can access the Cloud SQL DB (this is achieved by peering the Cloud SQL network to the Compute Engine VM network).
  • The source network bastion can access the source DB.

You then set up an SSH tunnel from the source network bastion to the Compute Engine VM bastion, which routes any incoming connection to a designated port on the Compute Engine VM bastion through the tunnel to the source DB.

Each link in this scenario can be set up improperly and prevent the entire flow from working. Troubleshoot each link, one by one:

source network bastion ---> source DB

  1. Connect to the source network bastion using SSH, or from the terminal if it's the local machine.
  2. Test connectivity to the source DB using one of the following methods:
    • telnet [source_db_host_or_ip] [source_db_port] - expect to see the telnet connection strings, ending with Connected to x.x.x.x.
    • [db_client] -h[source_db_host_or_ip] -P[source_db_port] - expect to see access denied

If this fails, then you need to verify that you enabled access from this bastion to the source DB.

Compute Engine VM bastion ---> source DB

  1. SSH to the Compute Engine VM bastion (using gcloud compute ssh VM_INSTANCE_NAME)
  2. Test connectivity to the source DB using one of the following methods:
    • telnet 127.0.0.1 [tunnel_port] - expect to see the telnet connection strings, ending with Connected to x.x.x.x.
    • [db_client] -h127.0.0.1 -P[tunnel_port] - expect to see access denied

If this fails, then you need to verify that the tunnel is up and running properly. Running sudo netstat -tupln shows all listening processes on this VM; you should see sshd listening on the tunnel port.
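The per-link checks above lend themselves to scripting. In this sketch, the hosts and ports in CHAIN are placeholders (substitute your own bastion address and tunnel port); the function probes each link's TCP endpoint in order and reports the first one that fails.

```python
import socket

# Placeholder endpoints for each link; substitute your own values
CHAIN = [
    ("source-bastion.example.com", 22),  # SSH to the source network bastion
    ("127.0.0.1", 5000),                 # tunnel port on the Compute Engine VM
]

def first_broken_link(chain, timeout=3.0):
    """Return the first (host, port) that doesn't accept TCP, or None."""
    for host, port in chain:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass
        except OSError:
            return (host, port)
    return None

print("first broken link:", first_broken_link(CHAIN))
```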

Cloud SQL DB ---> source DB

This is best tested by testing the migration job from Database Migration Service. If this fails, then it means there's some issue with VPC peering or routing between the Cloud SQL network and the Compute Engine VM bastion network.

The source database server's firewall must be configured to allow the entire internal IP range allocated for the private service connection of the VPC network that the Cloud SQL destination instance uses in the privateNetwork field of its ipConfiguration settings.

To find the internal IP range in the console:

  1. Go to the VPC networks page in the Google Cloud Console.

  2. Select the VPC network that you want to use.


You can also view the traffic between the Cloud SQL instance and the Compute Engine VM instance in the Cloud Logging console in the Cloud VPN gateway project. In the Compute Engine VM logs, look for traffic coming from the Cloud SQL instance. In the Cloud SQL instance's logs, look for traffic from the Compute Engine VM.