This page lists known issues with Cloud SQL for SQL Server, along with ways you can avoid or recover from these issues.
Instance connection issues
Expired SSL/TLS certificates
If your instance is configured to use SSL, go to the Cloud SQL Instances page in the Google Cloud console and open the instance. Open its Connections page, select the Security tab and make sure that your server certificate is valid. If it has expired, you must add a new certificate and rotate to it.
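As an illustration, the check and rotation can also be done from the gcloud CLI. This is a sketch; INSTANCE_NAME is a placeholder for your instance name:

```shell
# List the server CA certificates for the instance, including expiration
# dates, to check whether the active certificate has expired.
gcloud sql ssl server-ca-certs list --instance=INSTANCE_NAME

# Add a new server CA certificate; the instance continues to serve the
# current certificate until you rotate.
gcloud sql ssl server-ca-certs create --instance=INSTANCE_NAME

# After configuring clients with the new certificate, rotate to it.
gcloud sql ssl server-ca-certs rotate --instance=INSTANCE_NAME
```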
Cloud SQL Auth Proxy version
If you are connecting using the Cloud SQL Auth Proxy, make sure you are using the most recent version. For more information, see Keeping the Cloud SQL Auth Proxy up to date.
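For example, you can print your installed version to compare it against the latest release. The binary name below assumes Cloud SQL Auth Proxy v2:

```shell
# Print the installed Cloud SQL Auth Proxy version (v2 binary name).
cloud-sql-proxy --version
```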
Not authorized to connect
If you try to connect to an instance that doesn't exist in the project, the error message says only that you are not authorized to access that instance.
Can't create a Cloud SQL instance
If you see the following error message, try to create the Cloud SQL instance again:
Failed to create subnetwork. Router status is temporarily unavailable. Please try again later. Help Token: [token-ID]
Administrative issues
A large export operation can adversely affect instance availability
Before starting a large export, ensure that free disk space on the instance is at least 25 percent of the database size. Doing so helps prevent issues with aggressive autogrowth, which can affect the availability of the instance.
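As a quick sketch of the guideline, with a hypothetical 400 GB database:

```shell
# Hypothetical database size; substitute your instance's actual value.
db_size_gb=400

# The guideline above: free space of at least 25 percent of the database size.
required_free_gb=$(( db_size_gb / 4 ))
echo "$required_free_gb"   # prints 100
```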
If your SQL Server instance uses a SQL Server Express edition:
- If you specify a flag when you create a new instance, then creating the instance fails.
- You can't set database flags on an existing instance.
Long-running Cloud SQL import and export instance operations can't be cancelled or stopped
Only one operation can run at a time on a Cloud SQL instance. Make sure you don't need to perform other operations on an instance when you start a long-running operation.
When you start a long-running Cloud SQL instance operation, such as an import or export operation, there's no way to cancel the operation without restarting the instance.
If you cancel an import from a BAK file, then the database that you're importing is left in a partial state. You must drop the database. If you cancel an import from a SQL file, then you must clean up partial data manually.
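For example, the partial database left behind by a cancelled BAK import can be dropped with the sqlcmd utility. The connection details and database name below are placeholders:

```shell
# Connect to the instance and drop the partially imported database.
# INSTANCE_IP and partial_database are placeholders; supply real values.
# sqlcmd prompts for the password when -P is not given.
sqlcmd -S INSTANCE_IP -U sqlserver -Q "DROP DATABASE [partial_database]"
```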
Issues with importing and exporting data
- Do not create a BAK file for import from a read-only database or from a database that is in single-user mode. If you import a BAK file that was created from such a database, an error may occur.
- If you're trying to import and export data from a large database (for example, a database that has 500 GB of data or greater), then the import and export operations might take a long time to complete. In addition, other operations (for example, the backup operation) aren't available for you to perform while the import or export is occurring. A potential option to improve the performance of the import and export process is to restore a previous backup using gcloud or the API.
- Cloud SQL supports bulk insert only on SQL Server 2022.
- Cloud SQL supports only the RAW codepage.
- Cloud SQL doesn't support bulk insert on read replicas.
- Cloud SQL supports bulk insert only for importing data to tables.
- Cloud Storage supports a maximum single-object size of up to five terabytes. If your database is larger than 5 TB, the export operation to Cloud Storage fails. In this case, you need to break your export into smaller segments.
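One way to keep each exported object under the limit is a striped BAK export, sketched here with the gcloud CLI; the instance, bucket, and database names are placeholders:

```shell
# Export the database as multiple stripe files so that no single Cloud
# Storage object exceeds the five-terabyte limit.
gcloud sql export bak INSTANCE_NAME gs://BUCKET_NAME/striped-export \
    --database=DATABASE_NAME \
    --striped \
    --stripe_count=10
```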
Transaction logs and disk growth
Logs are purged once daily, not continuously. When the number of days of log retention is configured to be the same as the number of backups, a day of logging might be lost, depending on when the backup occurs. For example, setting log retention to seven days and backup retention to seven backups means that between six and seven days of logs will be retained.
To guarantee the specified number of days of log retention, we recommend setting the number of backups to at least one more than the days of log retention.
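The interaction between the two settings can be sketched with a small helper. This models the documented behavior assuming one automated backup per day; it is an illustration, not an exact implementation:

```shell
# Minimum guaranteed days of transaction-log retention for a given
# log-retention setting (days) and backup-retention setting (count),
# assuming one automated backup per day: logs older than the oldest
# retained backup can be purged at the daily purge.
min_log_days() {
  local log_days=$1 backups=$2
  echo $(( log_days < backups ? log_days : backups - 1 ))
}

min_log_days 7 7   # prints 6: a day of logging can be lost
min_log_days 7 8   # prints 7: one extra backup guarantees the full window
```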
Issues related to Cloud Monitoring or Cloud Logging
Instances with the following region names are displayed incorrectly in certain contexts, as follows:
- us-central1 is displayed as us-central
- europe-west1 is displayed as europe
- asia-east1 is displayed as asia
This issue occurs in the following contexts:
- Alerting in Cloud Monitoring
- Metrics Explorer
- Cloud Logging
You can mitigate the issue for Alerting in Cloud Monitoring and for Metrics Explorer by using resource metadata labels. Use the system metadata label region instead of the cloudsql_database monitored resource label region.