Cloud SQL for MySQL error messages

This page discusses some of the error messages encountered in Cloud SQL.

Overview

Error messages in Cloud SQL come from many sources and appear in many places. Some error messages come from the database engines themselves, some from the Cloud SQL service, some from client applications, and some are returned by calls to the Cloud SQL Admin API.

This page includes some of the most common errors seen in Cloud SQL. If you do not find the error code or message you are looking for here, consult the source reference material for the component that produced it: the database engine, the Cloud SQL service, the client application, or the Cloud SQL Admin API.

If you don't find reference material for the error message that you're seeing, you can also search community channels, such as forums and the public issue tracker, where other users may have relevant experience.

Unknown errors

Sometimes you see the following error message: Unknown Error. This message can have many possible sources and meanings:

  • Cloud SQL runs third-party binaries (for example, mysqld), which can generate unknown error messages. Such errors are in the scope of the third-party binaries and are outside the scope of Cloud SQL.

  • Sometimes a third-party tool reports the message on the command-line, but a more specific error is in the Cloud SQL log files.

  • Sometimes it is an error code that is unknown to the Cloud SQL service. In this case, it really means Unknown Error Code.

The following list shows some known cases where an Unknown Error can occur, and gives specific remedies where applicable. However, this is not a complete list. If you don't find your case here, check the public issue tracker for Cloud SQL. If you don't find the issue there, consider submitting a report or reviewing other support options.

Backup
The issue might be: If you see this during automated or manual backups, it's likely the instance disk is full.
Things to try: If the temporary file size is taking up too much space, you can restart the instance to remove the file and free up the disk space. Otherwise, you might need to upgrade your instance to a larger disk size.

Clone
The issue might be: There is a shortage of resources in the selected zone.
Things to try: Try another zone in the region, or wait and try again later.

Create instance
The issue might be:
  • You are trying to re-use the name of a recently deleted instance.
  • There are intermittent connectivity issues.
  • The logs might show that the Service Networking API is not enabled for the project.
Things to try:
  • Instance names cannot be re-used until about a week after deletion, so choose a different name or wait.
  • For intermittent connectivity issues, the only remedy is to try again.
  • Enable the Service Networking API for the project.

Create replica
The issue might be: A more specific error is likely in the log files.
Things to try: Inspect the logs in Cloud Logging to find the actual error. If the error is set Service Networking service account as servicenetworking.serviceAgent role on consumer project, disable and re-enable the Service Networking API. This action creates the service account necessary to continue with the process.

Export
The issue might be: If you see this while trying to export a database to a Cloud Storage bucket, the transfer may be failing due to a bandwidth issue. The Cloud SQL instance may be located in a different region than the Cloud Storage bucket; reading and writing data from one continent to another involves a lot of network usage and can cause intermittent issues like this.

Failover (legacy)
The issue might be: If you're using the legacy failover configuration, the failover replica machine may not be large enough to handle the failover.
Things to try: The best solution is to migrate to the current high availability configuration. Otherwise, update the failover replica to a larger machine.

Failover (automatic)
The issue might be: An automatic failover operation can produce this error message when the service detects that the primary instance is still responsive.
Things to try: There is nothing to be done in this case. The failover won't occur because it isn't needed.

Import
The issue might be: The import file may contain statements that require the superuser role.
Things to try: Edit the file to remove any statements that require the superuser role.
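Several of the remedies above involve enabling the Service Networking API. As a sketch of how this might look from the command line (assuming the gcloud CLI is installed and authenticated; PROJECT_ID is a placeholder):

```shell
# Enable the Service Networking API for the project.
gcloud services enable servicenetworking.googleapis.com --project=PROJECT_ID

# For the "create replica" case, disabling and then re-enabling the API
# recreates the missing service account mentioned in the error message.
gcloud services disable servicenetworking.googleapis.com --project=PROJECT_ID
gcloud services enable servicenetworking.googleapis.com --project=PROJECT_ID
```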

Other errors

Aborted connection

You see the error message Got an error reading communication packets or Aborted connection xxx to db: DB_NAME.

The issue might be

The Aborted connection error indicates that an application is not ending connections properly.

Things to try

Check for the following conditions:

  • The application did not call mysql_close() before exiting.
  • Communication errors.
  • The application may have been sleeping more than the number of seconds specified in wait_timeout or interactive_timeout without issuing any requests to the server. See Section 5.1.7, Server System Variables.
  • The application ended abruptly in the middle of a data transfer.
  • The max_allowed_packet variable value may be too small, or queries require more memory than is allocated for mysqld. This can often be resolved by raising the max_allowed_packet flag to a larger value.
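As a sketch of the last remedy (assuming the gcloud CLI; INSTANCE_NAME and the values are placeholders), the packet size and idle timeouts can be raised as database flags. Note that --database-flags replaces the instance's entire flag list, so include any flags already set:

```shell
# Raise max_allowed_packet (value in bytes) and the idle timeout.
# --database-flags overwrites all flags on the instance, so repeat any
# existing flags in the same invocation.
gcloud sql instances patch INSTANCE_NAME \
  --database-flags=max_allowed_packet=1073741824,wait_timeout=28800
```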


Bad request

You see the error message HTTP Error 400 Bad Request.

The issue might be

Bad Request can have many causes. Illegal Argument is one of the most common. In this case, the request is either using the wrong argument or an invalid value for the argument. For the many other causes, the error message might contain a useful hint.

Things to try

For Illegal Argument, check the request to make sure each argument is permissible and each value for the argument is valid. For all other causes, check the log files to see if there is more information available.


Access denied for user

You see the error message Access denied for user 'XXX'@'XXX' (using password: XXX).

The issue might be

There could be several causes, including:

  • The username (or password) is incorrect.
  • The user is connecting from a host other than the one specified for the account (the @XXX part).
  • The user doesn't have the correct privileges for the database they are trying to connect to.

Things to try

  • Verify the username and corresponding password.
  • Check the host the connection originates from to see if it matches the host where the user has access privileges.
  • Check the user's grant privileges in the database.
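One way to check the host patterns and grants for a user is with the mysql client. This is a sketch; INSTANCE_IP, USERNAME, and HOST are placeholders for your own values:

```shell
# List the host patterns defined for the user, then show its grants.
# Replace INSTANCE_IP, USERNAME, and HOST with your own values.
mysql -h INSTANCE_IP -u root -p \
  -e "SELECT user, host FROM mysql.user WHERE user = 'USERNAME';
      SHOW GRANTS FOR 'USERNAME'@'HOST';"
```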

Disk is full

You see the error message Disk is full, or you notice you have run out of disk space.

The issue might be

The primary instance's disk can fill up during replica creation.

Things to try

Edit the primary instance to upgrade it to a larger disk size.
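For example, a sketch of increasing the disk size with the gcloud CLI (INSTANCE_NAME and the size are placeholders; storage size can only be increased, never decreased):

```shell
# Grow the instance's storage to 100 GB. Storage can grow but never shrink.
gcloud sql instances patch INSTANCE_NAME --storage-size=100GB
```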


Enabling a flag crashes the instance

After enabling a flag, the instance loops between panicking and crashing.

The issue might be

Setting the max_connections flag value too high has been known to cause this error.

Things to try

Setting the max_connections flag value too high can result in the instance not being able to accommodate all connections, resulting in Error 1040: Too many connections and heartbeat failures. Contact customer support to request a flag removal followed by a hard drain. This forces the instance to restart on a different host with a fresh configuration, without the flag or setting.


Fatal error during upgrade

You see the error message ERROR_INTERNAL_FATAL when upgrading resources on an instance.

The issue might be

There could be many causes.

Things to try

Logs may reveal more information. Otherwise, contact customer support to force recreate the instance.


HTTP Error 408: (Timeout) during export

You see the error message HTTP Error 408 (Timeout) while performing an export job in Cloud SQL.

The issue might be

The CSV and SQL formats export differently. The SQL format includes the entire database and is likely to take longer to complete. With the CSV format, you can specify which elements of the database to include in the export operation.

Things to try

Use the CSV format and run multiple, smaller export jobs to reduce the size and length of each operation.
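A sketch of a smaller CSV export job (assuming the gcloud CLI; the instance, bucket, database, table, and query are placeholders):

```shell
# Export only one table's rows to CSV instead of the whole database.
gcloud sql export csv INSTANCE_NAME gs://BUCKET_NAME/orders.csv \
  --database=DATABASE_NAME \
  --query="SELECT * FROM orders"
```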


HTTP Error 409

You see the error message HTTP Error 409 in a Terraform script, or Error 409 Operation failed because another operation was already in progress.

The issue might be

  1. A Terraform script is starting a new operation before the previous one completes. Cloud SQL operations do not run concurrently, so the script must wait for each operation to finish before beginning the next one.
  2. You are trying to re-use the same name for a new instance as a recently deleted one before the end of the waiting period for name re-use.

Things to try

  1. If running a Terraform script, halt execution until each instance operation is completed. Have the script poll and wait until a 200 is returned for the previous operation ID. Otherwise, wait for the currently running operation to finish.
  2. After deleting an instance, you have to wait a week before you can re-use the same name for a new instance.
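A sketch of waiting for the previous operation to finish before issuing the next one (gcloud CLI assumed; INSTANCE_NAME is a placeholder):

```shell
# Find the most recent operation on the instance, then block until it
# completes before starting the next operation.
OPERATION_ID=$(gcloud sql operations list --instance=INSTANCE_NAME \
  --limit=1 --format="value(name)")
gcloud sql operations wait "$OPERATION_ID" --timeout=unlimited
```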

Instance cannot be deleted

Error message

You see the error message ERROR: (gcloud.sql.instances.delete) HTTP Error 409: The instance or operation is not in an appropriate state to handle the request, or the instance has a INSTANCE_RISKY_FLAG_CONFIG flag status.

The issue might be

  1. Another operation is in progress.
  2. The INSTANCE_RISKY_FLAG_CONFIG warning is triggered whenever at least one BETA flag is being used.

Things to try

  1. Cloud SQL operations do not run concurrently. Wait for the other operation to complete.
  2. Remove the risky flag settings and restart the instance.

Internal error

You see the error message {"ResourceType":"sqladmin.v1beta4.instance", "ResourceErrorCode":"INTERNAL_ERROR","ResourceErrorMessage":null}.

The issue might be

The service project could be missing the service networking service account required for this feature.

Things to try

To repair service permissions, disable the Service Networking API, wait five minutes and then re-enable it.


MySQL Error 1040

You see the error message MySQL Error 1040: Too many connections.

The issue might be

Setting the max_connections flag value too high can cause this error.

Things to try

Contact customer support to request a flag removal followed by a hard drain. This forces the instance to restart on a different host with a fresh configuration, without the flag or setting.


Network association failed

You see the error message Network association failed due to the following error: set Service Networking service account as servicenetworking.serviceAgent role on consumer project.

The issue might be

The Service Networking API is not enabled in the project.

Things to try

Enable the Service Networking API in your project. If you see this error when you are trying to assign a private IP address to a Cloud SQL instance, and you are using a shared VPC, you also need to enable the Service Networking API for the host project.


Not authorized to connect

You see the error message Unauthorized to connect.

The issue might be

There can be many causes, as authorization occurs at many levels.

  • At the database level, the database user must exist and its password must match.
  • At the project level, the user may not have the correct IAM permissions, including the serviceusage.services.use or cloudsql.instances.connect permissions.
  • At the network level, if the Cloud SQL instance is using public IP, the connection's source IP must be in an authorized network.

Things to try

  • Ensure the user exists and its password matches.
  • Assign the Service Usage Consumer role to the user account. This role includes the permission serviceusage.services.use.
  • If using public IP, ensure the source IP is in an authorized network.
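As a sketch, the role assignment mentioned above can be made with the gcloud CLI (PROJECT_ID and USER_EMAIL are placeholders):

```shell
# Grant the Service Usage Consumer role, which includes the
# serviceusage.services.use permission.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="roles/serviceusage.serviceUsageConsumer"
```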

Operation isn't valid for this instance

You see the error message HTTP Error 400: This operation isn't valid for this instance from an API call to instances.restoreBackup.

The issue might be

You cannot restore from backup to an instance with a storage size (XX GB) smaller than the backup size (YY GB).

Things to try

Edit the target instance to increase its storage size.


Out of memory during automated backup

You see the error message [ERROR] InnoDB: Write to file ./ibtmp1 failed at offset XXXX, YYYY bytes should have been written, only 0 were written.

The issue might be

The instance reached a hard limit when conducting an automated backup.

Things to try

Check that your OS and file system support files of this size. Check that the disk is not full or out of disk quota. You can request an increase to your quotas from the Google Cloud Console or edit the instance to upgrade it to a larger disk size.


Quota exceeded

You see the error message RESOURCE_EXHAUSTED: Quota exceeded, or GLOBAL_ADDRESSES_QUOTA_EXCEEDED.

The issue might be

You reached the limit of your per-minute or daily quota. Review the quotas and limits for Cloud SQL.

Things to try

Request an increase to your quotas from the Google Cloud Console.


Remaining connection slots are reserved

You see the error message FATAL: remaining connection slots are reserved for non-replication superuser connections.

The issue might be

The maximum allowed connections have been reached.

Things to try

Increase the max_connections flag. Note that increasing max_connections above a certain value results in losing SLA support. See Configuring database flags.
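A sketch of raising the flag with the gcloud CLI (INSTANCE_NAME and the value are placeholders):

```shell
# Raise max_connections. --database-flags overwrites the instance's full
# flag list, so include any other flags you rely on in the same command.
gcloud sql instances patch INSTANCE_NAME --database-flags=max_connections=500
```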


Request is missing a valid API key

You see the error message HTTP Error 403: PERMISSION_DENIED. The request is missing a valid API key.

The issue might be

You might not have a valid service account key JSON file, or it might not be stored in the expected location.

Things to try

Verify that you have a valid service account key JSON file in the location stored in the GOOGLE_APPLICATION_CREDENTIALS environment variable and that the variable points to the correct location.


System error occurred

You see the error message [ERROR_RDBMS] system error occurred.

The issue might be

  1. The user might not have all the Cloud Storage permissions it needs.
  2. The database table might not exist.

Things to try

  1. Check that you have at least WRITER permissions on the bucket and READER permissions on the export file. For more information on configuring access control in Cloud Storage, see Create and Manage Access Control Lists.
  2. Ensure the table exists. If the table does exist, confirm that you have the correct permissions on the storage bucket.

Table definition has changed

You see the error message Table definition has changed, please retry transaction when dumping table.

The issue might be

During the export process a change occurred in the table.

Things to try

The dump transaction can fail if you use the following statements during the export operation:

  • ALTER TABLE
  • CREATE TABLE
  • DROP TABLE
  • RENAME TABLE
  • TRUNCATE TABLE
Avoid running any of these statements while the export operation is in progress, then retry the export.


Temporary file size exceeds temp_file_limit

You see the error message Temporary file size exceeds temp_file_limit.

The issue might be

The temp_file_limit flag is set too low for your database usage.

Things to try

Increase the temp_file_limit size. See Configuring database flags.
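A sketch of raising the flag with the gcloud CLI (INSTANCE_NAME and the value, in bytes, are placeholders):

```shell
# Raise temp_file_limit (bytes). --database-flags overwrites the full
# flag list, so repeat any existing flags in the same command.
gcloud sql instances patch INSTANCE_NAME \
  --database-flags=temp_file_limit=10737418240
```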