This page provides best practices for importing and exporting data with Cloud SQL. For step-by-step instructions for importing data into Cloud SQL, see Importing Data.
To export data from Cloud SQL for use in a MySQL instance that you manage, see Exporting Data.
Best practices for importing and exporting
The following are best practices to consider when importing and exporting data:
- Use the same SQL mode for import and export.
- Don't use Cloud Storage Requester Pays buckets.
- Use the correct flags when you create a SQL dump file.
- Compress data to reduce cost.
- Reduce long-running import and export processes.
- Use InnoDB.
- MySQL import and migration jobs containing metadata with the DEFINER clause.
- Verify the imported database.
Use the same SQL mode for import and export
The SQL mode setting affects how Cloud SQL interprets SQL queries. For example, if you export from a database without Strict SQL enabled and then try to import into Cloud SQL (which enables Strict SQL by default), the import might fail. The best practice is to use the same SQL mode on import that you used for export.
Review the SQL mode on both the source and target databases for compatibility. Pay particular attention to the flags that enable Strict SQL mode. If Strict SQL is not set on your source database, you will likely want to remove it in Cloud SQL; to remove it, set the sql_mode flag to a value that does not enable Strict SQL.
To verify that your Cloud SQL instance has the desired mode set, run the following query:
SELECT @@GLOBAL.sql_mode;
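If the mode on your instance doesn't match the source, you can change it through the sql_mode database flag. A minimal sketch with gcloud (the instance name is a placeholder, and TRADITIONAL stands in for whatever mode value matches your source):
gcloud sql instances patch [INSTANCE_NAME] --database-flags=sql_mode=TRADITIONAL
Note that patching database flags overwrites any flags already set on the instance, so include your existing flags in the same command.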
Don't use Cloud Storage Requester Pays buckets
You cannot use a Cloud Storage bucket that has Requester Pays enabled for imports and exports from Cloud SQL.
Use the correct flags when you create a SQL dump file
If you do not use the right flags when you export your data to a SQL dump file, your import might be unsuccessful. For information about creating a SQL dump file for import into Cloud SQL, see Creating a SQL dump file.
Compress data to reduce cost
Cloud SQL supports importing and exporting both compressed and uncompressed files. Compression can save significant storage space on Cloud Storage and reduce your storage costs, especially when you are exporting large instances.
When you export a SQL dump or CSV file, use a .gz file extension to compress the data. When you import a file with a .gz extension, it is decompressed automatically.
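For example, a compressed export to Cloud Storage might look like the following sketch (the instance, bucket, and database names are placeholders):
gcloud sql export sql [INSTANCE_NAME] gs://[BUCKET_NAME]/sqldumpfile.sql.gz \
  --database=[DATABASE_NAME]
Because the destination file name ends in .gz, the export is compressed; importing the same file later decompresses it automatically.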
Reduce long-running import and export processes
Imports into and exports from Cloud SQL using the import functionality (with a Cloud Storage bucket) can take a long time to complete, depending on the size of the database. This can have the following impacts:
- You cannot stop a long-running Cloud SQL instance operation.
- You can perform only one import or export operation at a time for each instance.
You can decrease the amount of time it takes to complete each operation by using the Cloud SQL import or export functionality with smaller batches of data.
For exports, you can use serverless export to minimize the impact on database performance and allow other operations to run on your instance while an export is running. For more tips, see Diagnosing Issues with Cloud SQL Instances.
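For example, a serverless export can be requested with the --offload flag; this sketch assumes placeholder instance, bucket, and database names:
gcloud sql export sql [INSTANCE_NAME] gs://[BUCKET_NAME]/sqldumpfile.sql.gz \
  --database=[DATABASE_NAME] --offload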
Use InnoDB
InnoDB is the only supported storage engine for MySQL instances.
You can convert your tables from MyISAM to InnoDB by piping the output of mysqldump through a sed script as follows:
mysqldump --databases [DATABASE_NAME] \
  -h [INSTANCE_IP] -u [USERNAME] --password=[PASSWORD] \
  --hex-blob --default-character-set=utf8mb4 \
  | sed 's/ENGINE=MyISAM/ENGINE=InnoDB/g' > [DATABASE_FILE].sql
MySQL import and migration jobs containing metadata with the DEFINER clause
Because a MySQL import or migration job doesn't migrate user data, sources and dump files that contain metadata defined by users with the DEFINER clause will fail to import or migrate, because those users do not yet exist on the target.
To identify which DEFINER values exist in your metadata, use the following queries (or search your dump file, as shown after the queries) and check whether there are entries for either root%localhost or users that don't exist in the target instance.
SELECT DISTINCT DEFINER FROM INFORMATION_SCHEMA.EVENTS;
SELECT DISTINCT DEFINER FROM INFORMATION_SCHEMA.ROUTINES;
SELECT DISTINCT DEFINER FROM INFORMATION_SCHEMA.TRIGGERS;
SELECT DISTINCT DEFINER FROM INFORMATION_SCHEMA.VIEWS;
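If you are working from a dump file instead, a rough search along these lines surfaces the distinct DEFINER values (the file name is a placeholder, and the pattern assumes backtick-quoted identifiers as written by mysqldump):
grep -oE 'DEFINER=`[^`]+`@`[^`]+`' [DATABASE_FILE].sql | sort -u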
To run an import or migration job from a source which includes such metadata, you can do one of the following:
- Create the users on your target Cloud SQL instance before starting your import or migration job.
- Update the DEFINER clause to INVOKER on your source MySQL instance or dump file before starting your import or migration job (see the sketch after this list).
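As a sketch of the second option, a common variant is to strip the DEFINER clauses from the dump file entirely, so that the objects are created with the importing user as their definer. This assumes GNU sed and backtick-quoted identifiers; test it on a copy of the file first:
sed -i 's/\sDEFINER=`[^`]*`@`[^`]*`//g' [DATABASE_FILE].sql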
Verify the imported database
After an import operation is complete, connect to your database and run the appropriate database commands to make sure the contents are correct. For example, connect and list the databases, tables, and specific entries.
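For example, after a MySQL import you might spot-check the contents along these lines (the database and table names are placeholders; compare the results against the source instance):
SHOW DATABASES;
USE [DATABASE_NAME];
SHOW TABLES;
SELECT COUNT(*) FROM [TABLE_NAME];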
Troubleshooting
Click the links in the table for details:
For this problem... | The issue might be... | Try this... |
---|---|---|
Can't see the operation status. | The user interface only shows success or failure. | Use these database commands to find out more. |
408 Error (Timeout) during export. | SQL export can take a long time depending on database size and export content. | Use multiple CSV exports to reduce the size of each operation. |
CSV export worked but SQL export failed. | SQL export is more likely to encounter compatibility issues with Cloud SQL. | Use CSV exports to export only what you need. |
Export is taking too long. | Cloud SQL does not support concurrent synchronous operations. | Use export offloading. Learn more. |
Import is taking too long. | Too many active connections can interfere with import operations. | Close unused connections, or restart the Cloud SQL instance before beginning an import operation. |
Error 1412: Table definition has changed. | The table changed during export. | Remove any table change statements from the dump operation. |
Import fails. | The exported file may contain database users who do not yet exist. | Clean up the failed database before retrying the import. Create the database users before doing the import. |
Connection closed during the export operation. | The query must produce data within the first seven minutes. | Test the query manually. Learn more. |
Unknown error during export. | Possible bandwidth issue. | Ensure that both the instance and the Cloud Storage bucket are in the same region. |
You want to automate exports. | Cloud SQL does not provide a way to automate exports. | Build your own pipeline to perform this functionality. Learn more. |
ERROR_RDBMS: system error occurred. | Cloud Storage permissions or a non-existent table. | Check permissions, or ensure the table exists. |
Error during import: table doesn't exist. | A required table doesn't exist yet. | Disable FOREIGN_KEY_CHECKS at the beginning of the import. |
Access denied; you need (at least one of) the SUPER privilege(s) for this operation. | The dump file contains an event, view, function, or procedure that uses super user@localhost (such as root@localhost), which Cloud SQL does not support. | Learn more about DEFINER usage and potential workarounds in Cloud SQL. |
ERROR 1045 (28000) at line {line_number}: Access denied for user 'cloudsqlimport'@'localhost'. | There is a DEFINER in the dump file that doesn't exist in the database. | See more information about DEFINER usage and potential workarounds in Cloud SQL. |
Unknown table 'COLUMN_STATISTICS' in information_schema. | The mysqldump binary from MySQL 8.0 was used to dump data from a MySQL 5.7 database for import into a MySQL 8.0 database. | Use the mysqldump binary from MySQL 5.7, or add the --column-statistics=0 flag when using the mysqldump binary from MySQL 8.0. |
Can't see the operation status
You can't see the status of an ongoing operation.
The issue might be
The Google Cloud Console reports only success or failure when done, and is not designed to return warnings.
Things to try
Connect to the database and run SHOW WARNINGS.
408 Error (Timeout) during export
You see the error message 408 Error (Timeout) while performing an export job in Cloud SQL.
The issue might be
CSV and SQL formats export differently. The SQL format exports the entire database and likely takes longer to complete. The CSV format lets you define which elements of the database to include in the export.
Things to try
Use the CSV format, and run multiple, smaller export jobs to reduce the size and length of each operation.
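For example, one slice of such a batched export might look like this sketch (all names and the query are placeholders):
gcloud sql export csv [INSTANCE_NAME] gs://[BUCKET_NAME]/part1.csv \
  --database=[DATABASE_NAME] \
  --query='SELECT * FROM [TABLE_NAME] WHERE id < 100000'
Running several such jobs, each covering a different slice of the data, keeps every operation short.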
CSV export worked but SQL export failed
CSV export worked but SQL export failed.
The issue might be
CSV and SQL formats export differently. The SQL format exports the entire database and likely takes longer to complete. The CSV format lets you define which elements of the database to include in the export.
Things to try
Use CSV exports to export only what you need.
Export is taking too long
Export is taking too long, blocking other operations.
The issue might be
Cloud SQL does not support concurrent synchronous operations.
Things to try
Try exporting smaller datasets at a time.
Import is taking too long
Import is taking too long, blocking other operations.
The issue might be
Too many active connections can interfere with import operations. Connections consume CPU and memory, limiting the resources available.
Things to try
Close unused connections. Check CPU and memory usage to make sure there are plenty of resources available. The best way to ensure maximum resources for the import operation is to restart the instance before beginning the operation (see the command after this list). A restart:
- Closes all connections.
- Ends any tasks that may be consuming resources.
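For example, a restart is a single command (the instance name is a placeholder); you can also run SHOW PROCESSLIST; in the database first to review which connections are active:
gcloud sql instances restart [INSTANCE_NAME]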
mysqldump: Error 1412: Table definition has changed
You see the error message mysqldump: Error 1412: Table definition has changed, retry transaction when dumping the table.
The issue might be
During the export process, there was a change in the table.
Things to try
The dump transaction can fail if you use the following statements during the export operation:
- ALTER TABLE
- CREATE TABLE
- DROP TABLE
- RENAME TABLE
- TRUNCATE TABLE
Import fails
Import fails when one or more users referenced in the exported SQL dump file do not exist.
The issue might be
Before you import a SQL dump, all the database users who own objects or were granted permissions on objects in the dumped database must exist. If they do not, the restore fails to recreate the objects with the original ownership or permissions.
Things to try
Clean up the failed database before retrying the import. Create the database users before importing the SQL dump.
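For example, a missing user referenced in the dump can be pre-created along these lines (the user name, host, password, and database are placeholders; grant only the privileges the user actually needs):
CREATE USER '[USERNAME]'@'%' IDENTIFIED BY '[PASSWORD]';
GRANT ALL ON [DATABASE_NAME].* TO '[USERNAME]'@'%';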
Connection closed during the export operation
Connection closed during the export operation.
The issue might be
The connection to Cloud Storage may be timing out because the query running in the export does not produce any data within the first seven minutes after the export is initiated.
Things to try
Test the query manually by connecting from any client and sending the output of your query to STDOUT, for example with the mysql client:
mysql -h [INSTANCE_IP] -u [USERNAME] -p --execute="[INSERT_YOUR_QUERY_HERE]"
This is expected behavior: when the export is initiated, the client is expected to start sending data right away. Keeping the connection open with no data sent eventually breaks the connection, fails the export, and leaves the operation in an uncertain state. This is also what the gcloud error message operation is taking longer than expected is indicating.
Unknown error during export
You see the error message Unknown error while trying to export a database to a Cloud Storage bucket.
The issue might be
The transfer might be failing due to a bandwidth issue.
Things to try
The Cloud SQL instance may be located in a different region from the Cloud Storage bucket. Reading and writing data from one continent to another involves a lot of network usage, and can cause intermittent issues like this. Check the regions of your instance and bucket.
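For example, the two locations can be compared with the following sketch (the instance and bucket names are placeholders; the bucket listing includes a Location constraint field):
gcloud sql instances describe [INSTANCE_NAME] --format='value(region)'
gsutil ls -L -b gs://[BUCKET_NAME]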
Want to automate exports
You want to automate exports.
The issue might be
Cloud SQL does not provide a way to automate exports.
Things to try
You could build your own automated export system using Google Cloud products such as Cloud Scheduler, Pub/Sub, and Cloud Functions.
ERROR_RDBMS system error occurred
You see the error message [ERROR_RDBMS] system error occurred.
The issue might be
- The user might not have all the Cloud Storage permissions it needs.
- The database table might not exist.
Things to try
- Check that you have at least WRITER permissions on the bucket and READER permissions on the export file. For more information on configuring access control in Cloud Storage, see Create and Manage Access Control Lists.
- Ensure the table exists. If the table exists, confirm that you have the correct permissions on the bucket (a sketch for inspecting bucket access follows this list).
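For example, the bucket's access bindings can be reviewed with this sketch (the bucket name is a placeholder):
gsutil iam get gs://[BUCKET_NAME]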
Error during import: table doesn't exist
An import operation fails with an error that a table doesn't exist.
The issue might be
Tables can have foreign key dependencies on other tables, and depending on the order of operations, one or more of those tables might not yet exist during the import operation.
Things to try
Add the following line at the start of the dump file:
SET FOREIGN_KEY_CHECKS=0;
Additionally, add this line at the end of the dump file:
SET FOREIGN_KEY_CHECKS=1;
These settings deactivate data integrity checks while the import operation is in progress, and reactivate them after the data is loaded. This doesn't affect the integrity of the data on the database, because the data was already validated during the creation of the dump file.
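If you prefer not to edit the dump file by hand, the same change can be scripted; this sketch assumes GNU sed and a placeholder file name:
sed -i '1i SET FOREIGN_KEY_CHECKS=0;' [DATABASE_FILE].sql
echo 'SET FOREIGN_KEY_CHECKS=1;' >> [DATABASE_FILE].sql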
Access denied; you need (at least one of) the SUPER privilege(s) for this operation
You see the error Access denied; you need (at least one of) the SUPER privilege(s) for this operation.
The issue might be
There could be an event, a view, a function, or a procedure in the dump file that uses super user@localhost (such as root@localhost), which is not supported by Cloud SQL.
Things to try
Refer to this document on importing a database with DEFINER clauses.
Error message: ERROR 1045 (28000) at line {line_number}: Access denied for user 'cloudsqlimport'@'localhost'
You see the error ERROR 1045 (28000) at line {line_number}: Access denied for user 'cloudsqlimport'@'localhost'.
The issue might be
The root cause is that a user named in a DEFINER clause in the dump file does not exist on the database, and that user is cross-referenced in the object definitions in the database.
Things to try
Refer to this document on importing a database with DEFINER clauses in the dump file. You may first need to create one or more users in the database.
Error message: Unknown table 'COLUMN_STATISTICS' in information_schema
You see the error message Unknown table 'COLUMN_STATISTICS' in information_schema.
The issue might be
This happens if you use the mysqldump binary from MySQL 8.0 to dump data from a MySQL 5.7 database and import to a MySQL 8.0 database.
Things to try
If you dump data from a MySQL 5.7 database and import it into a MySQL 8.0 database, make sure to use the mysqldump binary from MySQL 5.7. If you use the mysqldump binary from MySQL 8.0, you need to add the --column-statistics=0 flag.
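For example, a dump of a MySQL 5.7 database made with the MySQL 8.0 client tools might look like this sketch (connection details are placeholders):
mysqldump --column-statistics=0 --databases [DATABASE_NAME] \
  -h [INSTANCE_IP] -u [USERNAME] --password=[PASSWORD] \
  --hex-blob --default-character-set=utf8mb4 > [DATABASE_FILE].sql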
What's next
- Learn how to import data into Cloud SQL.
- Learn how to export data using Cloud SQL.
- Learn how to enable automatic backups.
- Learn how to restore from backups.