Learn how to use Percona XtraBackup, an open source backup solution, to take hot backups of your InnoDB- or XtraDB-based MySQL databases, and how to store and manage your backup files using Cloud Storage.
Percona XtraBackup is explicitly designed with hot backups in mind. At a high level, XtraBackup achieves hot backup consistency as follows: it copies your MySQL server's live data files, opens a small MySQL server of its own, and then applies the logs of all of the transactions performed in the interim to the copy. This process results in a consistent backup that does not require downtime.
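At its simplest, that two-step flow looks like the following sketch. The credentials and target directory here are placeholders, and later sections of this tutorial build a more complete invocation with encryption, compression, and streaming:
# Step 1: copy the live data files while the server keeps accepting writes.
innobackupex --user=[USER] --password='[PASSWORD]' /tmp/backup-target
# Step 2 ("prepare"): replay the transaction log captured during the copy to
# make the snapshot consistent. [TIMESTAMPED_DIR] is the directory that step 1
# created under /tmp/backup-target.
innobackupex --apply-log /tmp/backup-target/[TIMESTAMPED_DIR]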
In addition, XtraBackup provides several useful features for streamlining your backup and recovery processes, such as backup encryption, compression, and streaming; support for multithreaded operations; and support for point-in-time recovery.
Objectives
- Install the backup toolkit.
- Take a full backup of your database.
- Take incremental backups of your database.
- Store and retrieve your backup files using Cloud Storage.
- Restore your MySQL server using your backup files.
Costs
This tutorial uses the following billable components of Google Cloud:
- Compute Engine virtual machine instances.
- Compute Engine Persistent Disk.
- Cloud Storage.
To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.
Before you begin
This tutorial assumes the following:
- You have a Linux development environment.
- You have an existing MySQL test environment.
- You're using InnoDB or XtraDB as your default MySQL storage engine.
Set up your environment
Before you perform your backup, install the required tools and set up the needed Linux and MySQL users in your MySQL test environment.
Install Percona XtraBackup
Begin by installing Percona XtraBackup and the additional tools on which it depends:
Debian/Ubuntu
- Open a command-line shell.
Add Percona to the Debian repositories:
wget --quiet https://repo.percona.com/apt/percona-release_0.1-5.$(lsb_release -sc)_all.deb
sudo dpkg -i percona-release_0.1-5.$(lsb_release -sc)_all.deb
sudo apt-get update
rm -f percona-release_0.1-5.$(lsb_release -sc)_all.deb
Install Percona XtraBackup and additional tools it depends on:
sudo apt-get install -y percona-xtrabackup libgcrypt20 openssl
wget http://www.quicklz.com/qpress-11-linux-x64.tar
sudo tar -xf qpress-11-linux-x64.tar -C /usr/bin/
CentOS
- Open a command-line shell.
Add Percona to the RPM repositories:
sudo yum -y install http://www.percona.com/downloads/percona-release/redhat/0.1-5/percona-release-0.1-5.noarch.rpm
Install Percona XtraBackup and additional tools it depends on:
sudo yum -y install percona-xtrabackup wget libgcrypt openssl qpress
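On either platform, you can optionally confirm that the toolkit installed correctly before continuing:
# Both commands should succeed if the installation worked.
xtrabackup --version
which qpress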
Set up a Google Cloud project and install the Cloud SDK
Next, create and configure a new Google Cloud Console project, and then install the Cloud SDK on your Linux development environment. You'll use the Cloud SDK to upload your backup files to, and download them from, Cloud Storage:
- Sign in to your Google Account. If you don't already have one, sign up for a new account.
- In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.
- Enable the Cloud Storage APIs.
- Install and initialize the Cloud SDK.
Create backup user accounts
As a security best practice, you should set up special MySQL and Linux users for backups, granting them only the permissions necessary to perform backups and restores.
Create and configure a MySQL backup user
Create a new MySQL user named backup, and give it the permissions necessary to perform backups of your MySQL databases:
On your MySQL test server, open MySQL as an administrator:
sudo mysql
Create a new user named backup. Replace [PASSWORD] with a secure password of your choice:
CREATE USER 'backup'@'localhost' IDENTIFIED BY '[PASSWORD]';
Grant the necessary permissions to your backup user:
GRANT CREATE TABLESPACE, RELOAD, PROCESS, SUPER, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'backup'@'localhost';
Create the database PERCONA_SCHEMA:
CREATE DATABASE IF NOT EXISTS PERCONA_SCHEMA;
Grant read and write permissions for PERCONA_SCHEMA to your backup user. Your backup scripts will store useful historical data for each backup in this schema, enabling you to streamline the backup and restore process later on:
GRANT CREATE, INSERT, SELECT ON PERCONA_SCHEMA.* TO 'backup'@'localhost';
Commit your changes:
FLUSH PRIVILEGES;
Exit mysql:
exit
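Optionally, verify the new user's privileges from the shell before moving on:
mysql -ubackup -p -e "SHOW GRANTS FOR 'backup'@'localhost';"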
Create and configure a Linux backup user
Next, create and configure your Linux backup user:
Create a new Linux user named backup and add it to the mysql group:
sudo useradd -g mysql backup
Set a password for the backup user:
sudo passwd backup
Grant the backup user and mysql group recursive access to your MySQL data directory. Replace [DATA_DIR] with the absolute path of your data directory:
sudo chown -R mysql: [DATA_DIR]
sudo find [DATA_DIR] -type d -exec chmod 750 {} \;
By default, the MySQL data directory is /var/lib/mysql. If you're unsure where your data directory is located, open MySQL and run the following query:
mysql> SHOW GLOBAL VARIABLES LIKE 'datadir';
Next, ensure that the backup user has access to any future files added to the MySQL data directory. Select the tab for the system initialization daemon that your system uses, and then follow the steps.
init.d
Add the following lines to the top of your /etc/init.d/mysql startup script:
UMASK_DIR=0750
export UMASK_DIR
Restart MySQL:
sudo service mysql restart
systemd
Add the following line below the [Service] section of the MySQL service file, typically /etc/systemd/system/mysql.service:
Environment="UMASK_DIR=0750"
Reload the service configuration:
sudo systemctl daemon-reload
Restart MySQL:
sudo service mysql restart
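With either init system, you can then optionally confirm that the backup user can read the data directory. This check assumes the default /var/lib/mysql location:
sudo su - backup -s /bin/bash -c 'ls /var/lib/mysql'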
Perform the backup
Now that you've set up the required tools and appropriate users, you can perform a hot backup of your MySQL database. In this section, you'll create two backup scripts: one for performing full backups, and another for performing incremental backups.
Prepare your environment
Before you create your backup scripts, you need to create a few files and directories that the scripts will use:
- A directory which will be used to locally store your backups, or a symlink to the directory you already use to locally store backups. This tutorial uses /mnt/backups.
- /etc/mysql/backup-my.cnf, a filtered version of my.cnf that contains only the configuration details necessary to perform the backups.
- /etc/mysql/.mykeyfile, an AES encryption key that the scripts will use to encrypt the backups.
If you already have a local backup directory, create a symlink to that directory, and then grant the appropriate permissions to the directory:
Create a symlink to your local backup directory. Replace [BACKUP_DIR_PATH] with the path to the backup directory:
sudo ln -s [BACKUP_DIR_PATH] /mnt/backups
Grant ownership of your backup directory to your Linux backup user:
sudo chown backup:mysql /mnt/backups
Otherwise, create the local backup directory, and then grant the appropriate permissions to the directory:
Choose a target disk for backups. This disk should be different from the disk your MySQL database uses, and should be able to store several backups.
On your target disk, create a new directory. This directory will be used to store your backup files locally:
sudo mkdir -p /mnt/backups
Grant ownership of your new backup directory to your Linux backup user:
sudo chown backup:mysql /mnt/backups
Next, create your backup-my.cnf and encryption key files:
Create a new Bash script:
nano backup-assist.sh
Copy the following into the file:
#!/bin/bash
# backup-assist.sh

read -p "Please enter the password you created for the mysql backup user: " -s ans
backuppwd="${ans}"

mkdir -p /etc/mysql/

# Creating AES Encryption Key file. Use `echo -n` to ensure that no extra characters get added.
echo -n `openssl rand -base64 24` > /etc/mysql/.mykeyfile

SOCKET="socket=$(mysql -ubackup -p"${backuppwd}" -s -N -e "select @@socket;" 2> /dev/null)"
DATADIR="datadir=$(mysql -ubackup -p"${backuppwd}" -s -N -e "select @@datadir;" 2> /dev/null)"
INNODB_LOG_GROUP_HOME_DIR="innodb_log_group_home_dir=$(mysql -ubackup -p"${backuppwd}" -s -N -e "select @@innodb_log_group_home_dir;" 2> /dev/null)"
INNODB_DATA_DIR_HOME=$(mysql -ubackup -p"$backuppwd" -s -N -e "select @@innodb_data_home_dir;" 2> /dev/null)

if [ "$INNODB_DATA_DIR_HOME" = "NULL" ]; then
    INNODB_DATA_DIR_HOME=""
else
    INNODB_DATA_DIR_HOME="innodb_data_home_dir=${INNODB_DATA_DIR_HOME}"
fi

cat > /etc/mysql/backup-my.cnf <<EOF
[client]
user=backup
password=$backuppwd

[mysqld]
${SOCKET}
${DATADIR}
${INNODB_LOG_GROUP_HOME_DIR}
${INNODB_DATA_DIR_HOME}
EOF

chown backup /etc/mysql/backup-my.cnf
chown backup /etc/mysql/.mykeyfile
chmod 600 /etc/mysql/backup-my.cnf
chmod 600 /etc/mysql/.mykeyfile

printf "\nCreated Files: \n /etc/mysql/backup-my.cnf \n /etc/mysql/.mykeyfile \n"
Save the script file.
Make the script executable:
sudo chmod +x backup-assist.sh
Run the script as root:
sudo ./backup-assist.sh
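As an optional sanity check, confirm that the generated configuration file lets the backup user reach MySQL. Note that the --defaults-file option must be the first option on the mysql command line:
sudo su - backup -s /bin/bash -c "mysql --defaults-file=/etc/mysql/backup-my.cnf -e 'SELECT 1;'"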
Perform a full backup
Now that you've created the files and directories on which the scripts depend, it's time to create the first backup script. This initial script handles full backups, and is built around innobackupex, a utility included with XtraBackup. innobackupex acts as a wrapper around XtraBackup, providing additional backup and restore options not directly available through the XtraBackup binary.
This script uses innobackupex to perform the following tasks:
- Encrypt the backed-up files.
- Compress the files into a single archive file.
- Write a backup history entry to the PERCONA_SCHEMA.XtraBackup_history table. Each entry includes the following keys (among others):
  - name: The date that the backup was taken.
  - innodb_to_lsn: The log sequence number (LSN) of the last successful backup. Later in the tutorial, innobackupex will need this number to perform an incremental backup based on this backup.
To perform a full backup:
Create a new Bash script called mysql-full-backup.sh:
nano mysql-full-backup.sh
Paste the following into the script file:
#!/bin/bash
# mysql-full-backup.sh

# Change directories to the backup location.
cd /mnt/backups

# Execute the compressed and encrypted full backup.
innobackupex --defaults-file=/etc/mysql/backup-my.cnf \
--no-timestamp \
--use-memory=1G \
--stream=xbstream \
--parallel=4 \
--encrypt=AES256 \
--encrypt-key-file=/etc/mysql/.mykeyfile \
--encrypt-threads=4 \
--compress \
--compress-threads=4 \
--history=$(date +%d-%m-%Y) ./ > \
mysqlbackup$(date +%d-%m-%Y).qp.xbc.xbs 2> \
backup-progress.log &
In addition to the expected backup-oriented flags, this script uses a few additional innobackupex flags to optimize performance:
- --parallel: Specifies the number of threads that the XtraBackup child process should use to back up files concurrently.
- --use-memory: Specifies the amount of memory that XtraBackup can use for crash recovery while preparing a backup.
When running this script, tune these two parameters to reflect the capacity and resources available on your server so that you can avoid impacting the performance of the running MySQL service.
If you're backing up from a replica, add the following flags to the innobackupex command in the script as well:
--slave-info --safe-slave-backup
Save the script file.
Make the script executable:
sudo chmod +x mysql-full-backup.sh
Run the script as the backup user. Replace [PATH] with the absolute path to the script:
sudo su - backup -s /bin/bash -c [PATH]/mysql-full-backup.sh
Run tail -f to monitor the progress of the script:
tail -f /mnt/backups/backup-progress.log
After you receive the message "completed OK!", press Ctrl-C to exit the tail -f command.
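Optionally, you can confirm that the backup wrote its history entry, which the incremental backup script relies on later. This query is a sketch; depending on your XtraBackup version, the history table may be created with the lowercase name xtrabackup_history:
mysql -ubackup -p -e "SELECT name, innodb_to_lsn FROM PERCONA_SCHEMA.xtrabackup_history ORDER BY start_time DESC LIMIT 1;"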
Perform an incremental backup
In this section, you'll create a script that performs an incremental backup. In contrast to a full backup, which copies your whole database, an incremental backup captures only the changes that have occurred since your last successful backup. In general, it's best to perform one full backup each day, and then perform incremental backups periodically throughout the rest of the day.
innobackupex needs certain information about the last full backup before it can run an incremental backup. Usually, this information is locked away inside the last full backup's encrypted archive file, making it difficult to retrieve. However, because you set up the PERCONA_SCHEMA.XtraBackup_history table earlier in this tutorial and invoked innobackupex's --history flag in your full backup script, you can retrieve this information from the PERCONA_SCHEMA.XtraBackup_history record that was created by the script.
The incremental backup script uses information from the PERCONA_SCHEMA.XtraBackup_history record as follows:
- The incremental backup script uses the name value to identify the last successful backup, and to continue its date-based naming convention in subsequent backups.
- In the incremental backup script, innobackupex uses the last successful backup's innodb_to_lsn value, which is the log sequence number from which the incremental backup will start.
Like the full backup script, the incremental backup script uses innobackupex to perform the following tasks:
- Encrypt the backed-up files.
- Compress the files into a single archive file.
- Write a backup history entry to the PERCONA_SCHEMA.XtraBackup_history table.
To perform an incremental backup:
Create a new Bash script called mysql-incr-backup.sh:
nano mysql-incr-backup.sh
Paste the following into the script file:
#!/bin/bash
# mysql-incr-backup.sh

# Change directories to the backup location.
cd /mnt/backups

# Execute the incremental backup.
innobackupex --defaults-file=/etc/mysql/backup-my.cnf \
--no-timestamp \
--use-memory=1G \
--stream=xbstream \
--parallel=4 \
--encrypt=AES256 \
--encrypt-key-file=/etc/mysql/.mykeyfile \
--encrypt-threads=4 \
--compress \
--compress-threads=4 \
--incremental \
--incremental-history-name=$(date +%d-%m-%Y) \
--history=$(date +%d-%m-%Y) ./ > \
incremental$(date +%H)-mysqlbackup$(date +%d-%m-%Y).qp.xbc.xbs 2> \
incbackup-progress.log &
This script is nearly identical to the full backup script, but includes additional flags in the innobackupex command:
- --incremental: Instructs innobackupex to attempt an incremental backup.
- --incremental-history-name: Searches the PERCONA_SCHEMA.XtraBackup_history table for the most recent backup record containing the defined name value, and then starts the backup using its innodb_to_lsn value.
Save the script file.
Make the script executable:
sudo chmod +x mysql-incr-backup.sh
Run the script as the backup user. Replace [PATH] with the absolute path to the script:
sudo su - backup -s /bin/bash -c [PATH]/mysql-incr-backup.sh
Run tail -f to monitor the progress of the script:
tail -f /mnt/backups/incbackup-progress.log
After you receive the message "completed OK!", press Ctrl-C to exit the tail -f command.
Manage your backups
In this section, you'll learn how to store your backups remotely in Cloud Storage, configure your Cloud Storage bucket to prune old backups automatically, and schedule your backups.
Upload your backups to Cloud Storage
Now that you have a few backup files, you can upload your backups to Cloud Storage:
Create a new Cloud Storage bucket. Bucket names must be globally unique across Google Cloud. To ensure that your bucket name is unique, consider namespacing it with your project name, as shown here:
gsutil mb gs://[PROJECT_NAME]-mysql-backups
Copy your files to your Cloud Storage bucket. In the following command, replace [BUCKET_NAME] with the name of the bucket you just created. If you configured your scripts to output your backup files to a different location than /mnt/backups, adjust the local path in the command to reflect that location as well:
gsutil cp -n /mnt/backups/*mysqlbackup$(date +%d-%m-%Y).qp.xbc.xbs gs://[BUCKET_NAME]
The gsutil cp command can be used to create entire directory structures and to upload multiple files at once. If you're running multiple database servers, consider creating multiple directories within your Cloud Storage bucket to represent each different server.
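For example, one hypothetical layout prefixes each upload with the server's hostname, so that backups from different servers land under separate folders in the same bucket:
gsutil cp -n /mnt/backups/*mysqlbackup$(date +%d-%m-%Y).qp.xbc.xbs gs://[BUCKET_NAME]/$(hostname)/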
Set up automatic file pruning in Cloud Storage
Older backups eventually outlive their usefulness and need to be removed. To help automate this process, Cloud Storage has a lifecycle management mechanism you can use to manage the lifecycle of your backup files.
To configure lifecycle management for the objects in your bucket:
Create a new JSON file called lifecycle.json, and paste the following JSON into it. This configuration will delete files in your Cloud Storage bucket after 30 days:
{
  "lifecycle": {
    "rule": [{
      "action": { "type": "Delete" },
      "condition": { "age": 30 }
    }]
  }
}
Set the lifecycle configuration for your Cloud Storage bucket. Replace [BUCKET_NAME] with the name of your bucket:
gsutil lifecycle set lifecycle.json gs://[BUCKET_NAME]
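To verify that the configuration was applied, read it back:
gsutil lifecycle get gs://[BUCKET_NAME]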
Schedule your backups
In general, these scripts assume that you're taking at least one daily full backup early in the day, and that you're scheduling incremental backups to run frequently throughout the rest of the day.
If you choose to combine your backup scripts into a single script that is scheduled by way of a cron job, make sure to include some logical validation steps at each point to verify successful completion. If validation fails, make sure the script returns an alert of some sort. In addition, the script should remove the local backup file after successfully uploading to Cloud Storage to avoid filling up the local disk.
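As a starting point, the following illustrative crontab for the backup user runs a full backup nightly and incremental backups every four hours during the day. The schedule and the [PATH] placeholder are examples to adjust for your environment:
# Edit with: sudo crontab -u backup -e
# m h    dom mon dow  command
0   1    *   *   *    [PATH]/mysql-full-backup.sh
0 5-21/4 *   *   *    [PATH]/mysql-incr-backup.sh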
Restoring your backup
In this section, you'll learn how to restore your MySQL database from the encrypted and compressed backup files you stored in Cloud Storage.
Download the backup from Cloud Storage
Begin by obtaining your backups from your Cloud Storage bucket:
If you already use a specific directory for storing restore files, create a symlink to the directory:
sudo ln -s [RESTORE_DIR_PATH] /mnt/restore
Create a directory to temporarily hold your downloaded backup files (/mnt/restore/tmp), a directory to hold your uncompressed full backup files (/mnt/restore/full), and individual directories for each incremental backup (/mnt/restore/inc/{1,2,...}):
sudo mkdir -p /mnt/restore/tmp /mnt/restore/full /mnt/restore/inc/1
Grant ownership of the directories to the Linux backup user:
sudo chown -R backup: /mnt/restore/tmp /mnt/restore/full /mnt/restore/inc/1
Download your backup files from Cloud Storage. Replace [BUCKET_NAME] with the name of your MySQL backup storage bucket, and replace [DATE] with the date in your backup archive file's filename:
sudo gsutil cp gs://[BUCKET_NAME]/*mysqlbackup[DATE].qp.xbc.xbs /mnt/restore/tmp
Extract, decrypt, and decompress your full backup
When you performed the full backup, innobackupex encrypted each file individually before adding it to the archive file. As such, these files will need to be decrypted one by one before they are usable.
To extract, decrypt, and decompress your full backup:
Extract the encrypted files from the archive. Replace [DATE] with the date in your backup archive's filename:
cat /mnt/restore/tmp/mysqlbackup[DATE].qp.xbc.xbs | sudo xbstream -x -C /mnt/restore/full/
Decrypt the files:
sudo find /mnt/restore/full -name "*.xbcrypt" -type f -exec bash -c \
'f={} && nf=${f%.xbcrypt} && cat {} | sudo xbcrypt -da AES256 -f /etc/mysql/.mykeyfile -o ${nf} && rm {}' \;
Decompress the backup:
sudo innobackupex --use-memory=1G --parallel=4 --decompress /mnt/restore/full
If there are .qp files remaining after the decompression phase, remove them by running the following command:
sudo find /mnt/restore/full/ -name "*.qp" -exec rm {} \;
Prepare your backup
Before you restore your backup data file, you need to apply any uncommitted transaction log entries to the file. In the Percona universe, this step is known as the prepare stage. During the prepare stage, innobackupex creates a small version of a running MySQL server, and then executes the transactions from the log entries onto your recovered data files.
Usually, the prepare stage occurs immediately after the backup is taken. However, in this tutorial, the backup is streamed, compressed, and encrypted while it is being taken, making it impossible to apply the transactions to the data at that point. Instead, the transactions are applied during the restore process.
The prepare command differs depending on whether you're restoring from a full backup or from an incremental backup.
Full backup
If you plan to restore directly from a full backup, run the following command:
sudo innobackupex --use-memory=1G --apply-log /mnt/restore/full
After you've prepared the backup, you can skip ahead to Restore your backup.
Incremental backup
If you plan to restore from an incremental backup, you have to start the prepare process with the full backup that was taken prior to your target incremental file. To do so, run the following command. Note that this command is almost identical to the standard full backup preparation command, but has an additional flag, --redo-only:
sudo innobackupex --use-memory=1G --redo-only --apply-log /mnt/restore/full
The backup has not been fully prepared at this point—you must apply the relevant incremental backups to the file, as demonstrated in the next section.
Apply incremental backups to a full backup
This section demonstrates how to apply incremental backup files to your full backup. Perform the following steps for each incremental backup file in the order that the backups were performed:
Extract the encrypted files from the archive. Replace [DATE] with the date in your incremental backup file's name, and replace [INC_NUM] with an appropriate numerical directory name. For example, if this is the first incremental backup, the directory should be /mnt/restore/inc/1/:
cat /mnt/restore/tmp/incremental*mysqlbackup[DATE].qp.xbc.xbs | sudo xbstream -x -C /mnt/restore/inc/[INC_NUM]/
Decrypt the files. Replace [INC_NUM] with the same numerical directory name:
sudo find /mnt/restore/inc/[INC_NUM] -name "*.xbcrypt" -type f -exec bash -c \
'f={} && nf=${f%.xbcrypt} && cat {} | sudo xbcrypt -da AES256 -f /etc/mysql/.mykeyfile -o ${nf} && rm {}' \;
Decompress the backup. Replace [INC_NUM] with the same numerical directory name:
sudo innobackupex --use-memory=1G --parallel=4 --decompress /mnt/restore/inc/[INC_NUM]
If there are .qp files remaining after the decompression phase, you can remove them by running the following command. Replace [INC_NUM] with the same numerical directory name:
sudo find /mnt/restore/inc/[INC_NUM] -name "*.qp" -exec rm {} \;
Apply the incremental backup to the full backup, replacing [INC_NUM] with the same numerical directory name:
sudo innobackupex --use-memory=1G --apply-log --redo-only --incremental-dir=/mnt/restore/inc/[INC_NUM] /mnt/restore/full
After you've applied all of the incremental backups to your full backup, you can prepare it as you would prepare a standard full backup:
sudo innobackupex --use-memory=1G --apply-log /mnt/restore/full
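If you're restoring several incremental backups, the merge-and-prepare sequence can be scripted rather than run one directory at a time. The following loop is a sketch that assumes each /mnt/restore/inc/[INC_NUM] directory has already been extracted, decrypted, and decompressed, and that there are fewer than ten increments (the shell glob sorts lexicographically, so inc/10 would otherwise sort before inc/2):
# Merge each incremental into the full backup in order, then run the final prepare.
for d in /mnt/restore/inc/*/; do
  sudo innobackupex --use-memory=1G --apply-log --redo-only --incremental-dir="$d" /mnt/restore/full
done
sudo innobackupex --use-memory=1G --apply-log /mnt/restore/full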
Restore your backup
Now that you've prepared your backup file, you can complete the restore process:
Stop your MySQL service:
sudo service mysql stop
Empty your MySQL data directory. Replace [DATA_DIR] with the absolute path of your MySQL data directory:
sudo rm -rf [DATA_DIR]/*
Restore from your backup:
sudo innobackupex --copy-back /mnt/restore/full
Verify that the restored files are in your MySQL data directory. Replace [DATA_DIR] with the absolute path of your MySQL data directory:
ls -l [DATA_DIR]
Reset the owner and group for the MySQL data directory to match the MySQL process. Replace [DATA_DIR] with the absolute path of your MySQL data directory:
sudo chown -R mysql:mysql [DATA_DIR]
Repair permissions for the MySQL data directory so that the backup user can access the directory when performing future backups. Replace [DATA_DIR] with the absolute path of your MySQL data directory:
sudo find [DATA_DIR] -type d -exec chmod 750 {} \;
Restart the MySQL service:
sudo service mysql start
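Finally, you can spot-check the restored server by listing its databases:
sudo mysql -e 'SHOW DATABASES;'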
What's next
Set up MySQL on Google Cloud
Take a look at the various ways you can set up MySQL on Google Cloud, and choose the method that best suits your needs.
Learn about disaster recovery on Google Cloud
For a comprehensive look at how you can best integrate Google Cloud into your disaster recovery plan, check out Designing a Disaster Recovery Plan with Google Cloud Platform. For a discussion of specific disaster recovery use cases, with example implementations on Google Cloud, see the Disaster Recovery Cookbook.
Try other tutorials
Try out other Google Cloud features for yourself. Have a look at our tutorials.