Performing MySQL Hot Backups with Percona XtraBackup and Google Cloud Storage

Learn how to use Percona XtraBackup, a popular, open source backup solution, to take hot backups of your InnoDB- or XtraDB-based MySQL database. As you work through this tutorial, you'll also learn how to store and manage your backup files using Google Cloud Storage.

Percona XtraBackup is particularly well suited to this task because it was designed with hot backups in mind. At a high level, XtraBackup achieves hot-backup consistency as follows: it copies your MySQL server's live data files while recording the server's transaction log, and then, using a small embedded MySQL server, replays the transactions performed during the copy against the copied files. This process results in a consistent backup that does not require downtime.

In addition, XtraBackup provides several useful features for streamlining your backup and recovery processes, such as backup encryption, compression, and streaming; support for multithreaded operations; and support for point-in-time recovery.

Objectives

  • Install the backup toolkit.
  • Take a full backup of your database.
  • Take incremental backups of your database.
  • Store and retrieve your backup files using Cloud Storage.
  • Restore your MySQL server using your backup files.

Prerequisites

This tutorial assumes the following:

  • You have a Linux development environment.
  • You have an existing MySQL test environment.
  • You're using InnoDB or XtraDB as your default MySQL storage engine.

Set up your environment

Before you perform your backup, install the required tools and set up the needed Linux and MySQL users in your MySQL test environment.

Install Percona XtraBackup

Begin by installing Percona XtraBackup and the additional tools on which it depends:

Debian/Ubuntu

  1. Open a command-line shell.
  2. Add the Percona repository to your system's apt sources:

    wget --quiet https://repo.percona.com/apt/percona-release_0.1-3.$(lsb_release -sc)_all.deb \
      && sudo dpkg -i percona-release_0.1-3.$(lsb_release -sc)_all.deb && sudo apt-get update \
      && rm -f percona-release_0.1-3.$(lsb_release -sc)_all.deb
    
  3. Install Percona XtraBackup and additional tools it depends on:

    sudo apt-get install -y percona-xtrabackup libgcrypt20 openssl \
      && wget http://www.quicklz.com/qpress-11-linux-x64.tar \
      && sudo tar -xf qpress-11-linux-x64.tar -C /usr/bin/
    

CentOS

  1. Open a command-line shell.
  2. Add the Percona repository to your system's yum configuration:

    sudo yum -y install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
    
  3. Install Percona XtraBackup and additional tools it depends on:

    sudo yum -y install percona-xtrabackup wget libgcrypt openssl qpress
    

Set up a Cloud Platform project and install the Cloud SDK

Next, create and configure a new Google Cloud Platform Console project, and then install the Google Cloud SDK to your Linux development environment. You'll use the Cloud SDK to upload your backup files to, and download them from, Cloud Storage:

  1. Sign in to your Google account.

    If you don't already have one, sign up for a new account.

  2. Select or create a Cloud Platform project.

    Go to the Projects page

  3. Enable billing for your project.

    Enable billing

  4. Enable the Cloud Storage APIs.

    Enable the APIs

  5. Install and initialize the Cloud SDK.

Create backup user accounts

As a security best practice, you should set up special MySQL and Linux users for backups, granting them only the permissions necessary to perform backups and restores.

Create and configure a MySQL backup user

Create a new MySQL user, backup, and give it the permissions necessary to perform backups of your MySQL databases:

  1. On your MySQL test server, open MySQL as an administrator:

    sudo mysql
    
  2. Create a new user named backup. Replace <password> with a secure password of your choice:

    CREATE USER 'backup'@'localhost' IDENTIFIED BY '<password>';
    
  3. Grant the necessary permissions to your backup user:

    GRANT CREATE TABLESPACE, RELOAD, PROCESS, SUPER, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'backup'@'localhost';
    
  4. Create the database PERCONA_SCHEMA:

    CREATE DATABASE IF NOT EXISTS PERCONA_SCHEMA;
    
  5. Grant read and write permissions for PERCONA_SCHEMA to your backup user. Your backup scripts will store useful historical data for each backup in a table in this database, enabling you to streamline the backup and restore process later on.

    GRANT CREATE, INSERT, SELECT ON PERCONA_SCHEMA.* TO 'backup'@'localhost';
    
  6. Reload the privilege tables:

    FLUSH PRIVILEGES;
    
  7. Exit mysql.

    exit
    

Create and configure a Linux backup user

Next, create and configure your Linux backup user:

  1. Create a new Linux user named backup and add it to the group mysql:

    sudo useradd -g mysql backup
    
  2. Set a password for the backup user:

    sudo passwd backup
    
  3. Grant the backup user and mysql group recursive access to your MySQL data directory. Replace <datadir> with the absolute path of your data directory:

    sudo chown -R mysql: <datadir> \
      && sudo find <datadir> -type d -exec chmod 750 {} \;
    

Next, ensure that the backup user has access to any future files added to the MySQL data directory. Select the tab for the system initialization daemon that your system uses, and then follow the steps.

init.d

  1. Add the following lines to the top of your /etc/init.d/mysql startup script:

    UMASK_DIR=0750
    export UMASK_DIR
    
  2. Restart MySQL:

    sudo service mysql restart
    

systemd

  1. Add the following line under the [Service] heading of the MySQL service file, typically /etc/systemd/system/mysql.service:

    Environment="UMASK_DIR=0750"
    
  2. Reload the service configuration:

    sudo systemctl daemon-reload
    
  3. Restart MySQL:

    sudo service mysql restart
    

Perform the backup

Now that you've set up the required tools and appropriate users, you can perform a hot backup of your MySQL database. In this section, you'll create two backup scripts: one for performing full backups, and another for performing incremental backups.

Prepare your environment

Before you create your backup scripts, you need to create a few files and directories that the scripts will use:

  • A directory in which to store your backups locally, or a symlink to the directory you already use for local backups. This tutorial uses /mnt/backups.
  • /etc/mysql/backup-my.cnf, a filtered version of my.cnf that contains only the configuration details necessary to perform the backups.
  • /etc/mysql/.mykeyfile, an AES encryption key that the scripts will use to encrypt the backups.

If you already have a local backup directory, create a symlink to that directory, and then grant the appropriate permissions to the directory:

  1. Create a symlink to your local backup directory:

    sudo ln -s <backup_dir_path> /mnt/backups
    
  2. Grant ownership of your backup directory to your Linux backup user:

    sudo chown backup:mysql /mnt/backups
    

Otherwise, create the local backup directory, and then grant the appropriate permissions to the directory:

  1. Choose a target disk for backups. This disk should be different from the disk your MySQL database uses, and should have enough capacity to store several backups.

  2. On your target disk, create a new directory. This directory will be used to store your backup files locally:

    sudo mkdir -p /mnt/backups
    
  3. Grant ownership of your new backup directory to your Linux backup user:

    sudo chown backup:mysql /mnt/backups
    

Next, create your backup-my.cnf and encryption key files:

  1. Create a new Bash script:

    nano backup-assist.sh
    
  2. Copy the following into the file:

    #!/bin/bash
    # backup-assist.sh
    read -p "Please enter the password you created for the mysql backup user: " -s ans
    backuppwd="${ans}"
    
    mkdir -p /etc/mysql/
    
    # Creating AES Encryption Key file. Use `echo -n` to ensure that no extra characters get added.
    echo -n `openssl rand -base64 24` > /etc/mysql/.mykeyfile
    SOCKET="socket=$(mysql -ubackup -p"${backuppwd}" -s -N -e "select @@socket;" 2> /dev/null)"
    DATADIR="datadir=$(mysql -ubackup -p"${backuppwd}" -s -N -e "select @@datadir;" 2> /dev/null)"
    INNODB_LOG_GROUP_HOME_DIR="innodb_log_group_home_dir=$(mysql -ubackup -p"${backuppwd}" -s -N -e "select @@innodb_log_group_home_dir;" 2> /dev/null)"
    INNODB_DATA_DIR_HOME=$(mysql -ubackup -p"$backuppwd" -s -N -e "select @@innodb_data_home_dir;" 2> /dev/null)
    if [ "$INNODB_DATA_DIR_HOME" = "NULL" ]; then
      INNODB_DATA_DIR_HOME=""
    else
      INNODB_DATA_DIR_HOME="innodb_data_home_dir=${INNODB_DATA_DIR_HOME}"
    fi
    
    cat > /etc/mysql/backup-my.cnf <<EOF
    [client]
    user=backup
    password=$backuppwd
    [mysqld]
    ${SOCKET}
    ${DATADIR}
    ${INNODB_LOG_GROUP_HOME_DIR}
    ${INNODB_DATA_DIR_HOME}
    EOF
    
    chown backup /etc/mysql/backup-my.cnf
    chown backup /etc/mysql/.mykeyfile
    chmod 600 /etc/mysql/backup-my.cnf
    chmod 600 /etc/mysql/.mykeyfile
    printf "\nCreated Files: \n  /etc/mysql/backup-my.cnf \n  /etc/mysql/.mykeyfile \n"
    
  3. Save the script file.

  4. Make the script executable:

    sudo chmod +x backup-assist.sh
    
  5. Run the script as root:

    sudo ./backup-assist.sh
    

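Optionally, you can sanity-check the generated files. The following helper is a hypothetical sketch (not part of XtraBackup) that confirms a file exists and is readable only by its owner:

```shell
# check_mode_600: hypothetical sanity-check helper. Confirms that a file
# exists and has mode 600 (owner read/write only), as expected for
# /etc/mysql/backup-my.cnf and /etc/mysql/.mykeyfile.
check_mode_600() {
  local f="$1" perms
  if [ ! -f "$f" ]; then
    echo "missing: $f"
    return 1
  fi
  perms=$(stat -c %a "$f")   # GNU stat: print octal permission bits
  if [ "$perms" = "600" ]; then
    echo "ok: $f"
  else
    echo "warning: $f has mode $perms, expected 600"
  fi
}
```

For example, `check_mode_600 /etc/mysql/.mykeyfile` should report `ok` after a successful run of backup-assist.sh.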
Perform a full backup

Now that you've created the files and directories on which the scripts depend, it's time to create the first backup script. This initial script handles full backups, and is built around innobackupex, a utility included with XtraBackup. innobackupex acts as a wrapper around XtraBackup, providing additional backup and restore options not directly available through the XtraBackup binary.

This script uses innobackupex to perform the following tasks:

  • Encrypt the backed-up files.
  • Compress the files into a single archive file.
  • Write a backup history entry to the PERCONA_SCHEMA.XtraBackup_history table. Each entry includes the following keys (among others):

    • name: The date that the backup was taken.
    • innodb_to_lsn: The log sequence number (LSN) of the last successful backup. Later in the tutorial, innobackupex will need this number to perform an incremental backup based on this backup.

To perform a full backup:

  1. Create a new Bash script called mysql-full-backup.sh:

    nano mysql-full-backup.sh
    
  2. Paste the following into the script file:

    #!/bin/bash
    # mysql-full-backup.sh
    
    # Change directories to the backup location.
    cd /mnt/backups
    
    # Execute the compressed and encrypted full backup.
    innobackupex --defaults-file=/etc/mysql/backup-my.cnf \
      --no-timestamp \
      --use-memory=1G \
      --stream=xbstream \
      --parallel=4 \
      --encrypt=AES256 \
      --encrypt-key-file=/etc/mysql/.mykeyfile \
      --encrypt-threads=4 \
      --compress \
      --compress-threads=4 \
      --history=$(date +%d-%m-%Y) ./ > \
      mysqlbackup$(date +%d-%m-%Y).qp.xbc.xbs 2> \
      backup-progress.log &
    

    In addition to the expected backup-oriented flags, this script uses a few additional innobackupex flags to optimize performance:

    • --parallel: Specifies the number of threads that the XtraBackup child process should use to back up files concurrently.
    • --use-memory: Specifies the amount of memory that XtraBackup can use for crash recovery while preparing a backup.

    When running this script, tune these two parameters to reflect the capacity and resources available on your server so that you can avoid impacting the performance of the running MySQL service.

  3. If you're backing up from a slave replica, add the following flags to the innobackupex command in the script as well:

    --slave-info --safe-slave-backup
    
  4. Save the script file.

  5. Make the script executable:

    sudo chmod +x mysql-full-backup.sh
    
  6. Run the script as the backup user. Replace <path> with the absolute path to the script:

    sudo su - backup -s /bin/bash -c <path>/mysql-full-backup.sh
    
  7. Run tail -f to monitor the progress of the script:

    tail -f /mnt/backups/backup-progress.log
    
  8. After you receive the message "completed OK!", press Ctrl-C to exit the tail -f command.
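Rather than watching the log manually, you can check for the success marker from a script. A minimal helper, hypothetical and not part of XtraBackup, might look like this:

```shell
# check_backup: hypothetical helper that checks an innobackupex log for
# the "completed OK!" success marker and reports the result.
check_backup() {
  local log="$1"
  if grep -q "completed OK!" "$log"; then
    echo "backup OK"
    return 0
  else
    echo "backup FAILED: inspect $log" >&2
    return 1
  fi
}
```

For example, `check_backup /mnt/backups/backup-progress.log` exits nonzero when the marker is absent, which makes it easy to raise an alert from a scheduled job.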

Perform an incremental backup

In this section, you'll create a script that performs an incremental backup. In contrast to a full backup, which copies your whole database, an incremental backup captures only the changes that have occurred since your last successful backup. In general, it's best to perform one full backup each day, and then perform incremental backups periodically throughout the rest of the day.

innobackupex needs certain information about the last full backup before it can run an incremental backup. Usually, this information is locked away inside the last full backup's encrypted archive file, making it difficult to retrieve. However, because you set up the PERCONA_SCHEMA.XtraBackup_history table earlier in this tutorial and invoked innobackupex's --history flag in your full backup script, you can retrieve this information from the PERCONA_SCHEMA.XtraBackup_history record that was created by the script.

The incremental backup script uses information from the PERCONA_SCHEMA.XtraBackup_history record as follows:

  • The incremental backup script uses the name value to identify the last successful backup, and to continue its date-based naming convention in subsequent backups.
  • In the incremental backup script, innobackupex uses the last successful backup's innodb_to_lsn value, which is the log sequence number from which the incremental backup will start.
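To see the value that innobackupex will read, you can query the history table yourself. The helper below is a hypothetical sketch; it assumes the backup-my.cnf credentials file created earlier and the history table name used in this tutorial:

```shell
# last_backup_lsn: hypothetical helper that prints the innodb_to_lsn of
# the most recent backup recorded in the history table, using the
# backup-my.cnf credentials file created earlier in this tutorial.
last_backup_lsn() {
  mysql --defaults-file=/etc/mysql/backup-my.cnf -s -N -e \
    "SELECT innodb_to_lsn FROM PERCONA_SCHEMA.XtraBackup_history
     ORDER BY start_time DESC LIMIT 1;"
}
```

The --incremental-history-name flag performs an equivalent lookup internally, matching records by their name value.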

Like the full backup script, the incremental backup script uses innobackupex to perform the following tasks:

  • Encrypt the backed-up files.
  • Compress the files into a single archive file.
  • Write a backup history entry to the PERCONA_SCHEMA.XtraBackup_history table.

To perform an incremental backup:

  1. Create a new Bash script called mysql-incr-backup.sh:

    nano mysql-incr-backup.sh
    
  2. Paste the following into the script file:

    #!/bin/bash
    # mysql-incr-backup.sh
    
    # Change directories to the backup location.
    cd /mnt/backups
    
    # Execute the incremental backup.
    innobackupex --defaults-file=/etc/mysql/backup-my.cnf \
      --no-timestamp \
      --use-memory=1G \
      --stream=xbstream \
      --parallel=4 \
      --encrypt=AES256 \
      --encrypt-key-file=/etc/mysql/.mykeyfile \
      --encrypt-threads=4 \
      --compress \
      --compress-threads=4 \
      --incremental \
      --incremental-history-name=$(date +%d-%m-%Y) \
      --history=$(date +%d-%m-%Y) ./ > \
      incremental$(date +%H)-mysqlbackup$(date +%d-%m-%Y).qp.xbc.xbs 2> \
      incbackup-progress.log &
    

    This script is nearly identical to the full backup script, but includes additional flags in the innobackupex command:

    • --incremental: Instructs innobackupex to attempt an incremental backup.
    • --incremental-history-name: Searches the PERCONA_SCHEMA.XtraBackup_history table for the most recent backup record containing the defined name value, and then starts the backup using its innodb_to_lsn value.
  3. Save the script file.

  4. Make the script executable:

    sudo chmod +x mysql-incr-backup.sh
    
  5. Run the script as the backup user. Replace <path> with the absolute path to the script:

    sudo su - backup -s /bin/bash -c <path>/mysql-incr-backup.sh
    
  6. Run tail -f to monitor the progress of the script:

    tail -f /mnt/backups/incbackup-progress.log
    
  7. After you receive the message "completed OK!", press Ctrl-C to exit the tail -f command.

Manage your backups

In this section, you'll learn how to store your backups remotely in Cloud Storage, configure your Cloud Storage bucket to prune old backups automatically, and schedule your backups.

Upload your backups to Cloud Storage

Now that you have a few backup files, you can upload your backups to Cloud Storage:

  1. Create a new Cloud Storage bucket. Bucket names must be globally unique across Cloud Platform. To ensure that your bucket name is unique, consider namespacing it with your project name, as shown here:

    gsutil mb gs://<project>-mysql-backups
    
  2. Copy your files to your Cloud Storage bucket. In the following command, replace <bucket_name> with the name of the bucket you just created. If you configured your scripts to output your backup files to a different location than /mnt/backups, adjust the local path in the command to reflect that location as well:

    gsutil cp -n /mnt/backups/*mysqlbackup$(date +%d-%m-%Y).qp.xbc.xbs gs://<bucket_name>
    

The gsutil cp command can copy entire directory structures and upload multiple files at once. If you're running multiple database servers, consider creating a separate directory within your Cloud Storage bucket for each server.

Set up automatic file pruning in Cloud Storage

Older backups eventually outlive their usefulness and need to be removed. To help automate this process, Cloud Storage provides object lifecycle management, which you can use to delete your backup files automatically after a set age.

To configure lifecycle management for the objects in your bucket:

  1. Create a new JSON file called lifecycle.json.
  2. Paste the following JSON into the file. This configuration will delete files in your Cloud Storage bucket after 30 days:

    {
      "lifecycle": {
        "rule": [{
          "action": { "type": "Delete" },
          "condition": { "age": 30 }
        }]
      }
    }
    
  3. Set the lifecycle configuration for your Cloud Storage bucket. Replace <bucket_name> with the name of your bucket:

    gsutil lifecycle set lifecycle.json gs://<bucket_name>
    

Schedule your backups

In general, these scripts assume that you're taking at least one daily full backup early in the day, and that you're scheduling incremental backups to run frequently throughout the rest of the day.

If you combine your backup scripts into a single script scheduled by a cron job, include validation at each step to verify successful completion, and make sure the script raises an alert if validation fails. The script should also remove the local backup file after it has been uploaded successfully to Cloud Storage, to avoid filling up the local disk.
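As an illustration, the following crontab entries for the backup user schedule one full backup early in the day and incremental backups every two hours afterward. The script paths and times are assumptions; adjust them for your environment:

```crontab
# Example crontab (edit with crontab -e as the backup user).
# Full backup daily at 01:00.
0 1 * * * /home/backup/mysql-full-backup.sh
# Incremental backups every two hours from 03:00 through 23:00.
0 3-23/2 * * * /home/backup/mysql-incr-backup.sh
```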

Restoring your backup

In this section, you'll learn how to restore your MySQL database from the encrypted and compressed backup files you stored in Cloud Storage.

Download the backup from Cloud Storage

Begin by obtaining your backups from your Cloud Storage bucket:

  1. If you already use a specific directory for storing restore files, create a symlink to the directory:

    sudo ln -s <restore_dir_path> /mnt/restore
    
  2. Create a directory to temporarily hold your downloaded backup files (/mnt/restore/tmp), a directory to hold your uncompressed full backup files (/mnt/restore/full), and a numbered directory for each incremental backup (/mnt/restore/inc/{1,2,...}). The following command creates one incremental directory; create additional numbered directories if you downloaded more than one incremental backup:

    sudo mkdir -p /mnt/restore/tmp /mnt/restore/full /mnt/restore/inc/1
    
  3. Grant ownership of the directories to the Linux backup user:

    sudo chown -R backup: /mnt/restore/tmp /mnt/restore/full /mnt/restore/inc/1
    
  4. Download your backup files from Cloud Storage. Replace <bucket_name> with the name of your MySQL backup storage bucket, and replace <date> with the date in your backup archive file's filename.

    sudo gsutil cp gs://<bucket_name>/*mysqlbackup<date>.qp.xbc.xbs /mnt/restore/tmp
    

Extract, decrypt, and decompress your full backup

When you performed the full backup, innobackupex encrypted each file individually before adding it to the archive file. As such, these files will need to be decrypted one by one before they are usable.

To extract, decrypt, and decompress your full backup:

  1. Extract the encrypted files from the archive. Replace <date> with the date in your backup archive's filename:

    cat /mnt/restore/tmp/mysqlbackup<date>.qp.xbc.xbs | sudo xbstream -x -C /mnt/restore/full/
    
  2. Decrypt the files:

    sudo find /mnt/restore/full -name "*.xbcrypt" -type f -exec bash -c 'f={} && nf=${f%.xbcrypt} && \
      cat {} | sudo xbcrypt -da AES256 -f /etc/mysql/.mykeyfile -o ${nf} && rm {}' \;
    
  3. Decompress the backup:

    sudo innobackupex --use-memory=1G --parallel=4 --decompress /mnt/restore/full
    
  4. If there are .qp files remaining after the decompression phase, remove them by running the following command:

    sudo find /mnt/restore/full/ -name "*.qp" -exec rm {} \;
    

Prepare your backup

Before you restore your backup data files, you need to apply any uncommitted transaction log entries to them. In the Percona universe, this step is known as the prepare stage. During the prepare stage, innobackupex starts a small embedded MySQL server and replays the transactions from the log entries against your recovered data files.

Usually, the prepare stage occurs immediately after the backup is taken. However, in this tutorial, the backup is streamed, compressed, and encrypted while it is being taken, making it impossible to apply the transactions to the data at that point. Instead, the transactions are applied during the restore process.

The prepare command differs depending on whether you're restoring from a full backup or from an incremental backup.

Full backup

If you plan to restore directly from a full backup, run the following command:

sudo innobackupex --use-memory=1G --apply-log /mnt/restore/full

After you've prepared the backup, you can skip ahead to Restore your backup.

Incremental backup

If you plan to restore from an incremental backup, you have to start the prepare process with the full backup that was taken prior to your target incremental file. To do so, run the following command. Note that this command is almost identical to the standard full backup preparation command, but has an additional flag, --redo-only:

sudo innobackupex --use-memory=1G --redo-only --apply-log /mnt/restore/full

The backup has not been fully prepared at this point; you must still apply the relevant incremental backups to it, as demonstrated in the next section.

Apply incremental backups to a full backup

This section demonstrates how to apply incremental backup files to your full backup. Perform the following steps for each incremental backup file in the order that the backups were performed:

  1. Extract the encrypted files from the archive. Replace <date> with the date in your incremental backup file's name, and replace <inc_num> with an appropriate numerical directory name. For example, if this is the first incremental backup, the directory should be /mnt/restore/inc/1/:

    cat /mnt/restore/tmp/incremental*mysqlbackup<date>.qp.xbc.xbs | sudo xbstream -x -C /mnt/restore/inc/<inc_num>/
    
  2. Decrypt the files. Replace <inc_num> with the same numerical directory name:

    sudo find /mnt/restore/inc/<inc_num> -name "*.xbcrypt" -type f -exec bash -c 'f={} && nf=${f%.xbcrypt} && \
      cat {} | sudo xbcrypt -da AES256 -f /etc/mysql/.mykeyfile -o ${nf} && rm {}' \;
    
  3. Decompress the backup. Replace <inc_num> with the same numerical directory name:

    sudo innobackupex --use-memory=1G --parallel=4 --decompress /mnt/restore/inc/<inc_num>
    
  4. If there are .qp files remaining after the decompression phase, you can remove them by running the following command. Replace <inc_num> with the same numerical directory name:

    sudo find /mnt/restore/inc/<inc_num> -name "*.qp" -exec rm {} \;
    
  5. Apply the incremental backup to the full backup, replacing <inc_num> with the same numerical directory name:

    sudo innobackupex --use-memory=1G --apply-log --redo-only --incremental-dir=/mnt/restore/inc/<inc_num> /mnt/restore/full
    

After you've applied all of the incremental backups to your full backup, you can prepare it as you would prepare a standard full backup:

sudo innobackupex --use-memory=1G --apply-log /mnt/restore/full
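The extract, decrypt, decompress, and apply steps above must run once per incremental directory, strictly in the order the backups were taken. If you drive them from a script, note that plain glob ordering sorts directory 10 before 9; a small helper, sketched here as a hypothetical example, emits the directories in numeric order:

```shell
# list_incrementals: hypothetical helper that prints the numbered
# incremental backup directories under a base path in numeric order,
# so that 10 is applied after 9 rather than after 1.
list_incrementals() {
  local base="$1"
  ls "$base" | sort -n | while read -r n; do
    echo "$base/$n"
  done
}
```

For example, `for dir in $(list_incrementals /mnt/restore/inc); do ...; done` visits /mnt/restore/inc/1, /mnt/restore/inc/2, and so on through /mnt/restore/inc/10 in the correct order.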

Restore your backup

Now that you've prepared your backup file, you can complete the restore process:

  1. Stop your MySQL service:

    sudo service mysql stop
    
  2. Empty your MySQL data directory. Replace <datadir> with the absolute path of your MySQL data directory:

    sudo rm -rf <datadir>/*
    
  3. Restore from your backup:

    sudo innobackupex --copy-back /mnt/restore/full
    
  4. Verify that the restored files are in your MySQL data directory. Replace <datadir> with the absolute path of your MySQL data directory:

    ls -l <datadir>
    
  5. Reset the owner and group for the MySQL data directory to match the MySQL process. Replace <datadir> with the absolute path of your MySQL data directory:

    sudo chown -R mysql:mysql <datadir>
    
  6. Repair permissions for the MySQL data directory so that the backup user can access the directory when performing future backups. Replace <datadir> with the absolute path of your MySQL data directory:

    sudo find <datadir> -type d -exec chmod 750 {} \;
    
  7. Restart the MySQL service:

    sudo service mysql start
    

Next steps

Set up MySQL on Cloud Platform

Take a look at the various ways you can set up MySQL on Cloud Platform, and choose the method that best suits your needs.

Learn about disaster recovery on Cloud Platform

For a comprehensive look at how you can best integrate Cloud Platform into your disaster recovery plan, check out Designing a Disaster Recovery Plan with Google Cloud Platform. For a discussion of specific disaster recovery use cases, with example implementations on Google Cloud Platform, see the Disaster Recovery Cookbook.

Try other tutorials

Try out other Google Cloud Platform features for yourself. Have a look at our tutorials.
