This tutorial demonstrates how to migrate your existing MySQL data from a Persistent Disk (PD) to Hyperdisk on Google Kubernetes Engine to improve your storage performance. Hyperdisk offers higher IOPS and throughput than Persistent Disk, which can improve MySQL performance by reducing latency for database queries and transactions. You can use disk snapshots to migrate your data to different disk types depending on machine type compatibility. For example, Hyperdisk volumes are compatible only with certain third-, fourth-, and later-generation machine types, such as N4, which don't support Persistent Disk. For more information, see available machine series.
To demonstrate the migration from Persistent Disk to Hyperdisk, this tutorial uses the Sakila database to provide a sample dataset. Sakila is a sample database provided by MySQL that you can use as a schema for tutorials and examples. It represents a fictional DVD rental store and includes tables for films, actors, customers, and rentals.
This guide is for Storage specialists and Storage administrators who create and allocate storage, and manage data security and data access. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE Enterprise user roles and tasks.
Deployment architecture
The following diagram illustrates the migration process from a Persistent Disk to a Hyperdisk.
- A MySQL application runs on a GKE node pool with N2 machine types, storing its data on a Persistent Disk SSD.
- To ensure data consistency, the application is scaled down to prevent new writes.
- A snapshot of the Persistent Disk is created, serving as a complete point-in-time backup of the data.
- A new Hyperdisk is provisioned from the snapshot, and a new MySQL instance is deployed on a separate, Hyperdisk-compatible N4 node pool. This new instance attaches to the newly created Hyperdisk, finalizing the migration to the higher-performance storage.
Objectives
In this tutorial, you will learn how to do the following:
- Deploy a MySQL cluster.
- Upload a testing dataset.
- Create a snapshot of your data.
- Create a Hyperdisk from the snapshot.
- Start a new MySQL cluster in a Hyperdisk-enabled N4 machine type node pool.
- Verify data integrity to confirm a successful migration.
Costs
In this document, you use the following billable components of Google Cloud:
- GKE
- Compute Engine, which includes:
- Storage capacity provisioned for both Persistent Disk and Hyperdisk.
- Storage costs for the snapshots.
To generate a cost estimate based on your projected usage, use the pricing calculator.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Verify that billing is enabled for your Google Cloud project.
- Enable the Compute Engine, GKE, and Identity and Access Management Service Account Credentials APIs.
- Make sure that you have the following role or roles on the project: roles/container.admin, roles/iam.serviceAccountAdmin, roles/compute.admin
Check for the roles
- In the Google Cloud console, go to the IAM page.
- Select the project.
- In the Principal column, find all rows that identify you or a group that you're included in. To learn which groups you're included in, contact your administrator.
- For all rows that specify or include you, check the Role column to see whether the list of roles includes the required roles.
Grant the roles
- In the Google Cloud console, go to the IAM page.
- Select the project.
- Click Grant access.
- In the New principals field, enter your user identifier. This is typically the email address for a Google Account.
- In the Select a role list, select a role.
- To grant additional roles, click Add another role and add each additional role.
- Click Save.
Set up Cloud Shell
- In the Google Cloud console, activate Cloud Shell.
A Cloud Shell session starts and displays a command-line prompt. It can take a few seconds for the session to initialize.
- Set your default project:
gcloud config set project PROJECT_ID
Replace PROJECT_ID with your project ID.
Prepare the environment
In Cloud Shell, set the environment variables for your project ID, email address, cluster prefix, and location:
export PROJECT_ID=PROJECT_ID
export EMAIL_ADDRESS=EMAIL_ADDRESS
export KUBERNETES_CLUSTER_PREFIX=offline-hyperdisk-migration
export LOCATION=us-central1-a
Replace the following:
- PROJECT_ID: your Google Cloud project ID.
- EMAIL_ADDRESS: your email address.
- LOCATION: the zone where you want to create your deployment resources. For the purpose of this tutorial, use the us-central1-a zone.
Clone the sample code repository from GitHub:
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples
Navigate to the offline-hyperdisk-migration directory to start creating deployment resources:
cd kubernetes-engine-samples/databases/offline-hyperdisk-migration
Create the GKE cluster and node pools
This tutorial uses a zonal cluster for simplicity because Hyperdisk volumes are zonal resources and only accessible within a single zone.
Create a zonal GKE cluster:
gcloud container clusters create ${KUBERNETES_CLUSTER_PREFIX}-cluster \
    --location ${LOCATION} \
    --node-locations ${LOCATION} \
    --shielded-secure-boot \
    --shielded-integrity-monitoring \
    --machine-type "e2-micro" \
    --num-nodes "1"
Add a node pool with an N2 machine type for the initial MySQL deployment:
gcloud container node-pools create regular-pool \
    --cluster ${KUBERNETES_CLUSTER_PREFIX}-cluster \
    --machine-type n2-standard-4 \
    --location ${LOCATION} \
    --num-nodes 1
Add a node pool with an N4 machine type, which supports Hyperdisk, where the MySQL deployment will be migrated to and run:
gcloud container node-pools create hyperdisk-pool \
    --cluster ${KUBERNETES_CLUSTER_PREFIX}-cluster \
    --machine-type n4-standard-4 \
    --location ${LOCATION} \
    --num-nodes 1
Connect to the cluster:
gcloud container clusters get-credentials ${KUBERNETES_CLUSTER_PREFIX}-cluster --location ${LOCATION}
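Optionally, you can confirm that nodes from all three node pools (the default pool, regular-pool, and hyperdisk-pool) have registered with the cluster by listing the nodes with their node pool label:
kubectl get nodes --label-columns=cloud.google.com/gke-nodepool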
Deploy MySQL on Persistent Disk
In this section, you deploy a MySQL instance that uses a Persistent Disk for storage, and load it with sample data.
Create and apply a StorageClass for Hyperdisk. This StorageClass will be used later in the tutorial.
kubectl apply -f manifests/01-storage-class/storage-class-hdb.yaml
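The actual definition is in storage-class-hdb.yaml in the repository. As a point of reference, a minimal StorageClass for Hyperdisk Balanced backed by the Persistent Disk CSI driver looks similar to the following sketch; the StorageClass name and the provisioned throughput and IOPS values shown here are illustrative assumptions, not the file's exact contents:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: balanced-storage                      # assumed name; use the name defined in storage-class-hdb.yaml
provisioner: pd.csi.storage.gke.io            # Compute Engine Persistent Disk CSI driver
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: hyperdisk-balanced
  provisioned-throughput-on-create: "250Mi"   # illustrative throughput setting
  provisioned-iops-on-create: "7000"          # illustrative IOPS setting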
Create and deploy a MySQL instance that includes node affinity to ensure Pods are scheduled on regular-pool nodes, and provisions a Persistent Disk SSD volume:
kubectl apply -f manifests/02-mysql/mysql-deployment.yaml
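The full Deployment is defined in mysql-deployment.yaml. Conceptually, the part that pins the Pods to regular-pool nodes is a node affinity rule on the GKE node pool label, similar to the following excerpt (an illustrative fragment, not the complete manifest):
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: cloud.google.com/gke-nodepool
                operator: In
                values:
                - regular-pool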
This manifest creates a MySQL deployment and service, with a dynamically provisioned Persistent Disk for data storage. The password for the root user is migration.
Deploy a MySQL client Pod to load data and verify the data migration:
kubectl apply -f manifests/02-mysql/mysql-client.yaml
kubectl wait pods mysql-client --for condition=Ready --timeout=300s
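The client manifest in the repository defines a small utility Pod named mysql-client that stays running so that you can open a shell in it and use the mysql CLI against either MySQL Service. A minimal sketch under that assumption (the image tag is illustrative; see mysql-client.yaml for the actual Pod spec):
apiVersion: v1
kind: Pod
metadata:
  name: mysql-client
spec:
  containers:
  - name: mysql-client
    image: mysql:8.0                  # assumed image; any image that includes the mysql client works
    command: ["sleep", "infinity"]    # keep the Pod running for interactive use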
Connect to the client Pod:
kubectl exec -it mysql-client -- bash
From the client Pod shell, download and import the Sakila sample dataset:
# Download the dataset
curl --output dataset.tgz "https://downloads.mysql.com/docs/sakila-db.tar.gz"

# Extract the dataset
tar -xvzf dataset.tgz -C /home/mysql

# Import the dataset into MySQL (the password is "migration").
mysql -u root -h regular-mysql.default -p

SOURCE /home/mysql/sakila-db/sakila-schema.sql;
SOURCE /home/mysql/sakila-db/sakila-data.sql;
Verify that the data was imported:
USE sakila; SELECT table_name, table_rows FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'sakila';
The output shows a list of tables with row counts.
+----------------------------+------------+
| TABLE_NAME                 | TABLE_ROWS |
+----------------------------+------------+
| actor                      |        200 |
| actor_info                 |       NULL |
| address                    |        603 |
| category                   |         16 |
| city                       |        600 |
| country                    |        109 |
| customer                   |        599 |
| customer_list              |       NULL |
| film                       |       1000 |
| film_actor                 |       5462 |
| film_category              |       1000 |
| film_list                  |       NULL |
| film_text                  |       1000 |
| inventory                  |       4581 |
| language                   |          6 |
| nicer_but_slower_film_list |       NULL |
| payment                    |      16086 |
| rental                     |      16419 |
| sales_by_film_category     |       NULL |
| sales_by_store             |       NULL |
| staff                      |          2 |
| staff_list                 |       NULL |
| store                      |          2 |
+----------------------------+------------+
23 rows in set (0.01 sec)
Exit the mysql session:
exit;
Exit the client Pod shell:
exit
Get the name of the PersistentVolume (PV) created for MySQL and store it in an environment variable:
export PV_NAME=$(kubectl get pvc mysql-pv-claim -o jsonpath='{.spec.volumeName}')
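With dynamic provisioning through the Persistent Disk CSI driver, the underlying Compute Engine disk typically has the same name as the PV (a pvc-... identifier), which is why the next section passes ${PV_NAME} directly to gcloud. You can optionally confirm that a disk with this name exists:
echo ${PV_NAME}
gcloud compute disks list --filter="name=${PV_NAME}" --format="value(name,type,sizeGb)"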
Migrate the data to a Hyperdisk volume
Now you have a MySQL workload with data stored on a Persistent Disk SSD volume. This section describes how to migrate this data to a Hyperdisk volume by using a snapshot. This migration approach also preserves the original Persistent Disk volume, which lets you roll back to using the original MySQL instance if necessary.
While you can create snapshots from disks without detaching them from workloads, to ensure data integrity for MySQL you must stop any new writes from occurring to your disk during snapshot creation. Scale down the MySQL deployment to 0 replicas to stop writes:
kubectl scale deployment regular-mysql --replicas=0
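Before you create the snapshot, you can optionally confirm that the MySQL Pod has terminated; the regular-mysql deployment should report 0/0 ready replicas and its Pod should no longer be listed:
kubectl get deployment regular-mysql
kubectl get pods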
Create a snapshot from the existing Persistent Disk:
gcloud compute disks snapshot ${PV_NAME} \
    --zone=${LOCATION} \
    --snapshot-names=original-snapshot \
    --description="snapshot taken from pd-ssd"
Create a new Hyperdisk volume named mysql-recovery from the snapshot:
gcloud compute disks create mysql-recovery --project=${PROJECT_ID} \
    --type=hyperdisk-balanced \
    --size=150GB \
    --zone=${LOCATION} \
    --source-snapshot=projects/${PROJECT_ID}/global/snapshots/original-snapshot
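Optionally, verify that the new disk was created with the Hyperdisk Balanced type, the expected size, and the correct source snapshot:
gcloud compute disks describe mysql-recovery \
    --zone=${LOCATION} \
    --format="value(type,sizeGb,sourceSnapshot)"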
Update the manifest file for the restored PV with your project ID:
sed -i "s/PRJCTID/$PROJECT_ID/g" manifests/02-mysql/restore_pv.yaml
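The restore_pv.yaml manifest statically provisions a PV that points at the mysql-recovery disk, plus a PVC that binds to it. A simplified sketch of what such a manifest typically contains is shown below; the resource names and the StorageClass name are assumptions, and PRJCTID is the placeholder that the sed command replaces (see the file in the repository for the actual definition):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-recovery-pv               # assumed name
spec:
  capacity:
    storage: 150Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: balanced-storage    # assumed; must match the Hyperdisk StorageClass you created earlier
  csi:
    driver: pd.csi.storage.gke.io
    volumeHandle: projects/PRJCTID/zones/us-central1-a/disks/mysql-recovery
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-recovery-pvc              # assumed name
spec:
  storageClassName: balanced-storage
  volumeName: mysql-recovery-pv
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 150Gi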
Create the PersistentVolume (PV) and PersistentVolumeClaim (PVC) from the new Hyperdisk:
kubectl apply -f manifests/02-mysql/restore_pv.yaml
Verify the data migration
Deploy a new MySQL instance that uses the newly created Hyperdisk volume. This Pod is scheduled on the hyperdisk-pool node pool, which consists of N4 nodes.
Deploy the new MySQL instance:
kubectl apply -f manifests/02-mysql/recovery_mysql_deployment.yaml
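To confirm that the restored MySQL Pod landed on an N4 node in hyperdisk-pool, you can check which node each Pod is running on and which node pool and machine type each node belongs to:
kubectl get pods -o wide
kubectl get nodes --label-columns=cloud.google.com/gke-nodepool,node.kubernetes.io/instance-type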
To verify data integrity, connect to the MySQL client Pod again:
kubectl exec -it mysql-client -- bash
Inside the client Pod, connect to the new MySQL database (recovered-mysql.default) and verify the data. The password is migration.
mysql -u root -h recovered-mysql.default -p

USE sakila;
SELECT table_name, table_rows FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'sakila';
The data should be the same as in your original MySQL instance on the Persistent Disk volume.
Exit the mysql session:
exit;
Exit the client Pod shell:
exit
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the project
- In the Google Cloud console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
Delete individual resources
If you used an existing project and you don't want to delete it, delete the individual resources:
Set environment variables for cleanup and retrieve the name of the Persistent Disk volume created by the mysql-pv-claim PersistentVolumeClaim:
export PROJECT_ID=PROJECT_ID
export KUBERNETES_CLUSTER_PREFIX=offline-hyperdisk-migration
export LOCATION=us-central1-a
export PV_NAME=$(kubectl get pvc mysql-pv-claim -o jsonpath='{.spec.volumeName}')
Replace PROJECT_ID with your project ID.
Delete the snapshot:
gcloud compute snapshots delete original-snapshot --quiet
Delete the GKE cluster:
gcloud container clusters delete ${KUBERNETES_CLUSTER_PREFIX}-cluster --location=${LOCATION} --quiet
Delete the Persistent Disk and Hyperdisk volumes:
gcloud compute disks delete ${PV_NAME} --zone=${LOCATION} --quiet
gcloud compute disks delete mysql-recovery --zone=${LOCATION} --quiet
What's next
- Look through more code samples in the GKE samples GitHub repository.
- Learn how to scale your storage performance with Hyperdisk volumes.
- Learn about using the Compute Engine Persistent Disk CSI Driver for managing Persistent Disk and Hyperdisk volumes.
- Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center.