Transaction log backup files contain the changes that occur in your source database after you take the full backup. Transaction log backups are required for the continuous load phase of your migration job.
This page describes how to schedule regular transaction log exports and uploads for your source SQL Server databases.
Schedule transaction log file uploads for Amazon RDS
You can schedule regular transfers of your transaction log files from the Amazon RDS source instance to the Cloud Storage bucket where you store the transaction log files.
Perform the following steps:
- Ensure automated backups are enabled on your Amazon RDS instance. See Enable automated backups in the Amazon RDS documentation.
- Enable access to transaction log files in your Amazon RDS instance. See Access to transaction log backups with RDS for SQL Server in the Amazon RDS documentation.
You can also use the following sample command:

Example stored procedure call to enable transaction log access in Amazon RDS

```
export DATABASE=YOUR_DATABASE_NAME;
export S3_Bucket=YOUR_S3_BUCKET;
exec msdb.dbo.rds_tlog_copy_setup @target_s3_arn='arn:aws:s3:::${S3_Bucket}/${DATABASE}/log/';
```
- Create a SQL Agent job that runs on a regular schedule. The job should execute the following script, which copies the transaction log backups from the last hour to your S3 bucket:

```sql
declare @DATABASE varchar(100);
declare @startTime varchar(100);
declare @endTime varchar(100);
SELECT @DATABASE='YOUR_DATABASE_NAME';
SELECT @startTime = CONVERT(VARCHAR(100), DATEADD(hour, -1, GETUTCDATE()), 120),
       @endTime = CONVERT(VARCHAR(100), GETUTCDATE(), 120);
exec msdb.dbo.rds_tlog_backup_copy_to_S3
    @db_name=@DATABASE,
    @backup_file_start_time=@startTime,
    @backup_file_end_time=@endTime;
```
- Configure continuous file transfers from your S3 bucket to the Cloud Storage bucket. You can use any solution to move your files, for example event-driven transfer jobs in Storage Transfer Service. See Set up event-driven transfers from AWS S3.
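As a sketch, such an event-driven transfer job can be created with the Google Cloud CLI. The bucket names and the SQS queue ARN below are placeholders, and the flag usage is an assumption based on the Storage Transfer Service setup described in the linked guide; verify the exact flags for your gcloud version:

```
# Sketch: create an event-driven Storage Transfer Service job from S3 to Cloud Storage.
# Assumes an SQS queue subscribed to S3 event notifications already exists (see the
# linked guide); S3_BUCKET, GCS_BUCKET, and the ARN are placeholders.
gcloud transfer jobs create \
  s3://S3_BUCKET gs://GCS_BUCKET \
  --event-stream-name=arn:aws:sqs:us-east-1:123456789012:transfer-queue
```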
Schedule transaction log file uploads for on-premises SQL Server instances
You can schedule regular transfers of your transaction log files from your self-managed source instance to the Cloud Storage bucket where you store the transaction log files.
Perform the following steps:
- On the system where you want to run the script, ensure that you have initialized the Google Cloud CLI with authentication and a project, either by running gcloud init, or by running gcloud auth login and gcloud config set project.
- Save the following script to a file on your source SQL Server instance. The script automates creating a transaction log file and uploading it to your Cloud Storage bucket by using the gcloud storage cp command. It expects the DATABASE, SA_PASSWORD, and GCS_BACKUPS environment variables to be set.

Bash

```bash
#!/bin/bash
# Create a transaction log backup for ${DATABASE} and upload it to Cloud Storage.
NOW="$(date +%s)"
# Set the file extension based on the command-line argument.
EXT=".trn"
if [[ "$1" == "final" ]]; then
  EXT=".trn.final"
fi
# Construct the backup file name and path.
NAME="${DATABASE}.${NOW}${EXT}"
FULL_NAME="/SQLServerBackups/log/${NAME}"
# Run the SQL backup command.
QUERY="BACKUP LOG ${DATABASE} TO DISK = '${FULL_NAME}'"
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "${SA_PASSWORD}" -d master -Q "${QUERY}"
# Upload the backup file to the Cloud Storage bucket.
gcloud storage cp "${FULL_NAME}" "${GCS_BACKUPS}/log/"
```
PowerShell

```powershell
# Get the current timestamp
$NOW = [int](Get-Date -UFormat '%s')
# Set the file extension based on the command-line argument
$EXT = '.trn'
if ($args[0] -eq 'final') {
  $EXT = '.trn.final'
}
# Construct the backup file name
$NAME = "{0}.{1}{2}" -f $DATABASE, $NOW, $EXT
$FULL_NAME = "X:\SQLServerBackups\log\$NAME"
# Construct the SQL backup query
$QUERY = "BACKUP LOG $DATABASE TO DISK = '$FULL_NAME'"
# Execute the SQL backup command
Invoke-Sqlcmd -ServerInstance 'localhost' -Username 'SA' -Password $env:SA_PASSWORD -Database 'master' -Query $QUERY
# Upload the backup file to Google Cloud Storage
gcloud storage cp "$FULL_NAME" "$GCS_BACKUPS/log/"
```
- Configure a scheduling tool of your choice to regularly run the script.
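For example, on a Linux host you could schedule the Bash script with cron. The script path and log file below are hypothetical:

```
# Hypothetical crontab entry: create and upload a transaction log backup every hour
0 * * * * /SQLServerBackups/upload-log.sh >> /var/log/tlog-upload.log 2>&1
```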
You can also run this script manually and pass the final positional argument to create a transaction log file whose name ends in the .trn.final suffix. This is useful when you want to finish your migration and promote the migration job.
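As a minimal sketch of the file naming convention the scripts above produce, with a placeholder database name and a fixed timestamp instead of the live values:

```shell
#!/bin/bash
# Illustrate the backup file naming convention (placeholder values, not live data).
DATABASE="my_db"      # placeholder database name
NOW="1700000000"      # placeholder Unix timestamp; the scripts use the current time
echo "${DATABASE}.${NOW}.trn"         # name produced by a regular scheduled run
echo "${DATABASE}.${NOW}.trn.final"   # name produced when "final" is passed
```

Only the file whose name ends in .trn.final signals the end of the migration; all files from regular scheduled runs keep the plain .trn suffix.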