
Migrating Amazon Redshift data with VPC

This document explains how to migrate data from Amazon Redshift to BigQuery using a virtual private cloud (VPC) network.

If you have a private Amazon Redshift instance in AWS, you can migrate that data to BigQuery by using VPC peering. To enable this feature, you must specify the VPC and reserved IP range when setting up the migration.

  • You must set up a virtual private network (VPN) between the Amazon Redshift VPC network and the Google Cloud VPC network.
  • Through the VPN, the migration agent running in the Google Cloud VPC triggers an unload operation from Amazon Redshift to a staging area in an Amazon S3 bucket.
  • The BigQuery Data Transfer Service transfers your data from the Amazon S3 bucket to BigQuery.

The following diagram shows the VPC communications and the overall flow of data between a private Amazon Redshift instance and BigQuery during a migration.

Workflow of Amazon Redshift to BigQuery migration.

If you'd like to transfer data from your Amazon Redshift instance through public IPs, you can migrate your Amazon Redshift data to BigQuery with these instructions.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.

  4. Enable the BigQuery and BigQuery Data Transfer Service APIs.

    Enable the APIs


Set required permissions

Before creating an Amazon Redshift transfer, follow these steps:

  1. Ensure that the person creating the transfer has the following required Identity and Access Management (IAM) permissions in BigQuery:

    • bigquery.transfers.update permissions to create the transfer
    • bigquery.datasets.update permissions on the target dataset

    The roles/bigquery.admin predefined IAM role includes the bigquery.transfers.update and bigquery.datasets.update permissions. For more information about IAM roles in the BigQuery Data Transfer Service, see Access control.

  2. Consult the documentation for Amazon S3 to ensure you have configured any permissions necessary to enable the transfer. At a minimum, the Amazon S3 source data must have the AWS managed policy AmazonS3ReadOnlyAccess applied to it.

  3. Grant the appropriate IAM permissions for creating and deleting VPC Network Peering to the individual setting up the transfer. The service uses the individual's Google Cloud user credentials to create the VPC peering connection.

    • Permissions to create VPC peering: compute.networks.addPeering
    • Permissions to delete VPC peering: compute.networks.removePeering

    The roles/owner, roles/editor, and roles/compute.networkAdmin predefined IAM roles include the compute.networks.addPeering and compute.networks.removePeering permissions by default.
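As one way to grant these permissions, the predefined roles can be bound with the gcloud CLI. This is a sketch; the project ID and user email are placeholder values you must replace:

```shell
# Placeholder values -- replace with your own project ID and user email.
PROJECT_ID="my-project"
USER_EMAIL="migration-admin@example.com"

# roles/bigquery.admin includes bigquery.transfers.update and
# bigquery.datasets.update.
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="user:${USER_EMAIL}" \
    --role="roles/bigquery.admin"

# roles/compute.networkAdmin includes compute.networks.addPeering and
# compute.networks.removePeering.
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="user:${USER_EMAIL}" \
    --role="roles/compute.networkAdmin"
```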

Create a dataset

Create a BigQuery dataset to store your data. You do not need to create any tables.
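If you prefer the command line over the console, the dataset can be created with the bq tool. The project ID, dataset name, and location below are placeholder values:

```shell
# Placeholder project ID, dataset name, and location -- replace as needed.
# The dataset location must be compatible with your transfer configuration.
bq --location=US mk --dataset my-project:redshift_migration
```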

Grant access to your Amazon Redshift cluster

Follow the instructions in Configure inbound rules for SQL clients to allowlist the IP ranges of your private Amazon Redshift cluster. In a later step, you define the private IP range in this VPC network when you set up the transfer.
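For illustration, an inbound rule of this kind might be added with the AWS CLI as follows. The security group ID is a placeholder, 5439 is the default Amazon Redshift port, and the CIDR block should match the reserved IP range you specify later when setting up the transfer:

```shell
# Placeholder security group ID -- replace with the security group
# attached to your Amazon Redshift cluster. The CIDR block should match
# the reserved IP range used in the transfer configuration.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5439 \
    --cidr 10.251.1.0/24
```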

Grant access to your Amazon S3 bucket

You must have an S3 bucket to use as a staging area to transfer the Amazon Redshift data to BigQuery. For detailed instructions, see the Amazon documentation.

  1. We recommend that you create a dedicated Amazon IAM user, and grant that user only read access to Amazon Redshift and read and write access to Amazon S3. To apply these permissions, you can use the following policies:

    Amazon Redshift migration Amazon permissions

  2. Create an Amazon IAM user access key pair.
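As a sketch of both steps with the AWS CLI, the dedicated user could be created and the key pair generated as follows. The user name is a placeholder, and the broad AWS managed policies shown here are one option; narrower custom policies scoped to your cluster and staging bucket are preferable in practice:

```shell
# Placeholder user name -- replace with your own value.
aws iam create-user --user-name redshift-migration

# AWS managed policies granting read access to Amazon Redshift and
# read/write access to Amazon S3. Narrower custom policies are preferable.
aws iam attach-user-policy \
    --user-name redshift-migration \
    --policy-arn arn:aws:iam::aws:policy/AmazonRedshiftReadOnlyAccess
aws iam attach-user-policy \
    --user-name redshift-migration \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Create the access key pair. Record the AccessKeyId and SecretAccessKey
# from the output -- you need both when setting up the transfer.
aws iam create-access-key --user-name redshift-migration
```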

Configure workload control with a separate migration queue

Optionally, you can define a dedicated Amazon Redshift queue for the migration to limit and isolate the resources that the migration uses. You can configure this migration queue with a maximum concurrent query count, associate a migration user group with the queue, and then use that group's credentials when setting up the transfer to BigQuery. The transfer service then only has access to the migration queue.

Gather transfer information

Gather the information that you need to set up the migration with the BigQuery Data Transfer Service:

  • Get the VPC and reserved IP range in Amazon Redshift.
  • Follow these instructions to get the JDBC URL.
  • Get the username and password of a user with appropriate permissions to your Amazon Redshift database.
  • Follow the instructions at Grant access to your S3 bucket to get an AWS access key pair.
  • Get the URI of the Amazon S3 bucket you want to use for the transfer. We recommend that you set up a Lifecycle policy for this bucket to avoid unnecessary charges. The recommended expiration time is 24 hours to allow sufficient time to transfer all data to BigQuery.
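The recommended 24-hour lifecycle policy can be applied with the AWS CLI, for example as follows. The bucket name is a placeholder:

```shell
# Placeholder bucket name -- replace with your staging bucket.
# Objects expire after 1 day (24 hours), per the recommendation above.
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-redshift-staging-bucket \
    --lifecycle-configuration '{
      "Rules": [
        {
          "ID": "expire-staging-objects",
          "Status": "Enabled",
          "Filter": {"Prefix": ""},
          "Expiration": {"Days": 1}
        }
      ]
    }'
```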

Set up the VPN

  1. In your Google Cloud project, set up a Google Cloud VPC network.

  2. Set up the VPN. Follow the instructions in this guide to set up a VPN between your Cloud project's VPC network and the Amazon Redshift VPC.

  3. Grant permissions to do VPC Peering on Google Cloud. Ensure that you have the necessary permissions to create the VPC peering connection. See Required permissions.

  4. Before continuing, ensure that your Google Cloud VPC network exists in your Google Cloud project, and it is already connected to Redshift through VPN.
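If you don't already have a Google Cloud VPC network for step 1, one could be created with the gcloud CLI as sketched below. The network name, subnet name, region, and subnet range are placeholder values; make sure the subnet range does not overlap with the reserved IP range you specify for the migration infrastructure:

```shell
# Placeholder network, subnet, region, and range values -- replace as
# needed. The subnet range must not overlap the reserved migration CIDR.
gcloud compute networks create my-vpc --subnet-mode=custom

gcloud compute networks subnets create my-vpc-subnet \
    --network=my-vpc \
    --region=us-central1 \
    --range=10.0.0.0/24
```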

Set up an Amazon Redshift transfer

Use the following instructions to set up an Amazon Redshift transfer:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. Click Data transfers.

  3. Click Create transfer.

  4. In the Source type section, select Migration: Amazon Redshift from the Source list.

  5. In the Transfer config name section, enter a name for the transfer, such as My migration, in the Display name field. The display name can be any value that allows you to easily identify the transfer if you need to modify it later.

  6. In the Destination settings section, choose the dataset you created from the Dataset list.

  7. In the Data source details section, do the following:

    1. For JDBC connection url for Amazon Redshift, provide the JDBC URL to access your Amazon Redshift cluster.
    2. For Username of your database, enter the username for the Amazon Redshift database that you want to migrate.
    3. For Password of your database, enter the database password.

    4. For Access key ID and Secret access key, enter the access key pair you obtained from Grant access to your S3 bucket.

    5. For Amazon S3 URI, enter the URI of the S3 bucket you'll use as a staging area.

    6. For Amazon Redshift Schema, enter the Amazon Redshift schema you're migrating.

    7. For Table name patterns, specify a name or a pattern for matching the table names in the schema. You can use regular expressions to specify the pattern in the form: <table1Regex>;<table2Regex>. The pattern should follow Java regular expression syntax. For example:

      • lineitem;ordertb matches tables that are named lineitem and ordertb.
      • .* matches all tables.

      Leave this field empty to migrate all tables from the specified schema.

    8. For VPC and the reserved IP range, specify your VPC network name and the expected private IP range as a CIDR block for provisioning migration infrastructure.

      Amazon Redshift migration CIDR field

      • The form is VPC_network_name:CIDR, for example: my_vpc:10.251.1.0/24.
      • Use standard private VPC network address ranges in the CIDR notation, starting with 10.x.x.x.
      • The private IP range is for provisioning migration infrastructure, so ensure the following:
        • The IP range is wide enough (has more than 10 IP addresses).
        • The IP range does not overlap with any subnet in your Google Cloud VPC network or the Amazon Redshift VPC network.
      • If you have multiple transfers configured for the same Amazon Redshift instance, make sure to use the same VPC_network_name:CIDR value in each, so that multiple transfers can reuse the same migration infrastructure.
  8. Optional: In the Notification options section, do the following:

    1. Click the toggle to enable email notifications. When you enable this option, the transfer administrator receives an email notification when a transfer run fails.
    2. For Select a Pub/Sub topic, choose your topic name or click Create a topic. This option configures Pub/Sub run notifications for your transfer.
  9. Click Save.

  10. The Google Cloud console displays all the transfer setup details, including a Resource name for this transfer.

Quotas and limits

Migrating Amazon Redshift private instances with VPC runs the migration agents on single-tenant infrastructure. Due to compute resource limits, at most 5 concurrent transfer runs are allowed.

BigQuery has a load quota of 15 TB for each load job for each table. Internally, Amazon Redshift compresses table data, so the exported table size is larger than the table size reported by Amazon Redshift. If you plan to migrate a table larger than 15 TB, contact Cloud Customer Care first.

Costs can be incurred outside of Google by using this service. Review the Amazon Redshift and Amazon S3 pricing pages for details.

Because of Amazon S3's consistency model, it's possible that some files will not be included in the transfer to BigQuery.

What's next