Move your Cloud Storage data to another location

Last reviewed 2022-06-20 UTC

When you create a Cloud Storage bucket, you choose its permanent geographic location. As your business needs change, where you store your data might need to change too. For example, your data might be better situated in a highly available multi-region bucket, a lower cost regional bucket, or simply a different region of the world.

This tutorial helps you to select a location that best fits your needs, and then shows you how to set up a new Cloud Storage bucket and move your data to the new location using Storage Transfer Service.

Objectives

  • Choose a new location for the Cloud Storage data.
  • Define a transfer strategy.
  • Transfer your data to its new location.

Costs

In this document, you use the following billable components of Google Cloud: Cloud Storage and Storage Transfer Service.

From April 2, 2022 to December 31, 2022, Storage Transfer Service is suspending many of the transfer costs that normally accrue when using the service. This temporary suspension of charges is intended to help you migrate data within Cloud Storage to locations that best align with your use cases.

After December 31, 2022, standard Cloud Storage pricing applies when using Storage Transfer Service.

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Enable the Cloud Storage and Storage Transfer Service APIs.

    Enable the APIs

  5. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  6. In the Google Cloud console, go to the IAM & Admin page to give your account the roles of Storage Admin and Access Context Manager Admin.

    Go to the IAM & Admin page

    The Storage Admin role has the following permissions:

    • firebase.projects.get
    • resourcemanager.projects.get
    • resourcemanager.projects.list
    • storage.buckets.*
    • storage.objects.*
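If you prefer to work from the command line, the setup steps above can also be sketched in Cloud Shell. The snippet below is a sketch, not an exact recipe: `PROJECT_ID` and `USER_EMAIL` are placeholders for your own project and account, and the Access Context Manager Admin role ID (`roles/accesscontextmanager.policyAdmin`) is assumed to be the one you need for your organization's access policies.

```shell
# Set the project to work in (PROJECT_ID is a placeholder).
gcloud config set project PROJECT_ID

# Enable the Cloud Storage and Storage Transfer Service APIs.
gcloud services enable storage.googleapis.com storagetransfer.googleapis.com

# Grant your account the Storage Admin role
# (USER_EMAIL is a placeholder for your Google Account email).
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/storage.admin"

# Grant the Access Context Manager Admin role.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/accesscontextmanager.policyAdmin"
```

Granting roles from the CLI is equivalent to using the IAM & Admin page in the console; use whichever fits your workflow.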

Choose a new location

When you choose the location for a Cloud Storage bucket, consider the differences in availability, price, and performance, as shown in the following table.

Region

  Availability
  • Data redundancy across availability zones (synchronous)
  • RTO (recovery time objective) = 0: automated failover and failback on zonal failure (no need to change storage paths)

  Performance
  • 200 Gbps (per region, per project)
  • Scalable to many Tbps by requesting higher bandwidth quota

  Pricing
  • Lowest storage price
  • No replication charges
  • No outbound data transfer charges when reading data inside the same region

Dual-region

  Availability
  • Higher availability than regions for a given storage class
  • Data redundancy across regions (asynchronous)
  • Turbo replication option for replication within 15 minutes
  • RTO (recovery time objective) = 0: automated failover and failback on regional failure (no need to change storage paths)

  Performance
  • 200 Gbps (per region, per project)
  • Scalable to many Tbps by requesting higher bandwidth quota

  Pricing
  • Highest storage price
  • Replication charges apply on write
  • No outbound data transfer charges when reading data within either region

Multi-region

  Availability
  • Higher availability than regions for a given storage class
  • Data redundancy across regions (asynchronous)
  • RTO (recovery time objective) = 0: automated failover and failback on regional failure (no need to change storage paths)

  Performance
  • 50 Gbps (per region, per project)
  • Limited performance scaling, variable performance for reads

  Pricing
  • Higher storage price than regions, but lower than dual-regions
  • Replication charges apply on write
  • Outbound data transfer charges always apply when reading data
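Once you have chosen a location type, you can create the destination bucket from Cloud Shell. The following is a sketch: `BUCKET_NAME` is a placeholder, and the location values (`us-central1`, `US`, `NAM4`) are examples; substitute the region, multi-region, or dual-region code you decided on.

```shell
# Regional bucket (lowest storage price, cross-zone redundancy):
gcloud storage buckets create gs://BUCKET_NAME --location=us-central1

# Multi-region bucket (cross-region redundancy, cross-geography access):
gcloud storage buckets create gs://BUCKET_NAME --location=US

# Dual-region bucket, using a predefined dual-region code such as NAM4:
gcloud storage buckets create gs://BUCKET_NAME --location=NAM4
```

A bucket's location is permanent, so create the new bucket in the target location and transfer the data into it rather than trying to change the existing bucket.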

Location recommendations

Region
  • Requirements: optimized latency and bandwidth; lowest data storage cost; cross-zone redundancy
  • Workload examples: analytics; backup and archive

Dual-region1
  • Requirements: optimized latency and bandwidth; cross-region redundancy
  • Workload examples: analytics; backup and archive; disaster recovery

Multi-region
  • Requirements: cross-geography data access; cross-region redundancy
  • Workload examples: content serving

  1. If you need a short and predictable recovery point objective (RPO), enable the premium turbo replication feature.
  • To maximize performance and lower your total cost of ownership, co-locate your data and compute in the same region(s). Regions and dual-regions are both suitable for this purpose.
  • To avoid data replication charges, store short-lived datasets in regions.
  • For moderate performance and ad hoc analytics workloads, multi-region storage can be a cost-effective choice.
  • When transferring to a new bucket, consider whether the current storage class still suits your needs.
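Before deciding, it can help to confirm where your data currently lives. The command below is a sketch (`BUCKET_NAME` is a placeholder); its output includes the bucket's location and default storage class, which you can compare against the recommendations above.

```shell
# Inspect the existing bucket's configuration, including its
# location and default storage class.
gcloud storage buckets describe gs://BUCKET_NAME
```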

Plan and start the transfer

After you've decided on a new location, see Transfer between Cloud Storage buckets to plan your data move.
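As a preview of what that planning leads to, a basic Storage Transfer Service job can be created from the CLI. This is a sketch: `SOURCE_BUCKET` and `DESTINATION_BUCKET` are placeholders, and the delete-from-source option is assumed to fit a move (rather than copy) scenario; review the transfer planning guide before deleting source data.

```shell
# Create a transfer job that copies objects from the source bucket
# to the destination bucket.
gcloud transfer jobs create gs://SOURCE_BUCKET gs://DESTINATION_BUCKET

# To turn the copy into a move, optionally delete objects from the
# source after they have been transferred:
gcloud transfer jobs create gs://SOURCE_BUCKET gs://DESTINATION_BUCKET \
    --delete-from=source-after-transfer
```

After the job starts, you can monitor its progress on the Storage Transfer Service page in the Google Cloud console.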