This page provides an overview of bucket relocation, its benefits, use cases, how it works, and limitations.
Overview
Cloud Storage bucket relocation enables serverless migration of buckets between geographic locations. Using bucket relocation, you can do the following:
Move an existing bucket from one location to another without changing its name or manually transferring the data within the bucket.
Improve performance and cost efficiency by colocating your Cloud Storage buckets with your Compute Engine workloads.
Benefits
The benefits of bucket relocation are as follows:
Simplified migration: You can relocate buckets with minimal operational overhead. No complex scripting or multi-step processes are required.
Continuous operation: Your applications remain accessible throughout the relocation process, with no downtime for read operations and minimal downtime for write operations.
Improved performance: Colocating Compute Engine and Cloud Storage resources within the same region can reduce latency and enhance performance.
Metadata preservation: The bucket relocation process retains the object metadata. Retaining the object metadata ensures compatibility with existing applications and workflows after the bucket is moved.
Storage class preservation: You can maintain your existing storage class settings, including Autoclass. Preserving the storage class ensures that your cost structure remains consistent after the relocation.
Why should you use bucket relocation?
The following are some of the use cases for relocating your buckets:
Reduce data transfer cost: If your data is frequently accessed from a location that is distant from where it's stored, you can relocate your bucket to a location close to where it's accessed from, resulting in data transfer cost reduction. For example, if your data is primarily accessed from Europe, but stored in the United States, you can move your bucket to a Europe location to reduce costs.
Improve performance: You can improve your application's speed and responsiveness by moving your data closer to your Compute Engine instances. For example, if your application runs in us-central1 but your data resides in asia-east1, you can relocate your bucket to us-central1 to reduce latency.
Enhance resilience: You can safeguard your critical data from regional outages. For example, if your data is stored in a single region, you can relocate it to a dual-region or multi-region location for increased availability and disaster recovery.
Relocation types
Whether a bucket relocation involves write downtime depends on the bucket's source and destination locations. For information about how the location impacts the relocation type, see Determine the relocation type of your bucket. The two bucket relocation types are as follows:
Bucket relocation with write downtime: In bucket relocation with write downtime, there is a period where you cannot perform object write operations during the bucket relocation process.
Bucket relocation without write downtime: In bucket relocation without write downtime, you can continue to perform object write operations without interruption while the bucket relocation happens in the background.
The following table describes the important differences between the two relocation types:

| Specification | Bucket relocation with write downtime | Bucket relocation without write downtime |
|---|---|---|
| Write availability | Write operations are unavailable during the final synchronization step. | No interruption to write operations. |
| User involvement | Requires the user to initiate the final synchronization step. | No explicit finalization step is required. |
| Performance impact | Objects in the bucket cannot be written or updated during the final synchronization step. | Potential for increased object read and write latency during the relocation. |
| Cancellation | Faster than cancelling a move without write downtime. | Not instantaneous; can take longer because objects must be backfilled. |
| Feature support | Fewer features are supported than for moves without write downtime. | Limitations apply to features such as multipart uploads, retention policies, Firebase, and appspot. For more information, see Limitations. |
Determine your bucket relocation type
Your source and destination bucket locations determine your relocation type.
When relocating a bucket between regions, dual-regions, or multi-regions, you experience downtime where you cannot write to the bucket. However, you can relocate a bucket without downtime in the following cases:
Relocate from a multi-region to a configurable dual-region if both locations share the same multi-region code.
Relocate between configurable dual-regions if both locations share the same multi-region code.
Relocate from a configurable dual-region to a multi-region if both locations share the same multi-region code.
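The rules above can be condensed into a small decision helper. The following is an illustrative sketch, not an official API; the location "kind" and multi-region code labels are assumptions made for this example:

```python
# Illustrative sketch of the relocation-type rules above. Location kinds
# and multi-region codes are modeled as plain strings; this is not part
# of any Cloud Storage client library.

NO_DOWNTIME_KINDS = {"multi-region", "configurable-dual-region"}

def needs_write_downtime(src_kind, src_code, dst_kind, dst_code):
    """Return True if the relocation involves write downtime.

    A relocation avoids write downtime only when it moves between a
    multi-region and a configurable dual-region (or between two
    configurable dual-regions) that share the same multi-region code.
    """
    if src_code != dst_code:
        return True
    if src_kind not in NO_DOWNTIME_KINDS or dst_kind not in NO_DOWNTIME_KINDS:
        return True
    # At least one side must be a configurable dual-region.
    return "configurable-dual-region" not in (src_kind, dst_kind)
```

For example, moving from a US multi-region to a US configurable dual-region returns False (no write downtime), while moving between two regions returns True.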
Understand the bucket relocation process
Bucket relocation moves your data from a source bucket, which holds the data you want to move, to a destination bucket in the new location.
The following diagram shows the bucket relocation process flow.

The following steps correspond to the numbers in the diagram:
Incremental data copy: The incremental data copy step copies data from the source bucket to the destination bucket. The bucket metadata is write-locked to prevent changes to the bucket that might affect the relocation process. However, you can still write, modify, and delete objects in the bucket. The following factors influence the duration of this step:
- The frequency of object updates, deletions, or additions within the bucket directly impacts the copy duration. A higher rate of change requires more time. There's a maximum object movement rate Rm (objects/second). With N total objects and an update rate of R objects/second, the copy step duration can be estimated as N / (Rm - R) seconds.
- Large buckets require more relocation time due to finite bandwidth.
- The size of individual objects affects the copy time. Objects larger than 10 GB take longer to transfer than objects under 10 GB due to bandwidth constraints. For example, a 1 TB object takes about one day to copy. We recommend breaking large objects into smaller objects of 0.1 to 1 GB each.
For more information about how to initiate the incremental data copy step, see Initiate the incremental data copy step.
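The copy-duration estimate N / (Rm - R) can be expressed as a small helper. The numbers in the example comment are made up for illustration:

```python
def estimate_copy_seconds(total_objects, update_rate, max_move_rate):
    """Estimate the incremental copy duration as N / (Rm - R) seconds.

    total_objects: N, the number of objects in the bucket.
    update_rate: R, objects updated, added, or deleted per second.
    max_move_rate: Rm, the maximum object movement rate (objects/second).
    """
    if update_rate >= max_move_rate:
        raise ValueError("copy cannot converge: update rate >= max move rate")
    return total_objects / (max_move_rate - update_rate)

# For example, 1,000,000 objects with 100 updates/s and a maximum
# movement rate of 500 objects/s:
#   1_000_000 / (500 - 100) = 2500 seconds (about 42 minutes)
```

Note that when the update rate approaches the maximum movement rate, the copy never catches up, which is why a higher rate of change lengthens the step.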
Monitor the incremental data copy: To view the status of the incremental data copy step, you can regularly check the long-running operations list. For information about how to check the status of the incremental data copy step, see Monitor the incremental data copy step.
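Regularly checking the operation status can be sketched as a simple polling loop. The status strings and the fetch_operation_status callable below are assumptions for illustration, not real API values:

```python
import time

def wait_for_incremental_copy(fetch_operation_status, poll_seconds=60):
    """Poll a status callable until the incremental copy finishes.

    fetch_operation_status: a zero-argument callable standing in for a
    real status lookup (for example, inspecting the bucket's
    long-running operations list).
    """
    # Hypothetical terminal states; real operation states may differ.
    terminal = {"READY_FOR_FINALIZATION", "DONE", "FAILED"}
    while True:
        status = fetch_operation_status()
        if status in terminal:
            return status
        time.sleep(poll_seconds)
```

For example, feeding the loop a callable that yields "RUNNING", "RUNNING", then "DONE" returns "DONE" after three polls.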
Final synchronization: For relocations with write downtime, after the incremental data copy is complete, you must trigger the final synchronization step. This step includes a period where you cannot write to the bucket, which ensures data consistency. The final synchronization step includes the following actions:
The bucket is temporarily write-locked. As a result, you cannot write or update any objects within the bucket during this time, preventing data inconsistencies.
Any changes made to the object data within a bucket since the incremental copy step are copied to the destination bucket, ensuring the relocated bucket has the most up-to-date data. Once the object copying finishes, a comparison is performed to ensure data parity between the source and destination buckets. After the data comparison, the bucket's location is updated, and all requests are redirected to the new location.
Once all data has been transferred, verified, and the bucket is operational in the new location, the write-lock is removed. You can then resume writing and updating objects in the bucket.
For information about how to initiate the final synchronization step, see Initiate the final synchronization step.
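The final synchronization actions above (write-lock, delta copy, parity check, location switch, unlock) can be modeled with a toy in-memory bucket. Nothing below is a real client-library type; it only mirrors the ordering of the steps:

```python
class ToyBucket:
    """In-memory stand-in for a bucket; illustrative only."""

    def __init__(self, location, objects=None):
        self.location = location
        self.objects = dict(objects or {})
        self.write_locked = False

    def write(self, name, data):
        if self.write_locked:
            raise RuntimeError("bucket is write-locked during final sync")
        self.objects[name] = data

def final_synchronization(source, destination):
    source.write_locked = True                    # 1. temporary write-lock
    destination.objects = dict(source.objects)    # 2. copy remaining changes
    assert destination.objects == source.objects  # 3. data parity check
    source.location = destination.location        # 4. bucket location updated
    source.write_locked = False                   # 5. write-lock removed
```

After the function returns, writes to the bucket succeed again, matching the end of the write-lock window described above.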
Limitations
The bucket relocation service supports up to five concurrent relocations from the same location within a project.
The following sections describe the limitations that apply to relocations with write downtime and without write downtime.
Relocation with write downtime limitations
Relocation with write downtime has the limitations listed in the following sections.
Data handling limitations
The following are the limitations when handling data during the relocation:
- Table breakage: BigLake external tables and BigQuery tables using Apache Iceberg will break and require manual recreation. Automatic detection of impacted tables is unavailable.
Autoclass object handling: Autoclass uses access patterns to determine when to transition objects to colder storage classes. During final synchronization of the bucket relocation process, Autoclass is paused and objects aren't transitioned to colder storage classes. Once final synchronization is complete, Autoclass resumes.
Objects in a Standard storage class are handled as follows:
- Standard storage class objects have a 30-day no-access period before they can be transitioned to a colder class such as Nearline storage. When an object in the Standard storage class is moved during the relocation, it's treated as if it has been accessed. As a result, the relocation resets the no-access period: even if an object was close to transitioning to Nearline storage before the move, it must wait another 30 days after the relocation completes.
Objects in a non-standard storage class are handled as follows:
Relocating objects in the Nearline, Coldline, or Archive storage classes doesn't count as accessing them. As a result, the no-access period for these objects isn't affected.
If you read or write an object in a non-standard storage class bucket during the relocation, it won't be automatically upgraded to a warmer class like Standard storage, and this helps prevent unnecessary storage class transitions during the relocation process.
If an object was scheduled to be downgraded to a colder storage class such as from Nearline storage to Coldline storage, the relocation process won't interfere with the schedule. The downgrade will proceed as planned after the relocation is finished.
Object size limit: A 2 TB limit applies to object sizes for relocation.
Unsupported features
Buckets using the following features cannot be relocated:
- Customer-managed encryption keys (CMEK) or Customer-supplied encryption keys (CSEK).
- Locked retention policies.
- Objects with temporary holds.
- Multipart uploads. You must complete or abort any unfinished multipart uploads before starting the bucket relocation process.
- Tags. Adding tags during relocation is not recommended as it causes the relocation process to fail.
- Appspot buckets. Consider migrating Container Registry to Artifact Registry as a workaround for default buckets created by App Engine.
- Firebase buckets. You cannot relocate buckets associated with Firebase.
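The unsupported features above amount to a preflight checklist. The following sketch checks a hand-filled summary of a bucket's state; the BucketSummary fields are assumptions for illustration, not real client-library attributes:

```python
from dataclasses import dataclass

@dataclass
class BucketSummary:
    """Hand-filled summary of bucket state; fields are illustrative."""
    uses_cmek_or_csek: bool = False
    retention_policy_locked: bool = False
    has_temporary_holds: bool = False
    unfinished_multipart_uploads: int = 0
    is_firebase_or_appspot: bool = False

def relocation_blockers(summary):
    """Return the features that block a relocation with write downtime."""
    blockers = []
    if summary.uses_cmek_or_csek:
        blockers.append("CMEK/CSEK encryption")
    if summary.retention_policy_locked:
        blockers.append("locked retention policy")
    if summary.has_temporary_holds:
        blockers.append("objects with temporary holds")
    if summary.unfinished_multipart_uploads > 0:
        blockers.append("unfinished multipart uploads")
    if summary.is_firebase_or_appspot:
        blockers.append("Firebase or appspot bucket")
    return blockers
```

An empty result means none of the listed unsupported features were detected; any entries must be resolved before starting the relocation.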
Operational restrictions
Bucket relocation with write downtime has the following operational restrictions:
- Project restriction: You cannot relocate buckets across projects.
- Resumable uploads: In-progress resumable uploads must be finalized before the final synchronization step to avoid data loss.
- Metadata updates: You cannot update a bucket's metadata during relocation.
- Request rate ramp-up: Relocated buckets are subject to the same request rate ramp-up guidelines as newly created buckets.
Relocation without write downtime limitations
Bucket relocation without write downtime has the following limitations:
- Multipart uploads: Unfinished multipart uploads are not supported and must be completed or aborted before relocation. New multipart uploads are blocked during the move.
- Retention policies: All retention policies must be unlocked before relocation.
- Firebase and Appspot buckets: Relocation is not supported for buckets associated with Firebase or Appspot.
- Progress updates: Relocation progress updates might not be linear.
Unsupported region
Bucket relocation isn't available in the me-central1 region for source or destination buckets.
What's next
- Learn how to plan a bucket relocation.
- Learn how to relocate buckets.