This page describes how to relocate buckets from one location to another. For an overview of bucket relocation, see Bucket relocation.
Before you begin
Before you initiate the bucket relocation process, complete the following steps:
- Check the quotas and limits to ensure that the new location has sufficient quota to accommodate the bucket's data.
- Determine the bucket relocation type to understand whether write downtime is required.
- If you use inventory reports, save your inventory report configurations.
Required roles
To get the permissions that you need to relocate buckets from one location to another, ask your administrator to grant you the Storage Admin (`roles/storage.admin`) role for the project.
This role provides a set of permissions that let you relocate buckets from one location to another. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The authenticated user must have the following IAM permissions on the bucket to use this method:

- `storage.buckets.relocate`
- `storage.bucketOperations.get`: You need this permission to view the status of the bucket relocation operation.
- `storage.bucketOperations.list`: You need this permission to view the list of bucket relocation operations.
- `storage.bucketOperations.cancel`: You need this permission to cancel the bucket relocation operation.

The authenticated user might also need the following permissions on the bucket to use this method:

- `storage.buckets.get`: You need this permission to view the metadata of a bucket during the dry run and the incremental data copy of the bucket relocation operation.
- `storage.objects.list` and `storage.objects.get`: You need these permissions to view the list of objects in a bucket that you want to relocate to another location.
You might also be able to get these permissions with custom roles or with other predefined roles. To see which roles are associated with which permissions, see IAM roles for Cloud Storage.
For instructions on granting roles for projects, see Manage access to projects.
Relocate buckets
This section describes the process of relocating Cloud Storage buckets from one location to another with Bucket Relocation. When you relocate a bucket, you initiate the incremental data copy process, monitor it, and then initiate the final synchronization step. For more information about these steps, see Understand the bucket relocation process.
Perform a dry run
To minimize potential issues during the bucket relocation process, we recommend you perform a dry run. A dry run simulates the bucket relocation process without moving data, helping you to catch and resolve issues early on. The dry run checks for the following incompatibilities:
- Customer-managed encryption keys (CMEK) or Customer-supplied encryption keys (CSEK)
- Locked retention policies
- Objects with temporary holds
- Multipart uploads
While a dry run can't identify every possible issue, because some issues only surface during the live migration due to factors such as real-time resource availability, it reduces the risk of encountering time-consuming issues during the actual relocation.
Command line
Perform a dry run of the bucket relocation:

```
gcloud storage buckets relocate gs://BUCKET_NAME --location=LOCATION --dry-run
```

Where:

- `BUCKET_NAME` is the name of the bucket that you want to relocate.
- `LOCATION` is the destination location of the bucket.
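For example, the following command performs a dry run of relocating a hypothetical bucket named `my-bucket` to `us-central1`:

```
gcloud storage buckets relocate gs://my-bucket --location=us-central1 --dry-run
```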
Initiating a dry run starts a long-running operation. The response includes an operation ID and a description of the operation. To know when the dry run completes, track its progress. For information about how to track the progress of the dry run, see Get details of a long-running operation.
If the dry run reveals any issues, address them before proceeding to the Initiate incremental data copy step.
REST APIs
JSON API
1. Have the gcloud CLI installed and initialized, which lets you generate an access token for the `Authorization` header.

2. Create a JSON file that contains the settings for the bucket, which must include the `destinationLocation` and `validateOnly` parameters. See the Buckets: relocate documentation for a complete list of settings. The following are common settings to include:

    ```json
    {
      "destinationLocation": "DESTINATION_LOCATION",
      "destinationCustomPlacementConfig": {
        "dataLocations": [ LOCATIONS, ... ]
      },
      "validateOnly": "true"
    }
    ```

    Where:

    - `DESTINATION_LOCATION` is the destination location of the bucket.
    - `LOCATIONS` is a list of location codes to be used for the configurable dual-region.
    - `validateOnly` is set to `true` to perform a dry run.
3. Use `cURL` to call the JSON API:

    ```
    curl -X POST --data-binary @JSON_FILE_NAME \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/relocate"
    ```

    Where:

    - `JSON_FILE_NAME` is the name of the JSON file you created.
    - `BUCKET_NAME` is the name of the bucket that you want to relocate.
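As a combined sketch, assuming a hypothetical request file named `relocate.json` and a bucket named `my-bucket`, a dry-run request might look like the following:

```
# Write the request body: a dry run of a move to US-CENTRAL1.
cat > relocate.json <<'EOF'
{
  "destinationLocation": "US-CENTRAL1",
  "validateOnly": "true"
}
EOF

# Call the JSON API for the hypothetical bucket my-bucket.
curl -X POST --data-binary @relocate.json \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://storage.googleapis.com/storage/v1/b/my-bucket/relocate"
```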
Initiate incremental data copy
Command line
Initiate the bucket relocation operation:

```
gcloud storage buckets relocate gs://BUCKET_NAME --location=LOCATION
```

Where:

- `BUCKET_NAME` is the name of the bucket that you want to relocate.
- `LOCATION` is the destination location of the bucket.
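For example, using the same hypothetical names as in the dry run, the following command starts relocating `my-bucket` to `us-central1`:

```
gcloud storage buckets relocate gs://my-bucket --location=us-central1
```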
REST APIs
JSON API
1. Have the gcloud CLI installed and initialized, which lets you generate an access token for the `Authorization` header.

2. Create a JSON file that contains the settings for the bucket. See the Buckets: relocate documentation for a complete list of settings. The following are common settings to include:

    ```json
    {
      "destinationLocation": "DESTINATION_LOCATION",
      "destinationCustomPlacementConfig": {
        "dataLocations": [ LOCATIONS, ... ]
      },
      "validateOnly": "false"
    }
    ```

    Where:

    - `DESTINATION_LOCATION` is the destination location of the bucket.
    - `LOCATIONS` is a list of location codes to be used for the configurable dual-region.
    - `validateOnly` is set to `false` to initiate the incremental data copy step of bucket relocation.
3. Use `cURL` to call the JSON API:

    ```
    curl -X POST --data-binary @JSON_FILE_NAME \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/relocate"
    ```

    Where:

    - `JSON_FILE_NAME` is the name of the JSON file you created.
    - `BUCKET_NAME` is the name of the bucket that you want to relocate.
Monitor incremental data copy
The bucket relocation process is a long-running operation that you should monitor to track its progress. You can regularly check the long-running operations list to see the status of the incremental data copy step. For information about how to get the details of a long-running operation, or to list or cancel long-running operations, see Use long-running operations in Cloud Storage.
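As a minimal polling sketch, assuming a hypothetical bucket named `my-bucket` and an operation ID copied from `gcloud storage operations list gs://my-bucket`, you might check the operation periodically until the bucket is ready for finalization. The `metadata.finalizationState` field path mirrors the example output that follows; treat the format projection as an assumption to verify against your own output:

```
# OPERATION_ID is hypothetical; copy it from `gcloud storage operations list`.
OPERATION="projects/_/buckets/my-bucket/operations/OPERATION_ID"

# Poll the relocation operation until finalizationState reaches READY.
while true; do
  STATE=$(gcloud storage operations describe "$OPERATION" \
    --format="value(metadata.finalizationState)")
  echo "finalizationState: $STATE"
  if [ "$STATE" = "READY" ]; then
    break
  fi
  sleep 300  # wait five minutes between checks
done
```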
The following example shows the output generated by an incremental data copy operation:
```
done: false
kind: storage#operation
metadata:
  '@type': type.googleapis.com/google.storage.control.v2.RelocateBucketMetadata
  commonMetadata:
    createTime: '2024-10-21T04:26:59.666Z'
    endTime: '2024-12-29T23:39:53.340Z'
    progressPercent: 99
    requestedCancellation: false
    type: relocate-bucket
    updateTime: '2024-10-21T04:27:03.289Z'
  destinationLocation: US-CENTRAL1
  finalizationState: 'READY'
  progress:
    byteProgressPercent: 100
    discoveredBytes: 200
    remainingBytes: 0
    discoveredObjectCount: 10
    remainingObjectCount: 8
    objectProgressPercent: 100
    discoveredSyncCount: 8
    remainingSyncCount: 0
    syncProgressPercent: 100
  relocationState: SYNCING
  sourceLocation: US
  validateOnly: false
  writeDowntimeExpireTime: '2024-12-30T10:34:01.786Z'
name: projects/_/buckets/my-bucket1/operations/Bar7-1b0khdew@nhenUQRTF_R-Kk4dQ5V1f8fzezkFcPh3XMvlTqJ6xhnqJ1h_QXFIeAirrEqkjgu4zPKSRD6WSSG5UGXil6w
response:
  '@type': type.googleapis.com/google.storage.control.v2.RelocateBucketResponse
selfLink: https://storage.googleusercontent.com/storage/v1_ds/b/my-bucket1/operations/Bar7-1b0khdew@nhenUQRTF_R-Kk4dQ5V1f8fzezkFcPh3XMvlTqJ6xhnqJ1h_QXFIeAirrEqkjgu4zPKSRD6WSSG5UGXil6w
```
The following table provides information about the key fields in the output generated by the incremental data copy operation:
| Field name | Description | Possible values |
|---|---|---|
| `done` | Indicates whether the bucket relocation operation is complete. | `true`, `false` |
| `kind` | Indicates that this resource represents a storage operation. | |
| `metadata` | Provides information about the operation. | |
| `metadata.@type` | Indicates the type of the operation as a bucket relocation. | |
| `metadata.commonMetadata` | Metadata common to all operations. | |
| `metadata.commonMetadata.createTime` | The time the long-running operation was created. | |
| `metadata.commonMetadata.endTime` | The time the long-running operation ended. | |
| `metadata.commonMetadata.progressPercent` | The estimated progress of the long-running operation, as a percentage. | Between `0` and `100`. A value of `-1` means that the progress is unknown or not applicable. |
| `metadata.commonMetadata.requestedCancellation` | Indicates whether the user has requested cancellation of the long-running operation. | `true`, `false` |
| `metadata.commonMetadata.type` | Indicates the type of the long-running operation. | |
| `metadata.commonMetadata.updateTime` | The time the long-running operation was last updated. | |
| `metadata.destinationLocation` | The destination location of the bucket. | |
| `metadata.finalizationState` | Indicates the readiness for initiating the final synchronization step. | |
| `metadata.progress` | Progress details of the relocation operation. | |
| `metadata.progress.byteProgressPercent` | Progress of bytes copied, as a percentage. | Between `0` and `100`. A value of `-1` means that the progress is unknown or not applicable. |
| `metadata.progress.discoveredBytes` | Number of bytes discovered in the source bucket. | |
| `metadata.progress.discoveredObjectCount` | Number of objects discovered in the source bucket. | |
| `metadata.progress.discoveredSyncCount` | Number of object metadata updates discovered in the source bucket. | |
| `metadata.progress.objectProgressPercent` | Progress of objects copied, as a percentage. | Between `0` and `100`. A value of `-1` means that the progress is unknown or not applicable. |
| `metadata.progress.remainingBytes` | Number of bytes remaining to be copied from the source bucket to the destination bucket. | |
| `metadata.progress.remainingObjectCount` | Number of objects remaining to be copied from the source bucket to the destination bucket. | |
| `metadata.progress.remainingSyncCount` | Number of object metadata updates remaining to be synced. | |
| `metadata.progress.syncProgressPercent` | Progress of object metadata updates synced, as a percentage. | Between `0` and `100`. A value of `-1` means that the progress is unknown or not applicable. |
| `metadata.relocationState` | Overall state of the bucket relocation. | |
| `metadata.sourceLocation` | The source location of the bucket. | |
| `metadata.validateOnly` | Indicates whether a dry run of the bucket relocation was initiated. | `true`, `false` |
| `metadata.writeDowntimeExpireTime` | The time the write downtime expires. | |
| `name` | The unique identifier of this relocation operation. Format: `projects/_/buckets/bucket-name/operations/operation-id` | |
| `response` | The response of the operation. | |
| `response.@type` | The type of the response. | |
| `selfLink` | A link to this operation. | |
You might encounter issues due to limitations when interacting with other Cloud Storage features. For more information, see Limitations.
Initiate the final synchronization step
The final synchronization step involves a period during which you cannot perform write operations on the bucket. We recommend that you schedule the final synchronization step at a time that minimizes disruption to your applications.
Before you proceed, confirm that the bucket is fully prepared by checking the `finalizationState` value in the output of the Monitor incremental data copy step. The `finalizationState` value must be `READY` to proceed with the final synchronization step.

If you initiate the final synchronization step prematurely, the command returns the error message `The relocate bucket operation is not ready to advance to finalization running state`, but the relocation process continues. We recommend that you wait until the `progressPercent` value is `99` before initiating the final synchronization step.
Command line
Once the `finalizationState` value is `READY`, initiate the final synchronization step of the Bucket Relocation operation:

```
gcloud storage buckets relocate --finalize --operation=projects/_/buckets/BUCKET_NAME/operations/OPERATION_ID
```
Where:

- `BUCKET_NAME` is the name of the bucket that you want to relocate.
- `OPERATION_ID` is the ID of the long-running operation, which is returned in the response of the methods that you call. For example, the following response is returned from calling `gcloud storage operations list`, and the long-running operation ID is `AbCJYd8jKT1n-Ciw1LCNXIcubwvij_TdqO-ZFjuF2YntK0r74`:

  `name: projects/_/buckets/my-bucket/operations/AbCJYd8jKT1n-Ciw1LCNXIcubwvij_TdqO-ZFjuF2YntK0r74`
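For example, to finalize the relocation of the `my-bucket` bucket using the operation ID shown above:

```
gcloud storage buckets relocate --finalize \
  --operation=projects/_/buckets/my-bucket/operations/AbCJYd8jKT1n-Ciw1LCNXIcubwvij_TdqO-ZFjuF2YntK0r74
```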
Set the `--ttl` flag to have greater control over the relocation process. For example:

```
gcloud storage buckets relocate --finalize --ttl TTL_DURATION --operation=projects/_/buckets/BUCKET_NAME/operations/OPERATION_ID
```
Where:
- `TTL_DURATION` is the time to live (TTL) for the write downtime phase during the relocation process, expressed as a string such as `12h` for 12 hours. The `TTL_DURATION` determines the maximum allowed duration of the write downtime phase. If the write downtime exceeds this limit, the relocation process automatically reverts to the incremental copy step, and write operations to the bucket are re-enabled. The value must be within the range of `6h` (6 hours) to `48h` (48 hours). If not specified, the default value is `12h` (12 hours).
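For example, to allow up to 24 hours of write downtime (a hypothetical value within the allowed range) for the operation shown earlier:

```
gcloud storage buckets relocate --finalize --ttl 24h \
  --operation=projects/_/buckets/my-bucket/operations/AbCJYd8jKT1n-Ciw1LCNXIcubwvij_TdqO-ZFjuF2YntK0r74
```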
REST APIs
JSON API
1. Have the gcloud CLI installed and initialized, which lets you generate an access token for the `Authorization` header.

2. Create a JSON file that contains the settings for bucket relocation. See the Buckets: advanceRelocateBucket documentation for a complete list of settings. The following are common settings to include:

    ```json
    {
      "expireTime": "EXPIRE_TIME",
      "ttl": "TTL_DURATION"
    }
    ```

    Where:

    - `EXPIRE_TIME` is the time the write downtime expires.
    - `TTL_DURATION` is the time to live (TTL) for the write downtime phase during the relocation process, expressed as a string such as `12h` for 12 hours. The `TTL_DURATION` determines the maximum allowed duration of the write downtime phase. If the write downtime exceeds this limit, the relocation process automatically reverts to the incremental copy step, and write operations to the bucket are re-enabled. The value must be within the range of `6h` (6 hours) to `48h` (48 hours). If not specified, the default value is `12h` (12 hours).
3. Use `cURL` to call the JSON API:

    ```
    curl -X POST --data-binary @JSON_FILE_NAME \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/operations/OPERATION_ID/advanceRelocateBucket"
    ```

    Where:

    - `JSON_FILE_NAME` is the name of the JSON file you created.
    - `BUCKET_NAME` is the name of the bucket that you want to relocate.
    - `OPERATION_ID` is the ID of the long-running operation, which is returned in the response of the methods that you call. For example, the following response is returned from calling `Operations: list`, and the long-running operation ID is `AbCJYd8jKT1n-Ciw1LCNXIcubwvij_TdqO-ZFjuF2YntK0r74`.
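As a sketch, a request body that sets only `ttl` (using the `12h` duration string from the example above; verify the accepted duration format against the Buckets: advanceRelocateBucket reference) might be created like this, using a hypothetical file named `advance.json`:

```
cat > advance.json <<'EOF'
{
  "ttl": "12h"
}
EOF
```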
Validate the bucket relocation process
After you initiate a relocation, verify that it completes successfully. This section provides guidance on confirming the successful transfer of data.
Validate the success of the relocation process using the following methods:
- Poll long-running operations: Bucket relocation is a long-running operation. You can poll the long-running operation using the operation ID to monitor the operation's progress and confirm its successful completion by verifying the `success` state. This involves periodically querying the operation's status until it reaches a terminal state. For information about monitoring long-running operations, see Use long-running operations in Cloud Storage.

- Analyze Cloud Audit Logs entries: Cloud Audit Logs provides a detailed record of events and operations in your Google Cloud environment. You can analyze the Cloud Audit Logs entries associated with the relocation to validate its success. Analyze the logs for any errors, warnings, or unexpected behavior that might indicate issues during the transfer. For information about viewing Cloud Audit Logs, see Viewing audit logs.
The following log entries help you determine whether your relocation succeeded or failed:

- Successful relocation: `Relocate bucket succeeded. All existing objects are now in the new placement configuration.`
- Failed relocation: `Relocate bucket has failed. Bucket location remains unchanged.`
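As a starting point, you might search your logs for these messages with the Google Cloud CLI. The filter below is a hypothetical sketch for a bucket named `my-bucket`; adjust the fields to match how your audit logs are structured:

```
# Full-text search for relocation log messages on the bucket my-bucket.
gcloud logging read \
  'resource.type="gcs_bucket" AND resource.labels.bucket_name="my-bucket" AND "Relocate bucket"' \
  --limit=10
```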
You can also use Pub/Sub notifications to set up alerts that notify you when a specific success or failure event appears in the logs. For information about setting up Pub/Sub notifications, see Configure Pub/Sub notifications for Cloud Storage.
Complete the post bucket relocation tasks
After you have successfully relocated your bucket, complete the following steps:
- Optional: Restore any tag-based access controls on your bucket.
- Existing inventory report configurations are not preserved during the relocation process, and you'll need to recreate them manually. For information about creating an inventory report configuration, see Create an inventory report configuration.
- Update your infrastructure-as-code configurations, such as Terraform and the Google Kubernetes Engine Config Connector, to specify the bucket's new location.
- Regional endpoints are tied to specific locations, so you'll need to modify your application code to reflect the new endpoint, as sketched below.
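A minimal sketch, assuming a hypothetical bucket `my-bucket` that moved to `us-central1` and a workload that addresses the regional endpoint directly; the `storage.LOCATION.rep.googleapis.com` hostname form is an assumption to verify against the regional endpoints documentation for your location:

```
# After the move to us-central1, point at the new regional endpoint
# instead of the old location's endpoint.
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storage.us-central1.rep.googleapis.com/storage/v1/b/my-bucket"
```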
Handle failed bucket relocation operations
Consider the following factors before handling failed bucket relocation operations:
- A failed bucket relocation might leave obsolete resources, such as temporary files or incomplete data copies, at the destination. You must wait 7 to 14 days before initiating another bucket relocation to the same destination, but you can initiate a bucket relocation to a different destination immediately.
- If the destination location is not the optimal location for your data, you might want to roll back the relocation. However, you cannot initiate the rollback immediately; a waiting period of up to 14 days is required before you can initiate the relocation process again. This restriction is in place to ensure stability and prevent data conflicts.
What's next
- Learn about bucket relocation.