Google Cloud Platform
Cloud Storage

Bucket Locations

In Google Cloud Storage, you create a bucket to store your data. A bucket has three properties that you specify when you create it: a globally unique name, a storage class, and a location where the bucket and its contents are stored. This page describes how to choose a bucket's location.

A bucket location can be a multi-region location or a regional location. Typically, a good location for your bucket balances latency, availability, and bandwidth costs for applications and users of the bucket data. For more information, see Choosing the best location for your data.

Types of locations

Location

A property of a bucket that specifies where data in the bucket is stored, either a multi-region or regional location. When you specify a location, Google will keep it there in accordance with our Service Specific Terms.

Each bucket you create in Google Cloud Storage has a location setting. For example, if you specify the location eu (European Union) when you create bucket A, then bucket A and any objects in bucket A will be stored on servers in the European Union.

Multi-region Location

A location that spans multiple regional locations. You can configure a bucket for all storage classes in the following multi-region locations:

  • asia — Asia Pacific
  • eu — European Union
  • us — United States

Regional Location

A specific geographic location within a multi-region location. You can create a bucket configured for all storage classes in the following regional locations:

  • asia-east1 — Eastern Asia-Pacific
  • europe-west1 — Western Europe
  • us-central1 — Central United States
  • us-east1 — Eastern United States

You can create a bucket configured for Durable Reduced Availability (DRA) Storage only in the following Alpha release regional locations.

  • us-central2 — Central United States
  • us-east2 — Eastern United States
  • us-east3 — Eastern United States
  • us-west1 — Western United States

Zone

An isolated location within a region where you can create Google Compute Engine instances. A zone within a region is independent of other zones in the same region. All Compute Engine instances created within zones in the same region have similar performance accessing Google Cloud Storage.

You can't specify a bucket location as a zone, but regional Google Cloud Storage buckets automatically store multiple copies of objects in optimized zones within the region.

Choosing the best location for your data

Multi-region location

Data stored in a Google Cloud Storage multi-region location is distributed across regions. This allows Google to optimally balance availability, performance, and resource efficiency on your behalf.

If you don't have a specific application or regional need for your data, store your data in a multi-region location that is convenient for you or that serves the majority of your data's users. Additionally, if your application cannot tolerate the possible loss of data in a single region, use a multi-region location. For more information about using multi-region services in Google Cloud Platform, see Geography and Regions.

Regional location

Regional buckets allow you to locate your data in a specific region within a multi-region location. You can use a regional bucket when you need to optimize latency, availability, and network bandwidth costs for an application in a specific region. Data stored in a regional bucket can still be read globally, but if your bucket data will be used primarily by clients outside the region, use a multi-region location instead.

All Google Cloud Storage and Google Cloud Platform resources in a region share the same network fabric, which reduces the latency and improves the bandwidth of other regional resources accessing regional bucket data. Scenarios for using a regional bucket include:

Data Processing and Analysis

You can place your Standard Storage data within the same region as your Compute Engine instances that you use for data analysis and processing. Doing so gives you better performance for data-intensive computations, as opposed to storing your data in a multi-region location. In addition, using a regional bucket in this scenario can reduce network charges.

If your application can tolerate lower data availability, as is often the case for batch processing pipelines or data backups, you can use the Durable Reduced Availability (DRA) Storage class for your regional bucket.

Disaster Recovery or Archiving

With a regional bucket configured for Nearline Storage, you can prepare for a disaster recovery event by placing your data in a region where you would need to recover it, decreasing recovery time and ensuring low latency access. You can use several regional buckets together as part of a coordinated, global disaster recovery infrastructure. For examples, see Disaster Recovery Cookbook.

For an archiving application, you can create a regional Nearline Storage bucket that is located geographically close to the data to be archived, thereby reducing network bandwidth charges.

Web and Mobile Serving

For a serving use case that is latency sensitive, you can use a regional Standard Storage bucket. Doing so offers more predictable performance versus data stored in a multi-region location. If your data is used primarily to serve content for web or mobile clients beyond a single region, then a multi-region location for your data might be better.

Specifying a bucket's location

You can specify a bucket's location when you create the bucket. If you don't specify a location when using gsutil or one of the APIs, the us location is used.

Google Cloud Platform Console

  1. Open the Google Cloud Storage browser in the Google Cloud Platform Console.
  2. Click Create bucket and specify:
    • A bucket name subject to the bucket name requirements.
    • A Storage class, for example, Standard.
    • A Location, for example, European Union.

    New standard bucket in the eu location.

  3. Click Create.

gsutil

Using gsutil mb, include the -l "<location>" flag when creating your bucket. For example, to create a Standard Storage class bucket in the `eu` location, use:

gsutil mb -l "eu" gs://<bucket-name>

Here is a complete list of location flags you can enter in gsutil.

JSON API

Use cURL to call the JSON API buckets insert method. For example, to create a Standard Storage class bucket in the `eu` location, use:

curl -X POST --data-binary @request.json \
     -H 'Content-Type: application/json' \
     -H "Authorization: Bearer <oauth2_token>" \
     https://www.googleapis.com/storage/v1/b?project=<project-id>

The request.json file should contain the name, location, and storage class. For example, this file specifies a Standard Class bucket in the location eu (European Union):

{
 "name": "<bucket-name>",
 "location": "eu",
 "storageClass": "standard"
}

You can get an authorization access token from the OAuth 2.0 Playground. Configure the playground to use your own OAuth credentials.

XML API

Use cURL to call the XML API create bucket method. For example, to create a Standard Storage class bucket in the `eu` location, use:

curl -X PUT --data-binary @request.xml \
     -H "Authorization: Bearer <oauth2_token>" \
     -H "x-goog-project-id: <project-id>" \
     https://storage.googleapis.com/<bucket-name>

Where the request.xml file contains the following information:

<CreateBucketConfiguration>
   <LocationConstraint>eu</LocationConstraint>
   <StorageClass>standard</StorageClass>
</CreateBucketConfiguration>

You can get an authorization access token from the OAuth 2.0 Playground. Configure the playground to use your own OAuth credentials.

Displaying a bucket's location

Google Cloud Platform Console

  1. Open the Google Cloud Storage browser in the Google Cloud Platform Console.
  2. In the bucket list, find the bucket you want to verify, and check its Location value.

    Get bucket location.

gsutil

Using gsutil ls:

gsutil ls -L -b gs://<bucket-name>/

gs://<bucketname>/ :
     Storage class:         standard
     Location constraint:   us
     ...

JSON API

Using cURL and the JSON API buckets get method:

curl -X GET -H "Authorization: Bearer <oauth2_token>" \
     https://www.googleapis.com/storage/v1/b/<bucket-name>?fields=location

The response will look like:

{
 "location": "us"
}

You can get an authorization access token from the OAuth 2.0 Playground. Configure the playground to use your own OAuth credentials.

XML API

Using cURL and the XML API get bucket location method:

curl -X GET -H "Authorization: Bearer <oauth2_token>" \
     -H "x-goog-project-id: <project-id>" \
     https://storage.googleapis.com/<bucket-name>?location

The response will look like:

<LocationConstraint>us</LocationConstraint>

You can get an authorization access token from the OAuth 2.0 Playground. Configure the playground to use your own OAuth credentials.

Changing a bucket's location

You cannot change a bucket's location after it is created. If there is no data in the bucket, you can delete the bucket and re-create it in the new location. If the bucket has data in it, create a new bucket in the new location and then copy the data from the existing bucket to the new bucket.

Moving data between buckets incurs one or more of the following costs for transfers:

  • $0.01 per GB if transfer is between different regions in the same multi-regional location. For example, transfers between US-CENTRAL1 and US-EAST2 incur the stated charge.
  • $0.01 per GB if transfer is between a region and its parent multi-regional location.
  • A tiered charge if transfer is between multi-regional locations, same as egress pricing:
    • $0.12 per GB 0-1 TB
    • $0.11 per GB 1-10 TB
    • $0.08 per GB 10+ TB
  • $0.01 per GB if the transfer is out of the Nearline Storage class. An early deletion charge applies to data deleted less than 30 days after it was written. See Google Cloud Storage Nearline Pricing for details.

If your source and destination buckets are in the same region or the same location and you are not transferring data out of Cloud Storage Nearline, the transfer does not incur any cost. For more information and examples, see pricing.
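As a hypothetical illustration of the tiered charge, consider moving 15 TB between multi-region locations. The figures below use the tiered rates quoted above and assume 1 TB = 1024 GB for billing purposes; the actual unit conversion used in your bill may differ.

```shell
# Hypothetical cost estimate for a 15 TB transfer between multi-region
# locations: first 1 TB at $0.12/GB, next 9 TB at $0.11/GB, and the
# remaining 5 TB at $0.08/GB (assuming 1 TB = 1024 GB).
cost=$(awk 'BEGIN { printf "%.2f", 1024*0.12 + 9*1024*0.11 + 5*1024*0.08 }')
echo "Estimated transfer cost: \$${cost}"
```

Note that most of the charge comes from the first 10 TB; the tiered rates reward larger transfers with a lower marginal cost per GB.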

After you create a new bucket, copy the data into it. The examples below show how to copy data in limited-sized chunks over multiple requests. gsutil cp automatically manages the multiple requests for you. If you use the JSON API rewrite method directly to copy the data, you must loop and call rewrite until all of the data is moved.

gsutil

Use the gsutil cp command to copy from the source bucket to the destination bucket. Make sure you have at least gsutil 4.12.

For example, to copy foo from the source bucket to the destination bucket, you can use:

gsutil cp gs://<source-bucket>/foo gs://<destination-bucket>

See the gsutil cp command for more information about copy options, such as recursively copying directories, buckets, and bucket subdirectories with the -r flag.
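For instance, to copy the entire contents of the source bucket into the destination bucket, you might run a command along these lines (the bucket names are placeholders; substitute your own):

```shell
# Sketch: recursively copy all objects from the old bucket to the new one.
gsutil cp -r gs://<source-bucket>/* gs://<destination-bucket>
```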

JSON API

Using cURL and the JSON API rewrite method you can copy data from a source bucket to a destination bucket. For example, to copy the object foo use:

curl -X POST -H "Authorization: Bearer <oauth2_token>" \
   https://www.googleapis.com/storage/v1/b/<source-bucket>/o/foo/rewriteTo/b/<destination-bucket>/o/foo

If foo is 10 GB, the response to this request will resemble:

{
  "kind": "storage#rewriteResponse",
  "totalBytesRewritten": 1048576,
  "objectSize": 10000000000,
  "done": false,
  "rewriteToken": "token-value"
}

Use rewriteToken in a subsequent request to continue copying data:

curl -X POST -H "Authorization: Bearer <oauth2_token>" \
   -H "Content-Type: application/json" \
   -d '{"rewriteToken": "token-value"}' \
   https://www.googleapis.com/storage/v1/b/<source-bucket>/o/foo/rewriteTo/b/<destination-bucket>/o/foo

When all of the data is copied, the last response has a done property equal to true and there is no rewriteToken property:

{
  "kind": "storage#rewriteResponse",
  "totalBytesRewritten": 10000000000,
  "objectSize": 10000000000,
  "done": true
}
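The request/response cycle above can be sketched as a small shell loop. This is an illustrative sketch only: <oauth2_token>, <source-bucket>, and <destination-bucket> are placeholders, and the jq command-line JSON processor is assumed to be available for parsing the responses.

```shell
#!/bin/sh
# Sketch: call the JSON API rewrite method repeatedly until done=true.
URL="https://www.googleapis.com/storage/v1/b/<source-bucket>/o/foo/rewriteTo/b/<destination-bucket>/o/foo"
BODY="{}"
while true; do
  RESP=$(curl -s -X POST -H "Authorization: Bearer <oauth2_token>" \
       -H "Content-Type: application/json" \
       -d "$BODY" "$URL")
  if [ "$(printf '%s' "$RESP" | jq -r '.done')" = "true" ]; then
    echo "Rewrite complete."
    break
  fi
  # Carry the rewriteToken forward so the next call resumes where this one stopped.
  TOKEN=$(printf '%s' "$RESP" | jq -r '.rewriteToken')
  BODY="{\"rewriteToken\": \"$TOKEN\"}"
done
```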

You can get an authorization access token from the OAuth 2.0 Playground. Configure the playground to use your own OAuth credentials.

If you want to change a bucket's storage class, see Changing Storage Class.

Preventing caching

To improve performance, Google systems may temporarily cache data as it is written to or read from a bucket. You can prevent such delivery-based caching by proxying data transfers through a Google App Engine application located in the same location as your bucket, and additionally by setting appropriate cache-control headers on your Google Cloud Storage objects so that they are not stored in edge caches. For example, the response header Cache-Control: no-cache for an object specifies that the object must not be used to satisfy a subsequent request without successful revalidation. For more information about cache control directives, see RFC 7234: Cache-Control.
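Assuming gsutil is installed, you could set such a header on an existing object with the gsutil setmeta command (the bucket and object names are placeholders):

```shell
# Sketch: mark an object so that caches must revalidate it before reuse.
gsutil setmeta -h "Cache-Control:no-cache" gs://<bucket-name>/<object-name>
```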