Configuring Access to Data Sources and Sinks

This page explains how to set up access to the data source and data sink for a transfer that uses the Storage Transfer Service.

Setting up access to the data source

Depending on your transfer's data source, follow the corresponding set of steps:

Google Cloud Storage bucket

To use a Google Cloud Storage bucket as a data source, you must give the service account associated with the Storage Transfer Service permission to view objects in the bucket. If you want the transfer to delete objects from the source bucket after they are copied, you must also grant the service account permission to delete objects in the bucket.

  1. Obtain the email address used for the service account.

    1. Go to the Try it! section of the googleServiceAccounts.get method page.

    2. If the switch beside Authorize requests using OAuth 2.0 is toggled off (appears grey), toggle it to On (appears blue).

      1. When prompted, click Authorize.
    3. In the projectID field, enter the ID of the project that contains your data source bucket.

    4. Click the Execute button.

    5. In the Response section, find and copy the account email.

      The email address is similar to the following: storage-transfer-123456789@partnercontent.gserviceaccount.com

  2. Give this service account email the required permissions to access the data:

    • Reader access to the objects in the bucket, which allows the service account to read each object for transfer.

    • (Optional) Writer access to the bucket, which allows the service account to remove objects from the source bucket after they have been transferred.

    For information on setting ACLs for buckets and objects, see the Setting ACLs guide. A scripted sketch of this procedure follows.
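
If you prefer to script this procedure, the following sketch shows one way to do it with the Google API Python client and the google-cloud-storage library. It is a minimal sketch, not the documented interface: my-source-project and my-source-bucket are placeholder names, and Application Default Credentials are assumed to be configured.

    # Minimal sketch: look up the Storage Transfer Service account for a
    # project, then grant it Reader access to each object in the source
    # bucket and (optionally) Writer access to the bucket itself.
    # 'my-source-project' and 'my-source-bucket' are placeholder names.
    from google.cloud import storage
    from googleapiclient import discovery

    # Step 1: obtain the service account email via googleServiceAccounts.get.
    transfer_api = discovery.build('storagetransfer', 'v1')
    email = transfer_api.googleServiceAccounts().get(
        projectId='my-source-project').execute()['accountEmail']
    print('Service account:', email)

    storage_client = storage.Client(project='my-source-project')

    # Step 2a: Reader access to each existing object, so the service
    # account can read it for transfer.
    for blob in storage_client.list_blobs('my-source-bucket'):
        blob.acl.reload()
        blob.acl.user(email).grant_read()
        blob.acl.save()

    # Step 2b (optional): Writer access to the bucket, so the transfer
    # can delete source objects after they are copied.
    bucket = storage_client.bucket('my-source-bucket')
    bucket.acl.reload()
    bucket.acl.user(email).grant_write()
    bucket.acl.save()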

Amazon S3 bucket

Follow these steps to set up access to an Amazon S3 bucket:

  1. Create an AWS Identity and Access Management (AWS IAM) user with a name that you can easily recognize, such as transfer-user. Ensure the name follows the IAM user name guidelines (see Limitations on IAM Entities and Objects).
  2. Give the AWS IAM user the ability to do the following (a sample policy is sketched after this procedure):

    • List the Amazon S3 bucket
    • Get the location of the bucket
    • Read the objects in the bucket
  3. Create at least one access/secret key pair for each group of transfers that you plan to set up. You can also create a separate access/secret key pair for each transfer.

  4. Restore any objects that are archived to Amazon Glacier. Objects in Amazon S3 that are archived to Amazon Glacier are not accessible until they are restored. For more information, see the Migrating to Google Cloud Storage From Amazon Glacier White Paper.
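
As a sketch of steps 2 and 3, the following snippet uses boto3 to attach a minimal inline policy to the IAM user and to create an access/secret key pair. The user name transfer-user, the policy name, and my-s3-bucket are placeholders, and AWS credentials with IAM administration rights are assumed to be configured in the environment.

    # Minimal sketch: grant an IAM user the three abilities listed above
    # (list the bucket, get its location, read its objects), then create
    # an access/secret key pair. Names are placeholders.
    import json
    import boto3

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # List the bucket and get its location.
                "Effect": "Allow",
                "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
                "Resource": "arn:aws:s3:::my-s3-bucket",
            },
            {   # Read the objects in the bucket.
                "Effect": "Allow",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::my-s3-bucket/*",
            },
        ],
    }

    iam = boto3.client('iam')
    iam.put_user_policy(UserName='transfer-user',
                        PolicyName='storage-transfer-access',
                        PolicyDocument=json.dumps(policy))

    # Create one access/secret key pair for the transfer.
    key = iam.create_access_key(UserName='transfer-user')['AccessKey']
    print(key['AccessKeyId'], key['SecretAccessKey'])

For step 4, a restore of an archived object can be requested with restore_object; the restore runs asynchronously, and the object is not readable until it completes:

    # Sketch: request a 7-day temporary restore of one archived object.
    # 'archived-object-key' is a placeholder.
    s3 = boto3.client('s3')
    s3.restore_object(Bucket='my-s3-bucket', Key='archived-object-key',
                      RestoreRequest={'Days': 7})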

URL list

If your data source is a URL list, ensure that each object on the URL list is publicly accessible.
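
One quick way to check this is to issue an unauthenticated request for each URL, for example with the Python standard library. In this sketch, urls.txt is a placeholder file containing one URL per line:

    # Minimal sketch: verify that each listed URL answers an
    # unauthenticated HEAD request. 'urls.txt' is a placeholder.
    import urllib.error
    import urllib.request

    with open('urls.txt') as f:
        for url in (line.strip() for line in f if line.strip()):
            request = urllib.request.Request(url, method='HEAD')
            try:
                with urllib.request.urlopen(request) as response:
                    print(url, response.status)
            except urllib.error.HTTPError as err:
                print(url, 'NOT ACCESSIBLE:', err.code)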

Setting up access to the data sink

The data sink for your transfer is always a Google Cloud Storage bucket. To use a bucket as a data sink, you must give the service account associated with the Storage Transfer Service permission to create, delete, and list objects in the bucket.

  1. Obtain the email address used for the service account.

    1. Go to the Try it! section of the googleServiceAccounts.get method page.

    2. If the switch beside Authorize requests using OAuth 2.0 is toggled off (appears grey), toggle it to On (appears blue).

      1. When prompted, click Authorize.
    3. In the projectID field, enter the ID of the project that contains your data sink bucket.

    4. Click the Execute button.

    5. In the Response section, find and copy the account email.

      The email address is similar to the following: storage-transfer-123456789@partnercontent.gserviceaccount.com

  2. Give this service account email Writer access to the bucket that is your data sink.

    Writer access allows the service account to create, delete, and list objects in the bucket. For information on setting ACLs for buckets, see the Setting ACLs guide.
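
Both steps can be scripted in the same way as for a source bucket; the following is a minimal sketch, again assuming Application Default Credentials, with my-sink-project and my-sink-bucket as placeholder names.

    # Minimal sketch: look up the service account email, then grant it
    # Writer access to the sink bucket so it can create, delete, and
    # list objects there. Names are placeholders.
    from google.cloud import storage
    from googleapiclient import discovery

    transfer_api = discovery.build('storagetransfer', 'v1')
    email = transfer_api.googleServiceAccounts().get(
        projectId='my-sink-project').execute()['accountEmail']

    bucket = storage.Client(project='my-sink-project').bucket('my-sink-bucket')
    bucket.acl.reload()
    bucket.acl.user(email).grant_write()
    bucket.acl.save()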
