Configuring Access to Data Sources and Sinks

This page explains how to set up access to the data source and data sink for a data transfer using Storage Transfer Service.

Prerequisites

Service account permissions are granted at the bucket level. You must be able to grant the following roles to your service account:

  • roles/storage.legacyBucketReader for the source and destination buckets.
  • roles/storage.objectAdmin for the destination bucket, and for the source bucket if you intend to delete the source files after the transfer.
  • roles/storage.objectViewer on the source bucket if you don't intend to delete the source files.
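
These roles are attached as bucket-level IAM bindings. As a minimal sketch, assuming the google-cloud-storage Python client library and placeholder bucket and account names, a binding can be added like this:

    from google.cloud import storage

    def grant_bucket_role(bucket_name, member, role):
        """Append a bucket-level IAM binding for the given member and role."""
        client = storage.Client()
        bucket = client.bucket(bucket_name)
        policy = bucket.get_iam_policy(requested_policy_version=3)
        policy.bindings.append({"role": role, "members": {member}})
        bucket.set_iam_policy(policy)

    # Placeholder bucket name and service account email.
    grant_bucket_role(
        "my-source-bucket",
        "serviceAccount:project-123456789@storage-transfer-service.iam.gserviceaccount.com",
        "roles/storage.objectViewer",
    )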

Setting up access to the data source

Google Cloud Storage

To set up access to a Cloud Storage data source, you must give the service account associated with the Storage Transfer Service permission to access the source:

  1. Obtain the email address used for the service account.

    1. Use the Try this API section of the googleServiceAccounts.get method page.

    2. In the projectId field, enter the ID of the project that is creating the transfer job.

    3. Click the Execute button.

    4. In the response that appears, find and copy the value for accountEmail.

      The email value has the form: project-PROJECT_NUMBER@storage-transfer-service.iam.gserviceaccount.com

  2. Give this service account email the required roles to access the data.

    For basic transfer jobs, the Storage Object Viewer role gives the service account the necessary permissions. For advanced data transfers, see IAM permissions for Storage Transfer Service.

    For a step-by-step guide to granting roles for buckets, see Adding a member to a bucket-level policy.
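
If you prefer to script this step rather than use the Try this API page, a sketch along the following lines, assuming the google-api-python-client library with application-default credentials and a placeholder project ID, returns the same accountEmail value:

    from googleapiclient import discovery

    # Build a client for the Storage Transfer API and look up the
    # service account for the project that creates the transfer job.
    service = discovery.build("storagetransfer", "v1")
    account = (
        service.googleServiceAccounts()
        .get(projectId="my-project-id")  # placeholder project ID
        .execute()
    )
    print(account["accountEmail"])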

Amazon S3

Follow these steps to set up access to an Amazon S3 bucket:

  1. Create an AWS Identity and Access Management (AWS IAM) user with a name that you can easily recognize, such as transfer-user. Ensure the name follows the IAM user name guidelines (see Limitations on IAM Entities and Objects).

  2. Give the AWS IAM user the ability to do the following (a sample policy sketch appears after these steps):

    • List the Amazon S3 bucket.
    • Get the location of the bucket.
    • Read the objects in the bucket.
    • (Optional) Delete objects from the source bucket after the data transfer. This requires permission to delete objects (for example, the s3:DeleteObject action).
  3. Create at least one access key/secret key pair for the transfer job that you plan to set up. You can also create a separate key pair for each transfer job.

  4. Restore any objects that are archived to Amazon Glacier. Objects in Amazon S3 that are archived to Amazon Glacier are not accessible until they are restored. For more information, see the Migrating to Cloud Storage From Amazon Glacier White Paper.
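
As a rough sketch of step 2, the policy below grants the list, location, read, and optional delete permissions on a placeholder bucket, attached with boto3 to the transfer-user example user (the policy name and bucket name are illustrative):

    import json

    import boto3

    # s3:DeleteObject is only needed if you delete source objects
    # after the transfer; omit it otherwise.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
                "Resource": "arn:aws:s3:::my-source-bucket",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:DeleteObject"],
                "Resource": "arn:aws:s3:::my-source-bucket/*",
            },
        ],
    }

    iam = boto3.client("iam")
    iam.put_user_policy(
        UserName="transfer-user",
        PolicyName="storage-transfer-access",
        PolicyDocument=json.dumps(policy),
    )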

URL list

If your data source is a URL list, ensure that each object on the URL list is publicly accessible.
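
One quick way to verify this is to issue unauthenticated HEAD requests and confirm each URL returns HTTP 200. A minimal sketch with placeholder URLs (note that some servers reject HEAD, in which case a GET check is needed instead):

    import urllib.request

    urls = [
        "https://example.com/data/file1.csv",  # placeholder URLs
        "https://example.com/data/file2.csv",
    ]

    for url in urls:
        request = urllib.request.Request(url, method="HEAD")
        # urlopen raises urllib.error.HTTPError for 4xx/5xx responses.
        with urllib.request.urlopen(request) as response:
            print(url, response.status)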

Setting up access to the data sink

The data sink for your data transfer is always a Cloud Storage bucket. To use a bucket as a data sink, you must give the service account associated with the Storage Transfer Service permission to access the sink:

  1. Obtain the email address used for the service account.

    1. Use the Try this API section of the googleServiceAccounts.get method page.

    2. In the projectId field, enter the ID of the project that is creating the transfer job.

    3. Click the Execute button.

    4. In the response that appears, find and copy the value for accountEmail.

      The email value has the form: project-PROJECT_NUMBER@storage-transfer-service.iam.gserviceaccount.com

  2. Give this service account email the required roles to use the data sink.

    The Storage Legacy Bucket Writer role gives the service account all necessary permissions. For more information about required permissions, see IAM permissions for Storage Transfer Service.

    For a step-by-step guide to granting roles for buckets, see Adding a member to a bucket-level policy.
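
As with the source bucket, this is a bucket-level IAM binding. A minimal sketch, again assuming the google-cloud-storage Python client library and placeholder bucket and account names:

    from google.cloud import storage

    # Grant the Storage Legacy Bucket Writer role on the destination
    # bucket to the transfer service account (placeholder values).
    client = storage.Client()
    bucket = client.bucket("my-destination-bucket")
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({
        "role": "roles/storage.legacyBucketWriter",
        "members": {
            "serviceAccount:project-123456789@storage-transfer-service.iam.gserviceaccount.com"
        },
    })
    bucket.set_iam_policy(policy)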
