Configure access to a source: Amazon S3

You can set up access to an Amazon S3 bucket using either of two methods: access credentials (an access key ID and secret access key) or federated identity (an AWS IAM role that Storage Transfer Service assumes). Both methods are described on this page.

Supported regions

Storage Transfer Service can transfer data from the following Amazon S3 regions:

ap-east-1
ap-northeast-1
ap-northeast-2
ap-south-1
ap-southeast-1
ap-southeast-2
ca-central-1
eu-central-1
eu-north-1
eu-west-1
eu-west-2
eu-west-3
me-south-1
sa-east-1
us-east-1
us-east-2
us-west-1
us-west-2

Required permissions

To use Storage Transfer Service to move data from an Amazon S3 bucket, your user account or federated identity role must have the following permissions on the bucket:

Permission | Description | Use
s3:ListBucket | Allows Storage Transfer Service to list objects in the bucket. | Always required.
s3:GetBucketLocation | Allows Storage Transfer Service to get the location of the bucket. | Always required.
s3:GetObject | Allows Storage Transfer Service to read objects in the bucket. | Required if you are transferring the current version of all objects. If your manifest specifies an object version, use s3:GetObjectVersion instead.
s3:GetObjectVersion | Allows Storage Transfer Service to read specific versions of objects in the bucket. | Required if your manifest specifies an object version. Otherwise, use s3:GetObject.
s3:DeleteObject | Allows Storage Transfer Service to delete objects in the bucket. | Required if you set deleteObjectsFromSourceAfterTransfer to true.
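For example, a job that reads from a version-pinned manifest and deletes source objects after transfer would need a custom policy along the lines of the following sketch; adjust the Action list to the rows above that apply to your job, and replace the AWS_BUCKET_NAME placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:GetObjectVersion",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::AWS_BUCKET_NAME",
        "arn:aws:s3:::AWS_BUCKET_NAME/*"
      ]
    }
  ]
}
```

Note that the bucket-level ARN (without /*) is needed for s3:ListBucket and s3:GetBucketLocation, while the object-level ARN (with /*) covers the object actions.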

Authenticate using access credentials

To use an access key ID and secret key to authenticate to AWS:

  1. Create an AWS Identity and Access Management (AWS IAM) user with a name that you can easily recognize, such as transfer-user.

  2. For AWS access type, select Access key - programmatic access.

  3. Grant one of the following roles to the user:

    • AmazonS3ReadOnlyAccess to provide read-only access to the source. This allows transfers but does not support deleting objects at the source once the transfer is complete.
    • AmazonS3FullAccess if your transfer is configured to delete objects at the source.
    • A custom role with the appropriate permissions from the Required permissions table above. The JSON for the minimum permissions looks like the example below:

      {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:ListBucket",
                    "s3:GetBucketLocation"
                ],
                "Resource": [
                    "arn:aws:s3:::AWS_BUCKET_NAME/*",
                    "arn:aws:s3:::AWS_BUCKET_NAME"
                ]
            }
        ]
      }
      
  4. When the user is created, note the access key ID and secret access key.

How you pass the access key ID and secret access key to Storage Transfer Service depends on the interface you use to initiate the transfer.

Cloud console

Enter the values directly into the transfer job creation form.

See Create transfers to get started.

gcloud CLI

Create a JSON file with the following format:

{
  "accessKeyId": "AWS_ACCESS_KEY_ID",
  "secretAccessKey": "AWS_SECRET_ACCESS_KEY"
}

Pass the location of the file to the gcloud transfer jobs create command using the --source-creds-file flag:

gcloud transfer jobs create s3://S3_BUCKET_NAME gs://GCS_BUCKET_NAME \
  --source-creds-file=PATH/TO/KEYFILE.JSON
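If you script job creation, a small helper can write the credentials file in the expected shape. This is a sketch; the write_creds_file name, the temp-file path, and the placeholder values are ours, not part of the gcloud CLI:

```python
import json
import os
import tempfile

def write_creds_file(path, access_key_id, secret_access_key):
    """Write an AWS credentials file in the shape that the
    --source-creds-file flag expects."""
    creds = {
        "accessKeyId": access_key_id,
        "secretAccessKey": secret_access_key,
    }
    with open(path, "w") as f:
        json.dump(creds, f, indent=2)
    return creds

# Demo with placeholder values; real keys come from the AWS console.
path = os.path.join(tempfile.gettempdir(), "aws-creds.json")
write_creds_file(path, "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY")
```

Keep the resulting file out of version control, since it contains a long-lived secret.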

REST API

Your transferSpec object must include the access key as part of the awsS3DataSource object:

"transferSpec": {
  "awsS3DataSource": {
    "bucketName": "AWS_SOURCE_NAME",
    "awsAccessKey": {
      "accessKeyId": "AWS_ACCESS_KEY_ID",
      "secretAccessKey": "AWS_SECRET_ACCESS_KEY"
    }
  },
  "gcsDataSink": {
    "bucketName": "GCS_SINK_NAME"
  }
}
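If you build the request body in code, the nesting of awsAccessKey inside awsS3DataSource is easy to get wrong. A minimal sketch of the spec as a Python dict; the make_transfer_spec helper and the placeholder values are ours:

```python
def make_transfer_spec(source_bucket, sink_bucket,
                       access_key_id, secret_access_key):
    """Build the transferSpec fragment for an access-key transfer."""
    return {
        "transferSpec": {
            "awsS3DataSource": {
                "bucketName": source_bucket,
                # The access key nests inside awsS3DataSource, not at
                # the transferSpec level.
                "awsAccessKey": {
                    "accessKeyId": access_key_id,
                    "secretAccessKey": secret_access_key,
                },
            },
            "gcsDataSink": {"bucketName": sink_bucket},
        }
    }

spec = make_transfer_spec("AWS_SOURCE_NAME", "GCS_SINK_NAME",
                          "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY")
```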

Client libraries

See the examples in the Create transfers page.

Authenticate using federated identity

To use federated identity to authenticate to AWS:

  1. Create a new IAM role in AWS.

  2. Select Custom trust policy as the trusted entity type.

  3. Copy and paste the following trust policy:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": "accounts.google.com"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringEquals": {
              "accounts.google.com:sub": "SUBJECT_ID"
            }
          }
        }
      ]
    }
    
  4. Replace SUBJECT_ID with the subjectId of the Google-managed service account that is automatically created when you start using Storage Transfer Service. To retrieve the subjectId:

    1. Go to the googleServiceAccounts.get reference page.

      An interactive panel opens, titled Try this method.

    2. In the panel, under Request parameters, enter your project ID. The project you specify here must be the project you're using to manage Storage Transfer Service.

    3. Click Execute. The subjectId is included in the response.

  5. Grant one of the following permissions policies to the role:

    • AmazonS3ReadOnlyAccess provides read-only access to the source. This allows transfers but does not support deleting objects at the source once the transfer is complete.
    • AmazonS3FullAccess if your transfer is configured to delete objects at the source.
    • A custom role with the appropriate permissions from the Required permissions table above. The JSON for the minimum permissions looks like the example below:

      {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:ListBucket",
                    "s3:GetBucketLocation"
                ],
                "Resource": [
                    "arn:aws:s3:::AWS_BUCKET_NAME/*",
                    "arn:aws:s3:::AWS_BUCKET_NAME"
                ]
            }
        ]
      }
      
  6. Assign a name to the role and create the role.

  7. After the role is created, view its details to retrieve the Amazon Resource Name (ARN). Note this value; it has the format arn:aws:iam::AWS_ACCOUNT:role/ROLE_NAME.
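
If you automate role creation, the trust policy from step 3 can be templated rather than pasted by hand. A sketch that fills in the subjectId and serializes the result; the render_trust_policy name is ours, and SUBJECT_ID stands in for the real value retrieved in step 4:

```python
import json

def render_trust_policy(subject_id):
    """Build the AWS trust policy that lets the Storage Transfer
    Service Google-managed service account assume the role."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Federated": "accounts.google.com"},
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                    # Restrict the role to the one service account.
                    "StringEquals": {"accounts.google.com:sub": subject_id}
                },
            }
        ],
    }

policy_json = json.dumps(render_trust_policy("SUBJECT_ID"), indent=2)
```

The Condition block is what prevents any other Google identity from assuming the role, so double-check the subjectId before creating the role.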

How you pass the ARN to Storage Transfer Service depends on the interface you use to initiate the transfer.

Cloud console

Enter the ARN directly into the transfer job creation form.

See Create transfers to get started.

gcloud CLI

Create a JSON file with the following format:

{
  "roleArn": "ARN"
}

Pass the location of the file to the gcloud transfer jobs create command using the --source-creds-file flag:

gcloud transfer jobs create s3://S3_BUCKET_NAME gs://GCS_BUCKET_NAME \
  --source-creds-file=PATH/TO/ARNFILE.JSON

REST API

Your transferSpec object must include the role ARN as part of the awsS3DataSource object:

"transferSpec": {
  "awsS3DataSource": {
    "bucketName": "AWS_SOURCE_NAME",
    "roleArn": "ARN"
  },
  "gcsDataSink": {
    "bucketName": "GCS_SINK_NAME"
  }
}
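If you construct this request body in code, a minimal sketch of the spec as a Python dict; the make_role_transfer_spec helper and the placeholder names are ours:

```python
def make_role_transfer_spec(source_bucket, sink_bucket, role_arn):
    """Build the transferSpec fragment for a federated-identity
    transfer, where roleArn replaces the awsAccessKey block."""
    return {
        "transferSpec": {
            "awsS3DataSource": {
                "bucketName": source_bucket,
                "roleArn": role_arn,
            },
            "gcsDataSink": {"bucketName": sink_bucket},
        }
    }

spec = make_role_transfer_spec(
    "AWS_SOURCE_NAME", "GCS_SINK_NAME",
    "arn:aws:iam::AWS_ACCOUNT:role/ROLE_NAME")
```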

Client libraries

See the examples in the Create transfers page.