You can set up access to an Amazon S3 bucket using one of two methods: access credentials (an access key ID and secret access key) or federated identity.
Supported regions
Storage Transfer Service can transfer data from the following Amazon S3 regions: af-south-1, ap-east-1, ap-northeast-1, ap-northeast-2, ap-northeast-3, ap-south-1, ap-south-2, ap-southeast-1, ap-southeast-2, ap-southeast-3, ca-central-1, eu-central-1, eu-central-2, eu-north-1, eu-south-1, eu-south-2, eu-west-1, eu-west-2, eu-west-3, me-central-1, me-south-1, sa-east-1, us-east-1, us-east-2, us-west-1, us-west-2.
Required permissions
To use Storage Transfer Service to move data from an Amazon S3 bucket, your user account or federated identity role must have the following permissions on the bucket:
| Permission | Description | Use |
|---|---|---|
| s3:ListBucket | Allows Storage Transfer Service to list objects in the bucket. | Always required. |
| s3:GetBucketLocation | Allows Storage Transfer Service to get the location of the bucket. | Always required. |
| s3:GetObject | Allows Storage Transfer Service to read objects in the bucket. | Required if you are transferring the current version of all objects. If your manifest specifies an object version, use s3:GetObjectVersion instead. |
| s3:GetObjectVersion | Allows Storage Transfer Service to read specific versions of objects in the bucket. | Required if your manifest specifies an object version. Otherwise, use s3:GetObject. |
| s3:DeleteObject | Allows Storage Transfer Service to delete objects in the bucket. | Required if you set deleteObjectsFromSourceAfterTransfer to true. |
Authenticate using access credentials
To use an access key ID and secret key to authenticate to AWS:
1. Create an AWS Identity and Access Management (AWS IAM) user with a name that you can easily recognize, such as transfer-user.
2. For AWS access type, select Access key - programmatic access.
3. Grant one of the following roles to the user:
- AmazonS3ReadOnlyAccess, to provide read-only access to the source. This allows transfers but does not support deleting objects at the source once the transfer is complete.
- AmazonS3FullAccess, if your transfer is configured to delete objects at the source.
- A custom role with the appropriate permissions from the Required permissions table above. The JSON for the minimum permissions looks like the following example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::AWS_BUCKET_NAME/*",
        "arn:aws:s3:::AWS_BUCKET_NAME"
      ]
    }
  ]
}
4. Note the access key ID and secret access key when the user is successfully created.
How you pass the access key ID and secret access key to Storage Transfer Service depends on the interface you use to initiate the transfer.
Cloud console
Enter the values directly into the transfer job creation form.
See Create transfers to get started.
gcloud CLI
Create a JSON file with the following format:
{
"accessKeyId": "AWS_ACCESS_KEY_ID",
"secretAccessKey": "AWS_SECRET_ACCESS_KEY"
}
Pass the location of the file to the gcloud transfer jobs create command using the --source-creds-file flag:
gcloud transfer jobs create s3://S3_BUCKET_NAME gs://GCS_BUCKET_NAME \
--source-creds-file=PATH/TO/KEYFILE.JSON
REST API
Your transferSpec object must contain the key info as part of the awsS3DataSource object:
"transferSpec": {
"awsS3DataSource": {
"bucketName": "AWS_SOURCE_NAME",
"awsAccessKey": {
"accessKeyId": "AWS_ACCESS_KEY_ID",
"secretAccessKey": "AWS_SECRET_ACCESS_KEY"
}
},
"gcsDataSink": {
"bucketName": "GCS_SINK_NAME"
}
}
Client libraries
See the examples in the Create transfers page.
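As a rough sketch of what the client-library call looks like with access credentials, the following Python example uses the google-cloud-storage-transfer library. The project ID and bucket names are placeholders; treat this as an illustration under those assumptions, and refer to the Create transfers page for the authoritative samples.

from google.cloud import storage_transfer

def create_aws_transfer_job(
    project_id: str, access_key_id: str, secret_access_key: str
):
    # Build a transfer job from an S3 bucket to a Cloud Storage bucket,
    # authenticating to AWS with an access key pair.
    client = storage_transfer.StorageTransferServiceClient()
    transfer_job = {
        "project_id": project_id,
        "status": storage_transfer.TransferJob.Status.ENABLED,
        "transfer_spec": {
            "aws_s3_data_source": {
                "bucket_name": "AWS_SOURCE_NAME",  # placeholder
                "aws_access_key": {
                    "access_key_id": access_key_id,
                    "secret_access_key": secret_access_key,
                },
            },
            "gcs_data_sink": {"bucket_name": "GCS_SINK_NAME"},  # placeholder
        },
    }
    return client.create_transfer_job({"transfer_job": transfer_job})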
Authenticate using federated identity
To use federated identity to authenticate to AWS:
1. Create a new IAM role in AWS.
2. Select Custom trust policy as the trusted entity type.
3. Copy and paste the following trust policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "accounts.google.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "accounts.google.com:sub": "SUBJECT_ID"
        }
      }
    }
  ]
}
4. Replace SUBJECT_ID with the subjectId of the Google-managed service account that is automatically created when you start using Storage Transfer Service. To retrieve the subjectId:
- Go to the googleServiceAccounts.get reference page. An interactive panel opens, titled Try this method.
- In the panel, under Request parameters, enter your project ID. The project you specify here must be the project you're using to manage Storage Transfer Service.
- Click Execute. The subjectId is included in the response.
You can also retrieve the subjectId programmatically; see the sketch after this procedure.
5. Grant one of the following permissions policies to the role:
- AmazonS3ReadOnlyAccess, which provides read-only access to the source. This allows transfers but does not support deleting objects at the source once the transfer is complete.
- AmazonS3FullAccess, if your transfer is configured to delete objects at the source.
- A custom role with the appropriate permissions from the Required permissions table above. The JSON for the minimum permissions looks like the following example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::AWS_BUCKET_NAME/*",
        "arn:aws:s3:::AWS_BUCKET_NAME"
      ]
    }
  ]
}
6. Assign a name to the role and create the role.
7. Once the role is created, view the role details to retrieve the Amazon Resource Name (ARN). Note this value; it has the format arn:aws:iam::AWS_ACCOUNT:role/ROLE_NAME.
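To retrieve the subjectId programmatically instead of through the interactive panel, the following minimal Python sketch calls the same googleServiceAccounts.get REST method. It assumes the google-auth library is installed and that Application Default Credentials are configured for the project that manages Storage Transfer Service.

import google.auth
from google.auth.transport.requests import AuthorizedSession

# Assumption: Application Default Credentials are set up for the project
# you use to manage Storage Transfer Service.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

# googleServiceAccounts.get returns the Google-managed service account
# for the project, including its subjectId.
response = session.get(
    "https://storagetransfer.googleapis.com/v1/googleServiceAccounts/"
    + project_id
)
response.raise_for_status()
print(response.json()["subjectId"])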
How you pass the ARN to Storage Transfer Service depends on the interface you use to initiate the transfer.
Cloud console
Enter the ARN directly into the transfer job creation form.
See Create transfers to get started.
gcloud CLI
Create a JSON file with the following format:
{
"roleArn": "ARN"
}
Pass the location of the file to the gcloud transfer jobs create command using the --source-creds-file flag:
gcloud transfer jobs create s3://S3_BUCKET_NAME gs://GCS_BUCKET_NAME \
--source-creds-file=PATH/TO/ARNFILE.JSON
REST API
Your transferSpec object must contain the ARN info as part of the awsS3DataSource object:
"transferSpec": {
"awsS3DataSource": {
"bucketName": "AWS_SOURCE_NAME",
"roleArn": "ARN"
},
"gcsDataSink": {
"bucketName": "GCS_SINK_NAME"
}
}
Client libraries
See the examples in the Create transfers page.
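For federated identity, the same client-library sketch shown earlier applies, with the aws_access_key block replaced by the role's ARN. A minimal Python variant, again with placeholder names and using the google-cloud-storage-transfer library:

from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()
transfer_job = {
    "project_id": "PROJECT_ID",  # placeholder
    "status": storage_transfer.TransferJob.Status.ENABLED,
    "transfer_spec": {
        "aws_s3_data_source": {
            "bucket_name": "AWS_SOURCE_NAME",  # placeholder
            # Authenticate by assuming the AWS role instead of passing keys.
            "role_arn": "arn:aws:iam::AWS_ACCOUNT:role/ROLE_NAME",
        },
        "gcs_data_sink": {"bucket_name": "GCS_SINK_NAME"},  # placeholder
    },
}
result = client.create_transfer_job({"transfer_job": transfer_job})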
IP restrictions
If your AWS project uses IP restrictions for access to storage, you must add the IP ranges used by Storage Transfer Service workers to your list of allowed IPs.
Because these IP ranges can change, we publish the current values as a JSON file at a permanent address:
https://www.gstatic.com/storage-transfer-service/ipranges.json
When a new range is added to the file, we'll wait at least 7 days before using that range for requests from Storage Transfer Service.
We recommend that you pull data from this document at least weekly to keep your security configuration up to date. For a sample Python script that fetches IP ranges from a JSON file, see this article from the Virtual Private Cloud documentation.
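As a starting point, here is a minimal Python sketch that fetches and prints the current ranges. It assumes the file follows the same schema as Google's other ipranges.json files, with a top-level prefixes list whose entries carry ipv4Prefix or ipv6Prefix keys; verify the schema against the published file before relying on it.

import json
import urllib.request

IP_RANGES_URL = "https://www.gstatic.com/storage-transfer-service/ipranges.json"

def fetch_transfer_ip_ranges():
    # Assumption: the file has a top-level "prefixes" list whose entries
    # contain "ipv4Prefix" or "ipv6Prefix" keys, like Google's other
    # ipranges.json files.
    with urllib.request.urlopen(IP_RANGES_URL) as response:
        data = json.load(response)
    cidrs = []
    for prefix in data.get("prefixes", []):
        cidr = prefix.get("ipv4Prefix") or prefix.get("ipv6Prefix")
        if cidr:
            cidrs.append(cidr)
    return cidrs

if __name__ == "__main__":
    print("\n".join(fetch_transfer_ip_ranges()))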
To add these ranges as allowed IPs, use the Condition field in a bucket policy, as described in the AWS S3 documentation: Managing access based on specific IP addresses.
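One way to apply the ranges is to script the bucket-policy update. The following Python sketch uses boto3 to attach an Allow statement restricted by an IpAddress condition. The bucket name, role ARN, and policy shape are illustrative assumptions rather than a prescribed configuration, and put_bucket_policy replaces any existing policy, so merge these statements with the ones you already have.

import json
import urllib.request

import boto3  # assumes AWS credentials are configured locally

BUCKET = "AWS_BUCKET_NAME"  # placeholder
ROLE_ARN = "arn:aws:iam::AWS_ACCOUNT:role/ROLE_NAME"  # placeholder

# Fetch the current Storage Transfer Service worker ranges (see the
# earlier sketch for the assumed schema).
with urllib.request.urlopen(
    "https://www.gstatic.com/storage-transfer-service/ipranges.json"
) as response:
    prefixes = json.load(response).get("prefixes", [])
ranges = [p.get("ipv4Prefix") or p.get("ipv6Prefix") for p in prefixes]
ranges = [cidr for cidr in ranges if cidr]

# Illustrative policy: allow the transfer permissions for the role only
# when requests originate from the published IP ranges.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ROLE_ARN},
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:GetBucketLocation",
            ],
            "Resource": [
                "arn:aws:s3:::" + BUCKET,
                "arn:aws:s3:::" + BUCKET + "/*",
            ],
            "Condition": {"IpAddress": {"aws:SourceIp": ranges}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))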