You can set up access to an Amazon S3 bucket using either of two methods: access credentials or federated identity.
Supported regions
Storage Transfer Service can transfer data from the following Amazon S3 regions: af-south-1, ap-east-1, ap-northeast-1, ap-northeast-2, ap-northeast-3, ap-south-1, ap-south-2, ap-southeast-1, ap-southeast-2, ap-southeast-3, ca-central-1, eu-central-1, eu-central-2, eu-north-1, eu-south-1, eu-south-2, eu-west-1, eu-west-2, eu-west-3, me-central-1, me-south-1, sa-east-1, us-east-1, us-east-2, us-west-1, us-west-2.
Required permissions
To use Storage Transfer Service to move data from an Amazon S3 bucket, your user account or federated identity role must have the appropriate permissions on the bucket:
| Permission | Description | Use |
|---|---|---|
| s3:ListBucket | Allows Storage Transfer Service to list objects in the bucket. | Always required. |
| s3:GetObject | Allows Storage Transfer Service to read objects in the bucket. | Required if you are transferring the current version of all objects. If your manifest specifies an object version, use s3:GetObjectVersion instead. |
| s3:GetObjectVersion | Allows Storage Transfer Service to read specific versions of objects in the bucket. | Required if your manifest specifies an object version. Otherwise, use s3:GetObject. |
| s3:DeleteObject | Allows Storage Transfer Service to delete objects in the bucket. | Required if you set deleteObjectsFromSourceAfterTransfer to true. |
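Taken together, a custom IAM policy that grants all four permissions from the table might look like the following sketch (AWS_BUCKET_NAME is a placeholder; drop s3:GetObjectVersion and s3:DeleteObject if your transfer doesn't need them). Note that s3:ListBucket applies to the bucket ARN, while the object-level actions apply to the object ARN:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::AWS_BUCKET_NAME"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::AWS_BUCKET_NAME/*"
    }
  ]
}
```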
Authenticate using access credentials
To use an access key ID and secret key to authenticate to AWS:
Create an AWS Identity and Access Management (AWS IAM) user with a name that you can easily recognize, such as transfer-user.
For AWS access type, select Access key - programmatic access.
Grant one of the following roles to the user:
- AmazonS3ReadOnlyAccess to provide read-only access to the source. This allows transfers, but does not support deleting objects at the source once the transfer is complete.
- AmazonS3FullAccess if your transfer is configured to delete objects at the source.
- A custom role with the appropriate permissions from the Required permissions table above. The JSON for the minimum permissions looks like the example below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::AWS_BUCKET_NAME/*",
        "arn:aws:s3:::AWS_BUCKET_NAME"
      ]
    }
  ]
}
Note the access key ID and secret access key when the user is successfully created.
How you pass the access key ID and secret access key to Storage Transfer Service depends on the interface you use to initiate the transfer.
Cloud console
Enter the values directly into the transfer job creation form.
See Create transfers to get started.
gcloud CLI
Create a JSON file with the following format:
{
"accessKeyId": "AWS_ACCESS_KEY_ID",
"secretAccessKey": "AWS_SECRET_ACCESS_KEY"
}
Pass the location of the file to the gcloud transfer jobs create command using the --source-creds-file flag:
gcloud transfer jobs create s3://S3_BUCKET_NAME gs://GCS_BUCKET_NAME \
--source-creds-file=PATH/TO/KEYFILE.JSON
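If you script this step, the credentials file holds a long-lived secret, so it is worth restricting its permissions to the owner. A minimal sketch (the filename and credential values are illustrative):

```python
import json
import os
import stat

# Hypothetical credentials; substitute your real values.
creds = {
    "accessKeyId": "AWS_ACCESS_KEY_ID",
    "secretAccessKey": "AWS_SECRET_ACCESS_KEY",
}

path = "keyfile.json"  # illustrative filename

# Write the file, then restrict it to owner read/write only,
# since it holds a long-lived secret.
with open(path, "w") as f:
    json.dump(creds, f, indent=2)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

print(oct(os.stat(path).st_mode & 0o777))  # 0o600
```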
REST API
Your transferSpec object must contain the key info as part of the awsS3DataSource object:
"transferSpec": {
"awsS3DataSource": {
"bucketName": "AWS_SOURCE_NAME",
"awsAccessKey": {
"accessKeyId": "AWS_ACCESS_KEY_ID",
"secretAccessKey": "AWS_SECRET_ACCESS_KEY"
}
},
"gcsDataSink": {
"bucketName": "GCS_SINK_NAME"
}
}
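If you assemble this request body in code, a quick offline sketch (bucket and key names are placeholders) that builds the spec and round-trips it through the JSON serializer — useful for catching structural mistakes such as trailing commas before you send the request:

```python
import json

def make_transfer_spec(src_bucket, sink_bucket, access_key_id, secret):
    """Build the transferSpec fragment for an access-key transfer."""
    return {
        "transferSpec": {
            "awsS3DataSource": {
                "bucketName": src_bucket,
                "awsAccessKey": {
                    "accessKeyId": access_key_id,
                    "secretAccessKey": secret,
                },
            },
            "gcsDataSink": {"bucketName": sink_bucket},
        }
    }

body = make_transfer_spec("AWS_SOURCE_NAME", "GCS_SINK_NAME",
                          "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY")
# Round-trip through the serializer to confirm the body is valid JSON.
assert json.loads(json.dumps(body)) == body
```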
Client libraries
See the examples in the Create transfers page.
Save your access credentials in Secret Manager
Secret Manager is a secure service that stores and manages sensitive data such as passwords. It uses strong encryption, role-based access control, and audit logging to protect your secrets.
Storage Transfer Service can use Secret Manager to protect your AWS access credentials. You load your credentials into Secret Manager, then pass the secret's resource name to Storage Transfer Service.
Enable the API
Enable the Secret Manager API.
Configure additional permissions
User permissions
The user creating the secret requires the following role:
- Secret Manager Admin (roles/secretmanager.admin)
Learn how to grant a role.
Service agent permissions
The Storage Transfer Service service agent requires the following IAM role:
- Secret Manager Secret Accessor (roles/secretmanager.secretAccessor)
To grant the role to your service agent:
Cloud console
Follow the instructions to retrieve your service agent email.
Go to the IAM page in the Google Cloud console.
Click Grant access.
In the New principals text box, enter the service agent email.
In the Select a role drop-down, search for and select Secret Manager Secret Accessor.
Click Save.
gcloud
Use the gcloud projects add-iam-policy-binding command to add the IAM role to your service agent.
Follow the instructions to retrieve your service agent email.
From the command line, enter the following command:
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member='serviceAccount:SERVICE_AGENT_EMAIL' \
  --role='roles/secretmanager.secretAccessor'
Create a secret
Create a secret with Secret Manager:
Cloud console
Go to the Secret Manager page in the Google Cloud console.
Click Create secret.
Enter a name.
In the Secret value text box, enter your credentials in the following format:
{
  "accessKeyId": "AWS_ACCESS_KEY_ID",
  "secretAccessKey": "AWS_SECRET_ACCESS_KEY"
}
Click Create secret.
Once the secret has been created, note the secret's full resource name:
Select the Overview tab.
Copy the value of Resource ID. It uses the following format:
projects/1234567890/secrets/SECRET_NAME
gcloud
To create a new secret using the gcloud command-line tool, pass the JSON-formatted credentials to the gcloud secrets create command:
printf '{
"accessKeyId": "AWS_ACCESS_KEY_ID",
"secretAccessKey": "AWS_SECRET_ACCESS_KEY"
}' | gcloud secrets create SECRET_NAME --data-file=-
Retrieve the secret's full resource name:
gcloud secrets describe SECRET_NAME
Note the value of name in the response. It uses the following format:
projects/1234567890/secrets/SECRET_NAME
For more details about creating and managing secrets, refer to the Secret Manager documentation.
Pass your secret to the job creation command
Using Secret Manager with Storage Transfer Service requires using the REST API to create a transfer job.
Pass the Secret Manager resource name as the value of the transferSpec.awsS3DataSource.credentialsSecret field:
POST https://storagetransfer.googleapis.com/v1/transferJobs
{
"description": "Transfer with Secret Manager",
"status": "ENABLED",
"projectId": "PROJECT_ID",
"transferSpec": {
"awsS3DataSource": {
"bucketName": "AWS_BUCKET_NAME",
      "credentialsSecret": "SECRET_RESOURCE_ID"
},
"gcsDataSink": {
"bucketName": "CLOUD_STORAGE_BUCKET_NAME"
}
}
}
Authenticate using federated identity
To use federated identity to authenticate to AWS:
Create a new IAM role in AWS.
Select Custom trust policy as the trusted entity type.
Copy and paste the following trust policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "accounts.google.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "accounts.google.com:sub": "SUBJECT_ID"
        }
      }
    }
  ]
}
Replace SUBJECT_ID with the subjectId of the Google-managed service account that is automatically created when you start using Storage Transfer Service. To retrieve the subjectId:
Go to the googleServiceAccounts.get reference page. An interactive panel opens, titled Try this method.
In the panel, under Request parameters, enter your project ID. The project you specify here must be the project you're using to manage Storage Transfer Service.
Click Execute. The subjectId is included in the response.
Grant one of the following permissions policies to the role:
- AmazonS3ReadOnlyAccess to provide read-only access to the source. This allows transfers, but does not support deleting objects at the source once the transfer is complete.
- AmazonS3FullAccess if your transfer is configured to delete objects at the source.
- A custom role with the appropriate permissions from the Required permissions table above. The JSON for the minimum permissions looks like the example below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::AWS_BUCKET_NAME/*",
        "arn:aws:s3:::AWS_BUCKET_NAME"
      ]
    }
  ]
}
Assign a name to the role, then create it.
Once created, view the role details to retrieve the Amazon Resource Name (ARN). Note this value; it uses the following format:
arn:aws:iam::AWS_ACCOUNT:role/ROLE_NAME
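The trust policy shown in the steps above can be templated if you create these roles in more than one project. A minimal sketch (the subject ID shown is fake) that fills in the subjectId:

```python
import json

def make_trust_policy(subject_id: str) -> str:
    """Render the AWS trust policy that lets the Google-managed
    service account assume this role via web identity federation."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Federated": "accounts.google.com"},
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                    "StringEquals": {"accounts.google.com:sub": subject_id}
                },
            }
        ],
    }
    return json.dumps(policy, indent=2)

rendered = make_trust_policy("123456789012345678901")  # fake subjectId
assert '"accounts.google.com:sub": "123456789012345678901"' in rendered
```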
How you pass the ARN to Storage Transfer Service depends on the interface you use to initiate the transfer.
Cloud console
Enter the ARN directly into the transfer job creation form.
See Create transfers to get started.
gcloud CLI
Create a JSON file with the following format:
{
"roleArn": "ARN"
}
Pass the location of the file to the gcloud transfer jobs create command using the --source-creds-file flag:
gcloud transfer jobs create s3://S3_BUCKET_NAME gs://GCS_BUCKET_NAME \
--source-creds-file=PATH/TO/ARNFILE.JSON
REST API
Your transferSpec object must contain the ARN as part of the awsS3DataSource object:
"transferSpec": {
"awsS3DataSource": {
"bucketName": "AWS_SOURCE_NAME",
"roleArn": "ARN"
},
"gcsDataSink": {
"bucketName": "GCS_SINK_NAME"
}
}
Client libraries
See the examples in the Create transfers page.
IP restrictions
If your AWS project uses IP restrictions for access to storage, you must add the IP ranges used by Storage Transfer Service workers to your list of allowed IPs.
Because these IP ranges can change, we publish the current values as a JSON file at a permanent address:
https://www.gstatic.com/storage-transfer-service/ipranges.json
When a new range is added to the file, we'll wait at least 7 days before using that range for requests from Storage Transfer Service.
We recommend that you pull data from this document at least weekly to keep your security configuration up to date. For a sample Python script that fetches IP ranges from a JSON file, see this article from the Virtual Private Cloud documentation.
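As an offline illustration of such a script, the sketch below parses a stand-in for the fetched file. The schema shown (a top-level prefixes list with ipv4Prefix/ipv6Prefix keys) is an assumption based on Google's other published ipranges files — check the live file before relying on it:

```python
import json

# Stand-in for the body fetched from
# https://www.gstatic.com/storage-transfer-service/ipranges.json
# (in production, fetch it with urllib.request or requests).
sample = json.dumps({
    "creationTime": "2024-01-01T00:00:00",
    "prefixes": [
        {"ipv4Prefix": "203.0.113.0/24"},   # documentation-only range
        {"ipv6Prefix": "2001:db8::/32"},    # documentation-only range
    ],
})

def allowed_ranges(body: str) -> list[str]:
    """Extract all CIDR ranges, IPv4 and IPv6, from the JSON body."""
    data = json.loads(body)
    return [
        p.get("ipv4Prefix") or p.get("ipv6Prefix")
        for p in data.get("prefixes", [])
    ]

print(allowed_ranges(sample))  # ['203.0.113.0/24', '2001:db8::/32']
```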
To add these ranges as allowed IPs, use the Condition field in a bucket policy, as described in the AWS S3 documentation: Managing access based on specific IP addresses.
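As an illustration (placeholder bucket name and a documentation-only IP range; see the AWS page above for the authoritative pattern), a bucket policy that denies requests from addresses outside an allow-list might look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::AWS_BUCKET_NAME",
        "arn:aws:s3:::AWS_BUCKET_NAME/*"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": ["203.0.113.0/24"]
        }
      }
    }
  ]
}
```

In this pattern, the allow-list under aws:SourceIp is where the ranges pulled from ipranges.json would go.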