This tutorial describes how to harden data transfers from Amazon Simple Storage Service (Amazon S3) to Cloud Storage using Storage Transfer Service with a VPC Service Controls perimeter. This tutorial is intended for data owners who have data that resides in Amazon S3, and who want to process or migrate that data securely to Google Cloud.
This tutorial assumes that you're familiar with Amazon Web Services (AWS) and the fundamentals of working with data in object stores. This tutorial applies a service account-based method of controlling access by using Access Context Manager. For more advanced access levels beyond the service account-based method, see Creating an access level.
The following diagram outlines the VPC Service Controls architecture.
In the preceding diagram, VPC Service Controls explicitly denies communication between Google Cloud services unless both projects are in the controlled perimeter.
- Configure AWS access.
- Create a VPC Service Controls perimeter.
- Create an access policy and access level by using Access Context Manager.
- Use Storage Transfer Service to move data between Amazon S3 and Cloud Storage.
- Configure Storage Transfer Service to retrieve data on a schedule.
This tutorial uses the following billable components of Google Cloud:
There are no extra costs to use Storage Transfer Service; however, Cloud Storage pricing and external provider costs apply when using Storage Transfer Service.
When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up.
In addition to Google Cloud resources, this tutorial uses the following Amazon Web Services (AWS) resources, which might have costs:
Before you begin
Sign in to your Google Account.
If you don't already have one, sign up for a new account.
In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.
Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.
- Enable the Access Context Manager, Cloud Storage, and Storage Transfer Service APIs.
In the Cloud Console, activate Cloud Shell.
At the bottom of the Cloud Console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Cloud SDK already installed, including the `gcloud` command-line tool, and with values already set for your current project. It can take a few seconds for the session to initialize.
- In the Cloud Console, go to the IAM and Admin page to give your account the Storage Admin and Access Context Manager Admin roles.
The Storage Admin role has the following permissions:
The Access Context Manager Admin role has the following permissions:
Configuring AWS access
In this tutorial, you work with existing AWS Identity and Access Management (AWS IAM) users and create an AWS IAM policy to interface with Storage Transfer Service. The policy and users are needed to authenticate your connection to Google Cloud and to help secure your data in transit. This tutorial requires an Amazon S3 bucket to transfer data from; you can use an existing Amazon S3 bucket or create a new one. To avoid affecting production resources, consider using a test or sandbox AWS account.
Create an AWS IAM policy for Storage Transfer Service and apply it to your bucket
- In the AWS Management Console, go to the IAM page.
- Click Policies, and then click Create Policy.
- In the visual editor, click IAM Policy.
- Click S3.
Select the following Access Level checkboxes:
In the Resources pane, click Specific.
In the Bucket pane, click Add ARN.
In the Bucket Name field, enter the name of the bucket where you're transferring data from.
Click Review Policy, and enter a name for the policy, such as `transfer-user-policy`.
Click Create Policy.
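If you prefer the AWS CLI, the same policy can be sketched as follows. The bucket name `my-source-bucket` is a placeholder, and the listed S3 actions are an assumption about the minimal read permissions Storage Transfer Service needs:

```shell
# Sketch only: replace my-source-bucket with your Amazon S3 bucket name.
cat > transfer-user-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::my-source-bucket",
        "arn:aws:s3:::my-source-bucket/*"
      ]
    }
  ]
}
EOF

# Register the policy with AWS IAM under the name used in this tutorial.
aws iam create-policy \
    --policy-name transfer-user-policy \
    --policy-document file://transfer-user-policy.json
```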
Add AWS IAM users to your AWS IAM policy
- In the AWS Management Console, go to the IAM page.
- Click Users, and then click Add User.
- In the Name field, enter a name for the user.
- For Access Type, click Programmatic Access, and then attach the `transfer-user-policy` that you created to the user.
- After you create the user, make a note of the access key ID and secret access key pair because you use them later in this tutorial.
- Click Save.
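The console steps above can also be sketched with the AWS CLI; the user name `transfer-user` and the AWS account ID in the policy ARN are placeholders:

```shell
# Create a programmatic-access user (the name is a placeholder).
aws iam create-user --user-name transfer-user

# Attach the policy created earlier; replace 123456789012 with your AWS account ID.
aws iam attach-user-policy \
    --user-name transfer-user \
    --policy-arn arn:aws:iam::123456789012:policy/transfer-user-policy

# Generate the access key ID / secret access key pair used later in the tutorial.
aws iam create-access-key --user-name transfer-user
```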
Creating a Cloud Storage bucket
Before you can enable your VPC Service Controls perimeter, you need to create a Cloud Storage bucket.
In the Cloud Console, go to the Cloud Storage Browser.
Click Create bucket.
In the Name field, enter a globally unique name for the bucket. In this tutorial, `project-id` represents your Google Cloud project ID.
For the Default storage class for the bucket, click Regional storage.
In the Location drop-down list, click a region where the bucket data is stored.
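Alternatively, you can create the bucket from Cloud Shell with `gsutil`; the bucket name and region below are assumptions:

```shell
# Placeholder name and region; bucket names must be globally unique.
gsutil mb -c regional -l us-central1 gs://project-id-transfer-bucket
```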
Finding the name of your transfer operations service account
You need to determine the name of your service account because it is used later in this tutorial. For more information about Google-managed service accounts, see Service accounts.
- To determine the name of your service account, go to the Storage Transfer Service API page.
In the String field, enter your Google Cloud project ID.
The name of the service account is in the following format, where `project-number` represents your Google Cloud project number:

```
project-project-number@storage-transfer-service.iam.gserviceaccount.com
```
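You can also look up the service account by calling the Storage Transfer Service `googleServiceAccounts.get` method directly; `my-project-id` is a placeholder:

```shell
# Returns the Google-managed service account for the given project.
curl -s \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://storagetransfer.googleapis.com/v1/googleServiceAccounts/my-project-id"
```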
Creating your access policy in Access Context Manager
An access policy collects the service perimeters and access levels you create for your organization. An organization can only have one access policy.
In the Cloud Console, go to the Settings page.
Make a note of your Google Cloud project ID and the organization name.
In Cloud Shell, create a policy:
```
gcloud access-context-manager policies create \
    --organization organization-id --title policy-title
```
- `organization-id` is the organization ID that you found earlier.
- `policy-title` is the title of the policy.
The output is as follows:
```
Create request issued
Waiting for operation [accessPolicies/policy-title/create/policy-number] to complete...done.
Created.
```

`policy-number` represents a unique ID assigned to the policy.
Creating your VPC Service Controls perimeter
When you create the VPC Service Controls perimeter, you start with no traffic allowed in. Then, you create an explicit access level to allow the transfer operation to send data into the controlled perimeter.
In the Cloud Console, go to the VPC Service Controls page.
Click New Perimeter.
In the Name field, enter a name for the perimeter.
Leave Regular Perimeter selected.
Click Add project, and add the project that you're using in this tutorial to the list of projects to protect.
Click Cloud Storage API.
Leave Access Levels at the default value.
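The equivalent perimeter can be sketched with `gcloud`; the perimeter name, project number, and policy ID below are assumptions:

```shell
# Sketch only: replace the perimeter name, project number, and policy ID.
# Note that --resources takes the project *number*, not the project ID.
gcloud access-context-manager perimeters create transfer_perimeter \
    --title="transfer_perimeter" \
    --resources=projects/123456789012 \
    --restricted-services=storage.googleapis.com \
    --policy=policy-number
```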
Creating an access level in the access policy
In this section, you limit access into the VPC through the service account.
In Cloud Shell, create a YAML file called `conditions.yaml` that lists the members that you want to grant access to:

```
- members:
    - serviceAccount:email@example.com
    - user:firstname.lastname@example.org
```
Create the access level:
```
gcloud access-context-manager levels create name \
    --title title \
    --basic-level-spec ~/conditions.yaml \
    --combine-function=OR \
    --policy=policy-name
```
- `name` is the unique name for the access level. It must begin with a letter and include only letters, numbers, and underscores.
- `title` is a title that is unique to the policy.
- `policy-name` is the name of your organization's access policy.
- `combine-function` is set to `OR`. The default value, `AND`, requires that all conditions be met before an access level is granted. The `OR` value gives the members access even if other conditions, such as IP address or conditions inherited from other required access levels, aren't met.
The output is similar to the following:
```
Create request issued for: name
Waiting for operation [accessPolicies/policy-name/accessLevels/name/create/access-level-number] to complete...done.
Created level name.
```

`access-level-number` represents a unique ID assigned to the access level.
Binding the access level to VPC Service Controls
In the Cloud Console, go to VPC Service Controls.
Click Edit for the perimeter that you created.

Click Access Level, and then select the access level that you created earlier.
Now the only operations that are allowed in the controlled perimeter are from the service account that you defined.
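The same binding can be expressed with `gcloud`, assuming the hypothetical perimeter, access level, and policy names used in the earlier sketches:

```shell
# Attach the access level to the perimeter (names are placeholders).
gcloud access-context-manager perimeters update transfer_perimeter \
    --add-access-levels=name \
    --policy=policy-number
```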
Initiating the transfer
In the Cloud Console, go to the Transfer page.
Click Create transfer.
Click Amazon S3 bucket.
In the Amazon S3 bucket field, enter the source Amazon S3 bucket name as it appears in the AWS Management Console.
Enter the Access key ID and Secret key associated with the Amazon S3 bucket. You copied these values at the beginning of this tutorial.
In Select destination, enter the name of the Cloud Storage bucket that you created in your perimeter.
For Configure transfer, schedule your transfer job to Run now.
Optional: Edit the transfer job name.
For Description, enter a unique, descriptive name to help you identify your transfer job later.
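For automation, the same one-time transfer can be created by calling the Storage Transfer Service `transferJobs.create` method. Every value in the request body below is a placeholder sketch:

```shell
# Sketch: create a one-time transfer job through the REST API.
# Replace the project, bucket names, and AWS credentials with your own.
curl -s -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    https://storagetransfer.googleapis.com/v1/transferJobs \
    -d '{
      "projectId": "my-project-id",
      "status": "ENABLED",
      "transferSpec": {
        "awsS3DataSource": {
          "bucketName": "my-source-bucket",
          "awsAccessKey": {
            "accessKeyId": "my-access-key-id",
            "secretAccessKey": "my-secret-access-key"
          }
        },
        "gcsDataSink": {
          "bucketName": "my-destination-bucket"
        }
      },
      "schedule": {
        "scheduleStartDate": {"year": 2024, "month": 1, "day": 1},
        "scheduleEndDate": {"year": 2024, "month": 1, "day": 1}
      }
    }'
```

Setting the schedule end date equal to the start date makes the job run only once.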
Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the project
- In the Cloud Console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.