Before transferring data from an Azure Storage container, you must configure access to that container so that Storage Transfer Service can retrieve its objects.
Storage Transfer Service supports the following Azure authentication methods:
- Shared access signature (SAS) tokens. SAS tokens can be specified directly when creating a transfer job, or can be stored in Secret Manager.
- Azure Shared Keys. Shared Keys can be stored in Secret Manager and the secret passed when creating a transfer job.
- Federated identity. Federated credentials are passed in a `federatedIdentityConfig` object during transfer job creation.
This document also includes information on adding Storage Transfer Service worker IP addresses to your Azure Storage firewall to allow access. See IP restrictions for details.
Supported regions
Storage Transfer Service can transfer data from the following Microsoft Azure Storage regions:
- Americas: East US, East US 2, West US, West US 2, West US 3, Central US, North Central US, South Central US, West Central US, Canada Central, Canada East, Brazil South
- Asia-Pacific: Australia Central, Australia East, Australia Southeast, Central India, South India, West India, Southeast Asia, East Asia, Japan East, Japan West, Korea South, Korea Central
- Europe, Middle East, Africa (EMEA): France Central, Germany West Central, Norway East, Sweden Central, Switzerland North, North Europe, West Europe, UK South, UK West, Qatar Central, UAE North, South Africa North
Option 1: Authenticate using a SAS token
Follow these steps to configure access to a Microsoft Azure Storage container using a SAS token. Alternatively, you can store your SAS token in Secret Manager; to do so, follow the instructions in Authenticate using an Azure Shared Key or SAS token in Secret Manager.
1. Create or use an existing Microsoft Azure Storage user to access the storage account for your Microsoft Azure Storage Blob container.
2. Create a SAS token at the container level. See Grant limited access to Azure Storage resources using shared access signatures for instructions.
   - The Allowed services must include Blob.
   - For Allowed resource types, select both Container and Object.
   - The Allowed permissions must include Read and List. If the transfer is configured to delete objects from the source, you must also include the Delete permission.
   - The default expiration time for SAS tokens is 8 hours. Set an expiration time that gives you enough time to complete your transfer.
   - Don't specify any IP addresses in the Allowed IP addresses field. Storage Transfer Service uses various IP addresses and doesn't support IP address restriction.
   - The Allowed protocols should be HTTPS only.
3. Once the token is created, note the SAS token value that is returned. You need this value when configuring your transfer with Storage Transfer Service.
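As a pre-flight check, the constraints above can be verified programmatically before handing the token to Storage Transfer Service. The following sketch is illustrative and not part of either product; it assumes only the standard Azure SAS query-parameter names (`sp` for permissions, `spr` for allowed protocols, `se` for expiry).

```python
from urllib.parse import parse_qs

def check_sas_token(sas_token: str, will_delete_source: bool = False) -> list[str]:
    """Return a list of problems found in a container-level SAS token.

    Illustrative pre-flight check only: it inspects the Azure SAS query
    parameters (sp = permissions, spr = protocol, se = expiry) against the
    requirements listed above, without validating the signature.
    """
    params = parse_qs(sas_token.lstrip("?"))
    problems = []

    # Read and List are always required; Delete only if the transfer
    # is configured to delete objects from the source.
    permissions = params.get("sp", [""])[0]
    for needed in "rl" + ("d" if will_delete_source else ""):
        if needed not in permissions:
            problems.append(f"missing permission {needed!r} in sp={permissions!r}")

    # Allowed protocols should be HTTPS only (spr defaults to https).
    if params.get("spr", ["https"])[0] != "https":
        problems.append("allowed protocols should be HTTPS only (spr=https)")

    # An expiry must be present; whether it leaves enough time for the
    # transfer is up to you.
    if "se" not in params:
        problems.append("no expiry (se) present")
    return problems
```

An empty list means the token passes these basic checks; any returned strings describe what to regenerate the token with.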
Option 2: Authenticate using an Azure Shared Key or SAS token in Secret Manager
Secret Manager is a secure service that stores and manages sensitive data such as passwords. It uses strong encryption, role-based access control, and audit logging to protect your secrets.
Storage Transfer Service supports Secret Manager resource names that reference your securely stored Azure credentials.
To use an Azure Shared Key, you must save the key in Secret Manager. SAS tokens can be saved in Secret Manager or passed directly.
When you specify a Shared Key, Storage Transfer Service uses that key to generate a service SAS that is restricted in scope to the Azure container specified in the transfer job.
Enable the API
Enable the Secret Manager API.
Configure additional permissions
User permissions
The user creating the secret requires the following role:
- Secret Manager Admin (`roles/secretmanager.admin`)
Learn how to grant a role.
Service agent permissions
The Storage Transfer Service service agent requires the following IAM role:
- Secret Manager Secret Accessor (`roles/secretmanager.secretAccessor`)
To grant the role to your service agent:
Cloud console
Follow the instructions to retrieve your service agent email.
Go to the IAM page in the Google Cloud console.
Click Grant access.
In the New principals text box, enter the service agent email.
In the Select a role drop-down, search for and select Secret Manager Secret Accessor.
Click Save.
gcloud
Use the `gcloud projects add-iam-policy-binding` command to add the IAM role to your service agent.
Follow the instructions to retrieve your service agent email.
From the command line, enter the following command:
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member='serviceAccount:SERVICE_AGENT_EMAIL' \
  --role='roles/secretmanager.secretAccessor'
Create a secret
Create a secret with Secret Manager:
Cloud console
Go to the Secret Manager page in the Google Cloud console.
Click Create secret.
Enter a name.
In the Secret value text box, enter your credentials in one of the following formats.
{ "sas_token" : "SAS_TOKEN_VALUE" }
Or:
{ "access_key" : "ACCESS_KEY" }
Click Create secret.
Once the secret has been created, note the secret's full resource name:
Select the Overview tab.
Copy the value of Resource name. It uses the following format:
projects/1234567890/secrets/SECRET_NAME
gcloud
To create a new secret using the gcloud command-line tool, pass the JSON-formatted credentials to the `gcloud secrets create` command:
printf '{
"sas_token" : "SAS_TOKEN_VALUE"
}' | gcloud secrets create SECRET_NAME --data-file=-
Or:
printf '{
"access_key" : "ACCESS_KEY"
}' | gcloud secrets create SECRET_NAME --data-file=-
Retrieve the secret's full resource name:
gcloud secrets describe SECRET_NAME
Note the value of `name` in the response. It uses the following format:
projects/1234567890/secrets/SECRET_NAME
For more details about creating and managing secrets, refer to the Secret Manager documentation.
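If you build this flow into automation, it can help to validate the resource name before using it in a transfer job. The regex below is a plausibility check based on the format shown above, not an official grammar for Secret Manager names.

```python
import re

# Matches the resource-name format shown above:
#   projects/PROJECT_NUMBER/secrets/SECRET_NAME
# A trailing /versions/... suffix is deliberately rejected here, since the
# transfer job expects the secret's own resource name.
_SECRET_NAME_RE = re.compile(r"^projects/[^/]+/secrets/[A-Za-z0-9_-]+$")

def looks_like_secret_resource_name(name: str) -> bool:
    """Quick sanity check for a Secret Manager resource name."""
    return bool(_SECRET_NAME_RE.fullmatch(name))
```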
Pass your secret to the job creation command
To use Secret Manager with Storage Transfer Service, you must create the transfer job using the REST API.
Pass the Secret Manager resource name as the value of the `transferSpec.azureBlobStorageDataSource.credentialsSecret` field:
POST https://storagetransfer.googleapis.com/v1/transferJobs
{
"description": "Transfer with Secret Manager",
"status": "ENABLED",
"projectId": "PROJECT_ID",
"transferSpec": {
"azureBlobStorageDataSource": {
"storageAccount": "AZURE_STORAGE_ACCOUNT_NAME",
"container": "AZURE_CONTAINER_NAME",
"credentialsSecret": "SECRET_RESOURCE_ID"
},
"gcsDataSink": {
"bucketName": "CLOUD_STORAGE_BUCKET_NAME"
}
}
}
See Create transfers for full details about creating a transfer.
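For scripted job creation, the request body above can be assembled as plain JSON. The helper below is a hypothetical convenience wrapper, not part of any client library; you still need to POST the result to `https://storagetransfer.googleapis.com/v1/transferJobs` with an OAuth 2.0 bearer token.

```python
import json

def make_azure_transfer_job(project_id: str, storage_account: str,
                            container: str, secret_name: str,
                            gcs_bucket: str) -> str:
    """Build the JSON body for transferJobs.create using a Secret
    Manager-backed Azure credential (illustrative helper only)."""
    body = {
        "description": "Transfer with Secret Manager",
        "status": "ENABLED",
        "projectId": project_id,
        "transferSpec": {
            "azureBlobStorageDataSource": {
                "storageAccount": storage_account,
                "container": container,
                # Full Secret Manager resource name,
                # e.g. projects/1234567890/secrets/SECRET_NAME
                "credentialsSecret": secret_name,
            },
            "gcsDataSink": {"bucketName": gcs_bucket},
        },
    }
    return json.dumps(body, indent=2)
```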
Option 3: Authenticate using federated identity
Storage Transfer Service supports Azure workload identity federation with Google Cloud. Storage Transfer Service can issue requests to Azure Storage through registered Azure applications, eliminating the need to pass credentials to Storage Transfer Service directly.
To configure federated identity, follow these instructions.
Configure Google Cloud credentials
You must add the Service Account Token Creator (`roles/iam.serviceAccountTokenCreator`) role to the Storage Transfer Service service agent to allow creating OpenID Connect (OIDC) ID tokens for the account.
Retrieve the `accountEmail` and `subjectId` of the Google-managed service agent that is automatically created when you start using Storage Transfer Service. To retrieve these values:
- Go to the `googleServiceAccounts.get` reference page. An interactive panel opens, titled Try this method.
- In the panel, under Request parameters, enter your project ID. The project you specify here must be the project you're using to manage Storage Transfer Service.
- Click Execute. The `accountEmail` and `subjectId` are included in the response. Save these values.
Grant the Service Account Token Creator (`roles/iam.serviceAccountTokenCreator`) role to the Storage Transfer Service service agent. Follow the instructions in Manage access to service accounts.
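The lookup in the steps above can also be done outside the interactive panel. This sketch shows the REST endpoint and the two fields to extract; the response shape assumed here contains only the fields this document names, and the example values in the test are placeholders.

```python
def service_agent_url(project_id: str) -> str:
    """Build the googleServiceAccounts.get endpoint URL.

    Call the returned URL with an OAuth 2.0 bearer token, for example one
    obtained from `gcloud auth print-access-token`.
    """
    return ("https://storagetransfer.googleapis.com/v1/"
            f"googleServiceAccounts/{project_id}")

def extract_identity(response: dict) -> tuple[str, str]:
    """Pull accountEmail and subjectId out of the parsed JSON response."""
    return response["accountEmail"], response["subjectId"]
```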
Configure Microsoft credentials
First, register an application and add a federated credential:
- Sign in to https://portal.azure.com.
- Go to the App registrations page.
- Click New registration.
- Enter a name. For example, `azure-transfer-app`.
- Select Accounts in this organizational directory only.
- Click Register. The application is created. Note the Application (client) ID and the Directory (tenant) ID. You can also retrieve these later from the application's Overview page.
- Click Certificates & secrets and select the Federated credentials tab.
- Click Add credential.
- Select Other issuer as the scenario and enter the following information:
  - Issuer: `https://accounts.google.com`
  - Subject identifier: the `subjectId` of your service agent, which you retrieved in Configure Google Cloud credentials.
  - A unique name for the federated credential.
  - Audience: must remain `api://AzureADTokenExchange`.
- Click Add.
Next, grant the application access to your Azure Storage container:
- Go to the Storage Accounts page in your Azure account.
- Select your storage account and select Containers from the Data storage section.
- Click the container to which you want to grant access.
- Click Access Control (IAM) from the left menu and select the Roles tab.
- Click the overflow (`...`) menu next to any role and select Clone.
- Enter a name for this custom role and select Start from scratch. Click Next.
- Click Add permissions and search for `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read`.
- Click the Microsoft Storage card that appears.
- Select the Data actions radio button.
- Select Read : Read Blob.
- Click Add.
- If the transfer is configured to delete objects from the source, click Add permissions again and search for `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete`.
- Click the Microsoft Storage card that appears, select Data actions, and select Delete : Delete blob.
- Click Add.
- Click Review + create, then Create. You are returned to the container's Access Control (IAM) page.
- Click Add and select Add role assignment.
- From the list of roles, select your custom role and click Next.
- Click Select members.
- In the Select field, enter the name of the application that you previously registered. For example, `azure-transfer-app`.
- Click the application tile and click Select.
- Click Review + assign.
Pass your application identifiers to the job creation command
Your application's identifiers are passed to the job creation command in a `federatedIdentityConfig` object. Copy the Application (client) ID and the Directory (tenant) ID that you saved during the Configure Microsoft credentials steps into the `client_id` and `tenant_id` fields:
"federatedIdentityConfig": {
"client_id": "efghe9d8-4810-800b-8f964ed4057f",
"tenant_id": "abcd1234-c8f0-4cb0-b0c5-ae4aded60078"
}
An example job creation request looks like the following:
POST https://storagetransfer.googleapis.com/v1/transferJobs
{
"description": "Transfer with Azure Federated Identity",
"status": "ENABLED",
"projectId": "PROJECT_ID",
"transferSpec": {
"azureBlobStorageDataSource": {
"storageAccount": "AZURE_STORAGE_ACCOUNT_NAME",
"container": "AZURE_CONTAINER_NAME",
"federatedIdentityConfig": {
"client_id": "AZURE_CLIENT_ID",
"tenant_id": "AZURE_TENANT_ID"
}
},
"gcsDataSink": {
"bucketName": "CLOUD_STORAGE_BUCKET_NAME"
}
}
}
See Create transfers for full details about creating a transfer.
IP restrictions
If you restrict access to your Azure resources using an Azure Storage firewall, you must add the IP ranges used by Storage Transfer Service workers to your list of allowed IPs.
Because these IP ranges can change, we publish the current values as a JSON file at a permanent address:
https://www.gstatic.com/storage-transfer-service/ipranges.json
When a new range is added to the file, we'll wait at least 7 days before using that range for requests from Storage Transfer Service.
We recommend that you retrieve this file at least weekly to keep your security configuration up to date. For a sample Python script that fetches IP ranges from a JSON file, see this article in the Virtual Private Cloud documentation.
To add these ranges as allowed IPs, follow the instructions in the Microsoft Azure article, Configure Azure Storage firewalls and virtual networks.
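A minimal fetch-and-parse sketch follows, assuming the file uses the prefix-list schema common to Google's published ip-range JSON files (a top-level `prefixes` array whose entries carry `ipv4Prefix` or `ipv6Prefix` keys). Verify the field names against the live file before wiring this into your firewall automation.

```python
import json
import urllib.request

IP_RANGES_URL = "https://www.gstatic.com/storage-transfer-service/ipranges.json"

def extract_prefixes(document: dict) -> list[str]:
    """Pull the CIDR ranges out of a parsed ipranges document.

    The field names used here (prefixes, ipv4Prefix, ipv6Prefix) are an
    assumption based on Google's other ip-range JSON files.
    """
    prefixes = []
    for entry in document.get("prefixes", []):
        for key in ("ipv4Prefix", "ipv6Prefix"):
            if key in entry:
                prefixes.append(entry[key])
    return prefixes

def fetch_prefixes(url: str = IP_RANGES_URL) -> list[str]:
    """Download the published JSON file and return its CIDR ranges."""
    with urllib.request.urlopen(url) as resp:
        return extract_prefixes(json.load(resp))
```

The returned CIDR strings are what you would add to the Azure Storage firewall's allow list, per the Microsoft instructions linked above.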