Create transfers

This page shows you how to create and start transfer jobs.

To see if your source and destination (also known as a sink) are supported by Storage Transfer Service, refer to Supported sources and sinks.

Agents and agent pools

Depending on your source and destination, you may need to create and configure an agent pool and install agents on a machine with access to your source or destination.

  • Transfers from Amazon S3, Microsoft Azure, URL lists, or Cloud Storage to Cloud Storage do not require agents and agent pools.

  • Transfers whose source or destination is a file system, and transfers from S3-compatible storage, require agents and agent pools. See Manage agent pools for instructions.

Before you begin

Before configuring your transfers, make sure you have configured access for your source and destination.

If you're using gcloud commands, install the gcloud CLI.

Create a transfer

Don't include sensitive information such as personally identifiable information (PII) or security data in your transfer job name. Resource names may be propagated to the names of other Google Cloud resources and may be exposed to Google-internal systems outside of your project.

Google Cloud console

  1. Go to the Storage Transfer Service page in the Google Cloud console.

    Go to Storage Transfer Service

  2. Click Create transfer job. The Create a transfer job page is displayed.

  3. Choose a source:

    Cloud Storage

    Your user account must have storage.buckets.get permission to select source and destination buckets. Alternatively, you can type the name of the bucket directly. For more information, see Troubleshooting access.

    1. Under Source type, select Cloud Storage.

    2. Select your Destination type.

    3. If your destination is Cloud Storage, select your Scheduling mode. Batch transfers execute on a one-time or scheduled basis. Event-driven transfers continuously monitor the source and transfer data when it's added or modified.

      To configure an event-driven transfer, follow the instructions at Event-driven transfers.

    4. Click Next step.

    5. Select a bucket and (optionally) a folder in that bucket by doing one of the following:

      • Enter an existing Cloud Storage bucket name and path in the Bucket or folder field without the prefix gs://. For example, my-test-bucket/path/to/files. To specify a Cloud Storage bucket from another project, type the name exactly into the Bucket name field.

      • Select from a list of existing buckets in your project by clicking Browse, then selecting a bucket.

        When you click Browse, you can select buckets in other projects by clicking the Project ID, then selecting the new Project ID and bucket.

      • To create a new bucket, click Create new bucket.

    6. If this is an event-driven transfer, enter the Pub/Sub subscription name, which takes the following format:

      projects/PROJECT_NAME/subscriptions/SUBSCRIPTION_ID
      
    7. Optionally, choose to filter objects by prefix or by last modified date. If you specified a folder as your source location, prefix filters are relative to that folder. For example, if your source is my-test-bucket/path/, an include filter of file includes all files starting with my-test-bucket/path/file.
    8. Click Next step.

    Amazon S3

    1. Under Source type, select Amazon S3.

    2. As your Destination type, select Google Cloud Storage.

    3. Select your Scheduling mode. Batch transfers execute on a one-time or scheduled basis. Event-driven transfers continuously monitor the source and transfer data when it's added or modified.

      To configure an event-driven transfer, follow the instructions at Event-driven transfers.

    4. Click Next step.

    5. In the Bucket or folder name field, enter the source bucket name.

      The bucket name is the name as it appears in the AWS Management Console.

    6. If you're using a CloudFront distribution to transfer from S3, enter the distribution domain name in the CloudFront domain field. For example, https://dy1h2n3l4ob56.cloudfront.net. See Transfer from S3 via CloudFront to configure a CloudFront distribution.

    7. Select your Amazon Web Services (AWS) authentication method. You can provide an AWS access key or an Amazon Resource Name (ARN) for identity federation:

      • Access key: Enter your access key in the Access key ID field and the secret associated with your access key in the Secret access key field.

      • ARN: Enter your ARN in the AWS IAM role ARN field, with the following syntax:

        arn:aws:iam::ACCOUNT:role/ROLE-NAME-WITH-PATH
        

        Where:

        • ACCOUNT: The AWS account ID with no hyphens.
        • ROLE-NAME-WITH-PATH: The AWS role name including path.

        For more information on ARNs, see IAM ARNs.

    8. If this is an event-driven transfer, enter the Amazon SQS queue ARN, which takes the following format:

      arn:aws:sqs:us-east-1:1234567890:event-queue
      
    9. Optionally, choose to filter objects by prefix or by last modified date. If you specified a folder as your source location, prefix filters are relative to that folder. For example, if your source is my-test-bucket/path/, an include filter of file includes all files starting with my-test-bucket/path/file.
    10. Click Next step.

    S3-compatible storage

    1. Under Source type, select S3-compatible object storage.

    2. Click Next step.

    3. Specify the required information for this transfer:

      1. Select the agent pool you configured for this transfer.

      2. Enter the Bucket name relative to the endpoint. For example, if your data resides at:

        https://us-east-1.example.com/folder1/bucket_a

        Enter: folder1/bucket_a

      3. Enter the Endpoint. Do not include the protocol (http:// or https://). In the example from the previous step, you would enter:

        us-east-1.example.com

    4. Specify any optional attributes for this transfer:

      1. Enter the Signing region to use for signing of requests. Leave this field empty if your source does not require a signing region.

      2. Choose the Signing process for this request.

      3. Select the Addressing style. This determines whether the bucket name is provided in path-style (e.g., https://s3.region.example.com/bucket-name/key-name) or virtual hosted-style (e.g., https://bucket-name.s3.region.example.com/key-name). Read Virtual hosting of buckets in the Amazon documentation for more information.

      4. Select the Network protocol.

      5. Select the listing API version to use. Refer to the ListObjectsV2 and ListObjects documentation for more information.

    5. Click Next step.

    Microsoft Azure Blob Storage

    1. Under Source type, select Azure Blob Storage or Data Lake Storage Gen2.

    2. Click Next step.

    3. Specify the following:

      1. Storage account name — the source Microsoft Azure Storage account name.

        The storage account name is displayed in the Microsoft Azure Storage portal under All services > Storage > Storage accounts.

      2. Container name — the Microsoft Azure Storage container name.

        The container name is displayed in the Microsoft Azure Storage portal under Storage explorer > Blob containers.

      3. Shared access signature (SAS) — the Microsoft Azure Storage SAS token created from a stored access policy. For more information, see Grant limited access to Azure Storage resources using shared access signatures (SAS).

        The default expiration time for SAS tokens is 8 hours. When you create your SAS token, be sure to set a reasonable expiration time that enables you to successfully complete your transfer.
    4. Optionally, choose to filter objects by prefix or by last modified date. If you specified a folder as your source location, prefix filters are relative to that folder. For example, if your source is my-test-bucket/path/, an include filter of file includes all files starting with my-test-bucket/path/file.
    5. Click Next step.

    File system

    1. Under Source type, select POSIX file system.

    2. Select your Destination type and click Next step.

    3. Select an existing agent pool, or select Create agent pool and follow the instructions to create a new pool.

    4. Specify the fully qualified path of the file system directory.

    5. Click Next step.

    HDFS

    See Transfer from HDFS to Cloud Storage.

    URL list

    1. Under Source type, select URL list and click Next step.

    2. Under URL of TSV file, provide the URL to a tab-separated values (TSV) file. See Creating a URL List for details about how to create the TSV file.

    3. Optionally, choose to filter objects by prefix or by last modified date. If you specified a folder as your source location, prefix filters are relative to that folder. For example, if your source is my-test-bucket/path/, an include filter of file includes all files starting with my-test-bucket/path/file.
    4. Click Next step.

  4. Choose a destination:

    Cloud Storage

    1. In the Bucket or folder field, enter the destination bucket and (optionally) folder name, or click Browse to select a bucket from a list of existing buckets in your current project. To create a new bucket, click Create new bucket.

    2. Click Next step.

    3. Choose settings for the transfer job. Some options are only available for certain source/sink combinations.

      1. In the Description field, enter a description of the transfer. As a best practice, enter a description that is meaningful and unique so that you can tell jobs apart.

      2. Under Metadata options, choose to use the default options, or click View and select options to specify values for all supported metadata. See Metadata preservation for details.

      3. Under When to overwrite, select one of the following:

        • If different: Overwrites destination files if the source file with the same name has different ETags or checksum values.

        • Always: Always overwrites destination files when the source file has the same name, even if they're identical.

      4. Under When to delete, select one of the following:

        • Never: Never delete files from either the source or destination.

        • Delete files from source after they're transferred: Delete files from the source after they're transferred to the destination.

        • Delete files from destination if they're not also at source: If files in the destination Cloud Storage bucket aren't also in the source, then delete the files from the Cloud Storage bucket.

          This option ensures that the destination Cloud Storage bucket exactly matches your source.

      5. Under Notification options, select your Pub/Sub topic and which events to notify for. See Pub/Sub notifications for more details.

    4. Click Next step.

    File system

    1. Select an existing agent pool, or select Create agent pool and follow the instructions to create a new pool.

    2. Specify the fully qualified destination directory path.

    3. Click Next step.

  5. Choose your scheduling options:

    1. From the Run once drop-down list, select one of the following:

      • Run once: Runs a single transfer, starting at a time that you select.

      • Run every day: Runs a transfer daily, starting at a time that you select.

        You can enter an optional End date, or leave End date blank to run the transfer continually.

      • Run every week: Runs a transfer weekly, starting at a time that you select.

      • Run with custom frequency: Runs a transfer at a frequency that you select. You can choose to repeat the transfer at a regular interval of Hours, Days, or Weeks.

        You can enter an optional End date, or leave End date blank to run the transfer continually.

    2. From the Starting now drop-down list, select one of the following:

      • Starting now: Starts the transfer after you click Create.

      • Starting on: Starts the transfer on the date and time that you select. Click Calendar to display a calendar to select the start date.

    3. To create your transfer job, click Create.

gcloud CLI

To create a new transfer job, use the gcloud transfer jobs create command. Creating a new job initiates the specified transfer, unless a schedule or --do-not-run is specified.

gcloud transfer jobs create \
  SOURCE DESTINATION

Where:

  • SOURCE is the data source for this transfer. The format for each source is:

    • Cloud Storage: gs://BUCKET_NAME. To transfer from a specific folder, specify gs://BUCKET_NAME/FOLDER_PATH/, including the trailing slash.
    • Amazon S3: s3://BUCKET_NAME/FOLDER_PATH
    • S3-compatible storage: s3://BUCKET_NAME. The bucket name is relative to the endpoint. For example, if your data resides at https://us-east-1.example.com/folder1/bucket_a, enter s3://folder1/bucket_a.
    • Microsoft Azure Storage: https://myaccount.blob.core.windows.net/CONTAINER_NAME
    • URL list: https://PATH_TO_URL_LIST or http://PATH_TO_URL_LIST
    • POSIX file system: posix:///PATH. This must be an absolute path from the root of the agent host machine.
    • HDFS: hdfs:///PATH
  • DESTINATION is one of:

    • Cloud Storage: gs://BUCKET_NAME. To transfer into a specific directory, specify gs://BUCKET_NAME/FOLDER_PATH/, including the trailing slash.
    • POSIX file system: posix:///PATH. This must be an absolute path from the root of the agent host machine.

If the transfer requires transfer agents, the following options are available:

  • --source-agent-pool specifies the source agent pool to use for this transfer. Required for transfers originating from a file system.

  • --destination-agent-pool specifies the destination agent pool to use for this transfer. Required for transfers to a file system.

  • --intermediate-storage-path is the path to a Cloud Storage bucket, in the form gs://my-intermediary-bucket. Required for transfers between two file systems. See Create a Cloud Storage bucket as an intermediary for details on creating the intermediate bucket.

Additional options include:

  • --source-creds-file specifies the relative path to a local file on your machine that includes AWS or Azure credentials for the transfer source. For credential file formatting information, see the TransferSpec reference.

  • --do-not-run prevents Storage Transfer Service from running the job upon submission of the command. To run the job, update it to add a schedule, or use jobs run to start it manually.

  • --manifest-file specifies the path to a CSV file in Cloud Storage containing a list of files to transfer from your source. For manifest file formatting, see Transfer specific files or objects using a manifest.

  • Job information: You can specify --name, --description, and --source-creds-file.

  • Schedule: Specify --schedule-starts, --schedule-repeats-every, and --schedule-repeats-until, or --do-not-run; see the combined example after this list.

  • Object conditions: Use conditions to determine which objects are transferred. These include --include-prefixes and --exclude-prefixes, and the time-based conditions in --include-modified-[before | after]-[absolute | relative]. If you specified a folder with your source, prefix filters are relative to that folder. See Filter source objects by prefix for more information.

    Object conditions aren't supported for transfers involving file systems.

  • Transfer options: Specify whether to overwrite destination files (--overwrite-when=different or always) and whether to delete certain files during or after the transfer (--delete-from=destination-if-unique or source-after-transfer); specify which metadata values to preserve (--preserve-metadata); and optionally set a storage class on transferred objects (--custom-storage-class).

  • Notifications: Configure Pub/Sub notifications for transfers with --notification-pubsub-topic, --notification-event-types, and --notification-payload-format.

  • Cloud Logging: Enable Cloud Logging for agentless transfers, or transfers from S3-compatible sources, with --log-actions and --log-action-states. See Cloud Logging for Storage Transfer Service for details.
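
For example, a command like the following combines a daily schedule, a prefix filter, an overwrite setting, and Pub/Sub notifications. This is a sketch only; the bucket, prefix, topic, and timestamp values are placeholders. Appending --do-not-run instead would create the job without starting it, so you could start it later with gcloud transfer jobs run.

gcloud transfer jobs create \
  gs://my-source-bucket gs://my-destination-bucket \
  --description="Nightly copy of the reports/ prefix" \
  --include-prefixes=reports/ \
  --overwrite-when=different \
  --schedule-starts=2024-06-01T02:00:00Z \
  --schedule-repeats-every=1d \
  --notification-pubsub-topic=projects/my-project/topics/transfer-events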

Transfers from S3-compatible sources also use the following options:

  • --source-endpoint (required) specifies your storage system's endpoint. For example, s3.example.com. Check with your provider for the correct formatting. Do not specify the protocol (http:// or https://).
  • --source-signing-region specifies a region for signing requests. Omit this flag if your storage provider doesn't require a signing region.
  • --source-auth-method specifies the authentication method to use. Valid values are AWS_SIGNATURE_V2 or AWS_SIGNATURE_V4. Refer to Amazon's SigV4 and SigV2 documentation for more information.
  • --source-request-model specifies the addressing style to use. Valid values are PATH_STYLE or VIRTUAL_HOSTED_STYLE. Path style uses the format https://s3.example.com/BUCKET_NAME/KEY_NAME. Virtual hosted style uses the format https://BUCKET_NAME.s3.example.com/KEY_NAME.
  • --source-network-protocol specifies the network protocol that agents should use for this job. Valid values are HTTP or HTTPS.
  • --source-list-api specifies the version of the S3 listing API for returning objects from the bucket. Valid values are LIST_OBJECTS or LIST_OBJECTS_V2. Refer to Amazon's ListObjectsV2 and ListObjects documentation for more information.

To view all options, run gcloud transfer jobs create --help or refer to the gcloud reference documentation.

Examples

Amazon S3 to Cloud Storage

Run the following command to create a daily transfer job to move data modified within the last day from s3://my-s3-bucket to a Cloud Storage bucket named my-storage-bucket:

gcloud transfer jobs create \
  s3://my-s3-bucket gs://my-storage-bucket \
  --source-creds-file=/examplefolder/creds.txt \
  --include-modified-after-relative=1d \
  --schedule-repeats-every=1d
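
Microsoft Azure Blob Storage to Cloud Storage

A transfer from Azure Blob Storage follows the same pattern. The following is a sketch; the storage account, container, bucket, and credentials file names are placeholders, and the SAS token is supplied through the file passed to --source-creds-file:

gcloud transfer jobs create \
  https://myaccount.blob.core.windows.net/my-container gs://my-storage-bucket \
  --source-creds-file=/examplefolder/azure-creds.json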

S3-compatible storage to Cloud Storage

To transfer from an S3-compatible source to a Cloud Storage bucket:

gcloud transfer jobs create s3://my-s3-compat-bucket gs://my-storage-bucket \
  --source-agent-pool=my-source-agent-pool \
  --source-endpoint=us-east-1.example.com \
  --source-auth-method=AWS_SIGNATURE_V2 \
  --source-request-model=PATH_STYLE \
  --source-network-protocol=HTTPS \
  --source-list-api=LIST_OBJECTS_V2

File system to Cloud Storage

To transfer from a file system to a Cloud Storage bucket, specify the following.

gcloud transfer jobs create \
  posix:///tmp/source gs://my-storage-bucket \
  --source-agent-pool=my-source-agent-pool

Cloud Storage to file system

To transfer from a Cloud Storage bucket to a file system, specify the following.

gcloud transfer jobs create \
  gs://my-storage-bucket posix:///tmp/destination \
  --destination-agent-pool=my-destination-agent-pool

File system to file system

To transfer between two file systems, you must specify a source agent pool, a destination agent pool, and an intermediate Cloud Storage bucket through which the data passes.

See Create a Cloud Storage bucket as an intermediary for details on the intermediate bucket.

Then, specify these 3 resources when calling transfer jobs create:

gcloud transfer jobs create \
  posix:///tmp/source/on/systemA posix:///tmp/destination/on/systemB \
  --source-agent-pool=source_agent_pool \
  --destination-agent-pool=destination_agent_pool \
  --intermediate-storage-path=gs://my-intermediary-bucket

REST

The following samples show you how to use Storage Transfer Service through the REST API.

When you configure or edit transfer jobs using the Storage Transfer Service API, the time must be in UTC. For more information on specifying the schedule of a transfer job, see Schedule.

Transfer between Cloud Storage buckets

In this example, you'll learn how to move files from one Cloud Storage bucket to another. For example, you can move data to a bucket in another location.

Request using transferJobs create:

POST https://storagetransfer.googleapis.com/v1/transferJobs
{
  "description": "YOUR DESCRIPTION",
  "status": "ENABLED",
  "projectId": "PROJECT_ID",
  "schedule": {
      "scheduleStartDate": {
          "day": 1,
          "month": 1,
          "year": 2015
      },
      "startTimeOfDay": {
          "hours": 1,
          "minutes": 1
      }
  },
  "transferSpec": {
      "gcsDataSource": {
          "bucketName": "GCS_SOURCE_NAME"
      },
      "gcsDataSink": {
          "bucketName": "GCS_SINK_NAME"
      },
      "transferOptions": {
          "deleteObjectsFromSourceAfterTransfer": true
      }
  }
}
Response:
200 OK
{
  "transferJob": [
      {
          "creationTime": "2015-01-01T01:01:00.000000000Z",
          "description": "YOUR DESCRIPTION",
          "name": "transferJobs/JOB_ID",
          "status": "ENABLED",
          "lastModificationTime": "2015-01-01T01:01:00.000000000Z",
          "projectId": "PROJECT_ID",
          "schedule": {
              "scheduleStartDate": {
                  "day": 1,
                  "month": 1,
                  "year": 2015
              },
              "startTimeOfDay": {
                  "hours": 1,
                  "minutes": 1
              }
          },
          "transferSpec": {
              "gcsDataSource": {
                  "bucketName": "GCS_SOURCE_NAME",
              },
              "gcsDataSink": {
                  "bucketName": "GCS_NEARLINE_SINK_NAME"
              },
              "objectConditions": {
                  "minTimeElapsedSinceLastModification": "2592000.000s"
              },
              "transferOptions": {
                  "deleteObjectsFromSourceAfterTransfer": true
              }
          }
      }
  ]
}
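
To send a transferJobs create request like the one above, you can, for example, save the JSON body to a local file and call the endpoint with curl and an OAuth access token. This is a sketch; request.json is a placeholder file name:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d @request.json \
  https://storagetransfer.googleapis.com/v1/transferJobs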

Transfer from Amazon S3 to Cloud Storage

In this example, you'll learn how to move files from Amazon S3 to a Cloud Storage bucket. Be sure to review Configure access to Amazon S3 and Pricing to understand the implications of moving data from Amazon S3 to Cloud Storage.

When creating transfer jobs, do not include the s3:// prefix for bucketName in Amazon S3 bucket source names.

Request using transferJobs create:

POST https://storagetransfer.googleapis.com/v1/transferJobs
{
  "description": "YOUR DESCRIPTION",
  "status": "ENABLED",
  "projectId": "PROJECT_ID",
  "schedule": {
      "scheduleStartDate": {
          "day": 1,
          "month": 1,
          "year": 2015
      },
      "scheduleEndDate": {
          "day": 1,
          "month": 1,
          "year": 2015
      },
      "startTimeOfDay": {
          "hours": 1,
          "minutes": 1
      }
  },
  "transferSpec": {
      "awsS3DataSource": {
          "bucketName": "AWS_SOURCE_NAME",
          "awsAccessKey": {
              "accessKeyId": "AWS_ACCESS_KEY_ID",
              "secretAccessKey": "AWS_SECRET_ACCESS_KEY"
          }
      },
      "gcsDataSink": {
          "bucketName": "GCS_SINK_NAME"
      }
  }
}
Response:
200 OK
{
  "transferJob": [
      {
          "creationTime": "2015-01-01T01:01:00.000000000Z",
          "description": "YOUR DESCRIPTION",
          "name": "transferJobs/JOB_ID",
          "status": "ENABLED",
          "lastModificationTime": "2015-01-01T01:01:00.000000000Z",
          "projectId": "PROJECT_ID",
          "schedule": {
              "scheduleStartDate": {
                  "day": 1,
                  "month": 1,
                  "year": 2015
              },
              "scheduleEndDate": {
                  "day": 1,
                  "month": 1,
                  "year": 2015
              },
              "startTimeOfDay": {
                  "hours": 1,
                  "minutes": 1
              }
          },
          "transferSpec": {
              "awsS3DataSource": {
                  "bucketName": "AWS_SOURCE_NAME"
              },
              "gcsDataSink": {
                  "bucketName": "GCS_SINK_NAME"
              },
              "objectConditions": {},
              "transferOptions": {}
          }
      }
  ]
}

If you're transferring from S3 via a CloudFront distribution, specify the distribution domain name as the value of the transferSpec.awsS3DataSource.cloudfrontDomain field:

"transferSpec": {
  "awsS3DataSource": {
    "bucketName": "AWS_SOURCE_NAME",
    "cloudfrontDomain": "https://dy1h2n3l4ob56.cloudfront.net",
    "awsAccessKey": {
      "accessKeyId": "AWS_ACCESS_KEY_ID",
      "secretAccessKey": "AWS_SECRET_ACCESS_KEY"
    }
  },
  ...
}

Transfer between Microsoft Azure Blob Storage and Cloud Storage

In this example, you'll learn how to move files from Microsoft Azure Storage to a Cloud Storage bucket, using a Microsoft Azure Storage shared access signature (SAS) token.

For more information on Microsoft Azure Storage SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS).

Before starting, review Configure access to Microsoft Azure Storage and Pricing to understand the implications of moving data from Microsoft Azure Storage to Cloud Storage.

Request using transferJobs create:

POST https://storagetransfer.googleapis.com/v1/transferJobs
{
  "description": "YOUR DESCRIPTION",
  "status": "ENABLED",
  "projectId": "PROJECT_ID",
  "schedule": {
      "scheduleStartDate": {
          "day": 14,
          "month": 2,
          "year": 2020
      },
      "scheduleEndDate": {
          "day": 14
          "month": 2,
          "year": 2020
      },
      "startTimeOfDay": {
          "hours": 1,
          "minutes": 1
      }
  },
  "transferSpec": {
      "azureBlobStorageDataSource": {
          "storageAccount": "AZURE_SOURCE_NAME",
          "azureCredentials": {
              "sasToken": "AZURE_SAS_TOKEN",
          },
          "container": "AZURE_CONTAINER",
      },
      "gcsDataSink": {
          "bucketName": "GCS_SINK_NAME"
      }
  }
}
Response:
200 OK
{
  "transferJob": [
      {
          "creationTime": "2020-02-14T01:01:00.000000000Z",
          "description": "YOUR DESCRIPTION",
          "name": "transferJobs/JOB_ID",
          "status": "ENABLED",
          "lastModificationTime": "2020-02-14T01:01:00.000000000Z",
          "projectId": "PROJECT_ID",
          "schedule": {
              "scheduleStartDate": {
                  "day": 14
                  "month": 2,
                  "year": 2020
              },
              "scheduleEndDate": {
                  "day": 14,
                  "month": 2,
                  "year": 2020
              },
              "startTimeOfDay": {
                  "hours": 1,
                  "minutes": 1
              }
          },
          "transferSpec": {
              "azureBlobStorageDataSource": {
                  "storageAccount": "AZURE_SOURCE_NAME",
                  "azureCredentials": {
                      "sasToken": "AZURE_SAS_TOKEN",
                  },
                  "container": "AZURE_CONTAINER",
              },
              "objectConditions": {},
              "transferOptions": {}
          }
      }
  ]
}

Transfer from a file system

In this example, you'll learn how to move files from a POSIX file system to a Cloud Storage bucket.

Use transferJobs.create with a posixDataSource:

POST https://storagetransfer.googleapis.com/v1/transferJobs
{
  "name": "transferJobs/sample_transfer",
  "description": "My First Transfer",
  "status": "ENABLED",
  "projectId": "my_transfer_project_id",
  "schedule": {
      "scheduleStartDate": {
          "year": 2022,
          "month": 5,
          "day": 2
      },
      "startTimeOfDay": {
          "hours": 22,
          "minutes": 30,
          "seconds": 0,
          "nanos": 0
      },
      "scheduleEndDate": {
          "year": 2022,
          "month": 12,
          "day": 31
      },
      "repeatInterval": "259200s"
  },
  "transferSpec": {
      "posixDataSource": {
          "rootDirectory": "/bar/"
      },
      "sourceAgentPoolName": "my_example_pool",
      "gcsDataSink": {
          "bucketName": "destination_bucket",
          "path": "foo/bar/"
      }
  }
}

The schedule field is optional; if it's not included, the transfer job must be started with a transferJobs.run request.
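
For example, a transferJobs.run request for the job above could look like the following sketch, again using curl with an access token (the job name and project ID match the sample request):

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"projectId": "my_transfer_project_id"}' \
  https://storagetransfer.googleapis.com/v1/transferJobs/sample_transfer:run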

To check your transfer's status after creating a job, use transferJobs.get:

GET https://storagetransfer.googleapis.com/v1/transferJobs/sample_transfer?project_id=my_transfer_project_id

Specifying source and destination paths

Source and destination paths enable you to specify source and destination directories when transferring data to your Cloud Storage bucket. For example, consider that you have files file1.txt and file2.txt and a Cloud Storage bucket named B. If you set a destination path named my-stuff, then after the transfer completes your files are located at gs://B/my-stuff/file1.txt and gs://B/my-stuff/file2.txt.

Specifying a source path

To specify a source path when creating a transfer job, add a path field to the gcsDataSource field in your TransferSpec specification:

{
  "gcsDataSource": {
    "bucketName": "SOURCE_BUCKET",
    "path": "SOURCE_PATH/"
  }
}

In this example:

  • SOURCE_BUCKET: The source Cloud Storage bucket.
  • SOURCE_PATH: The source Cloud Storage path.

Specifying a destination path

To specify a destination folder when you create a transfer job, add a path field to the gcsDataSink field in your TransferSpec specification:

{
  "gcsDataSink": {
    "bucketName": "DESTINATION_BUCKET",
    "path": "DESTINATION_PATH/"
  }
}

In this example:

  • DESTINATION_BUCKET: The destination Cloud Storage bucket.
  • DESTINATION_PATH: The destination Cloud Storage path.

Complete example request

The following is an example of a full request:

POST https://storagetransfer.googleapis.com/v1/transferJobs
{
  "description": "YOUR DESCRIPTION",
  "status": "ENABLED",
  "projectId": "PROJECT_ID",
  "schedule": {
      "scheduleStartDate": {
          "day": 1,
          "month": 1,
          "year": 2015
      },
      "startTimeOfDay": {
          "hours": 1,
          "minutes": 1
      }
  },
  "transferSpec": {
      "gcsDataSource": {
          "bucketName": "GCS_SOURCE_NAME",
          "path": "GCS_SOURCE_PATH",
      },
      "gcsDataSink": {
          "bucketName": "GCS_SINK_NAME",
          "path": "GCS_SINK_PATH",
      },
      "objectConditions": {
          "minTimeElapsedSinceLastModification": "2592000s"
      },
      "transferOptions": {
          "deleteObjectsFromSourceAfterTransfer": true
      }
  }

}

Client libraries

The following samples show you how to use Storage Transfer Service programmatically with Go, Java, Node.js, and Python.

When you configure or edit transfer jobs programmatically, the time must be in UTC. For more information on specifying the schedule of a transfer job, see Schedule.

For more information about the Storage Transfer Service client libraries, see Getting started with Storage Transfer Service client libraries.

Transfer between Cloud Storage buckets

In this example, you'll learn how to move files from one Cloud Storage bucket to another. For example, you can move data to a bucket in another location.

Go

import (
	"context"
	"fmt"
	"io"
	"time"

	"google.golang.org/genproto/googleapis/type/date"
	"google.golang.org/genproto/googleapis/type/timeofday"
	"google.golang.org/protobuf/types/known/durationpb"

	storagetransfer "cloud.google.com/go/storagetransfer/apiv1"
	"cloud.google.com/go/storagetransfer/apiv1/storagetransferpb"
)

func transferToNearline(w io.Writer, projectID string, gcsSourceBucket string, gcsNearlineSinkBucket string) (*storagetransferpb.TransferJob, error) {
	// Your Google Cloud Project ID
	// projectID := "my-project-id"

	// The name of the GCS bucket to transfer objects from
	// gcsSourceBucket := "my-source-bucket"

	// The name of the Nearline GCS bucket to transfer objects to
	// gcsNearlineSinkBucket := "my-sink-bucket"

	ctx := context.Background()
	client, err := storagetransfer.NewClient(ctx)
	if err != nil {
		return nil, fmt.Errorf("storagetransfer.NewClient: %w", err)
	}
	defer client.Close()

	// A description of this job
	jobDescription := "Transfers objects that haven't been modified in 30 days to a Nearline bucket"

	// The time to start the transfer
	startTime := time.Now().UTC()

	req := &storagetransferpb.CreateTransferJobRequest{
		TransferJob: &storagetransferpb.TransferJob{
			ProjectId:   projectID,
			Description: jobDescription,
			TransferSpec: &storagetransferpb.TransferSpec{
				DataSink: &storagetransferpb.TransferSpec_GcsDataSink{
					GcsDataSink: &storagetransferpb.GcsData{BucketName: gcsNearlineSinkBucket}},
				DataSource: &storagetransferpb.TransferSpec_GcsDataSource{
					GcsDataSource: &storagetransferpb.GcsData{BucketName: gcsSourceBucket},
				},
				ObjectConditions: &storagetransferpb.ObjectConditions{
					MinTimeElapsedSinceLastModification: &durationpb.Duration{Seconds: 2592000 /*30 days */},
				},
				TransferOptions: &storagetransferpb.TransferOptions{DeleteObjectsFromSourceAfterTransfer: true},
			},
			Schedule: &storagetransferpb.Schedule{
				ScheduleStartDate: &date.Date{
					Year:  int32(startTime.Year()),
					Month: int32(startTime.Month()),
					Day:   int32(startTime.Day()),
				},
				ScheduleEndDate: &date.Date{
					Year:  int32(startTime.Year()),
					Month: int32(startTime.Month()),
					Day:   int32(startTime.Day()),
				},
				StartTimeOfDay: &timeofday.TimeOfDay{
					Hours:   int32(startTime.Hour()),
					Minutes: int32(startTime.Minute()),
					Seconds: int32(startTime.Second()),
				},
			},
			Status: storagetransferpb.TransferJob_ENABLED,
		},
	}
	resp, err := client.CreateTransferJob(ctx, req)
	if err != nil {
		return nil, fmt.Errorf("failed to create transfer job: %w", err)
	}
	if _, err = client.RunTransferJob(ctx, &storagetransferpb.RunTransferJobRequest{
		ProjectId: projectID,
		JobName:   resp.Name,
	}); err != nil {
		return nil, fmt.Errorf("failed to run transfer job: %w", err)
	}
	fmt.Fprintf(w, "Created and ran transfer job from %v to %v with name %v", gcsSourceBucket, gcsNearlineSinkBucket, resp.Name)
	return resp, nil
}

Java

Looking for older samples? See the Storage Transfer Service Migration Guide.

import com.google.protobuf.Duration;
import com.google.storagetransfer.v1.proto.StorageTransferServiceClient;
import com.google.storagetransfer.v1.proto.TransferProto.CreateTransferJobRequest;
import com.google.storagetransfer.v1.proto.TransferTypes.GcsData;
import com.google.storagetransfer.v1.proto.TransferTypes.ObjectConditions;
import com.google.storagetransfer.v1.proto.TransferTypes.Schedule;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferJob;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferJob.Status;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferOptions;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferSpec;
import com.google.type.Date;
import com.google.type.TimeOfDay;
import java.io.IOException;
import java.util.Calendar;

public class TransferToNearline {
  /**
   * Creates a one-off transfer job that transfers objects in a standard GCS bucket that are more
   * than 30 days old to a Nearline GCS bucket.
   */
  public static void transferToNearline(
      String projectId,
      String jobDescription,
      String gcsSourceBucket,
      String gcsNearlineSinkBucket,
      long startDateTime)
      throws IOException {

    // Your Google Cloud Project ID
    // String projectId = "your-project-id";

    // A short description of this job
    // String jobDescription = "Sample transfer job of old objects to a Nearline GCS bucket.";

    // The name of the source GCS bucket to transfer data from
    // String gcsSourceBucket = "your-gcs-source-bucket";

    // The name of the Nearline GCS bucket to transfer old objects to
    // String gcsSinkBucket = "your-nearline-gcs-bucket";

    // What day and time in UTC to start the transfer, expressed as an epoch date timestamp.
    // If this is in the past relative to when the job is created, it will run the next day.
    // long startDateTime =
    //     new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse("2000-01-01 00:00:00").getTime();

    // Parse epoch timestamp into the model classes
    Calendar startCalendar = Calendar.getInstance();
    startCalendar.setTimeInMillis(startDateTime);
    // Note that this is a Date from the model class package, not a java.util.Date
    Date date =
        Date.newBuilder()
            .setYear(startCalendar.get(Calendar.YEAR))
            .setMonth(startCalendar.get(Calendar.MONTH) + 1)
            .setDay(startCalendar.get(Calendar.DAY_OF_MONTH))
            .build();
    TimeOfDay time =
        TimeOfDay.newBuilder()
            .setHours(startCalendar.get(Calendar.HOUR_OF_DAY))
            .setMinutes(startCalendar.get(Calendar.MINUTE))
            .setSeconds(startCalendar.get(Calendar.SECOND))
            .build();

    TransferJob transferJob =
        TransferJob.newBuilder()
            .setDescription(jobDescription)
            .setProjectId(projectId)
            .setTransferSpec(
                TransferSpec.newBuilder()
                    .setGcsDataSource(GcsData.newBuilder().setBucketName(gcsSourceBucket))
                    .setGcsDataSink(GcsData.newBuilder().setBucketName(gcsNearlineSinkBucket))
                    .setObjectConditions(
                        ObjectConditions.newBuilder()
                            .setMinTimeElapsedSinceLastModification(
                                Duration.newBuilder().setSeconds(2592000 /* 30 days */)))
                    .setTransferOptions(
                        TransferOptions.newBuilder().setDeleteObjectsFromSourceAfterTransfer(true)))
            .setSchedule(Schedule.newBuilder().setScheduleStartDate(date).setStartTimeOfDay(time))
            .setStatus(Status.ENABLED)
            .build();

    // Create a Transfer Service client
    StorageTransferServiceClient storageTransfer = StorageTransferServiceClient.create();

    // Create the transfer job
    TransferJob response =
        storageTransfer.createTransferJob(
            CreateTransferJobRequest.newBuilder().setTransferJob(transferJob).build());

    System.out.println("Created transfer job from standard bucket to Nearline bucket:");
    System.out.println(response.toString());
  }
}

Node.js


// Imports the Google Cloud client library
const {
  StorageTransferServiceClient,
} = require('@google-cloud/storage-transfer');

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of the Google Cloud Platform Project that owns the job
// projectId = 'my-project-id'

// A useful description for your transfer job
// description = 'My transfer job'

// Google Cloud Storage source bucket name
// gcsSourceBucket = 'my-gcs-source-bucket'

// Google Cloud Storage destination bucket name
// gcsSinkBucket = 'my-gcs-destination-bucket'

// Date to start daily migration
// startDate = new Date()

// Creates a client
const client = new StorageTransferServiceClient();

/**
 * Create a daily migration from a GCS bucket to another GCS bucket for
 * objects untouched for 30+ days.
 */
async function createDailyNearline30DayMigration() {
  // Runs the request and creates the job
  const [transferJob] = await client.createTransferJob({
    transferJob: {
      projectId,
      description,
      status: 'ENABLED',
      schedule: {
        scheduleStartDate: {
          day: startDate.getDate(),
          month: startDate.getMonth() + 1,
          year: startDate.getFullYear(),
        },
      },
      transferSpec: {
        gcsDataSource: {
          bucketName: gcsSourceBucket,
        },
        gcsDataSink: {
          bucketName: gcsSinkBucket,
        },
        objectConditions: {
          minTimeElapsedSinceLastModification: {
            seconds: 2592000, // 30 days
          },
        },
        transferOptions: {
          deleteObjectsFromSourceAfterTransfer: true,
        },
      },
    },
  });

  console.log(`Created transferJob: ${transferJob.name}`);
}

createDailyNearline30DayMigration();

Python

Looking for older samples? See the Storage Transfer Service Migration Guide.

from datetime import datetime

from google.cloud import storage_transfer
from google.protobuf.duration_pb2 import Duration


def create_daily_nearline_30_day_migration(
    project_id: str,
    description: str,
    source_bucket: str,
    sink_bucket: str,
    start_date: datetime,
):
    """Create a daily migration from a GCS bucket to a Nearline GCS bucket
    for objects untouched for 30 days."""

    client = storage_transfer.StorageTransferServiceClient()

    # The ID of the Google Cloud Platform Project that owns the job
    # project_id = 'my-project-id'

    # A useful description for your transfer job
    # description = 'My transfer job'

    # Google Cloud Storage source bucket name
    # source_bucket = 'my-gcs-source-bucket'

    # Google Cloud Storage destination bucket name
    # sink_bucket = 'my-gcs-destination-bucket'

    transfer_job_request = storage_transfer.CreateTransferJobRequest(
        {
            "transfer_job": {
                "project_id": project_id,
                "description": description,
                "status": storage_transfer.TransferJob.Status.ENABLED,
                "schedule": {
                    "schedule_start_date": {
                        "day": start_date.day,
                        "month": start_date.month,
                        "year": start_date.year,
                    }
                },
                "transfer_spec": {
                    "gcs_data_source": {
                        "bucket_name": source_bucket,
                    },
                    "gcs_data_sink": {
                        "bucket_name": sink_bucket,
                    },
                    "object_conditions": {
                        "min_time_elapsed_since_last_modification": Duration(
                            seconds=2592000  # 30 days
                        )
                    },
                    "transfer_options": {
                        "delete_objects_from_source_after_transfer": True
                    },
                },
            }
        }
    )

    result = client.create_transfer_job(transfer_job_request)
    print(f"Created transferJob: {result.name}")

Transfer from Amazon S3 to Cloud Storage

In this example, you'll learn how to move files from Amazon S3 to a Cloud Storage bucket. Be sure to review Configure access to Amazon S3 and Pricing to understand the implications of moving data from Amazon S3 to Cloud Storage.

When creating transfer jobs, do not include the s3:// prefix for bucketName in Amazon S3 bucket source names.

Go

import (
	"context"
	"fmt"
	"io"
	"os"
	"time"

	storagetransfer "cloud.google.com/go/storagetransfer/apiv1"
	"cloud.google.com/go/storagetransfer/apiv1/storagetransferpb"
	"google.golang.org/genproto/googleapis/type/date"
	"google.golang.org/genproto/googleapis/type/timeofday"
)

func transferFromAws(w io.Writer, projectID string, awsSourceBucket string, gcsSinkBucket string) (*storagetransferpb.TransferJob, error) {
	// Your Google Cloud Project ID
	// projectID := "my-project-id"

	// The name of the Aws bucket to transfer objects from
	// awsSourceBucket := "my-source-bucket"

	// The name of the GCS bucket to transfer objects to
	// gcsSinkBucket := "my-sink-bucket"

	ctx := context.Background()
	client, err := storagetransfer.NewClient(ctx)
	if err != nil {
		return nil, fmt.Errorf("storagetransfer.NewClient: %w", err)
	}
	defer client.Close()

	// A description of this job
	jobDescription := "Transfers objects from an AWS bucket to a GCS bucket"

	// The time to start the transfer
	startTime := time.Now().UTC()

	// The AWS access key credential, should be accessed via environment variable for security
	awsAccessKeyID := os.Getenv("AWS_ACCESS_KEY_ID")

	// The AWS secret key credential, should be accessed via environment variable for security
	awsSecretKey := os.Getenv("AWS_SECRET_ACCESS_KEY")

	req := &storagetransferpb.CreateTransferJobRequest{
		TransferJob: &storagetransferpb.TransferJob{
			ProjectId:   projectID,
			Description: jobDescription,
			TransferSpec: &storagetransferpb.TransferSpec{
				DataSource: &storagetransferpb.TransferSpec_AwsS3DataSource{
					AwsS3DataSource: &storagetransferpb.AwsS3Data{
						BucketName: awsSourceBucket,
						AwsAccessKey: &storagetransferpb.AwsAccessKey{
							AccessKeyId:     awsAccessKeyID,
							SecretAccessKey: awsSecretKey,
						}},
				},
				DataSink: &storagetransferpb.TransferSpec_GcsDataSink{
					GcsDataSink: &storagetransferpb.GcsData{BucketName: gcsSinkBucket}},
			},
			Schedule: &storagetransferpb.Schedule{
				ScheduleStartDate: &date.Date{
					Year:  int32(startTime.Year()),
					Month: int32(startTime.Month()),
					Day:   int32(startTime.Day()),
				},
				ScheduleEndDate: &date.Date{
					Year:  int32(startTime.Year()),
					Month: int32(startTime.Month()),
					Day:   int32(startTime.Day()),
				},
				StartTimeOfDay: &timeofday.TimeOfDay{
					Hours:   int32(startTime.Hour()),
					Minutes: int32(startTime.Minute()),
					Seconds: int32(startTime.Second()),
				},
			},
			Status: storagetransferpb.TransferJob_ENABLED,
		},
	}
	resp, err := client.CreateTransferJob(ctx, req)
	if err != nil {
		return nil, fmt.Errorf("failed to create transfer job: %w", err)
	}
	if _, err = client.RunTransferJob(ctx, &storagetransferpb.RunTransferJobRequest{
		ProjectId: projectID,
		JobName:   resp.Name,
	}); err != nil {
		return nil, fmt.Errorf("failed to run transfer job: %w", err)
	}
	fmt.Fprintf(w, "Created and ran transfer job from %v to %v with name %v", awsSourceBucket, gcsSinkBucket, resp.Name)
	return resp, nil
}

Java

Looking for older samples? See the Storage Transfer Service Migration Guide.


import com.google.storagetransfer.v1.proto.StorageTransferServiceClient;
import com.google.storagetransfer.v1.proto.TransferProto.CreateTransferJobRequest;
import com.google.storagetransfer.v1.proto.TransferTypes.AwsAccessKey;
import com.google.storagetransfer.v1.proto.TransferTypes.AwsS3Data;
import com.google.storagetransfer.v1.proto.TransferTypes.GcsData;
import com.google.storagetransfer.v1.proto.TransferTypes.Schedule;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferJob;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferJob.Status;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferSpec;
import com.google.type.Date;
import com.google.type.TimeOfDay;
import java.io.IOException;
import java.util.Calendar;

public class TransferFromAws {

  // Creates a one-off transfer job from Amazon S3 to Google Cloud Storage.
  public static void transferFromAws(
      String projectId,
      String jobDescription,
      String awsSourceBucket,
      String gcsSinkBucket,
      long startDateTime)
      throws IOException {

    // Your Google Cloud Project ID
    // String projectId = "your-project-id";

    // A short description of this job
    // String jobDescription = "Sample transfer job from S3 to GCS.";

    // The name of the source AWS bucket to transfer data from
    // String awsSourceBucket = "yourAwsSourceBucket";

    // The name of the GCS bucket to transfer data to
    // String gcsSinkBucket = "your-gcs-bucket";

    // What day and time in UTC to start the transfer, expressed as an epoch date timestamp.
    // If this is in the past relative to when the job is created, it will run the next day.
    // long startDateTime =
    //     new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse("2000-01-01 00:00:00").getTime();

    // The ID used to access your AWS account. Should be accessed via environment variable.
    String awsAccessKeyId = System.getenv("AWS_ACCESS_KEY_ID");

    // The Secret Key used to access your AWS account. Should be accessed via environment variable.
    String awsSecretAccessKey = System.getenv("AWS_SECRET_ACCESS_KEY");

    // Set up source and sink
    TransferSpec transferSpec =
        TransferSpec.newBuilder()
            .setAwsS3DataSource(
                AwsS3Data.newBuilder()
                    .setBucketName(awsSourceBucket)
                    .setAwsAccessKey(
                        AwsAccessKey.newBuilder()
                            .setAccessKeyId(awsAccessKeyId)
                            .setSecretAccessKey(awsSecretAccessKey)))
            .setGcsDataSink(GcsData.newBuilder().setBucketName(gcsSinkBucket))
            .build();

    // Parse epoch timestamp into the model classes
    Calendar startCalendar = Calendar.getInstance();
    startCalendar.setTimeInMillis(startDateTime);
    // Note that this is a Date from the model class package, not a java.util.Date
    Date startDate =
        Date.newBuilder()
            .setYear(startCalendar.get(Calendar.YEAR))
            .setMonth(startCalendar.get(Calendar.MONTH) + 1)
            .setDay(startCalendar.get(Calendar.DAY_OF_MONTH))
            .build();
    TimeOfDay startTime =
        TimeOfDay.newBuilder()
            .setHours(startCalendar.get(Calendar.HOUR_OF_DAY))
            .setMinutes(startCalendar.get(Calendar.MINUTE))
            .setSeconds(startCalendar.get(Calendar.SECOND))
            .build();
    Schedule schedule =
        Schedule.newBuilder()
            .setScheduleStartDate(startDate)
            .setScheduleEndDate(startDate)
            .setStartTimeOfDay(startTime)
            .build();

    // Set up the transfer job
    TransferJob transferJob =
        TransferJob.newBuilder()
            .setDescription(jobDescription)
            .setProjectId(projectId)
            .setTransferSpec(transferSpec)
            .setSchedule(schedule)
            .setStatus(Status.ENABLED)
            .build();

    // Create a Transfer Service client
    StorageTransferServiceClient storageTransfer = StorageTransferServiceClient.create();

    // Create the transfer job
    TransferJob response =
        storageTransfer.createTransferJob(
            CreateTransferJobRequest.newBuilder().setTransferJob(transferJob).build());

    System.out.println("Created transfer job from AWS to GCS:");
    System.out.println(response.toString());
  }
}

Node.js


// Imports the Google Cloud client library
const {
  StorageTransferServiceClient,
} = require('@google-cloud/storage-transfer');

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of the Google Cloud Platform Project that owns the job
// projectId = 'my-project-id'

// A useful description for your transfer job
// description = 'My transfer job'

// AWS S3 source bucket name
// awsSourceBucket = 'my-s3-source-bucket'

// AWS Access Key ID
// awsAccessKeyId = 'AKIA...'

// AWS Secret Access Key
// awsSecretAccessKey = 'HEAoMK2.../...ku8'

// Google Cloud Storage destination bucket name
// gcsSinkBucket = 'my-gcs-destination-bucket'

// Creates a client
const client = new StorageTransferServiceClient();

/**
 * Creates a one-time transfer job from Amazon S3 to Google Cloud Storage.
 */
async function transferFromS3() {
  // Setting the start date and the end date as the same time creates a
  // one-time transfer
  const now = new Date();
  const oneTimeSchedule = {
    day: now.getDate(),
    month: now.getMonth() + 1,
    year: now.getFullYear(),
  };

  // Runs the request and creates the job
  const [transferJob] = await client.createTransferJob({
    transferJob: {
      projectId,
      description,
      status: 'ENABLED',
      schedule: {
        scheduleStartDate: oneTimeSchedule,
        scheduleEndDate: oneTimeSchedule,
      },
      transferSpec: {
        awsS3DataSource: {
          bucketName: awsSourceBucket,
          awsAccessKey: {
            accessKeyId: awsAccessKeyId,
            secretAccessKey: awsSecretAccessKey,
          },
        },
        gcsDataSink: {
          bucketName: gcsSinkBucket,
        },
      },
    },
  });

  console.log(
    `Created and ran a transfer job from '${awsSourceBucket}' to '${gcsSinkBucket}' with name ${transferJob.name}`
  );
}

transferFromS3();

Python

Looking for older samples? See the Storage Transfer Service Migration Guide.

from datetime import datetime

from google.cloud import storage_transfer


def create_one_time_aws_transfer(
    project_id: str,
    description: str,
    source_bucket: str,
    aws_access_key_id: str,
    aws_secret_access_key: str,
    sink_bucket: str,
):
    """Creates a one-time transfer job from Amazon S3 to Google Cloud
    Storage."""

    client = storage_transfer.StorageTransferServiceClient()

    # The ID of the Google Cloud Platform Project that owns the job
    # project_id = 'my-project-id'

    # A useful description for your transfer job
    # description = 'My transfer job'

    # AWS S3 source bucket name
    # source_bucket = 'my-s3-source-bucket'

    # AWS Access Key ID
    # aws_access_key_id = 'AKIA...'

    # AWS Secret Access Key
    # aws_secret_access_key = 'HEAoMK2.../...ku8'

    # Google Cloud Storage destination bucket name
    # sink_bucket = 'my-gcs-destination-bucket'

    now = datetime.utcnow()
    # Setting the start date and the end date as
    # the same time creates a one-time transfer
    one_time_schedule = {"day": now.day, "month": now.month, "year": now.year}

    transfer_job_request = storage_transfer.CreateTransferJobRequest(
        {
            "transfer_job": {
                "project_id": project_id,
                "description": description,
                "status": storage_transfer.TransferJob.Status.ENABLED,
                "schedule": {
                    "schedule_start_date": one_time_schedule,
                    "schedule_end_date": one_time_schedule,
                },
                "transfer_spec": {
                    "aws_s3_data_source": {
                        "bucket_name": source_bucket,
                        "aws_access_key": {
                            "access_key_id": aws_access_key_id,
                            "secret_access_key": aws_secret_access_key,
                        },
                    },
                    "gcs_data_sink": {
                        "bucket_name": sink_bucket,
                    },
                },
            }
        }
    )

    result = client.create_transfer_job(transfer_job_request)
    print(f"Created transferJob: {result.name}")

Transfer between Microsoft Azure Blob Storage and Cloud Storage

In this example, you'll learn how to move files from Microsoft Azure Storage to a Cloud Storage bucket, using a Microsoft Azure Storage shared access signature (SAS) token.

For more information on Microsoft Azure Storage SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS).

Before starting, review Configure access to Microsoft Azure Storage and Pricing to understand the implications of moving data from Microsoft Azure Storage to Cloud Storage.

Go

To learn how to install and use the client library for Storage Transfer Service, see Storage Transfer Service client libraries. For more information, see the Storage Transfer Service Go API reference documentation.

To authenticate to Storage Transfer Service, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import (
	"context"
	"fmt"
	"io"
	"os"

	storagetransfer "cloud.google.com/go/storagetransfer/apiv1"
	"cloud.google.com/go/storagetransfer/apiv1/storagetransferpb"
)

func transferFromAzure(w io.Writer, projectID string, azureStorageAccountName string, azureSourceContainer string, gcsSinkBucket string) (*storagetransferpb.TransferJob, error) {
	// Your Google Cloud Project ID.
	// projectID := "my-project-id"

	// The name of your Azure Storage account.
	// azureStorageAccountName := "my-azure-storage-acc"

	// The name of the Azure container to transfer objects from.
	// azureSourceContainer := "my-source-container"

	// The name of the GCS bucket to transfer objects to.
	// gcsSinkBucket := "my-sink-bucket"

	ctx := context.Background()
	client, err := storagetransfer.NewClient(ctx)
	if err != nil {
		return nil, fmt.Errorf("storagetransfer.NewClient: %w", err)
	}
	defer client.Close()

	// The Azure SAS token, should be accessed via environment variable for security
	azureSasToken := os.Getenv("AZURE_SAS_TOKEN")

	req := &storagetransferpb.CreateTransferJobRequest{
		TransferJob: &storagetransferpb.TransferJob{
			ProjectId: projectID,
			TransferSpec: &storagetransferpb.TransferSpec{
				DataSource: &storagetransferpb.TransferSpec_AzureBlobStorageDataSource{
					AzureBlobStorageDataSource: &storagetransferpb.AzureBlobStorageData{
						StorageAccount: azureStorageAccountName,
						AzureCredentials: &storagetransferpb.AzureCredentials{
							SasToken: azureSasToken,
						},
						Container: azureSourceContainer,
					},
				},
				DataSink: &storagetransferpb.TransferSpec_GcsDataSink{
					GcsDataSink: &storagetransferpb.GcsData{BucketName: gcsSinkBucket}},
			},
			Status: storagetransferpb.TransferJob_ENABLED,
		},
	}
	resp, err := client.CreateTransferJob(ctx, req)
	if err != nil {
		return nil, fmt.Errorf("failed to create transfer job: %w", err)
	}
	if _, err = client.RunTransferJob(ctx, &storagetransferpb.RunTransferJobRequest{
		ProjectId: projectID,
		JobName:   resp.Name,
	}); err != nil {
		return nil, fmt.Errorf("failed to run transfer job: %w", err)
	}
	fmt.Fprintf(w, "Created and ran transfer job from %v to %v with name %v", azureSourceContainer, gcsSinkBucket, resp.Name)
	return resp, nil
}

Java

To learn how to install and use the client library for Storage Transfer Service, see Storage Transfer Service client libraries. For more information, see the Storage Transfer Service Java API reference documentation.

To authenticate to Storage Transfer Service, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import com.google.storagetransfer.v1.proto.StorageTransferServiceClient;
import com.google.storagetransfer.v1.proto.TransferProto;
import com.google.storagetransfer.v1.proto.TransferProto.RunTransferJobRequest;
import com.google.storagetransfer.v1.proto.TransferTypes.AzureBlobStorageData;
import com.google.storagetransfer.v1.proto.TransferTypes.AzureCredentials;
import com.google.storagetransfer.v1.proto.TransferTypes.GcsData;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferJob;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferJob.Status;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferSpec;
import java.io.IOException;
import java.util.concurrent.ExecutionException;

public class TransferFromAzure {
  public static void main(String[] args)
      throws IOException, ExecutionException, InterruptedException {
    // TODO(developer): Replace these variables before running the sample.
    // Your Google Cloud Project ID
    String projectId = "my-project-id";

    // Your Azure Storage Account name
    String azureStorageAccount = "my-azure-account";

    // The Azure source container to transfer data from
    String azureSourceContainer = "my-source-container";

    // The GCS bucket to transfer data to
    String gcsSinkBucket = "my-sink-bucket";

    transferFromAzureBlobStorage(
        projectId, azureStorageAccount, azureSourceContainer, gcsSinkBucket);
  }

  /**
   * Creates and runs a transfer job to transfer all data from an Azure container to a GCS bucket.
   */
  public static void transferFromAzureBlobStorage(
      String projectId,
      String azureStorageAccount,
      String azureSourceContainer,
      String gcsSinkBucket)
      throws IOException, ExecutionException, InterruptedException {

    // Your Azure SAS token. For security, read it from an environment variable rather than hard-coding it.
    String azureSasToken = System.getenv("AZURE_SAS_TOKEN");

    TransferSpec transferSpec =
        TransferSpec.newBuilder()
            .setAzureBlobStorageDataSource(
                AzureBlobStorageData.newBuilder()
                    .setAzureCredentials(
                        AzureCredentials.newBuilder().setSasToken(azureSasToken).build())
                    .setContainer(azureSourceContainer)
                    .setStorageAccount(azureStorageAccount))
            .setGcsDataSink(GcsData.newBuilder().setBucketName(gcsSinkBucket).build())
            .build();

    TransferJob transferJob =
        TransferJob.newBuilder()
            .setProjectId(projectId)
            .setStatus(Status.ENABLED)
            .setTransferSpec(transferSpec)
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources,
    // or use "try-with-close" statement to do this automatically.
    try (StorageTransferServiceClient storageTransfer = StorageTransferServiceClient.create()) {
      // Create the transfer job
      TransferJob response =
          storageTransfer.createTransferJob(
              TransferProto.CreateTransferJobRequest.newBuilder()
                  .setTransferJob(transferJob)
                  .build());

      // Run the created job
      storageTransfer
          .runTransferJobAsync(
              RunTransferJobRequest.newBuilder()
                  .setProjectId(projectId)
                  .setJobName(response.getName())
                  .build())
          .get();

      System.out.println(
          "Created and ran a transfer job from "
              + azureSourceContainer
              + " to "
              + gcsSinkBucket
              + " with "
              + "name "
              + response.getName());
    }
  }
}

Node.js

To learn how to install and use the client library for Storage Transfer Service, see Storage Transfer Service client libraries. For more information, see the Storage Transfer Service Node.js API reference documentation.

To authenticate to Storage Transfer Service, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


// Imports the Google Cloud client library
const {
  StorageTransferServiceClient,
} = require('@google-cloud/storage-transfer');

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of the Google Cloud Platform Project that owns the job
// projectId = 'my-project-id'

// A useful description for your transfer job
// description = 'My transfer job'

// Azure Storage Account name
// azureStorageAccount = 'accountname'

// Azure source container name
// azureSourceContainer = 'my-azure-source-bucket'

// Azure Shared Access Signature token
// azureSASToken = '?sv=...'

// Google Cloud Storage destination bucket name
// gcsSinkBucket = 'my-gcs-destination-bucket'

// Creates a client
const client = new StorageTransferServiceClient();

/**
 * Creates a one-time transfer job from Azure Blob Storage to Google Cloud Storage.
 */
async function transferFromBlobStorage() {
  // Setting the start date and the end date as the same time creates a
  // one-time transfer
  const now = new Date();
  const oneTimeSchedule = {
    day: now.getDate(),
    month: now.getMonth() + 1,
    year: now.getFullYear(),
  };

  // Runs the request and creates the job
  const [transferJob] = await client.createTransferJob({
    transferJob: {
      projectId,
      description,
      status: 'ENABLED',
      schedule: {
        scheduleStartDate: oneTimeSchedule,
        scheduleEndDate: oneTimeSchedule,
      },
      transferSpec: {
        azureBlobStorageDataSource: {
          azureCredentials: {
            sasToken: azureSASToken,
          },
          container: azureSourceContainer,
          storageAccount: azureStorageAccount,
        },
        gcsDataSink: {
          bucketName: gcsSinkBucket,
        },
      },
    },
  });

  console.log(
    `Created and ran a transfer job from '${azureSourceContainer}' to '${gcsSinkBucket}' with name ${transferJob.name}`
  );
}

transferFromBlobStorage();

Python

To learn how to install and use the client library for Storage Transfer Service, see Storage Transfer Service client libraries. For more information, see the Storage Transfer Service Python API reference documentation.

To authenticate to Storage Transfer Service, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

from datetime import datetime

from google.cloud import storage_transfer


def create_one_time_azure_transfer(
    project_id: str,
    description: str,
    azure_storage_account: str,
    azure_sas_token: str,
    source_container: str,
    sink_bucket: str,
):
    """Creates a one-time transfer job from Azure Blob Storage to Google Cloud
    Storage."""

    # Initialize client that will be used to create storage transfer requests.
    # This client only needs to be created once, and can be reused for
    # multiple requests.
    client = storage_transfer.StorageTransferServiceClient()

    # The ID of the Google Cloud Platform Project that owns the job
    # project_id = 'my-project-id'

    # A useful description for your transfer job
    # description = 'My transfer job'

    # Azure Storage Account name
    # azure_storage_account = 'accountname'

    # Azure Shared Access Signature token
    # azure_sas_token = '?sv=...'

    # Azure Blob source container name
    # source_container = 'my-azure-source-bucket'

    # Google Cloud Storage destination bucket name
    # sink_bucket = 'my-gcs-destination-bucket'

    now = datetime.utcnow()
    # Setting the start date and the end date as
    # the same time creates a one-time transfer
    one_time_schedule = {"day": now.day, "month": now.month, "year": now.year}

    transfer_job_request = storage_transfer.CreateTransferJobRequest(
        {
            "transfer_job": {
                "project_id": project_id,
                "description": description,
                "status": storage_transfer.TransferJob.Status.ENABLED,
                "schedule": {
                    "schedule_start_date": one_time_schedule,
                    "schedule_end_date": one_time_schedule,
                },
                "transfer_spec": {
                    "azure_blob_storage_data_source": {
                        "storage_account": azure_storage_account,
                        "azure_credentials": {"sas_token": azure_sas_token},
                        "container": source_container,
                    },
                    "gcs_data_sink": {
                        "bucket_name": sink_bucket,
                    },
                },
            }
        }
    )

    result = client.create_transfer_job(transfer_job_request)
    print(f"Created transferJob: {result.name}")

Transfer from a file system

In this example, you'll learn how to move files from a POSIX file system to a Cloud Storage bucket.

Go


import (
	"context"
	"fmt"
	"io"

	storagetransfer "cloud.google.com/go/storagetransfer/apiv1"
	"cloud.google.com/go/storagetransfer/apiv1/storagetransferpb"
)

func transferFromPosix(w io.Writer, projectID string, sourceAgentPoolName string, rootDirectory string, gcsSinkBucket string) (*storagetransferpb.TransferJob, error) {
	// Your Google Cloud project ID.
	// projectID := "my-project-id"

	// The agent pool associated with the POSIX data source. If not provided, the project's default agent pool is used.
	// sourceAgentPoolName := "projects/my-project/agentPools/transfer_service_default"

	// The root directory path on the source filesystem
	// rootDirectory := "/directory/to/transfer/source"

	// The ID of the GCS bucket to transfer data to
	// gcsSinkBucket := "my-sink-bucket"

	ctx := context.Background()
	client, err := storagetransfer.NewClient(ctx)
	if err != nil {
		return nil, fmt.Errorf("storagetransfer.NewClient: %w", err)
	}
	defer client.Close()

	req := &storagetransferpb.CreateTransferJobRequest{
		TransferJob: &storagetransferpb.TransferJob{
			ProjectId: projectID,
			TransferSpec: &storagetransferpb.TransferSpec{
				SourceAgentPoolName: sourceAgentPoolName,
				DataSource: &storagetransferpb.TransferSpec_PosixDataSource{
					PosixDataSource: &storagetransferpb.PosixFilesystem{RootDirectory: rootDirectory},
				},
				DataSink: &storagetransferpb.TransferSpec_GcsDataSink{
					GcsDataSink: &storagetransferpb.GcsData{BucketName: gcsSinkBucket},
				},
			},
			Status: storagetransferpb.TransferJob_ENABLED,
		},
	}

	resp, err := client.CreateTransferJob(ctx, req)
	if err != nil {
		return nil, fmt.Errorf("failed to create transfer job: %w", err)
	}
	if _, err = client.RunTransferJob(ctx, &storagetransferpb.RunTransferJobRequest{
		ProjectId: projectID,
		JobName:   resp.Name,
	}); err != nil {
		return nil, fmt.Errorf("failed to run transfer job: %w", err)
	}
	fmt.Fprintf(w, "Created and ran transfer job from %v to %v with name %v", rootDirectory, gcsSinkBucket, resp.Name)
	return resp, nil
}

Java

import com.google.storagetransfer.v1.proto.StorageTransferServiceClient;
import com.google.storagetransfer.v1.proto.TransferProto;
import com.google.storagetransfer.v1.proto.TransferTypes.GcsData;
import com.google.storagetransfer.v1.proto.TransferTypes.PosixFilesystem;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferJob;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferSpec;
import java.io.IOException;

public class TransferFromPosix {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.

    // Your project id
    String projectId = "my-project-id";

    // The agent pool associated with the POSIX data source. If not provided, the project's
    // default agent pool is used.
    String sourceAgentPoolName = "projects/my-project-id/agentPools/transfer_service_default";

    // The root directory path on the source filesystem
    String rootDirectory = "/directory/to/transfer/source";

    // The ID of the GCS bucket to transfer data to
    String gcsSinkBucket = "my-sink-bucket";

    transferFromPosix(projectId, sourceAgentPoolName, rootDirectory, gcsSinkBucket);
  }

  public static void transferFromPosix(
      String projectId, String sourceAgentPoolName, String rootDirectory, String gcsSinkBucket)
      throws IOException {
    TransferJob transferJob =
        TransferJob.newBuilder()
            .setProjectId(projectId)
            .setTransferSpec(
                TransferSpec.newBuilder()
                    .setSourceAgentPoolName(sourceAgentPoolName)
                    .setPosixDataSource(
                        PosixFilesystem.newBuilder().setRootDirectory(rootDirectory).build())
                    .setGcsDataSink(GcsData.newBuilder().setBucketName(gcsSinkBucket).build()))
            .setStatus(TransferJob.Status.ENABLED)
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources,
    // or use "try-with-close" statement to do this automatically.
    try (StorageTransferServiceClient storageTransfer = StorageTransferServiceClient.create()) {

      // Create the transfer job
      TransferJob response =
          storageTransfer.createTransferJob(
              TransferProto.CreateTransferJobRequest.newBuilder()
                  .setTransferJob(transferJob)
                  .build());

      System.out.println(
          "Created a transfer job from "
              + rootDirectory
              + " to "
              + gcsSinkBucket
              + " with "
              + "name "
              + response.getName());
    }
  }
}

Node.js


// Imports the Google Cloud client library
const {
  StorageTransferServiceClient,
} = require('@google-cloud/storage-transfer');

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// Your project id
// const projectId = 'my-project'

// The agent pool associated with the POSIX data source. If not provided, the project's default agent pool is used.
// const sourceAgentPoolName = 'projects/my-project/agentPools/transfer_service_default'

// The root directory path on the source filesystem
// const rootDirectory = '/directory/to/transfer/source'

// The ID of the GCS bucket to transfer data to
// const gcsSinkBucket = 'my-sink-bucket'

// Creates a client
const client = new StorageTransferServiceClient();

/**
 * Creates a request to transfer from the local file system to the sink bucket
 */
async function transferDirectory() {
  const createRequest = {
    transferJob: {
      projectId,
      transferSpec: {
        sourceAgentPoolName,
        posixDataSource: {
          rootDirectory,
        },
        gcsDataSink: {bucketName: gcsSinkBucket},
      },
      status: 'ENABLED',
    },
  };

  // Runs the request and creates the job
  const [transferJob] = await client.createTransferJob(createRequest);

  const runRequest = {
    jobName: transferJob.name,
    projectId: projectId,
  };

  await client.runTransferJob(runRequest);

  console.log(
    `Created and ran a transfer job from '${rootDirectory}' to '${gcsSinkBucket}' with name ${transferJob.name}`
  );
}

transferDirectory();

Python

from google.cloud import storage_transfer


def transfer_from_posix_to_gcs(
    project_id: str,
    description: str,
    source_agent_pool_name: str,
    root_directory: str,
    sink_bucket: str,
):
    """Create a transfer from a POSIX file system to a GCS bucket."""

    client = storage_transfer.StorageTransferServiceClient()

    # The ID of the Google Cloud Platform Project that owns the job
    # project_id = 'my-project-id'

    # A useful description for your transfer job
    # description = 'My transfer job'

    # The agent pool associated with the POSIX data source.
    # Defaults to 'projects/{project_id}/agentPools/transfer_service_default'
    # source_agent_pool_name = 'projects/my-project/agentPools/my-agent'

    # The root directory path on the source filesystem
    # root_directory = '/directory/to/transfer/source'

    # Google Cloud Storage sink bucket name
    # sink_bucket = 'my-gcs-sink-bucket'

    transfer_job_request = storage_transfer.CreateTransferJobRequest(
        {
            "transfer_job": {
                "project_id": project_id,
                "description": description,
                "status": storage_transfer.TransferJob.Status.ENABLED,
                "transfer_spec": {
                    "source_agent_pool_name": source_agent_pool_name,
                    "posix_data_source": {
                        "root_directory": root_directory,
                    },
                    "gcs_data_sink": {"bucket_name": sink_bucket},
                },
            }
        }
    )

    result = client.create_transfer_job(transfer_job_request)
    print(f"Created transferJob: {result.name}")