You have several options for protecting the data and resources that you transfer.
Protecting your file system resources
Agents access files from the environment they run in. This means you have several ways to protect access to your data:
Using a restricted user or role account to run the agent container.
Limiting the file systems that are mounted to the agent container.
Choosing NFS mount permissions in accordance with your security policies, such as denying write access.
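The restrictions above can all be applied when the agent container is launched. The following is a hedged sketch using `docker run`; the UID/GID, mount path, project ID, and agent image tag are illustrative placeholders, and the exact image name and flags should come from your transfer agent installation instructions:

```shell
# Run the agent as a non-root user (UID/GID 1000 is illustrative),
# mount only the directory being transferred, and mark the mount
# read-only (:ro) so the agent cannot modify source data.
docker run --rm \
    --user 1000:1000 \
    --volume /mnt/source:/mnt/source:ro \
    gcr.io/cloud-ingest/tsop-agent:latest \
    --project-id=my-project \
    --hostname="$(hostname)"
```

Combining a non-root user with a single read-only mount means that even a compromised agent process can neither write to your file system nor read paths outside the mounted directory.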
Protecting data in-flight
Storage Transfer Service encrypts your data in an HTTPS session with TLS, both for connections over the public internet and for private connections (such as Cloud Interconnect). If you are using Cloud Interconnect, you can gain an additional layer of security by using private API endpoints.
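The TLS protection described above comes from the HTTPS client stack rather than anything you configure on the agent, but you can mirror the same posture in your own tooling. A minimal sketch of a hardened client-side TLS context in Python (the explicit minimum-version pin is an illustrative addition, not something Storage Transfer Service requires):

```python
import ssl

# Build the kind of client-side TLS context an HTTPS client uses.
# create_default_context() enables certificate verification and
# hostname checking by default.
ctx = ssl.create_default_context()

# Explicitly pin the protocol floor so legacy TLS 1.0/1.1 peers are
# rejected even on Python versions with a more permissive default.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: server certificates are validated
print(ctx.check_hostname)                    # True: hostnames are verified
```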
Protecting Google Cloud resources
Transfer agents use gcloud auth credentials to connect to the Pub/Sub and Cloud Storage resources used during the transfer. Your Google Cloud resources are therefore protected by Identity and Access Management (IAM) and the account that you provision for transfer agent use. You can also use a service account, which can make permissions management easier.
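Provisioning a dedicated service account for the agents keeps their permissions separate from any user credentials. A hedged sketch with `gcloud`; the project ID and service account name are hypothetical placeholders:

```shell
# Create a dedicated service account for the transfer agents.
gcloud iam service-accounts create transfer-agent-sa \
    --project=my-project \
    --display-name="Storage Transfer agent account"

# Grant it only the Storage Transfer User role, so it can run and
# monitor transfers without admin-level access.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:transfer-agent-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storagetransfer.user"
```

Scoping the agent account to a single role keeps the blast radius small: if the agent host is compromised, the credentials can't be used to delete jobs or change project-level settings.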
Pub/Sub is only used to communicate file and object metadata between agents and Google Cloud, and for orchestrating work amongst the agents. No files or objects are transferred over Pub/Sub.
Storage Transfer Service supports the following predefined IAM roles:
Storage Transfer Admin — Provides all Storage Transfer Service permissions.
Storage Transfer User — Can submit and monitor jobs, but can't delete jobs or see admin settings such as agent details or bandwidth settings.
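To audit who holds these roles, you can filter a project's IAM policy for Storage Transfer bindings. A sketch with `gcloud`; the project ID is a placeholder:

```shell
# List every principal holding a Storage Transfer role in the project.
# --flatten expands each binding's member list into one row per member.
gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/storagetransfer" \
    --format="table(bindings.role, bindings.members)"
```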
Storage Transfer Service doesn't support custom IAM roles or the Storage Transfer Viewer predefined role. Users granted either may see a degraded user interface: pages they don't have permission for display an error or appear blank. The underlying permission checks are still enforced, so the degraded interface doesn't grant any additional access.