# Data integrity
Storage Transfer Service uses metadata available from the source storage system,
such as checksums and file sizes, to ensure that data written to
Cloud Storage is the same data read from the source.
When checksum metadata is available
-----------------------------------
If the checksum metadata on the source storage system indicates that the data
Storage Transfer Service received doesn't match the source data,
Storage Transfer Service records a failure for the transfer operation.
Sources that provide checksum metadata include most Amazon Simple Storage Service (Amazon S3)
and Microsoft Azure Blob Storage objects ([with some exceptions](#metadata-unavailable-no-agents)),
and HTTP transfers, for which checksum metadata is provided by the user.
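For HTTP transfers, that user-provided checksum metadata travels in the URL
list you give to Storage Transfer Service. A minimal sketch of such a list,
with placeholder URLs, sizes, and hashes (columns are tab-separated: URL, size
in bytes, Base64-encoded MD5):

```
TsvHttpData-1.0
https://example.com/data/file1.csv	1357	wHENa08V36iPYAsOa2JAdw==
https://example.com/data/file2.csv	2468	R8tBhqvqkg+jXJDLvnHlmg==
```

Because each row carries the expected MD5, the service can compare it against
the data it actually receives and record a failure on any mismatch.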
When checksum metadata is unavailable
-------------------------------------
### When agents can run near the source
If checksum metadata isn't available from the underlying source storage system
but [agents](/storage-transfer/docs/managing-on-prem-agents) can be run locally
near the source storage system, Storage Transfer Service attempts to read the
source data and compute a checksum before sending the data to
Cloud Storage. This occurs when moving data from file systems to
Cloud Storage.
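Conceptually, the agent-side check resembles the following sketch, which
hashes a local file before upload and compares the result against the hash
Cloud Storage reports for the stored object. This is an illustration of the
principle, not Storage Transfer Service's internal code; the bucket and file
names are placeholders.

```python
import base64
import hashlib

from google.cloud import storage  # pip install google-cloud-storage


def md5_base64(path: str) -> str:
    """Compute the Base64-encoded MD5 of a local file, reading in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return base64.b64encode(digest.digest()).decode("utf-8")


def verify_upload(bucket_name: str, blob_name: str, local_path: str) -> None:
    """Compare a local file's MD5 with the hash Cloud Storage stores for it."""
    local_md5 = md5_base64(local_path)

    blob = storage.Client().bucket(bucket_name).blob(blob_name)
    blob.reload()  # Fetch object metadata, including the server-side MD5.

    if blob.md5_hash != local_md5:
        raise ValueError(
            f"Checksum mismatch for {blob_name}: "
            f"local {local_md5} != remote {blob.md5_hash}"
        )


verify_upload("my-bucket", "data/file1.csv", "/mnt/source/file1.csv")
```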
### When agents can't run near the source
If checksum metadata isn't available from the underlying source storage system,
and agents can't be run locally near the source storage system,
Storage Transfer Service can't compute a checksum until the data arrives in
Cloud Storage. In this scenario, Storage Transfer Service copies the
data but can't perform end-to-end data integrity checks to confirm that the
data received is the same as the source data. Instead, Storage Transfer Service
falls back to a "best effort" approach, using available metadata, such as file
size, to validate that the file copied to Cloud Storage matches the
source file.
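A minimal sketch of this kind of size check, comparing sizes from a
hypothetical source inventory against the copied objects (the bucket name and
object list are placeholders):

```python
from google.cloud import storage  # pip install google-cloud-storage

# Illustrative only: sizes as reported by a hypothetical source inventory.
# In practice these would come from the source system's object listing.
source_sizes = {
    "data/file1.csv": 1357,
    "data/file2.csv": 2468,
}

bucket = storage.Client().bucket("my-destination-bucket")  # placeholder name

for name, expected_size in source_sizes.items():
    blob = bucket.get_blob(name)  # Returns None if the object is missing.
    if blob is None:
        print(f"MISSING: {name}")
    elif blob.size != expected_size:
        print(f"SIZE MISMATCH: {name}: source {expected_size}, copied {blob.size}")
```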
For example, Storage Transfer Service uses file sizes to validate data for:
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["# Data integrity\n\nStorage Transfer Service uses metadata available from the source storage system,\nsuch as checksums and file sizes, to ensure that data written to\nCloud Storage is the same data read from the source.\n\nWhen checksum metadata is available\n-----------------------------------\n\nIf the checksum metadata on the source storage system indicates that the data\nStorage Transfer Service received doesn't match the source data,\nStorage Transfer Service records a failure for the transfer operation. Examples\nof storage systems that include checksum metadata include most Amazon Simple Storage Service (Amazon S3)\nand Microsoft Azure Blob Storage objects ([with some exceptions](#metadata-unavailable-no-agents))\nand HTTP transfers (checksum metadata are provided by the user).\n\nWhen checksum metadata is unavailable\n-------------------------------------\n\n### When agents can run near the source\n\nIf checksum metadata isn't available from the underlying source storage system\nbut [agents](/storage-transfer/docs/managing-on-prem-agents) can be run locally\nnear the source storage system, Storage Transfer Service attempts to read the\nsource data and compute a checksum before sending the data to\nCloud Storage. This occurs when moving data from file systems to\nCloud Storage.\n\n### When agents can't run near the source\n\nIf checksum metadata isn't available from the underlying source storage system,\nand agents can't be run locally near the source storage system,\nStorage Transfer Service can't compute a checksum until the data arrives in\nCloud Storage. In this scenario, Storage Transfer Service copies the\ndata but can't perform end-to-end data integrity checks to confirm that the\ndata received is the same as the source data. Instead, Storage Transfer Service\nattempts a \"best effort\" approach by using available metadata, such as file\nsize, to validate that the file copied to Cloud Storage matches the\nsource file.\n\nFor example, Storage Transfer Service uses file sizes to validate data for:\n\n- [Amazon S3 multi-part objects](https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html)\n\n- [Amazon S3 objects stored with server-side encryption with keys stored in AWS Key Management Service (SSE-KMS)](https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html)\n\n- [Microsoft Azure Blob Storage Put Block List](https://docs.microsoft.com/en-us/rest/api/storageservices/put-block-list)\n\nAfter transfer checks\n---------------------\n\nAfter your transfer is complete, we recommend performing additional data\nintegrity checks to validate that:\n\n- The correct version of the files are copied, for files that change at the source.\n- The correct set and number of files are copied, to verify that you've set up the transfer jobs correctly.\n- The files were copied correctly, by verifying the metadata on the files, such as file checksums, file size, and so forth."]]