Last updated: 2025-09-08 (UTC).

# Data integrity

Storage Transfer Service uses metadata available from the source storage system,
such as checksums and file sizes, to ensure that data written to
Cloud Storage is the same data read from the source.

When checksum metadata is available
-----------------------------------

If the checksum metadata on the source storage system indicates that the data
Storage Transfer Service received doesn't match the source data,
Storage Transfer Service records a failure for the transfer operation. Storage
systems that provide checksum metadata include most Amazon Simple Storage Service (Amazon S3)
and Microsoft Azure Blob Storage objects ([with some exceptions](#metadata-unavailable-no-agents))
and HTTP transfers, for which the user supplies the checksum metadata.

When checksum metadata is unavailable
-------------------------------------

### When agents can run near the source

If checksum metadata isn't available from the underlying source storage system
but [agents](/storage-transfer/docs/managing-on-prem-agents) can be run locally
near the source storage system, Storage Transfer Service attempts to read the
source data and compute a checksum before sending the data to
Cloud Storage. This occurs when moving data from file systems to
Cloud Storage.

### When agents can't run near the source

If checksum metadata isn't available from the underlying source storage system,
and agents can't be run locally near the source storage system,
Storage Transfer Service can't compute a checksum until the data arrives in
Cloud Storage.
In this scenario, Storage Transfer Service copies the
data but can't perform end-to-end data integrity checks to confirm that the
data received is the same as the source data. Instead, Storage Transfer Service
takes a "best effort" approach, using available metadata, such as file
size, to validate that the file copied to Cloud Storage matches the
source file.

For example, Storage Transfer Service uses file sizes to validate data for:

- [Amazon S3 multipart objects](https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html)

- [Amazon S3 objects stored with server-side encryption with keys stored in AWS Key Management Service (SSE-KMS)](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html)

- [Microsoft Azure Blob Storage Put Block List](https://docs.microsoft.com/en-us/rest/api/storageservices/put-block-list)

After transfer checks
---------------------

After your transfer is complete, we recommend performing additional data
integrity checks to validate that:

- The correct version of each file is copied, for files that change at the source.
- The correct set and number of files are copied, to verify that you've set up the transfer jobs correctly.
- The files were copied correctly, by verifying the metadata on the files, such as file checksums and file sizes.
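The checks above can be scripted against the object metadata that Cloud Storage reports, which includes a size and, for non-composite objects, a base64-encoded `md5Hash`. The following is a minimal sketch, not an official tool: `md5_b64` and `validate_manifest` are hypothetical helpers, and building the destination manifest from your bucket's object metadata (for example, with a Cloud Storage client library) is left to the caller.

```python
import base64
import hashlib


def md5_b64(path):
    """Return the base64-encoded MD5 digest of a local file.

    This matches the format of the md5Hash field that Cloud Storage
    reports for non-composite objects.
    """
    digest = hashlib.md5()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large files don't load into memory.
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return base64.b64encode(digest.digest()).decode("ascii")


def validate_manifest(source, destination):
    """Compare two {name: (size, md5_b64)} manifests.

    Returns a list of (name, reason) pairs for files that are missing
    from the destination or whose size or checksum doesn't match,
    covering the set, count, and per-file metadata checks.
    """
    problems = []
    for name, expected in source.items():
        if name not in destination:
            problems.append((name, "missing"))
        elif destination[name] != expected:
            problems.append((name, "mismatch"))
    return problems
```

Comparing both size and checksum per file also catches the case where a file changed at the source after the transfer started, since the stale copy's metadata no longer matches a fresh manifest of the source.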