Resumable uploads

This page discusses resumable uploads in Cloud Storage. Resumable uploads are the recommended method for uploading large files, because you don't have to restart them from the beginning if there is a network failure while the upload is underway.

Introduction

A resumable upload allows you to resume data transfer operations to Cloud Storage after a communication failure has interrupted the flow of data. Resumable uploads work by sending multiple requests, each of which contains a portion of the object you're uploading. This is different from a simple upload, which contains all of the object's data in a single request and must restart from the beginning if it fails part way through.

  • Use a resumable upload if you are uploading large files or uploading over a slow connection. For recommended file size cutoffs for using resumable uploads, see upload size considerations.

  • A completed resumable upload is considered one Class A operation.

How tools and APIs use resumable uploads

Depending on how you interact with Cloud Storage, resumable uploads may be managed automatically on your behalf. The following sections describe the behavior for each tool and API:

Console

The Cloud Console manages resumable uploads automatically on your behalf. However, if you refresh or navigate away from the Cloud Console while an upload is underway, the upload is cancelled.

gsutil

The gsutil command-line tool allows you to set a minimum size for performing resumable uploads with the resumable_threshold parameter in the boto configuration file. The default value for resumable_threshold is 8 MiB.

Client libraries

C++

You can toggle the use of resumable uploads as part of the WriteObject method.

C#

You can initiate a resumable upload with CreateObjectUploader.

Go

You control the minimum size for performing resumable uploads with Writer.ChunkSize. Go always performs chunked resumable uploads.

Java

Resumable uploads are controlled through the writer method.

Node.js

Resumable uploads are automatically managed when using the createWriteStream method.

PHP

Resumable uploads are automatically managed on your behalf, but can be directly controlled using the resumable option.

Python

Resumable uploads occur automatically when the file is larger than 8 MiB. Alternatively, you can use Resumable Media to manage resumable uploads on your own.
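
For example, with the google-cloud-storage library, setting a chunk size on a blob forces a resumable upload even for smaller files. This is a minimal sketch; the bucket and file names are placeholders:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-bucket")
    blob = bucket.blob("my-file.jpg")

    # Setting chunk_size forces a chunked resumable upload regardless of
    # file size; the value must be a multiple of 256 KiB.
    blob.chunk_size = 8 * 1024 * 1024
    blob.upload_from_filename("local-file.jpg")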

Ruby

All uploads are treated as resumable uploads.

REST APIs

JSON API

The Cloud Storage JSON API uses a POST Object request that includes the query parameter uploadType=resumable to initiate the resumable upload. This request returns a session URI that you then use in one or more PUT Object requests to upload the object data. For a step-by-step guide to building your own logic for resumable uploading, see Performing resumable uploads.
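
The following Python sketch shows the basic two-step flow using the third-party requests library. The bucket name, object name, and access token are placeholders, and a production client would add error handling and retries:

    import requests

    token = "ya29.EXAMPLE_TOKEN"  # placeholder OAuth 2.0 access token

    # Step 1: initiate the upload; the session URI comes back in the
    # Location response header.
    init = requests.post(
        "https://storage.googleapis.com/upload/storage/v1/b/my-bucket/o",
        params={"uploadType": "resumable", "name": "my-file.jpg"},
        headers={
            "Authorization": f"Bearer {token}",
            "X-Upload-Content-Type": "image/jpeg",
        },
    )
    session_uri = init.headers["Location"]

    # Step 2: upload the object data to the session URI. A single PUT is
    # shown here; large objects can be split across several PUT requests
    # using Content-Range headers.
    with open("local-file.jpg", "rb") as f:
        upload = requests.put(
            session_uri,
            data=f,
            headers={"Content-Type": "image/jpeg"},
        )
    upload.raise_for_status()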

XML API

The Cloud Storage XML API uses a POST Object request that includes the header x-goog-resumable: start to initiate the resumable upload. This request returns a session URI in the Location response header, which you then use in one or more PUT Object requests to upload the object data. For a step-by-step guide to building your own logic for resumable uploading, see Performing resumable uploads.

Resumable uploads of unknown size

The resumable upload mechanism supports transfers where the file size is not known in advance. This can be useful for cases like compressing an object on-the-fly while uploading, since it's difficult to predict the exact file size for the compressed file at the start of a transfer. The mechanism is useful either if you want to stream a transfer that can be resumed after being interrupted, or if chunked transfer encoding does not work for your application.

For more information, see Streaming transfers.
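
As an illustration of how such a transfer works at the protocol level, the following Python sketch streams chunks to an existing session URI, using * as the total size until the final chunk. It assumes a regular file-like source whose read calls return full chunks until end of data; session_uri is a placeholder:

    import requests

    CHUNK = 8 * 256 * 1024  # non-final chunks must be multiples of 256 KiB

    def stream_upload(session_uri, source):
        """Stream a file-like object of unknown total size to a session URI."""
        offset = 0
        while True:
            chunk = source.read(CHUNK)
            last = len(chunk) < CHUNK
            if last and not chunk:
                # No trailing data: finalize by declaring the total size only.
                headers = {"Content-Range": f"bytes */{offset}"}
            else:
                end = offset + len(chunk) - 1
                # '*' tells Cloud Storage that the total size is still unknown.
                total = offset + len(chunk) if last else "*"
                headers = {"Content-Range": f"bytes {offset}-{end}/{total}"}
            resp = requests.put(session_uri, data=chunk, headers=headers)
            offset += len(chunk)
            if last:
                return resp  # 200 OK or 201 Created on success
            # 308 Resume Incomplete means the chunk was persisted; keep going.
            assert resp.status_code == 308, resp.text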

Considerations

This section is useful if you are building your own client that sends resumable upload requests directly to the JSON or XML API.

Session URIs

When you initiate a resumable upload, Cloud Storage returns a session URI, which you use in subsequent requests to upload the actual data. An example of a session URI in the JSON API is:

https://storage.googleapis.com/upload/storage/v1/b/my-bucket/o?uploadType=resumable&name=my-file.jpg&upload_id=ABg5-UxlRQU75tqTINorGYDgM69mX06CzKO1NRFIMOiuTsu_mVsl3E-3uSVz65l65GYuyBuTPWWICWkinL1FWcbvvOA

An example of a session URI in the XML API is:

https://storage.googleapis.com/my-bucket/my-file.jpg?upload_id=ABg5-UxlRQU75tqTINorGYDgM69mX06CzKO1NRFIMOiuTsu_mVsl3E-3uSVz65l65GYuyBuTPWWICWkinL1FWcbvvOA

This session URI acts as an authentication token: requests that use it don't need to be signed, and anyone who has it can upload data to the target bucket without further authentication. Because of this, be judicious in sharing the session URI, and only share it over HTTPS.

A session URI expires after one week. If you use an expired session URI in a request, you receive a 404 Not Found status code. In this case, you have to initiate a new resumable upload, obtain a new session URI, and start the upload from the beginning using the new session URI.
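
As an illustration, the following Python sketch (again using the third-party requests library) checks how much of an interrupted upload was persisted and then resumes it; the session URI and data buffer are placeholders:

    import requests

    def resume_upload(session_uri, data):
        """Resume an interrupted upload of a byte buffer of known size."""
        total = len(data)
        # Ask Cloud Storage how many bytes it has already persisted by
        # sending an empty PUT with a '*' byte range.
        status = requests.put(
            session_uri,
            data=b"",
            headers={"Content-Range": f"bytes */{total}"},
        )
        if status.status_code in (200, 201):
            return status  # the upload had already completed
        # A 308 response carries a Range header such as "bytes=0-524287";
        # no Range header means no bytes were persisted.
        persisted = 0
        if "Range" in status.headers:
            persisted = int(status.headers["Range"].rsplit("-", 1)[1]) + 1
        # Upload the remaining bytes from where the server left off.
        return requests.put(
            session_uri,
            data=data[persisted:],
            headers={"Content-Range": f"bytes {persisted}-{total - 1}/{total}"},
        )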

Upload performance

Resumable uploads are pinned in the region where you initiate them. For example, if you initiate a resumable upload in the US and give the session URI to a client in Asia, the upload still goes through the US. Continuing a resumable upload in a region where it wasn't initiated can cause slow uploads.

If you use a Compute Engine instance to initiate a resumable upload, the instance should be in the same location as the Cloud Storage bucket you upload to. You can then use a geo IP service to pick the Compute Engine region to which you route customer requests, which helps keep traffic localized to a geo-region.

Retry guidelines

  • You should retry any requests that return the following status codes:

    • 408 Request Timeout
    • 500 Internal Server Error
    • 502 Bad Gateway
    • 503 Service Unavailable
    • 504 Gateway Timeout
  • Handle 404 Not Found errors by initiating a new resumable upload.

  • When performing retry requests, use truncated exponential backoff, as sketched after this list.
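
A minimal Python sketch of truncated exponential backoff around a retryable request might look like the following; the send callable stands in for any of the upload requests above:

    import random
    import time

    RETRYABLE = {408, 500, 502, 503, 504}

    def request_with_backoff(send, max_attempts=8, cap_seconds=64.0):
        """Retry a request callable with truncated exponential backoff."""
        for attempt in range(max_attempts - 1):
            resp = send()
            if resp.status_code not in RETRYABLE:
                return resp
            # Wait min(2^attempt + random jitter, cap) seconds, then retry.
            time.sleep(min(2 ** attempt + random.random(), cap_seconds))
        return send()  # final attempt; caller handles any remaining error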

Integrity checks

We recommend that you request an integrity check of the final uploaded object to be sure that it matches the source file. You can do this by calculating the MD5 digest of the source file and adding it to the Content-MD5 request header.

Checking the integrity of the uploaded file is particularly important if you are uploading a large file over a long period of time, because there is an increased likelihood of the source file being modified over the course of the upload operation.
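
As an illustration, the following Python sketch computes the base64-encoded MD5 digest that the Content-MD5 header expects; the file name is a placeholder:

    import base64
    import hashlib

    def content_md5(path):
        """Return the base64-encoded MD5 digest of a file for Content-MD5."""
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(1024 * 1024), b""):
                digest.update(block)
        return base64.b64encode(digest.digest()).decode("ascii")

    # For example: headers["Content-MD5"] = content_md5("local-file.jpg")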

What's next