Upload and download storage objects in projects

This page shows you how to upload and download objects to and from Google Distributed Cloud (GDC) air-gapped storage buckets.

Before you begin

A project namespace manages bucket resources in the org admin cluster. You must have a project to work with buckets and objects.

You must also have the appropriate bucket permissions to perform the following operations. See Grant bucket access.

Object naming guidelines

Use the following guidelines to name objects:

  • Use UTF-8 characters when naming objects.
  • Refrain from including any personally identifiable information (PII).
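
For example, the following object name uses only UTF-8 characters and contains no PII; the bucket and path are hypothetical:

s3://my-bucket/reports/2024-q1/sales-summary.txt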

Upload objects to storage buckets

Console

  1. In the navigation menu, click Object Storage.
  2. Click the name of the bucket you want to upload the object to.
  3. Optional: If you want to create a folder to store your object, click Create folder > enter a folder name > click Create.
  4. Click Upload file directly, or navigate into the folder you just created and then click Upload file.
  5. Select the desired file and click Open.
  6. Wait for the confirmation message that the upload was successful.

CLI

To upload an object, run the following command:

gdcloud storage cp LOCAL_PATH s3://REMOTE_PATH

To copy or move an object between remote paths, run the following commands:

gdcloud storage cp s3://REMOTE_SOURCE_PATH s3://REMOTE_MOVE_DESTINATION_PATH
gdcloud storage mv s3://REMOTE_SOURCE_PATH s3://REMOTE_MOVE_DESTINATION_PATH
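
For example, the following commands upload a local file and then copy it to a second bucket; all bucket and file names are hypothetical:

gdcloud storage cp ./report.txt s3://my-bucket/reports/report.txt
gdcloud storage cp s3://my-bucket/reports/report.txt s3://my-backup/reports/report.txt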

The following command uploads all text files from the local directory to a bucket:

gdcloud storage cp *.txt s3://BUCKET

The following command uploads multiple files from the local directory to a bucket:

gdcloud storage cp abc1.txt abc2.txt s3://BUCKET

To upload a folder to a bucket, use the --recursive option to copy an entire directory tree. The following command uploads the directory tree dir:

gdcloud storage cp dir s3://BUCKET --recursive
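
For example, assuming a local directory photos that contains vacation1.jpg and vacation2.jpg, the following command recreates the tree under the destination, typically as s3://my-bucket/photos/vacation1.jpg and s3://my-bucket/photos/vacation2.jpg; all names are hypothetical:

gdcloud storage cp photos s3://my-bucket --recursive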

Use multipart uploads for large objects. Multipart uploads happen automatically when the file you upload is larger than 15 MB: the file splits into 15 MB parts, with the last part being smaller. Each part uploads separately and the parts are reassembled at the destination when the transfer completes.

If the upload of one part fails, you can restart that part without affecting any of the parts already uploaded.

There are two options related to multipart uploads:

  • --disable-multipart: disables multipart uploads for all files.
  • --multipart-chunk-size-mb=SIZE: sets the size of each chunk of a multipart upload.

Files bigger than SIZE automatically upload as multithreaded multipart uploads; smaller files upload using the traditional method. SIZE is in megabytes. The default chunk size is 15 MB. The minimum allowed chunk size is 5 MB, and the maximum is 5 GB.
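
For example, the following commands raise the chunk size to 100 MB for one upload and disable multipart uploads entirely for another; the file and bucket names are hypothetical:

gdcloud storage cp large-backup.tar s3://my-bucket --multipart-chunk-size-mb=100
gdcloud storage cp medium-file.bin s3://my-bucket --disable-multipart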

Download objects from storage buckets

Console

  1. In the navigation menu, click Object Storage.
  2. Click the name of the bucket containing the objects.
  3. Select the checkbox next to the name of the object to download.
  4. Click Download.

CLI

To download an object from a bucket, run the following command:

gdcloud storage cp s3://BUCKET/OBJECT LOCAL_FILE_TO_SAVE

To download all text files from a bucket to your current directory:

gdcloud storage cp s3://BUCKET/*.txt .

To download the text file abc.txt from a bucket to your current directory:

gdcloud storage cp s3://BUCKET/abc.txt .

To download an older version of the file, list all versions of the file first:

gdcloud storage ls s3://BUCKET/abc.txt --all-versions

Example output:

s3://my-bucket/abc.txt#OEQxNTk4MUEtMzEzRS0xMUVFLTk2N0UtQkM4MjAwQkJENjND
s3://my-bucket/abc.txt#ODgzNEYzQ0MtMzEzRS0xMUVFLTk2NEItMjI1MTAwQkJENjND
s3://my-bucket/abc.txt#ODNCNDEzNzgtMzEzRS0xMUVFLTlDOUMtQzRDOTAwQjg3RTg3

Then, download a specific version of the text file abc.txt from the bucket to your current directory:

gdcloud storage cp s3://BUCKET/abc.txt#OEQxNTk4MUEtMzEzRS0xMUVFLTk2N0UtQkM4MjAwQkJENjND .

Use a custom AEADKey

For greater customization, you can create your own AEADKey and use it directly to encrypt objects in your bucket. This gives you full control over the encryption key, bypassing the default. Follow Create a key to create a new AEADKey, and make sure it's in the same namespace as the bucket you intend to use. Then, whenever you send a request, configure the headers with x-amz-server-side-encryption: SSE-KMS and x-amz-server-side-encryption-aws-kms-key-id: NAMESPACE_NAME/AEADKey_NAME.
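
For example, a raw PUT object request that uses a custom key might carry headers like the following; the host, bucket, object, namespace, and key names are all hypothetical:

PUT /my-bucket/abc.txt HTTP/1.1
Host: objectstorage.example.com
x-amz-server-side-encryption: SSE-KMS
x-amz-server-side-encryption-aws-kms-key-id: my-namespace/my-aead-key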