Buckets

Create / interact with Google Cloud Storage buckets.

class google.cloud.storage.bucket.Bucket(client, name=None, user_project=None)

Bases: google.cloud.storage._helpers._PropertyMixin

A class representing a Bucket on Cloud Storage.

  • Parameters

    • client (google.cloud.storage.client.Client) – A client which holds credentials and project configuration for the bucket (which requires a project).

    • name (str) – The name of the bucket. Bucket names must start and end with a number or letter.

    • user_project (str) – (Optional) the project ID to be billed for API requests made via this instance.
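
Example

Instantiate a bucket handle via a client (a minimal sketch; the bucket name is a placeholder).

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = storage.Bucket(client, name="my-bucket")  # no API request is made here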

property name

Get the bucket’s name.

STORAGE_CLASSES = ('STANDARD', 'NEARLINE', 'COLDLINE', 'ARCHIVE', 'MULTI_REGIONAL', 'REGIONAL', 'DURABLE_REDUCED_AVAILABILITY')

Allowed values for storage_class.

Default value is STANDARD_STORAGE_CLASS.

See https://cloud.google.com/storage/docs/json_api/v1/buckets#storageClass https://cloud.google.com/storage/docs/storage-classes

property acl

Create our ACL on demand.

add_lifecycle_delete_rule(**kw)

Add a “delete” rule to the lifecycle rules configured for this bucket.

See https://cloud.google.com/storage/docs/lifecycle and

https://cloud.google.com/storage/docs/json_api/v1/buckets
bucket = client.get_bucket("my-bucket")
bucket.add_lifecycle_delete_rule(age=2)
bucket.patch()
  • Params kw

    arguments passed to LifecycleRuleConditions.

add_lifecycle_set_storage_class_rule(storage_class, **kw)

Add a “set storage class” rule to the lifecycle rules configured for this bucket.

See https://cloud.google.com/storage/docs/lifecycle and

https://cloud.google.com/storage/docs/json_api/v1/buckets
bucket = client.get_bucket("my-bucket")
bucket.add_lifecycle_set_storage_class_rule(
    "COLD_LINE", matches_storage_class=["NEARLINE"]
)
bucket.patch()
  • Parameters

    storage_class (str, one of STORAGE_CLASSES.) – new storage class to assign to matching items.

  • Params kw

    arguments passed to LifecycleRuleConditions.

blob(blob_name, chunk_size=None, encryption_key=None, kms_key_name=None, generation=None)

Factory constructor for blob object.

NOTE: This will not make an HTTP request; it simply instantiates a blob object owned by this bucket.

  • Parameters

    • blob_name (str) – The name of the blob to be instantiated.

    • chunk_size (int) – The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.

    • encryption_key (bytes) – (Optional) 32 byte encryption key for customer-supplied encryption.

    • kms_key_name (str) – (Optional) Resource name of KMS key used to encrypt blob’s content.

    • generation (long) – (Optional) If present, selects a specific revision of this object.

  • Return type

    google.cloud.storage.blob.Blob

  • Returns

    The blob object created.
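
Example

Use the factory to get a local blob handle and then upload to it (a sketch; bucket and blob names are placeholders — only the upload call touches the API).

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.bucket("my-bucket")
>>> blob = bucket.blob("path/to/file.txt")  # local handle only, no request
>>> blob.upload_from_string("hello world")  # this call makes the API request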

clear_lifecyle_rules()

Clear the lifecycle rules configured for this bucket.

See https://cloud.google.com/storage/docs/lifecycle and

https://cloud.google.com/storage/docs/json_api/v1/buckets
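
For example (a sketch; the bucket name is a placeholder):

bucket = client.get_bucket("my-bucket")
bucket.clear_lifecyle_rules()
bucket.patch()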

property client

The client bound to this bucket.

configure_website(main_page_suffix=None, not_found_page=None)

Configure website-related properties.

See https://cloud.google.com/storage/docs/hosting-static-website

NOTE: This (apparently) only works if your bucket name is a domain name (and to do that, you need to get approved somehow…).

If you want this bucket to host a website, just provide the name of an index page and a page to use when a blob isn’t found:

client = storage.Client()
bucket = client.get_bucket(bucket_name)
bucket.configure_website("index.html", "404.html")

You probably should also make the whole bucket public:

bucket.make_public(recursive=True, future=True)

This says: “Make the bucket public, and all the stuff already in the bucket, and anything else I add to the bucket. Just make it all public.”

  • Parameters

    • main_page_suffix (str) – The page to use as the main page of a directory. Typically something like index.html.

    • not_found_page (str) – The file to use when a page isn’t found.

copy_blob(blob, destination_bucket, new_name=None, client=None, preserve_acl=True, source_generation=None, timeout=60, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, if_source_generation_match=None, if_source_generation_not_match=None, if_source_metageneration_match=None, if_source_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Copy the given blob to the given bucket, optionally with a new name.

If user_project is set, bills the API request to that project.

  • Parameters

    • blob (google.cloud.storage.blob.Blob) – The blob to be copied.

    • destination_bucket (google.cloud.storage.bucket.Bucket) – The bucket into which the blob should be copied.

    • new_name (str) – (Optional) The new name for the copied file.

    • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • preserve_acl (bool) – DEPRECATED. This argument is not functional! (Optional) Copies ACL from old blob to new blob. Default: True.

    • source_generation (long) – (Optional) The generation of the blob to be copied.

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • if_generation_match (long) – (Optional) Makes the operation conditional on whether the destination object’s current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.

    • if_generation_not_match (long) – (Optional) Makes the operation conditional on whether the destination object’s current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.

    • if_metageneration_match (long) – (Optional) Makes the operation conditional on whether the destination object’s current metageneration matches the given value.

    • if_metageneration_not_match (long) – (Optional) Makes the operation conditional on whether the destination object’s current metageneration does not match the given value.

    • if_source_generation_match (long) – (Optional) Makes the operation conditional on whether the source object’s generation matches the given value.

    • if_source_generation_not_match (long) – (Optional) Makes the operation conditional on whether the source object’s generation does not match the given value.

    • if_source_metageneration_match (long) – (Optional) Makes the operation conditional on whether the source object’s current metageneration matches the given value.

    • if_source_metageneration_not_match (long) – (Optional) Makes the operation conditional on whether the source object’s current metageneration does not match the given value.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Return type

    google.cloud.storage.blob.Blob

  • Returns

    The new Blob.

Example

Copy a blob including ACL.

>>> from google.cloud import storage
>>> client = storage.Client(project="project")
>>> bucket = client.bucket("bucket")
>>> dst_bucket = client.bucket("destination-bucket")
>>> blob = bucket.blob("file.ext")
>>> new_blob = bucket.copy_blob(blob, dst_bucket)
>>> new_blob.acl.save(blob.acl)

property cors

Retrieve or set CORS policies configured for this bucket.

See http://www.w3.org/TR/cors/ and

https://cloud.google.com/storage/docs/json_api/v1/buckets

NOTE: The getter for this property returns a list which contains copies of the bucket’s CORS policy mappings. Mutating the list or one of its dicts has no effect unless you then re-assign the list via the setter. E.g.:

>>> policies = bucket.cors
>>> policies.append({'origin': '/foo', ...})
>>> policies[1]['maxAgeSeconds'] = 3600
>>> del policies[0]
>>> bucket.cors = policies
>>> bucket.update()
  • Setter

    Set CORS policies for this bucket.

  • Getter

    Gets the CORS policies for this bucket.

  • Return type

    list of dictionaries

  • Returns

    A sequence of mappings describing each CORS policy.
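
Example

Assign a single CORS entry and persist it (a sketch; the origin, method, and header values are placeholders).

>>> bucket.cors = [
...     {
...         "origin": ["https://example.com"],
...         "method": ["GET"],
...         "responseHeader": ["Content-Type"],
...         "maxAgeSeconds": 3600,
...     }
... ]
>>> bucket.patch()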

create(client=None, project=None, location=None, predefined_acl=None, predefined_default_object_acl=None, timeout=60, retry=<google.api_core.retry.Retry object>)

DEPRECATED. Creates the current bucket.

NOTE: Direct use of this method is deprecated. Use Client.create_bucket() instead.

If the bucket already exists, will raise google.cloud.exceptions.Conflict.

This implements “storage.buckets.insert”.

If user_project is set, bills the API request to that project.

  • Parameters

    • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • project (str) – (Optional) The project under which the bucket is to be created. If not passed, uses the project set on the client.

    • location (str) – (Optional) The location of the bucket. If not passed, the default location, US, will be used. See https://cloud.google.com/storage/docs/bucket-locations

    • predefined_acl (str) – (Optional) Name of predefined ACL to apply to bucket. See: https://cloud.google.com/storage/docs/access-control/lists#predefined-acl

    • predefined_default_object_acl (str) – (Optional) Name of predefined ACL to apply to bucket’s objects. See: https://cloud.google.com/storage/docs/access-control/lists#predefined-acl

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Raises

    ValueError – if project is None and client’s project is also None.
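
Example

Since direct use of this method is deprecated, the equivalent call through the client looks like this (a sketch; the bucket name and location are placeholders).

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.create_bucket("my-new-bucket", location="US")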

property default_event_based_hold

Are uploaded objects automatically placed under an event-based hold?

If True, uploaded objects will be placed under an event-based hold, to be released at a future time. When released, an object begins the retention period determined by the retention policy set on its bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

If the property is not set locally, returns None.

  • Return type

    bool or NoneType
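
Example

Enable the default event-based hold and persist the change (a sketch).

>>> bucket.default_event_based_hold = True
>>> bucket.patch()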

property default_kms_key_name

Retrieve / set default KMS encryption key for objects in the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

  • Setter

    Set default KMS encryption key for items in this bucket.

  • Getter

    Get default KMS encryption key for items in this bucket.

  • Return type

    str

  • Returns

    Default KMS encryption key, or None if not set.
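
Example

Set a default KMS key and persist it (a sketch; the key resource name is a placeholder).

>>> bucket.default_kms_key_name = (
...     "projects/my-project/locations/us/keyRings/my-keyring/cryptoKeys/my-key"
... )
>>> bucket.patch()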

property default_object_acl

Create our defaultObjectACL on demand.

delete(force=False, client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.api_core.retry.Retry object>)

Delete this bucket.

The bucket must be empty in order to submit a delete request. If force=True is passed, this will first attempt to delete all the objects / blobs in the bucket (i.e. try to empty the bucket).

If the bucket doesn’t exist, this will raise google.cloud.exceptions.NotFound. If the bucket is not empty (and force=False), will raise google.cloud.exceptions.Conflict.

If force=True and the bucket contains more than 256 objects / blobs this will cowardly refuse to delete the objects (or the bucket). This is to prevent accidental bucket deletion and to prevent extremely long runtime of this method.

If user_project is set, bills the API request to that project.

  • Parameters

    • force (bool) – If True, empties the bucket’s objects then deletes it.

    • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response on each request.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • if_metageneration_match (long) – (Optional) Make the operation conditional on whether the blob’s current metageneration matches the given value.

    • if_metageneration_not_match (long) – (Optional) Make the operation conditional on whether the blob’s current metageneration does not match the given value.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Raises

    ValueError – if force is True and the bucket contains more than 256 objects / blobs.
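
Example

Empty and delete a bucket (a sketch; the bucket name is a placeholder).

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket("my-bucket")
>>> bucket.delete(force=True)  # also deletes the blobs, up to the 256-object limit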

delete_blob(blob_name, client=None, generation=None, timeout=60, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Deletes a blob from the current bucket.

If the blob isn’t found (backend 404), raises a google.cloud.exceptions.NotFound.

For example:

from google.cloud.exceptions import NotFound

client = storage.Client()
bucket = client.get_bucket("my-bucket")
blobs = list(client.list_blobs(bucket))
assert len(blobs) > 0
# [<Blob: my-bucket, my-file.txt>]
bucket.delete_blob("my-file.txt")
try:
    bucket.delete_blob("doesnt-exist")
except NotFound:
    pass

If user_project is set, bills the API request to that project.

  • Parameters

    • blob_name (str) – A blob name to delete.

    • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • generation (long) – (Optional) If present, permanently deletes a specific revision of this object.

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • if_generation_match (long) – (Optional) Make the operation conditional on whether the blob’s current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the blob.

    • if_generation_not_match (long) – (Optional) Make the operation conditional on whether the blob’s current generation does not match the given value. If no live blob exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the blob.

    • if_metageneration_match (long) – (Optional) Make the operation conditional on whether the blob’s current metageneration matches the given value.

    • if_metageneration_not_match (long) – (Optional) Make the operation conditional on whether the blob’s current metageneration does not match the given value.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Raises

    google.cloud.exceptions.NotFound – to suppress the exception, call delete_blobs, passing a no-op on_error callback, e.g.:

bucket.delete_blobs([blob], on_error=lambda blob: None)

delete_blobs(blobs, on_error=None, client=None, timeout=60, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Deletes a list of blobs from the current bucket.

Uses delete_blob() to delete each individual blob.

If user_project is set, bills the API request to that project.

  • Parameters

    • blobs (list) – A list of Blob objects or blob names to delete.

    • on_error (callable) – (Optional) Takes a single argument: blob. Called once for each blob for which NotFound is raised; if not passed, the exception is propagated.

    • client (Client) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response. The timeout applies to each individual blob delete request.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • if_generation_match (list of long) – (Optional) Make the operation conditional on whether the blob’s current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the blob. The list must match blobs item-to-item.

    • if_generation_not_match (list of long) – (Optional) Make the operation conditional on whether the blob’s current generation does not match the given value. If no live blob exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the blob. The list must match blobs item-to-item.

    • if_metageneration_match (list of long) – (Optional) Make the operation conditional on whether the blob’s current metageneration matches the given value. The list must match blobs item-to-item.

    • if_metageneration_not_match (list of long) – (Optional) Make the operation conditional on whether the blob’s current metageneration does not match the given value. The list must match blobs item-to-item.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Raises

    NotFound (if on_error is not passed).

Example

Delete blobs using generation match preconditions.

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.bucket("bucket-name")
>>> blobs = [bucket.blob("blob-name-1"), bucket.blob("blob-name-2")]
>>> if_generation_match = [None] * len(blobs)
>>> if_generation_match[0] = 123  # precondition for "blob-name-1"
>>> bucket.delete_blobs(blobs, if_generation_match=if_generation_match)

disable_logging()

Disable access logging for this bucket.

See https://cloud.google.com/storage/docs/access-logs#disabling

disable_website()

Disable the website configuration for this bucket.

This is really just a shortcut for setting the website-related attributes to None.

enable_logging(bucket_name, object_prefix='')

Enable access logging for this bucket.

See https://cloud.google.com/storage/docs/access-logs

  • Parameters

    • bucket_name (str) – name of bucket in which to store access logs

    • object_prefix (str) – prefix for access log filenames
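
Example

Enable access logging and persist the change (a sketch; the log bucket name and prefix are placeholders).

>>> bucket.enable_logging("my-log-bucket", object_prefix="access-logs/")
>>> bucket.patch()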

property etag

Retrieve the ETag for the bucket.

See https://tools.ietf.org/html/rfc2616#section-3.11 and

https://cloud.google.com/storage/docs/json_api/v1/buckets
  • Return type

    str or NoneType

  • Returns

    The bucket etag or None if the bucket’s resource has not been loaded from the server.

exists(client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.api_core.retry.Retry object>)

Determines whether or not this bucket exists.

If user_project is set, bills the API request to that project.

  • Parameters

    • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • if_metageneration_match (long) – (Optional) Make the operation conditional on whether the blob’s current metageneration matches the given value.

    • if_metageneration_not_match (long) – (Optional) Make the operation conditional on whether the blob’s current metageneration does not match the given value.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Return type

    bool

  • Returns

    True if the bucket exists in Cloud Storage.

classmethod from_string(uri, client=None)

Construct a bucket object from a gs:// URI.

  • Parameters

    • uri (str) – The bucket URI from which to construct the bucket object, e.g. gs://bucket.

    • client (Client or NoneType) – (Optional) The client to use.

  • Return type

    google.cloud.storage.bucket.Bucket

  • Returns

    The bucket object created.

Example

Construct a bucket object from a URI.

>>> from google.cloud import storage
>>> from google.cloud.storage.bucket import Bucket
>>> client = storage.Client()
>>> bucket = Bucket.from_string("gs://bucket", client)

generate_signed_url(expiration=None, api_access_endpoint='https://storage.googleapis.com', method='GET', headers=None, query_parameters=None, client=None, credentials=None, version=None, virtual_hosted_style=False, bucket_bound_hostname=None, scheme='http')

Generates a signed URL for this bucket.

NOTE: If you are on Google Compute Engine, you can’t generate a signed URL using a GCE service account. Follow Issue 50 for updates on this. If you’d like to be able to generate a signed URL from GCE, you can use a standard service account from a JSON file rather than a GCE service account.

If you have a bucket that you want to allow access to for a set amount of time, you can use this method to generate a URL that is only valid within a certain time period.

If bucket_bound_hostname is set, it is used in place of api_access_endpoint; in that case, https works only when serving through a CDN.

Example

Generates a signed URL for this bucket using bucket_bound_hostname and scheme.

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket('my-bucket-name')
>>> url = bucket.generate_signed_url(
...     expiration='url-expiration-time', bucket_bound_hostname='mydomain.tld',
...     version='v4')
>>> url = bucket.generate_signed_url(
...     expiration='url-expiration-time', bucket_bound_hostname='mydomain.tld',
...     version='v4', scheme='https')  # If serving through a CDN

This is particularly useful if you don’t want publicly accessible buckets, but don’t want to require users to explicitly log in.

generate_upload_policy(conditions, expiration=None, client=None)

Create a signed upload policy for uploading objects.

This method generates and signs a policy document. You can use policy documents to allow visitors to a website to upload files to Google Cloud Storage without giving them direct write access.

For example:

bucket = client.bucket("my-bucket")
conditions = [["starts-with", "$key", ""], {"acl": "public-read"}]

policy = bucket.generate_upload_policy(conditions)

# Generate an upload form using the form fields.
policy_fields = "".join(
    '<input type="hidden" name="{key}" value="{value}">'.format(
        key=key, value=value
    )
    for key, value in policy.items()
)

upload_form = (
    '<form action="http://{bucket_name}.storage.googleapis.com"'
    '   method="post" enctype="multipart/form-data">'
    '<input type="text" name="key" value="my-test-key">'
    '<input type="hidden" name="bucket" value="{bucket_name}">'
    '<input type="hidden" name="acl" value="public-read">'
    '<input name="file" type="file">'
    '<input type="submit" value="Upload">'
    "{policy_fields}"
    "</form>"
).format(bucket_name=bucket.name, policy_fields=policy_fields)

print(upload_form)
  • Parameters

    • expiration (datetime) – (Optional) Expiration in UTC. If not specified, the policy will expire in 1 hour.

    • conditions (list) – A list of conditions as described in the policy documents documentation.

    • client (Client) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

  • Return type

    dict

  • Returns

    A dictionary of (form field name, form field value) of form fields that should be added to your HTML upload form in order to attach the signature.

get_blob(blob_name, client=None, encryption_key=None, generation=None, timeout=60, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.api_core.retry.Retry object>, **kwargs)

Get a blob object by name.

This will return None if the blob doesn’t exist:

client = storage.Client()
bucket = client.get_bucket("my-bucket")
assert isinstance(bucket.get_blob("/path/to/blob.txt"), Blob)
# <Blob: my-bucket, /path/to/blob.txt>
assert not bucket.get_blob("/does-not-exist.txt")
# None

If user_project is set, bills the API request to that project.

  • Parameters

    • blob_name (str) – The name of the blob to retrieve.

    • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • encryption_key (bytes) – (Optional) 32 byte encryption key for customer-supplied encryption. See https://cloud.google.com/storage/docs/encryption#customer-supplied.

    • generation (long) – (Optional) If present, selects a specific revision of this object.

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • if_generation_match (long) – (Optional) Make the operation conditional on whether the blob’s current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the blob.

    • if_generation_not_match (long) – (Optional) Make the operation conditional on whether the blob’s current generation does not match the given value. If no live blob exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the blob.

    • if_metageneration_match (long) – (Optional) Make the operation conditional on whether the blob’s current metageneration matches the given value.

    • if_metageneration_not_match (long) – (Optional) Make the operation conditional on whether the blob’s current metageneration does not match the given value.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

    • kwargs – Keyword arguments to pass to the Blob constructor.

  • Return type

    google.cloud.storage.blob.Blob or None

  • Returns

    The blob object if it exists, otherwise None.

get_iam_policy(client=None, requested_policy_version=None, timeout=60, retry=<google.api_core.retry.Retry object>)

Retrieve the IAM policy for the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets/getIamPolicy

If user_project is set, bills the API request to that project.

  • Parameters

    • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • requested_policy_version (int or NoneType) – (Optional) The version of IAM policies to request. If a policy with a condition is requested without setting this, the server will return an error. This must be set to a value of 3 to retrieve IAM policies containing conditions. This is to prevent client code that isn’t aware of IAM conditions from interpreting and modifying policies incorrectly. The service might return a policy with version lower than the one that was requested, based on the feature syntax in the policy fetched.

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Return type

    google.api_core.iam.Policy

  • Returns

    the policy instance, based on the resource returned from the getIamPolicy API request.

Example:

from google.cloud.storage.iam import STORAGE_OBJECT_VIEWER_ROLE

policy = bucket.get_iam_policy(requested_policy_version=3)

policy.version = 3

# Add a binding to the policy via its bindings property
policy.bindings.append({
    "role": STORAGE_OBJECT_VIEWER_ROLE,
    "members": {"serviceAccount:account@project.iam.gserviceaccount.com", ...},
    # Optional:
    "condition": {
        "title": "prefix",
        "description": "Objects matching prefix",
        "expression": 'resource.name.startsWith("projects/project-name/buckets/bucket-name/objects/prefix")',
    },
})

bucket.set_iam_policy(policy)

get_logging()

Return info about access logging for this bucket.

See https://cloud.google.com/storage/docs/access-logs#status

  • Return type

    dict or None

  • Returns

    a dict with keys logBucket and logObjectPrefix (if logging is enabled), or None (if not).
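
Example

Read back the logging configuration (a sketch).

>>> info = bucket.get_logging()
>>> if info is not None:
...     print(info["logBucket"], info["logObjectPrefix"])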

get_notification(notification_id, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)

Get Pub / Sub notification for this bucket.

See: https://cloud.google.com/storage/docs/json_api/v1/notifications/get

If user_project is set, bills the API request to that project.

  • Parameters

    • notification_id (str) – The notification id to retrieve the notification configuration.

    • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Return type

    BucketNotification

  • Returns

    notification instance.

Example

Get notification using notification id.

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket('my-bucket-name')  # API request.
>>> notification = bucket.get_notification(notification_id='id')  # API request.

property iam_configuration

Retrieve IAM configuration for this bucket.

  • Return type

    IAMConfiguration

  • Returns

    an instance for managing the bucket’s IAM configuration.

property id

Retrieve the ID for the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

  • Return type

    str or NoneType

  • Returns

    The ID of the bucket or None if the bucket’s resource has not been loaded from the server.

property labels

Retrieve or set labels assigned to this bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets#labels

NOTE: The getter for this property returns a dict which is a copy of the bucket’s labels. Mutating that dict has no effect unless you then re-assign the dict via the setter. E.g.:

>>> labels = bucket.labels
>>> labels['new_key'] = 'some-label'
>>> del labels['old_key']
>>> bucket.labels = labels
>>> bucket.update()
  • Setter

    Set labels for this bucket.

  • Getter

    Gets the labels for this bucket.

  • Return type

    dict

  • Returns

    Name-value pairs (string->string) labelling the bucket.

property lifecycle_rules

Retrieve or set lifecycle rules configured for this bucket.

See https://cloud.google.com/storage/docs/lifecycle and

https://cloud.google.com/storage/docs/json_api/v1/buckets

NOTE: The getter for this property returns copies of the bucket’s lifecycle rule mappings. Mutating those copies has no effect unless you then re-assign the full set of rules via the setter. E.g.:

>>> rules = list(bucket.lifecycle_rules)
>>> rules.append({'action': {'type': 'Delete'}, 'condition': {'age': 365}})
>>> rules[-1]['condition']['age'] = 180
>>> del rules[0]
>>> bucket.lifecycle_rules = rules
>>> bucket.update()
  • Setter

    Set lifecycle rules for this bucket.

  • Getter

    Gets the lifecycle rules for this bucket.

  • Return type

    generator(dict)

  • Returns

    A sequence of mappings describing each lifecycle rule.

list_blobs(max_results=None, page_token=None, prefix=None, delimiter=None, start_offset=None, end_offset=None, include_trailing_delimiter=None, versions=None, projection='noAcl', fields=None, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)

DEPRECATED. Return an iterator used to find blobs in the bucket.

NOTE: Direct use of this method is deprecated. Use Client.list_blobs instead.

If user_project is set, bills the API request to that project.

  • Parameters

    • max_results (int) – (Optional) The maximum number of blobs to return.

    • page_token (str) – (Optional) If present, return the next batch of blobs, using the value, which must correspond to the nextPageToken value returned in the previous response. Deprecated: use the pages property of the returned iterator instead of manually passing the token.

    • prefix (str) – (Optional) Prefix used to filter blobs.

    • delimiter (str) – (Optional) Delimiter, used with prefix to emulate hierarchy.

    • start_offset (str) – (Optional) Filter results to objects whose names are lexicographically equal to or after startOffset. If endOffset is also set, the objects listed will have names between startOffset (inclusive) and endOffset (exclusive).

    • end_offset (str) – (Optional) Filter results to objects whose names are lexicographically before endOffset. If startOffset is also set, the objects listed will have names between startOffset (inclusive) and endOffset (exclusive).

    • include_trailing_delimiter (boolean) – (Optional) If true, objects that end in exactly one instance of delimiter will have their metadata included in items in addition to prefixes.

    • versions (bool) – (Optional) Whether object versions should be returned as separate blobs.

    • projection (str) – (Optional) If used, must be ‘full’ or ‘noAcl’. Defaults to 'noAcl'. Specifies the set of properties to return.

    • fields (str) – (Optional) Selector specifying which fields to include in a partial response. Must be a list of fields. For example to get a partial response with just the next page token and the name and language of each blob returned: 'items(name,contentLanguage),nextPageToken'. See: https://cloud.google.com/storage/docs/json_api/v1/parameters#fields

    • client (Client) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Return type

    Iterator

  • Returns

    Iterator of all Blob in this bucket matching the arguments.

Example

List blobs in the bucket with user_project.

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = storage.Bucket(client, "my-bucket-name", user_project="my-project")
>>> all_blobs = list(client.list_blobs(bucket))

list_notifications(client=None, timeout=60, retry=<google.api_core.retry.Retry object>)

List Pub / Sub notifications for this bucket.

See: https://cloud.google.com/storage/docs/json_api/v1/notifications/list

If user_project is set, bills the API request to that project.

  • Parameters

    • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Return type

    list of BucketNotification

  • Returns

    notification instances
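
Example

Iterate over the bucket’s notifications (a sketch).

>>> for notification in bucket.list_notifications():
...     print(notification.notification_id, notification.topic_name)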

property location

Retrieve location configured for this bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets and https://cloud.google.com/storage/docs/bucket-locations

Returns None if the property has not been set before creation, or if the bucket’s resource has not been loaded from the server.

  • Return type

    str or NoneType

property location_type

Retrieve or set the location type for the bucket.

See https://cloud.google.com/storage/docs/storage-classes

lock_retention_policy(client=None, timeout=60, retry=<google.api_core.retry.Retry object>)

Lock the bucket’s retention policy.

  • Parameters

    • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the blob’s bucket.

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Raises

    ValueError – if the bucket has no metageneration (i.e., new or never reloaded); if the bucket has no retention policy assigned; if the bucket’s retention policy is already locked.
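
Example

Lock a previously configured retention policy (a sketch; the retention period is a placeholder, and locking is irreversible).

>>> bucket.retention_period = 60 * 60 * 24  # one day, in seconds
>>> bucket.patch()
>>> bucket.reload()  # load the current metageneration
>>> bucket.lock_retention_policy()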

make_private(recursive=False, future=False, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)

Update bucket’s ACL, revoking read access for anonymous users.

  • Parameters

    • recursive (bool) – If True, this will make all blobs inside the bucket private as well.

    • future (bool) – If True, this will make all objects created in the future private as well.

    • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response. The timeout applies to each underlying request.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Raises

    ValueError – If recursive is True, and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs() and call make_private() for each blob.

make_public(recursive=False, future=False, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)

Update bucket’s ACL, granting read access to anonymous users.

  • Parameters

    • recursive (bool) – If True, this will make all blobs inside the bucket public as well.

    • future (bool) – If True, this will make all objects created in the future public as well.

    • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response. The timeout applies to each underlying request.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Raises

    ValueError – If recursive is True, and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs() and call make_public() for each blob.

property metageneration

Retrieve the metageneration for the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

  • Return type

    int or NoneType

  • Returns

    The metageneration of the bucket or None if the bucket’s resource has not been loaded from the server.

notification(topic_name=None, topic_project=None, custom_attributes=None, event_types=None, blob_name_prefix=None, payload_format='NONE', notification_id=None)

Factory: create a notification resource for the bucket.

See: BucketNotification for parameters.
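
Example

Create a Pub / Sub notification from the factory (a sketch; the topic name is a placeholder — only the create() call makes an API request).

>>> notification = bucket.notification(topic_name="my-topic")
>>> notification.create()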

property owner

Retrieve info about the owner of the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

  • Return type

    dict or NoneType

  • Returns

    Mapping of owner’s role/ID. Returns None if the bucket’s resource has not been loaded from the server.

patch(client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Sends all changed properties in a PATCH request.

Updates the _properties with the response from the backend.

If user_project is set, bills the API request to that project.

  • Parameters

    • client (Client or NoneType) – the client to use. If not passed, falls back to the client stored on the current object.

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • if_metageneration_match (long) – (Optional) Make the operation conditional on whether the blob’s current metageneration matches the given value.

    • if_metageneration_not_match (long) – (Optional) Make the operation conditional on whether the blob’s current metageneration does not match the given value.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.
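
Example

Patch with a metageneration precondition (a sketch; the label values are placeholders).

>>> bucket.reload()
>>> bucket.labels = {"env": "dev"}
>>> bucket.patch(if_metageneration_match=bucket.metageneration)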

property path

The URL path to this bucket.

static path_helper(bucket_name)

Relative URL path for a bucket.

  • Parameters

    bucket_name (str) – The bucket name in the path.

  • Return type

    str

  • Returns

    The relative URL path for bucket_name.

property project_number

Retrieve the number of the project to which the bucket is assigned.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

  • Return type

    int or NoneType

  • Returns

    The project number that owns the bucket or None if the bucket’s resource has not been loaded from the server.

reload(client=None, projection='noAcl', timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.api_core.retry.Retry object>)

Reload properties from Cloud Storage.

If user_project is set, bills the API request to that project.

  • Parameters

    • client (Client or NoneType) – the client to use. If not passed, falls back to the client stored on the current object.

    • projection (str) – (Optional) If used, must be ‘full’ or ‘noAcl’. Defaults to 'noAcl'. Specifies the set of properties to return.

    • timeout (float or tuple) – (Optional) The amount of time, in seconds, to wait for the server response.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • if_metageneration_match (long) – (Optional) Make the operation conditional on whether the blob’s current metageneration matches the given value.

    • if_metageneration_not_match (long) – (Optional) Make the operation conditional on whether the blob’s current metageneration does not match the given value.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.
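
For example, a minimal sketch of refreshing a bucket’s metadata (the bucket name and timeout value are placeholders, not values required by the API):

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-bucket")  # no HTTP request yet
    # Fetch the current properties from the server; "full" also populates ACLs.
    bucket.reload(projection="full", timeout=30)
    print(bucket.storage_class, bucket.metageneration)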

rename_blob(blob, new_name, client=None, timeout=60, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, if_source_generation_match=None, if_source_generation_not_match=None, if_source_metageneration_match=None, if_source_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Rename the given blob using copy and delete operations.

If user_project is set, bills the API request to that project.

Effectively, copies blob to the same bucket with a new name, then deletes the blob.

WARNING: This method will first duplicate the data and then delete the old blob. This means that with very large objects renaming can be a temporarily costly or slow operation. If you need more control over the copy and deletion, instead use google.cloud.storage.blob.Blob.copy_to and google.cloud.storage.blob.Blob.delete directly.

  • Parameters

    • blob (google.cloud.storage.blob.Blob) – The blob to be renamed.

    • new_name (str) – The new name for this blob.

    • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • timeout (float or [tuple](https://python.readthedocs.io/en/latest/library/stdtypes.html#tuple)) – (Optional) The amount of time, in seconds, to wait for the server response. The timeout applies to each individual request.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • if_generation_match (long) – (Optional) Makes the operation conditional on whether the destination object’s current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.

    • if_generation_not_match (long) – (Optional) Makes the operation conditional on whether the destination object’s current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.

    • if_metageneration_match (long) – (Optional) Makes the operation conditional on whether the destination object’s current metageneration matches the given value.

    • if_metageneration_not_match (long) – (Optional) Makes the operation conditional on whether the destination object’s current metageneration does not match the given value.

    • if_source_generation_match (long) – (Optional) Makes the operation conditional on whether the source object’s generation matches the given value. Also used in the delete request.

    • if_source_generation_not_match (long) – (Optional) Makes the operation conditional on whether the source object’s generation does not match the given value. Also used in the delete request.

    • if_source_metageneration_match (long) – (Optional) Makes the operation conditional on whether the source object’s current metageneration matches the given value. Also used in the delete request.

    • if_source_metageneration_not_match (long) – (Optional) Makes the operation conditional on whether the source object’s current metageneration does not match the given value. Also used in the delete request.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Return type

    Blob

  • Returns

    The newly-renamed blob.
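
As a sketch (the object names are placeholders, and the source object is assumed to exist), a rename guarded so it only succeeds if nothing already lives under the new name:

    blob = bucket.blob("reports/2021-draft.txt")
    # if_generation_match=0 applies to the destination: the copy succeeds only
    # if there is no live object under the new name.
    new_blob = bucket.rename_blob(blob, "reports/2021-final.txt", if_generation_match=0)
    print(new_blob.name)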

property requester_pays()

Does the requester pay for API requests for this bucket?

See https://cloud.google.com/storage/docs/requester-pays for details.

  • Setter

    Update whether requester pays for this bucket.

  • Getter

    Query whether requester pays for this bucket.

  • Return type

    bool

  • Returns

    True if requester pays for API requests for the bucket, else False.
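
Like other writable bucket properties, the setter only changes local state; a sketch of enabling Requester Pays (the bucket name is a placeholder):

    bucket = client.get_bucket("my-bucket")
    bucket.requester_pays = True
    bucket.patch()  # persist the change to the server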

property retention_period()

Retrieve or set the retention period for items in the bucket.

  • Return type

    int or NoneType

  • Returns

    number of seconds to retain items after upload or release from event-based lock, or None if the property is not set locally.
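
The value is a number of seconds and must be persisted with patch(); a sketch setting a one-day retention period (the bucket name is a placeholder):

    bucket = client.get_bucket("my-bucket")
    bucket.retention_period = 24 * 60 * 60  # retain objects for one day
    bucket.patch()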

property retention_policy_effective_time()

Retrieve the effective time of the bucket’s retention policy.

  • Return type

    datetime.datetime or NoneType

  • Returns

    point-in time at which the bucket’s retention policy is effective, or None if the property is not set locally.

property retention_policy_locked()

Retrieve whether the bucket’s retention policy is locked.

  • Return type

    bool

  • Returns

    True if the bucket’s retention policy is locked; False if the policy is not locked or the property is not set locally.

property self_link()

Retrieve the URI for the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

  • Return type

    str or NoneType

  • Returns

    The self link for the bucket or None if the bucket’s resource has not been loaded from the server.

set_iam_policy(policy, client=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Update the IAM policy for the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets/setIamPolicy

If user_project is set, bills the API request to that project.

  • Parameters

    • policy (google.api_core.iam.Policy) – policy instance used to update bucket’s IAM policy.

    • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • timeout (float or [tuple](https://python.readthedocs.io/en/latest/library/stdtypes.html#tuple)) – (Optional) The amount of time, in seconds, to wait for the server response.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Return type

    google.api_core.iam.Policy

  • Returns

    the policy instance, based on the resource returned from the setIamPolicy API request.
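
A typical read-modify-write cycle, sketched with a placeholder member address:

    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append(
        {
            "role": "roles/storage.objectViewer",
            "members": {"user:jane@example.com"},
        }
    )
    bucket.set_iam_policy(policy)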

property storage_class()

Retrieve or set the storage class for the bucket.

See https://cloud.google.com/storage/docs/storage-classes
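
Assigning the property only changes local state until the bucket is patched; a sketch assuming a bucket object is already in hand:

    bucket.storage_class = "NEARLINE"
    bucket.patch()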

test_iam_permissions(permissions, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)

API call: test permissions

See https://cloud.google.com/storage/docs/json_api/v1/buckets/testIamPermissions

If user_project is set, bills the API request to that project.

  • Parameters

    • permissions (list of string) – the permissions to check

    • client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

    • timeout (float or [tuple](https://python.readthedocs.io/en/latest/library/stdtypes.html#tuple)) – (Optional) The amount of time, in seconds, to wait for the server response.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

  • Return type

    list of string

  • Returns

    the permissions returned by the testIamPermissions API request.
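
A sketch of checking which of a set of standard Cloud Storage permissions the caller holds on the bucket:

    wanted = ["storage.buckets.get", "storage.objects.list"]
    granted = bucket.test_iam_permissions(wanted)
    missing = set(wanted) - set(granted)
    if missing:
        print("missing permissions:", missing)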

property time_created()

Retrieve the timestamp at which the bucket was created.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

  • Return type

    datetime.datetime or NoneType

  • Returns

    Datetime object parsed from RFC3339 valid timestamp, or None if the bucket’s resource has not been loaded from the server.

update(client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Sends all properties in a PUT request.

Updates the _properties with the response from the backend.

If user_project is set, bills the API request to that project.

  • Parameters

    • client (Client or NoneType) – the client to use. If not passed, falls back to the client stored on the current object.

    • timeout (float or [tuple](https://python.readthedocs.io/en/latest/library/stdtypes.html#tuple)) – (Optional) The amount of time, in seconds, to wait for the server response.

      Can also be passed as a tuple (connect_timeout, read_timeout). See requests.Session.request() documentation for details.

    • if_metageneration_match (long) – (Optional) Make the operation conditional on whether the bucket’s current metageneration matches the given value.

    • if_metageneration_not_match (long) – (Optional) Make the operation conditional on whether the bucket’s current metageneration does not match the given value.

    • retry (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) – (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options.

      A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side-effects) but become safe to retry if a condition such as if_metageneration_match is set.

      See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.
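
Because update() sends every writable property, patch() is usually preferred for single-field changes. A sketch of a full update guarded by the bucket’s current metageneration (assumes the bucket’s metadata has already been loaded, e.g. via get_bucket or reload; the label is a placeholder):

    bucket.labels = {"env": "prod"}
    bucket.update(if_metageneration_match=bucket.metageneration)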

property user_project()

Project ID to be billed for API requests made via this bucket.

If unset, API requests are billed to the bucket owner.

A user project is required for all operations on Requester Pays buckets.

See https://cloud.google.com/storage/docs/requester-pays#requirements for details.

  • Return type

    str

property versioning_enabled()

Is versioning enabled for this bucket?

See https://cloud.google.com/storage/docs/object-versioning for details.

  • Setter

    Update whether versioning is enabled for this bucket.

  • Getter

    Query whether versioning is enabled for this bucket.

  • Return type

    bool

  • Returns

    True if enabled, else False.
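
A sketch of turning on object versioning (the bucket name is a placeholder):

    bucket = client.get_bucket("my-bucket")
    bucket.versioning_enabled = True
    bucket.patch()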

class google.cloud.storage.bucket.IAMConfiguration(bucket, uniform_bucket_level_access_enabled=None, uniform_bucket_level_access_locked_time=None, bucket_policy_only_enabled=None, bucket_policy_only_locked_time=None)

Bases: dict

Map a bucket’s IAM configuration.

  • Params bucket

    Bucket for which this instance is the policy.

  • Params uniform_bucket_level_access_enabled

    (Optional) Whether the IAM-only policy is enabled for the bucket.

  • Params uniform_bucket_level_access_locked_time

    (Optional) When the bucket’s IAM-only policy was enabled. This value should normally only be set by the back-end API.

  • Params bucket_policy_only_enabled

    Deprecated alias for uniform_bucket_level_access_enabled.

  • Params bucket_policy_only_locked_time

    Deprecated alias for uniform_bucket_level_access_locked_time.

property bucket()

Bucket for which this instance is the policy.

  • Return type

    Bucket

  • Returns

    the instance’s bucket.

property bucket_policy_only_enabled()

Deprecated alias for uniform_bucket_level_access_enabled.

  • Return type

    bool

  • Returns

    whether the bucket is configured to allow only IAM.

property bucket_policy_only_locked_time()

Deprecated alias for uniform_bucket_level_access_locked_time.

  • Return type

    Union[datetime.datetime, None]

  • Returns

    (readonly) Time after which bucket_policy_only_enabled will be frozen as true.

clear()

copy()

classmethod from_api_repr(resource, bucket)

Factory: construct instance from resource.

  • Params bucket

    Bucket for which this instance is the policy.

  • Parameters

    resource (dict) – mapping as returned from API call.

  • Return type

    IAMConfiguration

  • Returns

    Instance created from resource.

fromkeys(iterable, value=None, /)

Create a new dictionary with keys from iterable and values set to value.

get(key, default=None, /)

Return the value for key if key is in the dictionary, else default.

items()

keys()

pop(k[, d])

Remove the specified key and return the corresponding value. If the key is not found, the default is returned if given; otherwise KeyError is raised.

popitem()

Remove and return a (key, value) pair as a 2-tuple.

Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.

setdefault(key, default=None, /)

Insert key with a value of default if key is not in the dictionary.

Return the value for key if key is in the dictionary, else default.

property uniform_bucket_level_access_enabled()

If set, access checks only use bucket-level IAM policies or above.

  • Return type

    bool

  • Returns

    whether the bucket is configured to allow only IAM.
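
In practice this flag is usually toggled through the owning bucket’s iam_configuration and then patched; a sketch assuming a bucket object is already in hand:

    bucket.iam_configuration.uniform_bucket_level_access_enabled = True
    bucket.patch()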

property uniform_bucket_level_access_locked_time()

Deadline for changing uniform_bucket_level_access_enabled from true to false.

If the bucket’s uniform_bucket_level_access_enabled is true, this property is the time after which that setting becomes immutable.

If the bucket’s uniform_bucket_level_access_enabled is false, this property is None.

  • Return type

    Union[datetime.datetime, None]

  • Returns

    (readonly) Time after which uniform_bucket_level_access_enabled will be frozen as true.

update([E, ]**F)

If E is present and has a .keys() method, does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].

values()

class google.cloud.storage.bucket.LifecycleRuleConditions(age=None, created_before=None, is_live=None, matches_storage_class=None, number_of_newer_versions=None, days_since_custom_time=None, custom_time_before=None, days_since_noncurrent_time=None, noncurrent_time_before=None, _factory=False)

Bases: dict

Map a single lifecycle rule for a bucket.

See: https://cloud.google.com/storage/docs/lifecycle

  • Parameters

    • age (int) – (Optional) Apply rule action to items whose age, in days, exceeds this value.

    • created_before (datetime.date) – (Optional) Apply rule action to items created before this date.

    • is_live (bool) – (Optional) If true, apply rule action to non-versioned items, or to items with no newer versions. If false, apply rule action to versioned items with at least one newer version.

    • matches_storage_class (list(str), one or more of Bucket.STORAGE_CLASSES.) – (Optional) Apply rule action to items whose storage class matches this value.

    • number_of_newer_versions (int) – (Optional) Apply rule action to versioned items having N newer versions.

    • days_since_custom_time (int) – (Optional) Apply rule action to items whose number of days elapsed since the custom timestamp exceeds this value. This condition is relevant only for versioned objects. The value of the field must be a non-negative integer. If it is zero, the object becomes eligible for the lifecycle action as soon as its custom time is set.

    • custom_time_before (datetime.date) – (Optional) Apply rule action to items whose custom time is before this date (a date parsed from an RFC 3339 valid date, e.g. 2019-03-16). This condition is relevant only for versioned objects.

    • days_since_noncurrent_time (int) – (Optional) Apply rule action to items whose number of days elapsed since the noncurrent timestamp exceeds this value. This condition is relevant only for versioned objects. The value of the field must be a non-negative integer. If it is zero, the object version becomes eligible for the lifecycle action as soon as it becomes noncurrent.

    • noncurrent_time_before (datetime.date) – (Optional) Apply rule action to items whose noncurrent time is before this date (a date parsed from an RFC 3339 valid date, e.g. 2019-03-16). This condition is relevant only for versioned objects.

  • Raises

    ValueError – if no arguments are passed.
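
Conditions are normally created indirectly through the bucket’s add_lifecycle_* helpers, but they can also be built directly; a sketch combining an age condition with a storage-class match:

    from google.cloud.storage.bucket import LifecycleRuleConditions

    conditions = LifecycleRuleConditions(age=30, matches_storage_class=["STANDARD"])
    print(conditions.age, conditions.matches_storage_class)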

property age()

Condition’s age value.

clear()

copy()

property created_before()

Condition’s created_before value.

property custom_time_before()

Condition’s ‘custom_time_before’ value.

property days_since_custom_time()

Condition’s ‘days_since_custom_time’ value.

property days_since_noncurrent_time()

Condition’s ‘days_since_noncurrent_time’ value.

classmethod from_api_repr(resource)

Factory: construct instance from resource.

  • Parameters

    resource (dict) – mapping as returned from API call.

  • Return type

    LifecycleRuleConditions

  • Returns

    Instance created from resource.

fromkeys(iterable, value=None, /)

Create a new dictionary with keys from iterable and values set to value.

get(key, default=None, /)

Return the value for key if key is in the dictionary, else default.

property is_live()

Condition’s ‘is_live’ value.

items()

keys()

property matches_storage_class()

Condition’s ‘matches_storage_class’ value.

property noncurrent_time_before()

Condition’s ‘noncurrent_time_before’ value.

property number_of_newer_versions()

Condition’s ‘number_of_newer_versions’ value.

pop(k[, d])

Remove the specified key and return the corresponding value. If the key is not found, the default is returned if given; otherwise KeyError is raised.

popitem()

Remove and return a (key, value) pair as a 2-tuple.

Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.

setdefault(key, default=None, /)

Insert key with a value of default if key is not in the dictionary.

Return the value for key if key is in the dictionary, else default.

update([E, ]**F)

If E is present and has a .keys() method, does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].

values()

class google.cloud.storage.bucket.LifecycleRuleDelete(**kw)

Bases: dict

Map a lifecycle rule deleting matching items.

  • Params kw

    arguments passed to LifecycleRuleConditions.
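
A sketch of installing a single delete rule by assigning the bucket’s lifecycle_rules directly (equivalent to add_lifecycle_delete_rule followed by patch(); the 365-day age is a placeholder):

    from google.cloud.storage.bucket import LifecycleRuleDelete

    bucket.lifecycle_rules = [LifecycleRuleDelete(age=365)]
    bucket.patch()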

clear()

copy()

classmethod from_api_repr(resource)

Factory: construct instance from resource.

  • Parameters

    resource (dict) – mapping as returned from API call.

  • Return type

    LifecycleRuleDelete

  • Returns

    Instance created from resource.

fromkeys(iterable, value=None, /)

Create a new dictionary with keys from iterable and values set to value.

get(key, default=None, /)

Return the value for key if key is in the dictionary, else default.

items()

keys()

pop(k[, d])

Remove the specified key and return the corresponding value. If the key is not found, the default is returned if given; otherwise KeyError is raised.

popitem()

Remove and return a (key, value) pair as a 2-tuple.

Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.

setdefault(key, default=None, /)

Insert key with a value of default if key is not in the dictionary.

Return the value for key if key is in the dictionary, else default.

update([E, ]**F)

If E is present and has a .keys() method, does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].

values()

class google.cloud.storage.bucket.LifecycleRuleSetStorageClass(storage_class, **kw)

Bases: dict

Map a lifecycle rule updating storage class of matching items.

  • Parameters

    storage_class (str, one of Bucket.STORAGE_CLASSES.) – new storage class to assign to matching items.

  • Params kw

    arguments passed to LifecycleRuleConditions.
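
A sketch of appending a rule that moves month-old STANDARD objects to COLDLINE, preserving any rules already configured on the bucket:

    from google.cloud.storage.bucket import LifecycleRuleSetStorageClass

    rules = list(bucket.lifecycle_rules)
    rules.append(
        LifecycleRuleSetStorageClass("COLDLINE", age=30, matches_storage_class=["STANDARD"])
    )
    bucket.lifecycle_rules = rules
    bucket.patch()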

clear()

copy()

classmethod from_api_repr(resource)

Factory: construct instance from resource.

  • Parameters

    resource (dict) – mapping as returned from API call.

  • Return type

    LifecycleRuleSetStorageClass

  • Returns

    Instance created from resource.

fromkeys(iterable, value=None, /)

Create a new dictionary with keys from iterable and values set to value.

get(key, default=None, /)

Return the value for key if key is in the dictionary, else default.

items()

keys()

pop(k[, d])

Remove the specified key and return the corresponding value. If the key is not found, the default is returned if given; otherwise KeyError is raised.

popitem()

Remove and return a (key, value) pair as a 2-tuple.

Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.

setdefault(key, default=None, /)

Insert key with a value of default if key is not in the dictionary.

Return the value for key if key is in the dictionary, else default.

update([E, ]**F)

If E is present and has a .keys() method, does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].

values()