Class Bucket (2.1.0)

Bucket(client, name=None, user_project=None)

A class representing a Bucket on Cloud Storage.

Parameters

NameDescription
client Client

A client which holds credentials and project configuration for the bucket (which requires a project).

name str

The name of the bucket. Bucket names must start and end with a number or letter.

user_project str

(Optional) the project ID to be billed for API requests made via this instance.
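
An illustrative sketch of constructing a bucket handle (not part of the upstream reference; "my-project" and "my-bucket" are placeholders). Instantiating a Bucket makes no API request:

>>> from google.cloud import storage
>>> client = storage.Client(project="my-project")
>>> bucket = storage.Bucket(client, name="my-bucket")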

Properties

acl

Create our ACL on demand.

client

The client bound to this bucket.

cors

Retrieve or set CORS policies configured for this bucket.

See http://www.w3.org/TR/cors/ and https://cloud.google.com/storage/docs/json_api/v1/buckets

Returns
list of dictionaries: A sequence of mappings describing each CORS policy.
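
An illustrative sketch of assigning a CORS policy (placeholder bucket name and values; the new policy is only sent to the server by a later patch() or update()):

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket("my-bucket")
>>> bucket.cors = [{
...     "origin": ["https://example.com"],
...     "method": ["GET"],
...     "responseHeader": ["Content-Type"],
...     "maxAgeSeconds": 3600,
... }]
>>> bucket.patch()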

default_event_based_hold

Whether newly-uploaded objects in this bucket are automatically placed under an event-based hold.

default_kms_key_name

Retrieve / set default KMS encryption key for objects in the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

:setter: Set default KMS encryption key for items in this bucket. :getter: Get default KMS encryption key for items in this bucket.

Returns
str: Default KMS encryption key, or None if not set.

default_object_acl

Create our defaultObjectACL on demand.

etag

Returns
str or NoneType: The bucket etag, or None if the bucket's resource has not been loaded from the server.

iam_configuration

Retrieve IAM configuration for this bucket.

Returns
IAMConfiguration: An instance for managing the bucket's IAM configuration.

id

Returns
str or NoneType: The ID of the bucket, or None if the bucket's resource has not been loaded from the server.

labels

Retrieve or set labels assigned to this bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets#labels

Returns
dict: Name-value pairs (string->string) labelling the bucket.
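
An illustrative sketch of assigning labels (placeholder names; label changes are persisted only on patch() or update()):

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket("my-bucket")
>>> bucket.labels = {"env": "prod", "team": "data"}
>>> bucket.patch()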

lifecycle_rules

Retrieve or set lifecycle rules configured for this bucket.

See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets

Returns
generator(dict): A sequence of mappings describing each lifecycle rule.

location

Retrieve location configured for this bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets and https://cloud.google.com/storage/docs/bucket-locations

Returns None if the property has not been set before creation, or if the bucket's resource has not been loaded from the server.

location_type

Retrieve or set the location type for the bucket.

See https://cloud.google.com/storage/docs/storage-classes

:setter: Set the location type for this bucket. :getter: Get the location type for this bucket.

Returns
str or NoneType: If set, one of MULTI_REGION_LOCATION_TYPE, REGION_LOCATION_TYPE, or DUAL_REGION_LOCATION_TYPE, else None.

metageneration

Retrieve the metageneration for the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

Returns
int or NoneType: The metageneration of the bucket, or None if the bucket's resource has not been loaded from the server.

owner

Retrieve info about the owner of the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

Returns
dict or NoneType: Mapping of the owner's role/ID, or None if the bucket's resource has not been loaded from the server.

path

The URL path to this bucket.

project_number

Retrieve the number of the project to which the bucket is assigned.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

Returns
int or NoneType: The project number that owns the bucket, or None if the bucket's resource has not been loaded from the server.

requester_pays

Does the requester pay for API requests for this bucket?

See https://cloud.google.com/storage/docs/requester-pays for details.

:setter: Update whether requester pays for this bucket. :getter: Query whether requester pays for this bucket.

Returns
bool: True if the requester pays for API requests for the bucket, else False.
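
An illustrative sketch of enabling Requester Pays and then addressing the bucket with a billing project (placeholder names):

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket("my-bucket")
>>> bucket.requester_pays = True
>>> bucket.patch()
>>> billed_bucket = client.bucket("my-bucket", user_project="my-billing-project")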

retention_period

Retrieve or set the retention period for items in the bucket.

Returns
int or NoneType: Number of seconds to retain items after upload or release from event-based lock, or None if the property is not set locally.

retention_policy_effective_time

Retrieve the effective time of the bucket's retention policy.

Returns
datetime.datetime or NoneType: Point in time at which the bucket's retention policy becomes effective, or None if the property is not set locally.

retention_policy_locked

Retrieve whether the bucket's retention policy is locked.

Returns
bool: True if the bucket's policy is locked; False if the policy is not locked or the property is not set locally.

rpo

Get the RPO (Recovery Point Objective) of this bucket.

See: https://cloud.google.com/storage/docs/managing-turbo-replication

Valid values are "ASYNC_TURBO" and "DEFAULT".

self_link

Retrieve the URI for the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

Returns
str or NoneType: The self link for the bucket, or None if the bucket's resource has not been loaded from the server.

storage_class

Retrieve or set the storage class for the bucket.

See https://cloud.google.com/storage/docs/storage-classes

:setter: Set the storage class for this bucket. :getter: Get the storage class for this bucket.

Returns
str or NoneType: If set, one of NEARLINE_STORAGE_CLASS, COLDLINE_STORAGE_CLASS, ARCHIVE_STORAGE_CLASS, STANDARD_STORAGE_CLASS, MULTI_REGIONAL_LEGACY_STORAGE_CLASS, REGIONAL_LEGACY_STORAGE_CLASS, or DURABLE_REDUCED_AVAILABILITY_LEGACY_STORAGE_CLASS, else None.

time_created

Retrieve the timestamp at which the bucket was created.

See https://cloud.google.com/storage/docs/json_api/v1/buckets

Returns
datetime.datetime or NoneType: Datetime object parsed from an RFC 3339 timestamp, or None if the bucket's resource has not been loaded from the server.

user_project

Project ID to be billed for API requests made via this bucket.

If unset, API requests are billed to the bucket owner.

A user project is required for all operations on Requester Pays buckets.

See https://cloud.google.com/storage/docs/requester-pays#requirements for details.

versioning_enabled

Is versioning enabled for this bucket?

See https://cloud.google.com/storage/docs/object-versioning for details.

:setter: Update whether versioning is enabled for this bucket. :getter: Query whether versioning is enabled for this bucket.

Returns
bool: True if enabled, else False.

Methods

Bucket

Bucket(client, name=None, user_project=None)

name (property)

Get the bucket's name.

add_lifecycle_delete_rule

add_lifecycle_delete_rule(**kw)

Add a "delete" rule to lifestyle rules configured for this bucket.

See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets

.. literalinclude:: snippets.py
   :start-after: [START add_lifecycle_delete_rule]
   :end-before: [END add_lifecycle_delete_rule]
   :dedent: 4
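
An illustrative sketch (placeholder bucket name; the rule is only sent to the server by a later patch()):

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket("my-bucket")
>>> bucket.add_lifecycle_delete_rule(age=30)  # delete objects older than 30 days
>>> bucket.patch()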

add_lifecycle_set_storage_class_rule

add_lifecycle_set_storage_class_rule(storage_class, **kw)

Add a "delete" rule to lifestyle rules configured for this bucket.

See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets

.. literalinclude:: snippets.py
   :start-after: [START add_lifecycle_set_storage_class_rule]
   :end-before: [END add_lifecycle_set_storage_class_rule]
   :dedent: 4

Parameter
NameDescription
storage_class str, one of STORAGE_CLASSES.

new storage class to assign to matching items.
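
An illustrative sketch (placeholder bucket name; the rule is only sent to the server by a later patch()):

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket("my-bucket")
>>> bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=365)
>>> bucket.patch()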

blob

blob(
    blob_name, chunk_size=None, encryption_key=None, kms_key_name=None, generation=None
)

Factory constructor for blob object.

Parameters
NameDescription
blob_name str

The name of the blob to be instantiated.

chunk_size int

The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.

encryption_key bytes

(Optional) 32 byte encryption key for customer-supplied encryption.

kms_key_name str

(Optional) Resource name of KMS key used to encrypt blob's content.

generation long

(Optional) If present, selects a specific revision of this object.

Returns
Blob: The blob object created.
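
An illustrative sketch (placeholder names). The factory only creates a local handle; the API request happens on a later operation such as an upload:

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.bucket("my-bucket")
>>> blob = bucket.blob("path/to/object.txt")   # no API request
>>> blob.upload_from_string("hello world")     # the request happens here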

clear_lifecyle_rules

clear_lifecyle_rules()

Clear any lifecycle rules configured for this bucket.

configure_website

configure_website(main_page_suffix=None, not_found_page=None)

Configure website-related properties.

See https://cloud.google.com/storage/docs/hosting-static-website

If you want this bucket to host a website, just provide the name of an index page and a page to use when a blob isn't found:

.. literalinclude:: snippets.py
   :start-after: [START configure_website]
   :end-before: [END configure_website]
   :dedent: 4

You probably should also make the whole bucket public:

.. literalinclude:: snippets.py
   :start-after: [START make_public]
   :end-before: [END make_public]
   :dedent: 4

This says: "Make the bucket public, and all the stuff already in the bucket, and anything else I add to the bucket. Just make it all public."

Parameters
NameDescription
main_page_suffix str

The page to use as the main page of a directory. Typically something like index.html.

not_found_page str

The file to use when a page isn't found.
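
An illustrative sketch (placeholder names; the website settings are staged locally until patched):

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket("my-bucket")
>>> bucket.configure_website(main_page_suffix="index.html", not_found_page="404.html")
>>> bucket.patch()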

copy_blob

copy_blob(blob, destination_bucket, new_name=None, client=None, preserve_acl=True, source_generation=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, if_source_generation_match=None, if_source_generation_not_match=None, if_source_metageneration_match=None, if_source_metageneration_not_match=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Copy the given blob to the given bucket, optionally with a new name.

If user_project is set, bills the API request to that project.

Parameters
NameDescription
blob Blob

The blob to be copied.

destination_bucket Bucket

The bucket into which the blob should be copied.

new_name str

(Optional) The new name for the copied file.

client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

preserve_acl bool

DEPRECATED. This argument is not functional! (Optional) Copies ACL from old blob to new blob. Default: True.

source_generation long

(Optional) The generation of the blob to be copied.

if_generation_match long

(Optional) See :ref:using-if-generation-match Note that the generation to be matched is that of the destination blob.

if_generation_not_match long

(Optional) See :ref:using-if-generation-not-match Note that the generation to be matched is that of the destination blob.

if_metageneration_match long

(Optional) See :ref:using-if-metageneration-match Note that the metageneration to be matched is that of the destination blob.

if_metageneration_not_match long

(Optional) See :ref:using-if-metageneration-not-match Note that the metageneration to be matched is that of the destination blob.

if_source_generation_match long

(Optional) Makes the operation conditional on whether the source object's generation matches the given value.

if_source_generation_not_match long

(Optional) Makes the operation conditional on whether the source object's generation does not match the given value.

if_source_metageneration_match long

(Optional) Makes the operation conditional on whether the source object's current metageneration matches the given value.

if_source_metageneration_not_match long

(Optional) Makes the operation conditional on whether the source object's current metageneration does not match the given value.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Returns
Blob: The new Blob.

.. rubric:: Example

Copy a blob including ACL.

>>> from google.cloud import storage
>>> client = storage.Client(project="project")
>>> bucket = client.bucket("bucket")
>>> dst_bucket = client.bucket("destination-bucket")
>>> blob = bucket.blob("file.ext")
>>> new_blob = bucket.copy_blob(blob, dst_bucket)
>>> new_blob.acl.save(blob.acl)

create

create(client=None, project=None, location=None, predefined_acl=None, predefined_default_object_acl=None, timeout=60, retry=<google.api_core.retry.Retry object>)

DEPRECATED. Creates the current bucket.

If the bucket already exists, will raise xref_Conflict.

This implements "storage.buckets.insert".

If user_project is set, bills the API request to that project.

Parameters
NameDescription
client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

project str

(Optional) The project under which the bucket is to be created. If not passed, uses the project set on the client.

location str

(Optional) The location of the bucket. If not passed, the default location, US, will be used. See https://cloud.google.com/storage/docs/bucket-locations

predefined_acl str

(Optional) Name of predefined ACL to apply to bucket. See: https://cloud.google.com/storage/docs/access-control/lists#predefined-acl

predefined_default_object_acl str

(Optional) Name of predefined ACL to apply to bucket's objects. See: https://cloud.google.com/storage/docs/access-control/lists#predefined-acl

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Exceptions
ValueError: if project is None and the client's project is also None.

delete

delete(force=False, client=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.api_core.retry.Retry object>)

Delete this bucket.

The bucket must be empty in order to submit a delete request. If force=True is passed, this will first attempt to delete all the objects / blobs in the bucket (i.e. try to empty the bucket).

If the bucket doesn't exist, this will raise xref_NotFound. If the bucket is not empty (and force=False), will raise xref_Conflict.

If force=True and the bucket contains more than 256 objects / blobs this will cowardly refuse to delete the objects (or the bucket). This is to prevent accidental bucket deletion and to prevent extremely long runtime of this method.

If user_project is set, bills the API request to that project.

Parameters
NameDescription
force bool

If True, empties the bucket's objects then deletes it.

client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

if_metageneration_match long

(Optional) Make the operation conditional on whether the blob's current metageneration matches the given value.

if_metageneration_not_match long

(Optional) Make the operation conditional on whether the blob's current metageneration does not match the given value.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Exceptions
ValueError: if force is True and the bucket contains more than 256 objects / blobs.
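
An illustrative sketch (placeholder bucket name):

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket("my-bucket")
>>> bucket.delete(force=True)  # empties the bucket first; refuses if it holds more than 256 blobs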

delete_blob

delete_blob(blob_name, client=None, generation=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Deletes a blob from the current bucket.

If the blob isn't found (backend 404), raises a xref_NotFound.

For example:

.. literalinclude:: snippets.py
   :start-after: [START delete_blob]
   :end-before: [END delete_blob]
   :dedent: 4

If user_project is set, bills the API request to that project.

Parameters
NameDescription
blob_name str

A blob name to delete.

client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

generation long

(Optional) If present, permanently deletes a specific revision of this object.

if_generation_match long

(Optional) See :ref:using-if-generation-match

if_generation_not_match long

(Optional) See :ref:using-if-generation-not-match

if_metageneration_match long

(Optional) See :ref:using-if-metageneration-match

if_metageneration_not_match long

(Optional) See :ref:using-if-metageneration-not-match

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Exceptions
NotFound: To suppress the exception, call delete_blobs, passing a no-op on_error callback, e.g.:

.. literalinclude:: snippets.py
   :start-after: [START delete_blobs]
   :end-before: [END delete_blobs]
   :dedent: 4

delete_blobs

delete_blobs(blobs, on_error=None, client=None, timeout=60, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Deletes a list of blobs from the current bucket.

Uses delete_blob to delete each individual blob.

If user_project is set, bills the API request to that project.

Parameters
NameDescription
blobs list

A list of Blob-s or blob names to delete.

on_error callable

(Optional) Takes a single argument: blob. Called once for each blob that raises NotFound; otherwise, the exception is propagated.

client Client

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

if_generation_match list of long

(Optional) See :ref:using-if-generation-match The list must match blobs item-to-item.

if_generation_not_match list of long

(Optional) See :ref:using-if-generation-not-match The list must match blobs item-to-item.

if_metageneration_match list of long

(Optional) See :ref:using-if-metageneration-match The list must match blobs item-to-item.

if_metageneration_not_match list of long

(Optional) See :ref:using-if-metageneration-not-match The list must match blobs item-to-item.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Exceptions
NotFound: (if on_error is not passed).

.. rubric:: Example

Delete blobs using generation match preconditions.

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.bucket("bucket-name")
>>> blobs = [bucket.blob("blob-name-1"), bucket.blob("blob-name-2")]
>>> if_generation_match = [None] * len(blobs)
>>> if_generation_match[0] = "123"  # precondition for "blob-name-1"
>>> bucket.delete_blobs(blobs, if_generation_match=if_generation_match)

disable_logging

disable_logging()

Disable access logging for this bucket.

See https://cloud.google.com/storage/docs/access-logs#disabling

disable_website

disable_website()

Disable the website configuration for this bucket.

This is really just a shortcut for setting the website-related attributes to None.

enable_logging

enable_logging(bucket_name, object_prefix="")

Enable access logging for this bucket.

See https://cloud.google.com/storage/docs/access-logs

Parameters
NameDescription
bucket_name str

name of bucket in which to store access logs

object_prefix str

prefix for access log filenames

exists

exists(client=None, timeout=60, if_etag_match=None, if_etag_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.api_core.retry.Retry object>)

Determines whether or not this bucket exists.

If user_project is set, bills the API request to that project.

Parameters
NameDescription
client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

if_etag_match Union[str, Set[str]]

(Optional) Make the operation conditional on whether the bucket's current ETag matches the given value.

if_etag_not_match Union[str, Set[str]]

(Optional) Make the operation conditional on whether the bucket's current ETag does not match the given value.

if_metageneration_match long

(Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value.

if_metageneration_not_match long

(Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value.

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Returns
bool: True if the bucket exists in Cloud Storage.
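
An illustrative sketch (placeholder bucket name):

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.bucket("maybe-missing-bucket")
>>> if not bucket.exists():
...     print("bucket not found")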

from_string

from_string(uri, client=None)

Get a bucket object from a URI.

Parameters
NameDescription
uri str

The bucket URI from which to construct the bucket object.

client Client or NoneType

(Optional) The client to use. Application code should always pass client.

Returns
Bucket: The bucket object created.

.. rubric:: Example

Get a bucket object from a URI.

>>> from google.cloud import storage
>>> from google.cloud.storage.bucket import Bucket
>>> client = storage.Client()
>>> bucket = Bucket.from_string("gs://bucket", client=client)

generate_signed_url

generate_signed_url(
    expiration=None,
    api_access_endpoint="https://storage.googleapis.com",
    method="GET",
    headers=None,
    query_parameters=None,
    client=None,
    credentials=None,
    version=None,
    virtual_hosted_style=False,
    bucket_bound_hostname=None,
    scheme="http",
)

Generates a signed URL for this bucket.

If you have a bucket that you want to allow access to for a set amount of time, you can use this method to generate a URL that is only valid within a certain time period.

If bucket_bound_hostname is passed (instead of api_access_endpoint), https works only when the hostname is served through a CDN.

.. rubric:: Example

Generates a signed URL for this bucket using bucket_bound_hostname and scheme.

from google.cloud import storage
client = storage.Client()
bucket = client.get_bucket('my-bucket-name')
url = bucket.generate_signed_url(expiration='url-expiration-time', bucket_bound_hostname='mydomain.tld', version='v4')
url = bucket.generate_signed_url(expiration='url-expiration-time', bucket_bound_hostname='mydomain.tld', version='v4', scheme='https')  # If using CDN

This is particularly useful if you don't want publicly accessible buckets, but don't want to require users to explicitly log in.

Parameters
NameDescription
expiration Union[Integer, datetime.datetime, datetime.timedelta]

Point in time when the signed URL should expire. If a datetime instance is passed without an explicit tzinfo set, it will be assumed to be UTC.

api_access_endpoint str

(Optional) URI base.

method str

The HTTP verb that will be used when requesting the URL.

headers dict

(Optional) Additional HTTP headers to be included as part of the signed URLs. See: https://cloud.google.com/storage/docs/xml-api/reference-headers Requests using the signed URL must pass the specified header (name and value) with each request for the URL.

query_parameters dict

(Optional) Additional query parameters to be included as part of the signed URLs. See: https://cloud.google.com/storage/docs/xml-api/reference-headers#query

client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the blob's bucket.

credentials google.auth.credentials.Credentials or NoneType

The authorization credentials to attach to requests. These credentials identify this application to the service. If none are specified, the client will attempt to ascertain the credentials from the environment.

version str

(Optional) The version of signed credential to create. Must be one of 'v2' or 'v4'.

virtual_hosted_style bool

(Optional) If true, then construct the URL relative to the bucket's virtual hostname.

bucket_bound_hostname str

(Optional) If passed, then construct the URL relative to the bucket-bound hostname. The value can be a bare hostname or include a scheme, e.g., 'example.com' or 'http://example.com'. See: https://cloud.google.com/storage/docs/request-endpoints#cname

scheme str

(Optional) If bucket_bound_hostname is passed as a bare hostname, use this value as the scheme. https will work only when using a CDN. Defaults to "http".

Exceptions
ValueError: when version is invalid.
TypeError: when expiration is not a valid type.
AttributeError: if credentials is not an instance of google.auth.credentials.Signing.

Returns
str: A signed URL you can use to access the resource until expiration.

generate_upload_policy

generate_upload_policy(conditions, expiration=None, client=None)

Create a signed upload policy for uploading objects.

This method generates and signs a policy document. You can use policy documents_ to allow visitors to a website to upload files to Google Cloud Storage without giving them direct write access.

For example:

.. literalinclude:: snippets.py
   :start-after: [START policy_document]
   :end-before: [END policy_document]
   :dedent: 4

.. _policy documents: https://cloud.google.com/storage/docs/xml-api/post-object#policydocument

Parameters
NameDescription
expiration datetime

(Optional) Expiration in UTC. If not specified, the policy will expire in 1 hour.

conditions list

A list of conditions as described in the policy documents_ documentation.

client Client

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

Returns
dict: A dictionary of (form field name, form field value) pairs that should be added to your HTML upload form in order to attach the signature.

get_blob

get_blob(blob_name, client=None, encryption_key=None, generation=None, if_etag_match=None, if_etag_not_match=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.api_core.retry.Retry object>, **kwargs)

Get a blob object by name.

This will return None if the blob doesn't exist:

.. literalinclude:: snippets.py
   :start-after: [START get_blob]
   :end-before: [END get_blob]
   :dedent: 4

If user_project is set, bills the API request to that project.

Parameters
NameDescription
blob_name str

The name of the blob to retrieve.

client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

encryption_key bytes

(Optional) 32 byte encryption key for customer-supplied encryption. See https://cloud.google.com/storage/docs/encryption#customer-supplied.

generation long

(Optional) If present, selects a specific revision of this object.

if_etag_match Union[str, Set[str]]

(Optional) See :ref:using-if-etag-match

if_etag_not_match Union[str, Set[str]]

(Optional) See :ref:using-if-etag-not-match

if_generation_match long

(Optional) See :ref:using-if-generation-match

if_generation_not_match long

(Optional) See :ref:using-if-generation-not-match

if_metageneration_match long

(Optional) See :ref:using-if-metageneration-match

if_metageneration_not_match long

(Optional) See :ref:using-if-metageneration-not-match

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Returns
Blob or None: The blob object if it exists, otherwise None.
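
An illustrative sketch (placeholder names):

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket("my-bucket")
>>> blob = bucket.get_blob("path/to/object.txt")  # returns None on a backend 404
>>> if blob is not None:
...     print(blob.size)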

get_iam_policy

get_iam_policy(client=None, requested_policy_version=None, timeout=60, retry=<google.api_core.retry.Retry object>)

Retrieve the IAM policy for the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets/getIamPolicy

If user_project is set, bills the API request to that project.

Parameters
NameDescription
client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

requested_policy_version int or NoneType

(Optional) The version of IAM policies to request. If a policy with a condition is requested without setting this, the server will return an error. This must be set to a value of 3 to retrieve IAM policies containing conditions. This is to prevent client code that isn't aware of IAM conditions from interpreting and modifying policies incorrectly. The service might return a policy with version lower than the one that was requested, based on the feature syntax in the policy fetched.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Returns
google.api_core.iam.Policy: The policy instance, based on the resource returned from the getIamPolicy API request.

Example:

.. code-block:: python

   from google.cloud.storage.iam import STORAGE_OBJECT_VIEWER_ROLE

   policy = bucket.get_iam_policy(requested_policy_version=3)
   policy.version = 3

   # Add a binding to the policy via its bindings property.
   policy.bindings.append({
       "role": STORAGE_OBJECT_VIEWER_ROLE,
       "members": {"serviceAccount:account@project.iam.gserviceaccount.com", ...},
       # Optional:
       "condition": {
           "title": "prefix",
           "description": "Objects matching prefix",
           "expression": 'resource.name.startsWith("projects/project-name/buckets/bucket-name/objects/prefix")',
       },
   })

   bucket.set_iam_policy(policy)

get_logging

get_logging()

Return info about access logging for this bucket.

See https://cloud.google.com/storage/docs/access-logs#status

Returns
dict or None: A dict with keys logBucket and logObjectPrefix (if logging is enabled), or None (if not).

get_notification

get_notification(notification_id, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)

Get Pub / Sub notification for this bucket.

See: https://cloud.google.com/storage/docs/json_api/v1/notifications/get

If user_project is set, bills the API request to that project.

Parameters
NameDescription
notification_id str

The notification id to retrieve the notification configuration.

client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Returns
BucketNotification: The notification instance.

.. rubric:: Example

Get a notification using its notification id.

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket('my-bucket-name')  # API request.
>>> notification = bucket.get_notification(notification_id='id')  # API request.

list_blobs

list_blobs(max_results=None, page_token=None, prefix=None, delimiter=None, start_offset=None, end_offset=None, include_trailing_delimiter=None, versions=None, projection='noAcl', fields=None, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)

DEPRECATED. Return an iterator used to find blobs in the bucket.

If user_project is set, bills the API request to that project.

Parameters
NameDescription
max_results int

(Optional) The maximum number of blobs to return.

page_token str

(Optional) If present, return the next batch of blobs, using the value, which must correspond to the nextPageToken value returned in the previous response. Deprecated: use the pages property of the returned iterator instead of manually passing the token.

prefix str

(Optional) Prefix used to filter blobs.

delimiter str

(Optional) Delimiter, used with prefix to emulate hierarchy.

start_offset str

(Optional) Filter results to objects whose names are lexicographically equal to or after startOffset. If endOffset is also set, the objects listed will have names between startOffset (inclusive) and endOffset (exclusive).

end_offset str

(Optional) Filter results to objects whose names are lexicographically before endOffset. If startOffset is also set, the objects listed will have names between startOffset (inclusive) and endOffset (exclusive).

include_trailing_delimiter boolean

(Optional) If true, objects that end in exactly one instance of delimiter will have their metadata included in items in addition to prefixes.

versions bool

(Optional) Whether object versions should be returned as separate blobs.

projection str

(Optional) If used, must be 'full' or 'noAcl'. Defaults to 'noAcl'. Specifies the set of properties to return.

fields str

(Optional) Selector specifying which fields to include in a partial response. Must be a list of fields. For example to get a partial response with just the next page token and the name and language of each blob returned: 'items(name,contentLanguage),nextPageToken'. See: https://cloud.google.com/storage/docs/json_api/v1/parameters#fields

client Client

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Returns
google.api_core.page_iterator.Iterator: Iterator of all Blob in this bucket matching the arguments.

.. rubric:: Example

List blobs in the bucket with user_project.

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = storage.Bucket(client, "my-bucket-name", user_project="my-project")
>>> all_blobs = list(client.list_blobs(bucket))

list_notifications

list_notifications(client=None, timeout=60, retry=<google.api_core.retry.Retry object>)

List Pub / Sub notifications for this bucket.

See: https://cloud.google.com/storage/docs/json_api/v1/notifications/list

If user_project is set, bills the API request to that project.

Parameters
NameDescription
client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Returns
list of BucketNotification: The notification instances.

lock_retention_policy

lock_retention_policy(client=None, timeout=60, retry=<google.api_core.retry.Retry object>)

Lock the bucket's retention policy.

Parameters
NameDescription
client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the blob's bucket.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Exceptions
ValueError: if the bucket has no metageneration (i.e., new or never reloaded); if the bucket has no retention policy assigned; or if the bucket's retention policy is already locked.
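
An illustrative sketch of setting and then locking a retention policy (placeholder names; locking is irreversible):

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket("my-bucket")
>>> bucket.retention_period = 30 * 24 * 60 * 60  # 30 days, in seconds
>>> bucket.patch()
>>> bucket.reload()                # ensure the current metageneration is known
>>> bucket.lock_retention_policy()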

make_private

make_private(recursive=False, future=False, client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Update bucket's ACL, revoking read access for anonymous users.

Parameters
NameDescription
recursive bool

If True, this will make all blobs inside the bucket private as well.

future bool

If True, this will make all objects created in the future private as well.

client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

if_metageneration_match long

(Optional) Make the operation conditional on whether the blob's current metageneration matches the given value.

if_metageneration_not_match long

(Optional) Make the operation conditional on whether the blob's current metageneration does not match the given value.

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Exceptions
ValueError: If recursive is True and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs and call make_private for each blob.

make_public

make_public(recursive=False, future=False, client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Update bucket's ACL, granting read access to anonymous users.

Parameters
NameDescription
recursive bool

If True, this will make all blobs inside the bucket public as well.

future bool

If True, this will make all objects created in the future public as well.

client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

if_metageneration_match long

(Optional) Make the operation conditional on whether the blob's current metageneration matches the given value.

if_metageneration_not_match long

(Optional) Make the operation conditional on whether the blob's current metageneration does not match the given value.

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Exceptions
ValueError: If recursive is True and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs and call make_public for each blob.

notification

notification(
    topic_name=None,
    topic_project=None,
    custom_attributes=None,
    event_types=None,
    blob_name_prefix=None,
    payload_format="NONE",
    notification_id=None,
)

Factory: create a notification resource for the bucket.

See: BucketNotification for parameters.
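
An illustrative sketch (placeholder topic name; the factory itself makes no API request):

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket("my-bucket")
>>> notification = bucket.notification(topic_name="my-topic")
>>> notification.create()  # the API request happens here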

patch

patch(client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Sends all changed properties in a PATCH request.

Updates the _properties with the response from the backend.

If user_project is set, bills the API request to that project.

Parameters
NameDescription
client Client or NoneType

the client to use. If not passed, falls back to the client stored on the current object.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

if_metageneration_match long

(Optional) Make the operation conditional on whether the blob's current metageneration matches the given value.

if_metageneration_not_match long

(Optional) Make the operation conditional on whether the blob's current metageneration does not match the given value.

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries
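
An illustrative sketch (placeholder names; property assignments are staged locally and sent in a single PATCH request):

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket("my-bucket")
>>> bucket.versioning_enabled = True
>>> bucket.labels = {"env": "dev"}
>>> bucket.patch()  # sends only the changed properties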

path_helper

path_helper(bucket_name)

Relative URL path for a bucket.

Parameter
NameDescription
bucket_name str

The bucket name in the path.

Returns
str: The relative URL path for bucket_name.

reload

reload(client=None, projection='noAcl', timeout=60, if_etag_match=None, if_etag_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.api_core.retry.Retry object>)

Reload properties from Cloud Storage.

If user_project is set, bills the API request to that project.

Parameters
NameDescription
client Client or NoneType

the client to use. If not passed, falls back to the client stored on the current object.

projection str

(Optional) If used, must be 'full' or 'noAcl'. Defaults to 'noAcl'. Specifies the set of properties to return.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

if_etag_match Union[str, Set[str]]

(Optional) Make the operation conditional on whether the bucket's current ETag matches the given value.

if_etag_not_match Union[str, Set[str]]

(Optional) Make the operation conditional on whether the bucket's current ETag does not match the given value.

if_metageneration_match long

(Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value.

if_metageneration_not_match long

(Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value.

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

rename_blob

rename_blob(blob, new_name, client=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, if_source_generation_match=None, if_source_generation_not_match=None, if_source_metageneration_match=None, if_source_metageneration_not_match=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Rename the given blob using copy and delete operations.

If user_project is set, bills the API request to that project.

Effectively, copies blob to the same bucket with a new name, then deletes the blob.

Parameters
NameDescription
blob Blob

The blob to be renamed.

new_name str

The new name for this blob.

client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

if_generation_match long

(Optional) See :ref:using-if-generation-match Note that the generation to be matched is that of the destination blob.

if_generation_not_match long

(Optional) See :ref:using-if-generation-not-match Note that the generation to be matched is that of the destination blob.

if_metageneration_match long

(Optional) See :ref:using-if-metageneration-match Note that the metageneration to be matched is that of the destination blob.

if_metageneration_not_match long

(Optional) See :ref:using-if-metageneration-not-match Note that the metageneration to be matched is that of the destination blob.

if_source_generation_match long

(Optional) Makes the operation conditional on whether the source object's generation matches the given value. Also used in the (implied) delete request.

if_source_generation_not_match long

(Optional) Makes the operation conditional on whether the source object's generation does not match the given value. Also used in the (implied) delete request.

if_source_metageneration_match long

(Optional) Makes the operation conditional on whether the source object's current metageneration matches the given value. Also used in the (implied) delete request.

if_source_metageneration_not_match long

(Optional) Makes the operation conditional on whether the source object's current metageneration does not match the given value. Also used in the (implied) delete request.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Returns
Blob: The newly-renamed blob.
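
An illustrative sketch (placeholder names; note that rename is a copy followed by a delete):

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket("my-bucket")
>>> blob = bucket.blob("old-name.txt")
>>> new_blob = bucket.rename_blob(blob, "new-name.txt")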

set_iam_policy

set_iam_policy(policy, client=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Update the IAM policy for the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets/setIamPolicy

If user_project is set, bills the API request to that project.

Parameters
NameDescription
policy google.api_core.iam.Policy

policy instance used to update bucket's IAM policy.

client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Returns
google.api_core.iam.Policy: The policy instance, based on the resource returned from the setIamPolicy API request.

test_iam_permissions

test_iam_permissions(permissions, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)

API call: test permissions

See https://cloud.google.com/storage/docs/json_api/v1/buckets/testIamPermissions

If user_project is set, bills the API request to that project.

Parameters
NameDescription
permissions list of string

the permissions to check

client Client or NoneType

(Optional) The client to use. If not passed, falls back to the client stored on the current bucket.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries

Returns
list of string: The permissions returned by the testIamPermissions API request.
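
An illustrative sketch (placeholder names; the result is the subset of requested permissions that the caller holds):

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket("my-bucket")
>>> granted = bucket.test_iam_permissions(["storage.objects.get", "storage.objects.list"])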

update

update(client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Sends all properties in a PUT request.

Updates the _properties with the response from the backend.

If user_project is set, bills the API request to that project.

Parameters
NameDescription
client Client or NoneType

the client to use. If not passed, falls back to the client stored on the current object.

timeout float or tuple

(Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

if_metageneration_match long

(Optional) Make the operation conditional on whether the blob's current metageneration matches the given value.

if_metageneration_not_match long

(Optional) Make the operation conditional on whether the blob's current metageneration does not match the given value.

retry google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy

(Optional) How to retry the RPC. See: configuring_retries