Bucket(client, name=None, user_project=None)
A class representing a Bucket on Cloud Storage.
Parameters

| Name | Description |
| --- | --- |
| `client` (Client) | A client which holds credentials and project configuration for the bucket (which requires a project). |
| `name` (str) | The name of the bucket. Bucket names must start and end with a number or letter. |
| `user_project` (str) | (Optional) The project ID to be billed for API requests made via this instance. |
Properties
acl
Create our ACL on demand.
client
The client bound to this bucket.
cors
Retrieve or set CORS policies configured for this bucket.
See http://www.w3.org/TR/cors/ and https://cloud.google.com/storage/docs/json_api/v1/buckets
Returns

| Type | Description |
| --- | --- |
| list of dict | A sequence of mappings describing each CORS policy. |

default_event_based_hold
Retrieve or set whether new objects in the bucket are placed under an event-based hold by default.
default_kms_key_name
Retrieve / set default KMS encryption key for objects in the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
:setter: Set default KMS encryption key for items in this bucket.
:getter: Get default KMS encryption key for items in this bucket.

Returns

| Type | Description |
| --- | --- |
| str | Default KMS encryption key, or None if not set. |
default_object_acl
Create our defaultObjectACL on demand.
etag
Retrieve the ETag for the bucket.
See https://tools.ietf.org/html/rfc2616#section-3.11 and https://cloud.google.com/storage/docs/json_api/v1/buckets

Returns

| Type | Description |
| --- | --- |
| str or None | The bucket etag, or None if the bucket's resource has not been loaded from the server. |

iam_configuration
Retrieve IAM configuration for this bucket.

Returns

| Type | Description |
| --- | --- |
| IAMConfiguration | An instance for managing the bucket's IAM configuration. |

id
Retrieve the ID for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets

Returns

| Type | Description |
| --- | --- |
| str or None | The ID of the bucket, or None if the bucket's resource has not been loaded from the server. |

labels
Retrieve or set labels assigned to this bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets#labels

Returns

| Type | Description |
| --- | --- |
| dict | Name-value pairs (string->string) labelling the bucket. |

lifecycle_rules
Retrieve or set lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets

Returns

| Type | Description |
| --- | --- |
| generator(dict) | A sequence of mappings describing each lifecycle rule. |

location
Retrieve location configured for this bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets and https://cloud.google.com/storage/docs/bucket-locations
Returns None if the property has not been set before creation, or if the bucket's resource has not been loaded from the server.
location_type
Retrieve the location type for the bucket.
See https://cloud.google.com/storage/docs/storage-classes
:getter: Get the location type for this bucket.

Returns

| Type | Description |
| --- | --- |
| str or None | If set, one of MULTI_REGION_LOCATION_TYPE, REGION_LOCATION_TYPE, or DUAL_REGION_LOCATION_TYPE; else None. |
metageneration
Retrieve the metageneration for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets

Returns

| Type | Description |
| --- | --- |
| int or None | The metageneration of the bucket, or None if the bucket's resource has not been loaded from the server. |

owner
Retrieve info about the owner of the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets

Returns

| Type | Description |
| --- | --- |
| dict or None | Mapping of owner's role/ID, or None if the bucket's resource has not been loaded from the server. |

path
The URL path to this bucket.
project_number
Retrieve the number of the project to which the bucket is assigned.
See https://cloud.google.com/storage/docs/json_api/v1/buckets

Returns

| Type | Description |
| --- | --- |
| int or None | The project number that owns the bucket, or None if the bucket's resource has not been loaded from the server. |

requester_pays
Does the requester pay for API requests for this bucket?
See https://cloud.google.com/storage/docs/requester-pays for details.
:setter: Update whether requester pays for this bucket.
:getter: Query whether requester pays for this bucket.

Returns

| Type | Description |
| --- | --- |
| bool | True if requester pays for API requests for the bucket, else False. |
retention_period
Retrieve or set the retention period for items in the bucket.

Returns

| Type | Description |
| --- | --- |
| int or None | Number of seconds to retain items after upload or release from event-based lock, or None if the property is not set locally. |

retention_policy_effective_time
Retrieve the effective time of the bucket's retention policy.

Returns

| Type | Description |
| --- | --- |
| datetime.datetime or None | Point-in-time at which the bucket's retention policy is effective, or None if the property is not set locally. |

retention_policy_locked
Retrieve whether the bucket's retention policy is locked.

Returns

| Type | Description |
| --- | --- |
| bool | True if the bucket's policy is locked; False if the policy is not locked or the property is not set locally. |

self_link
Retrieve the URI for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets

Returns

| Type | Description |
| --- | --- |
| str or None | The self link for the bucket, or None if the bucket's resource has not been loaded from the server. |

storage_class
Retrieve or set the storage class for the bucket.
See https://cloud.google.com/storage/docs/storage-classes
:setter: Set the storage class for this bucket.
:getter: Get the storage class for this bucket.

Returns

| Type | Description |
| --- | --- |
| str or None | If set, one of NEARLINE_STORAGE_CLASS, COLDLINE_STORAGE_CLASS, ARCHIVE_STORAGE_CLASS, STANDARD_STORAGE_CLASS, MULTI_REGIONAL_LEGACY_STORAGE_CLASS, REGIONAL_LEGACY_STORAGE_CLASS, or DURABLE_REDUCED_AVAILABILITY_LEGACY_STORAGE_CLASS; else None. |
time_created
Retrieve the timestamp at which the bucket was created.
See https://cloud.google.com/storage/docs/json_api/v1/buckets

Returns

| Type | Description |
| --- | --- |
| datetime.datetime or None | Datetime object parsed from RFC3339 valid timestamp, or None if the bucket's resource has not been loaded from the server. |

user_project
Project ID to be billed for API requests made via this bucket.
If unset, API requests are billed to the bucket owner.
A user project is required for all operations on Requester Pays buckets.
See https://cloud.google.com/storage/docs/requester-pays#requirements for details.
versioning_enabled
Is versioning enabled for this bucket?
See https://cloud.google.com/storage/docs/object-versioning for details.
:setter: Update whether versioning is enabled for this bucket.
:getter: Query whether versioning is enabled for this bucket.

Returns

| Type | Description |
| --- | --- |
| bool | True if enabled, else False. |
Methods
Bucket
Bucket(client, name=None, user_project=None)
property name
Get the bucket's name.
add_lifecycle_delete_rule
add_lifecycle_delete_rule(**kw)
Add a "delete" rule to the lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets
.. literalinclude:: snippets.py :start-after: [START add_lifecycle_delete_rule] :end-before: [END add_lifecycle_delete_rule] :dedent: 4
add_lifecycle_set_storage_class_rule
add_lifecycle_set_storage_class_rule(storage_class, **kw)
Add a "set storage class" rule to the lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets
.. literalinclude:: snippets.py :start-after: [START add_lifecycle_set_storage_class_rule] :end-before: [END add_lifecycle_set_storage_class_rule] :dedent: 4

Parameter

| Name | Description |
| --- | --- |
| `storage_class` (str) | New storage class to assign to matching items; one of the storage class constants listed under the storage_class property above. |
blob
blob(
blob_name, chunk_size=None, encryption_key=None, kms_key_name=None, generation=None
)
Factory constructor for blob object.
Parameters

| Name | Description |
| --- | --- |
| `blob_name` (str) | The name of the blob to be instantiated. |
| `chunk_size` (int) | The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification. |
| `encryption_key` (bytes) | (Optional) 32 byte encryption key for customer-supplied encryption. |
| `kms_key_name` (str) | (Optional) Resource name of KMS key used to encrypt blob's content. |
| `generation` (long) | (Optional) If present, selects a specific revision of this object. |

Returns

| Type | Description |
| --- | --- |
| Blob | The blob object created. |
clear_lifecyle_rules
clear_lifecyle_rules()
Clear lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets
configure_website
configure_website(main_page_suffix=None, not_found_page=None)
Configure website-related properties.
See https://cloud.google.com/storage/docs/hosting-static-website
If you want this bucket to host a website, just provide the name of an index page and a page to use when a blob isn't found:
.. literalinclude:: snippets.py :start-after: [START configure_website] :end-before: [END configure_website] :dedent: 4
You probably should also make the whole bucket public:
.. literalinclude:: snippets.py :start-after: [START make_public] :end-before: [END make_public] :dedent: 4
This says: "Make the bucket public, and all the stuff already in the bucket, and anything else I add to the bucket. Just make it all public."
Parameters

| Name | Description |
| --- | --- |
| `main_page_suffix` (str) | The page to use as the main page of a directory. Typically something like index.html. |
| `not_found_page` (str) | The file to use when a page isn't found. |
copy_blob
copy_blob(blob, destination_bucket, new_name=None, client=None, preserve_acl=True, source_generation=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, if_source_generation_match=None, if_source_generation_not_match=None, if_source_metageneration_match=None, if_source_metageneration_not_match=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Copy the given blob to the given bucket, optionally with a new name.
If user_project is set, bills the API request to that project.

Parameters

| Name | Description |
| --- | --- |
| `blob` (Blob) | The blob to be copied. |
| `destination_bucket` (Bucket) | The bucket into which the blob should be copied. |
| `new_name` (str) | (Optional) The new name for the copied file. |
| `client` (Client or None) | (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| `preserve_acl` (bool) | DEPRECATED. This argument is not functional! (Optional) Copies ACL from old blob to new blob. Default: True. |
| `source_generation` (long) | (Optional) The generation of the blob to be copied. |
| `if_generation_match` (long) | (Optional) Makes the operation conditional on whether the destination object's generation matches the given value. |
| `if_generation_not_match` (long) | (Optional) Makes the operation conditional on whether the destination object's generation does not match the given value. |
| `if_metageneration_match` (long) | (Optional) Makes the operation conditional on whether the destination object's current metageneration matches the given value. |
| `if_metageneration_not_match` (long) | (Optional) Makes the operation conditional on whether the destination object's current metageneration does not match the given value. |
| `if_source_generation_match` (long) | (Optional) Makes the operation conditional on whether the source object's generation matches the given value. |
| `if_source_generation_not_match` (long) | (Optional) Makes the operation conditional on whether the source object's generation does not match the given value. |
| `if_source_metageneration_match` (long) | (Optional) Makes the operation conditional on whether the source object's current metageneration matches the given value. |
| `if_source_metageneration_not_match` (long) | (Optional) Makes the operation conditional on whether the source object's current metageneration does not match the given value. |
| `timeout` (float or tuple) | (Optional) The amount of time, in seconds, to wait for the server response. |
| `retry` (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) | (Optional) How to retry the RPC. |

Returns

| Type | Description |
| --- | --- |
| Blob | The new Blob. |

.. rubric:: Example

Copy a blob including ACL.

```python
>>> from google.cloud import storage
>>> client = storage.Client(project="project")
>>> bucket = client.bucket("bucket")
>>> dst_bucket = client.bucket("destination-bucket")
>>> blob = bucket.blob("file.ext")
>>> new_blob = bucket.copy_blob(blob, dst_bucket)
>>> new_blob.acl.save(blob.acl)
```
create
create(client=None, project=None, location=None, predefined_acl=None, predefined_default_object_acl=None, timeout=60, retry=<google.api_core.retry.Retry object>)
DEPRECATED. Creates current bucket.
If the bucket already exists, will raise Conflict.
This implements "storage.buckets.insert".
If user_project is set, bills the API request to that project.

Parameters

| Name | Description |
| --- | --- |
| `client` (Client or None) | (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| `project` (str) | (Optional) The project under which the bucket is to be created. If not passed, uses the project set on the client. |
| `location` (str) | (Optional) The location of the bucket. If not passed, the default location, US, will be used. See https://cloud.google.com/storage/docs/bucket-locations |
| `predefined_acl` (str) | (Optional) Name of predefined ACL to apply to bucket. See: https://cloud.google.com/storage/docs/access-control/lists#predefined-acl |
| `predefined_default_object_acl` (str) | (Optional) Name of predefined ACL to apply to bucket's objects. See: https://cloud.google.com/storage/docs/access-control/lists#predefined-acl |
| `timeout` (float or tuple) | (Optional) The amount of time, in seconds, to wait for the server response. |
| `retry` (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) | (Optional) How to retry the RPC. |

Exceptions

| Type | Description |
| --- | --- |
| ValueError | If project is None and the client's project is also None. |
delete
delete(force=False, client=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.api_core.retry.Retry object>)
Delete this bucket.
The bucket must be empty in order to submit a delete request. If force=True is passed, this will first attempt to delete all the objects / blobs in the bucket (i.e. try to empty the bucket).
If the bucket doesn't exist, this will raise NotFound. If the bucket is not empty (and force=False), will raise Conflict.
If force=True and the bucket contains more than 256 objects / blobs, this will refuse to delete the objects (or the bucket). This is to prevent accidental bucket deletion and to prevent extremely long runtime of this method.
If user_project is set, bills the API request to that project.

Parameters

| Name | Description |
| --- | --- |
| `force` (bool) | If True, empties the bucket's objects then deletes it. |
| `client` (Client or None) | (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| `if_metageneration_match` (long) | (Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value. |
| `if_metageneration_not_match` (long) | (Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value. |
| `timeout` (float or tuple) | (Optional) The amount of time, in seconds, to wait for the server response. |
| `retry` (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) | (Optional) How to retry the RPC. |

Exceptions

| Type | Description |
| --- | --- |
| ValueError | If force is True and the bucket contains more than 256 objects / blobs. |
delete_blob
delete_blob(blob_name, client=None, generation=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Deletes a blob from the current bucket.
If the blob isn't found (backend 404), raises a NotFound.
For example:
.. literalinclude:: snippets.py :start-after: [START delete_blob] :end-before: [END delete_blob] :dedent: 4
If user_project is set, bills the API request to that project.

Parameters

| Name | Description |
| --- | --- |
| `blob_name` (str) | A blob name to delete. |
| `client` (Client or None) | (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| `generation` (long) | (Optional) If present, permanently deletes a specific revision of this object. |
| `if_generation_match` (long) | (Optional) Make the operation conditional on whether the blob's current generation matches the given value. |
| `if_generation_not_match` (long) | (Optional) Make the operation conditional on whether the blob's current generation does not match the given value. |
| `if_metageneration_match` (long) | (Optional) Make the operation conditional on whether the blob's current metageneration matches the given value. |
| `if_metageneration_not_match` (long) | (Optional) Make the operation conditional on whether the blob's current metageneration does not match the given value. |
| `timeout` (float or tuple) | (Optional) The amount of time, in seconds, to wait for the server response. |
| `retry` (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) | (Optional) How to retry the RPC. |

Exceptions

| Type | Description |
| --- | --- |
| NotFound | To suppress the exception, call delete_blobs, passing a no-op on_error callback, e.g.: .. literalinclude:: snippets.py :start-after: [START delete_blobs] :end-before: [END delete_blobs] :dedent: 4 |
delete_blobs
delete_blobs(blobs, on_error=None, client=None, timeout=60, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Deletes a list of blobs from the current bucket.
Uses delete_blob to delete each individual blob.
If user_project is set, bills the API request to that project.

Parameters

| Name | Description |
| --- | --- |
| `blobs` (list) | A list of Blob-s or blob names to delete. |
| `on_error` (callable) | (Optional) Takes single argument: the blob that failed to delete. Called once for each blob raising NotFound; otherwise, the exception is propagated. |
| `client` (Client) | (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| `if_generation_match` (list of long) | (Optional) Make each delete conditional on whether the corresponding blob's current generation matches the given value. |
| `if_generation_not_match` (list of long) | (Optional) Make each delete conditional on whether the corresponding blob's current generation does not match the given value. |
| `if_metageneration_match` (list of long) | (Optional) Make each delete conditional on whether the corresponding blob's current metageneration matches the given value. |
| `if_metageneration_not_match` (list of long) | (Optional) Make each delete conditional on whether the corresponding blob's current metageneration does not match the given value. |
| `timeout` (float or tuple) | (Optional) The amount of time, in seconds, to wait for the server response. |
| `retry` (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) | (Optional) How to retry the RPC. |

Exceptions

| Type | Description |
| --- | --- |
| NotFound | If on_error is not passed. |

.. rubric:: Example

Delete blobs using generation match preconditions.

```python
>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.bucket("bucket-name")
>>> blobs = [bucket.blob("blob-name-1"), bucket.blob("blob-name-2")]
>>> if_generation_match = [None] * len(blobs)
>>> if_generation_match[0] = 123  # precondition for "blob-name-1"
>>> bucket.delete_blobs(blobs, if_generation_match=if_generation_match)
```
disable_logging
disable_logging()
Disable access logging for this bucket.
See https://cloud.google.com/storage/docs/access-logs#disabling
disable_website
disable_website()
Disable the website configuration for this bucket.
This is really just a shortcut for setting the website-related attributes to None.
enable_logging
enable_logging(bucket_name, object_prefix="")
Enable access logging for this bucket.
Parameters

| Name | Description |
| --- | --- |
| `bucket_name` (str) | Name of the bucket in which to store access logs. |
| `object_prefix` (str) | Prefix for access log filenames. |
exists
exists(client=None, timeout=60, if_etag_match=None, if_etag_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.api_core.retry.Retry object>)
Determines whether or not this bucket exists.
If user_project is set, bills the API request to that project.

Parameters

| Name | Description |
| --- | --- |
| `client` (Client or None) | (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| `timeout` (float or tuple) | (Optional) The amount of time, in seconds, to wait for the server response. |
| `if_etag_match` (Union[str, Set[str]]) | (Optional) Make the operation conditional on whether the bucket's current ETag matches the given value. |
| `if_etag_not_match` (Union[str, Set[str]]) | (Optional) Make the operation conditional on whether the bucket's current ETag does not match the given value. |
| `if_metageneration_match` (long) | (Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value. |
| `if_metageneration_not_match` (long) | (Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value. |
| `retry` (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) | (Optional) How to retry the RPC. |

Returns

| Type | Description |
| --- | --- |
| bool | True if the bucket exists in Cloud Storage. |
from_string
from_string(uri, client=None)
Construct a bucket object from a gs:// URI.

Parameters

| Name | Description |
| --- | --- |
| `uri` (str) | The bucket URI to parse, e.g. "gs://bucket". |
| `client` (Client or None) | (Optional) The client to use. Application code should always pass client. |

Returns

| Type | Description |
| --- | --- |
| Bucket | The bucket object created. |

.. rubric:: Example

Construct a bucket object from a URI.

```python
>>> from google.cloud import storage
>>> from google.cloud.storage.bucket import Bucket
>>> client = storage.Client()
>>> bucket = Bucket.from_string("gs://bucket", client=client)
```
generate_signed_url
generate_signed_url(
expiration=None,
api_access_endpoint="https://storage.googleapis.com",
method="GET",
headers=None,
query_parameters=None,
client=None,
credentials=None,
version=None,
virtual_hosted_style=False,
bucket_bound_hostname=None,
scheme="http",
)
Generates a signed URL for this bucket.
If you have a bucket that you want to allow access to for a set amount of time, you can use this method to generate a URL that is only valid within a certain time period.
If bucket_bound_hostname is set as an argument of api_access_endpoint, https works only if using a CDN.
.. rubric:: Example

Generates a signed URL for this bucket using bucket_bound_hostname and scheme.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket('my-bucket-name')
url = bucket.generate_signed_url(expiration='url-expiration-time', bucket_bound_hostname='mydomain.tld', version='v4')
# If using CDN:
url = bucket.generate_signed_url(expiration='url-expiration-time', bucket_bound_hostname='mydomain.tld', version='v4', scheme='https')
```
This is particularly useful if you don't want publicly accessible buckets, but don't want to require users to explicitly log in.
Parameters

| Name | Description |
| --- | --- |
| `expiration` (Union[Integer, datetime.datetime, datetime.timedelta]) | Point in time when the signed URL should expire. |
| `api_access_endpoint` (str) | (Optional) URI base. |
| `method` (str) | The HTTP verb that will be used when requesting the URL. |
| `headers` (dict) | (Optional) Additional HTTP headers to be included as part of the signed URLs. See: https://cloud.google.com/storage/docs/xml-api/reference-headers Requests using the signed URL must pass the specified header (name and value) with each request for the URL. |
| `query_parameters` (dict) | (Optional) Additional query parameters to be included as part of the signed URLs. See: https://cloud.google.com/storage/docs/xml-api/reference-headers#query |
| `client` (Client or None) | (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| `credentials` | The authorization credentials to attach to requests. These credentials identify this application to the service. If none are specified, the client will attempt to ascertain the credentials from the environment. |
| `version` (str) | (Optional) The version of signed credential to create. Must be one of 'v2' or 'v4'. |
| `virtual_hosted_style` (bool) | (Optional) If true, then construct the URL relative to the bucket's virtual hostname. |
| `bucket_bound_hostname` (str) | (Optional) If passed, then construct the URL relative to the bucket-bound hostname. Value can be a bare hostname or include a scheme, e.g., 'example.com' or 'http://example.com'. See: https://cloud.google.com/storage/docs/request-endpoints#cname |
| `scheme` (str) | (Optional) The scheme to use when bucket_bound_hostname is passed as a bare hostname. |
Exceptions

| Type | Description |
| --- | --- |
| ValueError | When version is invalid. |
| TypeError | When expiration is not a valid type. |
| AttributeError | If credentials is not an instance of google.auth.credentials.Signing. |

Returns

| Type | Description |
| --- | --- |
| str | A signed URL you can use to access the resource until expiration. |
generate_upload_policy
generate_upload_policy(conditions, expiration=None, client=None)
Create a signed upload policy for uploading objects.
This method generates and signs a policy document. You can use `policy documents`_ to allow visitors to a website to upload files to Google Cloud Storage without giving them direct write access.
For example:
.. literalinclude:: snippets.py :start-after: [START policy_document] :end-before: [END policy_document] :dedent: 4
.. _policy documents: https://cloud.google.com/storage/docs/xml-api/post-object#policydocument

Parameters

| Name | Description |
| --- | --- |
| `conditions` (list) | A list of conditions as described in the policy documents reference. |
| `expiration` (datetime) | (Optional) Expiration in UTC. If not specified, the policy will expire in 1 hour. |
| `client` (Client) | (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |

Returns

| Type | Description |
| --- | --- |
| dict | A dictionary of (form field name, form field value) of form fields that should be added to your HTML upload form in order to attach the signature. |
get_blob
get_blob(blob_name, client=None, encryption_key=None, generation=None, if_etag_match=None, if_etag_not_match=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.api_core.retry.Retry object>, **kwargs)
Get a blob object by name.
This will return None if the blob doesn't exist:
.. literalinclude:: snippets.py :start-after: [START get_blob] :end-before: [END get_blob] :dedent: 4
If user_project is set, bills the API request to that project.

Parameters

| Name | Description |
| --- | --- |
| `blob_name` (str) | The name of the blob to retrieve. |
| `client` (Client or None) | (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| `encryption_key` (bytes) | (Optional) 32 byte encryption key for customer-supplied encryption. See https://cloud.google.com/storage/docs/encryption#customer-supplied. |
| `generation` (long) | (Optional) If present, selects a specific revision of this object. |
| `if_etag_match` (Union[str, Set[str]]) | (Optional) Make the operation conditional on whether the blob's current ETag matches the given value. |
| `if_etag_not_match` (Union[str, Set[str]]) | (Optional) Make the operation conditional on whether the blob's current ETag does not match the given value. |
| `if_generation_match` (long) | (Optional) Make the operation conditional on whether the blob's current generation matches the given value. |
| `if_generation_not_match` (long) | (Optional) Make the operation conditional on whether the blob's current generation does not match the given value. |
| `if_metageneration_match` (long) | (Optional) Make the operation conditional on whether the blob's current metageneration matches the given value. |
| `if_metageneration_not_match` (long) | (Optional) Make the operation conditional on whether the blob's current metageneration does not match the given value. |
| `timeout` (float or tuple) | (Optional) The amount of time, in seconds, to wait for the server response. |
| `retry` (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) | (Optional) How to retry the RPC. |

Returns

| Type | Description |
| --- | --- |
| Blob or None | The blob object if it exists, otherwise None. |
get_iam_policy
get_iam_policy(client=None, requested_policy_version=None, timeout=60, retry=<google.api_core.retry.Retry object>)
Retrieve the IAM policy for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets/getIamPolicy
If user_project is set, bills the API request to that project.

Parameters

| Name | Description |
| --- | --- |
| `client` (Client or None) | (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| `requested_policy_version` (int or None) | (Optional) The version of IAM policies to request. If a policy with a condition is requested without setting this, the server will return an error. This must be set to a value of 3 to retrieve IAM policies containing conditions. This is to prevent client code that isn't aware of IAM conditions from interpreting and modifying policies incorrectly. The service might return a policy with version lower than the one that was requested, based on the feature syntax in the policy fetched. |
| `timeout` (float or tuple) | (Optional) The amount of time, in seconds, to wait for the server response. |
| `retry` (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) | (Optional) How to retry the RPC. |

Returns

| Type | Description |
| --- | --- |
| google.api_core.iam.Policy | The policy instance, based on the resource returned from the getIamPolicy API request. |

Example:

```python
from google.cloud.storage.iam import STORAGE_OBJECT_VIEWER_ROLE

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.version = 3

# Add a binding to the policy via its bindings property.
policy.bindings.append({
    "role": STORAGE_OBJECT_VIEWER_ROLE,
    "members": {"serviceAccount:account@project.iam.gserviceaccount.com", ...},
    # Optional:
    "condition": {
        "title": "prefix",
        "description": "Objects matching prefix",
        "expression": 'resource.name.startsWith("projects/project-name/buckets/bucket-name/objects/prefix")',
    },
})
bucket.set_iam_policy(policy)
```
get_logging
get_logging()
Return info about access logging for this bucket.
See https://cloud.google.com/storage/docs/access-logs#status
Returns

| Type | Description |
| --- | --- |
| dict or None | A dict with keys logBucket and logObjectPrefix (if logging is enabled), or None (if not). |
get_notification
get_notification(notification_id, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)
Get Pub / Sub notification for this bucket.
See: https://cloud.google.com/storage/docs/json_api/v1/notifications/get
If user_project is set, bills the API request to that project.

Parameters

| Name | Description |
| --- | --- |
| `notification_id` (str) | The notification id to retrieve the notification configuration. |
| `client` (Client or None) | (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| `timeout` (float or tuple) | (Optional) The amount of time, in seconds, to wait for the server response. |
| `retry` (google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy) | (Optional) How to retry the RPC. |

Returns

| Type | Description |
| --- | --- |
| BucketNotification | The notification instance. |

.. rubric:: Example

Get notification using notification id.

```python
>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket('my-bucket-name')  # API request.
>>> notification = bucket.get_notification(notification_id='id')  # API request.
```
list_blobs
list_blobs(max_results=None, page_token=None, prefix=None, delimiter=None, start_offset=None, end_offset=None, include_trailing_delimiter=None, versions=None, projection='noAcl', fields=None, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)
DEPRECATED. Return an iterator used to find blobs in the bucket.
If user_project
is set, bills the API request to that project.
Parameters | |
---|---|
Name | Description |
max_results | int: (Optional) The maximum number of blobs to return. |
page_token | str: (Optional) If present, return the next batch of blobs, using the value, which must correspond to the nextPageToken value returned in the previous response. |
prefix | str: (Optional) Prefix used to filter blobs. |
delimiter | str: (Optional) Delimiter, used with prefix to emulate hierarchy. |
start_offset | str: (Optional) Filter results to objects whose names are lexicographically equal to or after start_offset. |
end_offset | str: (Optional) Filter results to objects whose names are lexicographically before end_offset. |
include_trailing_delimiter | bool: (Optional) If true, objects that end in exactly one instance of delimiter have their metadata included in the listing in addition to the prefixes. |
versions | bool: (Optional) Whether object versions should be returned as separate blobs. |
projection | str: (Optional) If used, must be 'full' or 'noAcl'. Defaults to 'noAcl'. Specifies the set of properties to return. |
fields | str: (Optional) Selector specifying which fields to include in a partial response. Must be a list of fields, for example a selector returning just the next page token and the name and language of each blob. |
client | Client: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. |
retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. |
Returns | |
---|---|
Type | Description |
Iterator | Iterator of all Blob instances in this bucket matching the arguments. |

Example: list blobs in the bucket with user_project.

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = storage.Bucket(client, "my-bucket-name", user_project="my-project")
>>> all_blobs = list(client.list_blobs(bucket))
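The prefix, start_offset, and end_offset filters together form a half-open lexicographic window over object names: the lower bound is inclusive, the upper bound exclusive. A minimal pure-Python sketch of that filtering logic (illustrative only, not the library's server-side implementation):

```python
def in_listing_window(name, start_offset=None, end_offset=None, prefix=None):
    """Mimic the lexicographic name filters applied by list_blobs (illustrative)."""
    if prefix is not None and not name.startswith(prefix):
        return False
    if start_offset is not None and name < start_offset:   # inclusive lower bound
        return False
    if end_offset is not None and name >= end_offset:      # exclusive upper bound
        return False
    return True

names = ["logs/a.txt", "logs/b.txt", "logs/c.txt", "tmp/x.txt"]
window = [n for n in names if in_listing_window(n, "logs/b", "logs/z")]
# window == ["logs/b.txt", "logs/c.txt"]
```

Note that the real filtering happens server-side; this sketch only shows which names a given window selects.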
list_notifications
list_notifications(client=None, timeout=60, retry=<google.api_core.retry.Retry object>)
List Pub/Sub notifications for this bucket.
See: https://cloud.google.com/storage/docs/json_api/v1/notifications/list
If user_project is set, bills the API request to that project.
Parameters | |
---|---|
Name | Description |
client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. |
retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. |
Returns | |
---|---|
Type | Description |
list of BucketNotification | Notification instances. |
lock_retention_policy
lock_retention_policy(client=None, timeout=60, retry=<google.api_core.retry.Retry object>)
Lock the bucket's retention policy.
Parameters | |
---|---|
Name | Description |
client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. |
retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. |
Exceptions | |
---|---|
Type | Description |
ValueError | if the bucket has no metageneration (i.e., new or never reloaded); if the bucket has no retention policy assigned; if the bucket's retention policy is already locked. |
make_private
make_private(recursive=False, future=False, client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Update bucket's ACL, revoking read access for anonymous users.
Parameters | |
---|---|
Name | Description |
recursive | bool: If True, this will make all blobs inside the bucket private as well. |
future | bool: If True, this will make all objects created in the future private as well. |
client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. |
if_metageneration_match | long: (Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value. |
if_metageneration_not_match | long: (Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value. |
retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. |
Exceptions | |
---|---|
Type | Description |
ValueError | If recursive is True, and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs and call make_private for each blob. |
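For buckets over the 256-blob cutoff, the workaround the error suggests is a per-blob loop. The pattern can be sketched as below; StubBlob is a minimal hypothetical stand-in for a real Blob so the loop is visible without credentials (with a real bucket you would iterate client.list_blobs(bucket) and each make_private call would issue an ACL update request):

```python
class StubBlob:
    """Hypothetical stand-in for google.cloud.storage.Blob (illustrative)."""

    def __init__(self, name):
        self.name = name
        self.public = True

    def make_private(self):
        # A real Blob would send an ACL update to the API here.
        self.public = False


def make_all_private(blobs):
    """Per-blob loop recommended for buckets with more than 256 blobs."""
    for blob in blobs:
        blob.make_private()
    return blobs


blobs = make_all_private([StubBlob("a.txt"), StubBlob("b.txt")])
# every blob in `blobs` is now private
```

Each iteration is an independent API request, so a failure partway through leaves earlier blobs already private; wrap the loop with your own retry or bookkeeping if that matters.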
make_public
make_public(recursive=False, future=False, client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Update bucket's ACL, granting read access to anonymous users.
Parameters | |
---|---|
Name | Description |
recursive | bool: If True, this will make all blobs inside the bucket public as well. |
future | bool: If True, this will make all objects created in the future public as well. |
client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. |
if_metageneration_match | long: (Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value. |
if_metageneration_not_match | long: (Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value. |
retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. |
Exceptions | |
---|---|
Type | Description |
ValueError | If recursive is True, and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs and call make_public for each blob. |
notification
notification(
topic_name=None,
topic_project=None,
custom_attributes=None,
event_types=None,
blob_name_prefix=None,
payload_format="NONE",
notification_id=None,
)
Factory: create a notification resource for the bucket.
See BucketNotification for parameters.
patch
patch(client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Sends all changed properties in a PATCH request.
Updates the _properties with the response from the backend.
If user_project is set, bills the API request to that project.
Parameters | |
---|---|
Name | Description |
client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. |
if_metageneration_match | long: (Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value. |
if_metageneration_not_match | long: (Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value. |
retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. |
path_helper
path_helper(bucket_name)
Relative URL path for a bucket.
Parameter | |
---|---|
Name | Description |
bucket_name | str: The bucket name in the path. |
Returns | |
---|---|
Type | Description |
str | The relative URL path for bucket_name . |
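A sketch of what this helper computes, assuming the conventional JSON API layout in which a bucket's relative path is "/b/" followed by its name (illustrative; bucket_path is a hypothetical name, not the library function):

```python
def bucket_path(bucket_name):
    """Relative URL path for a bucket under the JSON API (illustrative)."""
    return "/b/" + bucket_name


path = bucket_path("my-bucket")
# path == "/b/my-bucket"
```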
reload
reload(client=None, projection='noAcl', timeout=60, if_etag_match=None, if_etag_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.api_core.retry.Retry object>)
Reload properties from Cloud Storage.
If user_project is set, bills the API request to that project.
Parameters | |
---|---|
Name | Description |
client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
projection | str: (Optional) If used, must be 'full' or 'noAcl'. Defaults to 'noAcl'. Specifies the set of properties to return. |
timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. |
if_etag_match | Union[str, Set[str]]: (Optional) Make the operation conditional on whether the bucket's current ETag matches the given value. |
if_etag_not_match | Union[str, Set[str]]: (Optional) Make the operation conditional on whether the bucket's current ETag does not match the given value. |
if_metageneration_match | long: (Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value. |
if_metageneration_not_match | long: (Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value. |
retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. |
rename_blob
rename_blob(blob, new_name, client=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, if_source_generation_match=None, if_source_generation_not_match=None, if_source_metageneration_match=None, if_source_metageneration_not_match=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Rename the given blob using copy and delete operations.
If user_project is set, bills the API request to that project.
Effectively, copies blob to the same bucket with a new name, then deletes the blob.
Parameters | |
---|---|
Name | Description |
blob | Blob: The blob to be renamed. |
new_name | str: The new name for this blob. |
client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
if_generation_match | long: (Optional) Makes the operation conditional on whether the destination object's current generation matches the given value. |
if_generation_not_match | long: (Optional) Makes the operation conditional on whether the destination object's current generation does not match the given value. |
if_metageneration_match | long: (Optional) Makes the operation conditional on whether the destination object's current metageneration matches the given value. |
if_metageneration_not_match | long: (Optional) Makes the operation conditional on whether the destination object's current metageneration does not match the given value. |
if_source_generation_match | long: (Optional) Makes the operation conditional on whether the source object's generation matches the given value. Also used in the (implied) delete request. |
if_source_generation_not_match | long: (Optional) Makes the operation conditional on whether the source object's generation does not match the given value. Also used in the (implied) delete request. |
if_source_metageneration_match | long: (Optional) Makes the operation conditional on whether the source object's current metageneration matches the given value. Also used in the (implied) delete request. |
if_source_metageneration_not_match | long: (Optional) Makes the operation conditional on whether the source object's current metageneration does not match the given value. Also used in the (implied) delete request. |
timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. |
retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. |
Returns | |
---|---|
Type | Description |
Blob | The newly-renamed blob. |
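Because the rename is a copy followed by a delete, it is not atomic: two API requests are issued, and a failure between them can leave both objects in place. A stub sketch of that sequence, using a hypothetical in-memory StubBucket rather than the library code:

```python
class StubBucket:
    """Hypothetical stand-in for Bucket showing rename = copy + delete."""

    def __init__(self):
        self.objects = {}

    def copy_blob(self, name, new_name):
        # Real code: server-side copy within the same bucket (one API request).
        self.objects[new_name] = self.objects[name]

    def delete_blob(self, name):
        # Real code: delete request against the source object.
        del self.objects[name]

    def rename_blob(self, name, new_name):
        self.copy_blob(name, new_name)
        self.delete_blob(name)
        return new_name


bucket = StubBucket()
bucket.objects["old.txt"] = b"data"
bucket.rename_blob("old.txt", "new.txt")
# bucket.objects == {"new.txt": b"data"}
```

This is also why the source-conditional parameters (if_source_*) apply to the implied delete request as well as the copy.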
set_iam_policy
set_iam_policy(policy, client=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Update the IAM policy for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets/setIamPolicy
If user_project is set, bills the API request to that project.
Parameters | |
---|---|
Name | Description |
policy | Policy: The policy instance used to update the bucket's IAM policy. |
client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. |
retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. |
Returns | |
---|---|
Type | Description |
Policy | The policy instance, based on the resource returned from the setIamPolicy API request. |
test_iam_permissions
test_iam_permissions(permissions, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)
API call: test IAM permissions.
See https://cloud.google.com/storage/docs/json_api/v1/buckets/testIamPermissions
If user_project is set, bills the API request to that project.
Parameters | |
---|---|
Name | Description |
permissions | list of string: The permissions to check. |
client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. |
retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. |
Returns | |
---|---|
Type | Description |
list of string | The permissions returned by the testIamPermissions API request. |
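The call returns only the subset of the requested permissions that the caller actually holds, so callers typically compare the result against what they asked for. Conceptually this is a set intersection that preserves request order; a pure-Python sketch (granted_subset and the permission values are illustrative assumptions, not library API):

```python
def granted_subset(requested, held):
    """Permissions the caller holds, in the order they were requested."""
    held = set(held)
    return [p for p in requested if p in held]


requested = ["storage.buckets.get", "storage.buckets.delete"]
held = {"storage.buckets.get", "storage.objects.list"}
granted = granted_subset(requested, held)
# granted == ["storage.buckets.get"]
```

With the real method, an empty result means the caller holds none of the requested permissions, not that the request failed.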
update
update(client=None, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)
Sends all properties in a PUT request.
Updates the _properties with the response from the backend.
If user_project is set, bills the API request to that project.
Parameters | |
---|---|
Name | Description |
client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. |
if_metageneration_match | long: (Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value. |
if_metageneration_not_match | long: (Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value. |
retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. |