Client(project=<object object>, credentials=None, _http=None, client_info=None, client_options=None)
Client to bundle configuration needed for API requests.
Parameters

Name | Description
---|---
`project` | str or None. The project which the client acts on behalf of. Will be passed when creating a bucket. If not passed, falls back to the default inferred from the environment.
`credentials` | (Optional) The OAuth2 credentials to use for this client. If not passed (and if no …
`_http` | (Optional) HTTP object to make requests. Can be any object that defines …
`client_info` | The client info used to send a user-agent string along with API requests. If …
`client_options` | (Optional) Client options used to set user options on the client. The API endpoint should be set through `client_options`.
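Example (a sketch; "my-project" is a placeholder, and default credentials are assumed to be available in the environment):

>>> from google.cloud import storage
>>> client = storage.Client(project="my-project")
>>> client.project
'my-project'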
Properties
current_batch
Currently-active batch.
Returns

Type | Description
---|---
Batch or None | The batch at the top of the batch stack, or None if no batch is active.
Methods
batch
batch()
Factory constructor for batch object.
Returns

Type | Description
---|---
Batch | The batch object created.
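Example (a sketch, assuming a `client` and an existing `bucket`; the blob names are placeholders). The returned batch is typically used as a context manager, which defers the requests made inside the block and sends them together on exit:

>>> with client.batch():
...     bucket.delete_blob("to-remove-1.txt")
...     bucket.delete_blob("to-remove-2.txt")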
bucket
bucket(bucket_name, user_project=None)
Factory constructor for bucket object.
Parameters

Name | Description
---|---
`bucket_name` | str. The name of the bucket to be instantiated.
`user_project` | str. (Optional) The project ID to be billed for API requests made via the bucket.

Returns

Type | Description
---|---
Bucket | The bucket object created.
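Example (a sketch; the bucket name is a placeholder). Note that this factory makes no API request; the returned object is a local handle:

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.bucket("my-bucket-name")  # no API request is made
>>> bucket.name
'my-bucket-name'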
create_anonymous_client
create_anonymous_client()
Factory: return client with anonymous credentials.
Returns

Type | Description
---|---
Client | Instance with anonymous credentials and no project.
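Example (a sketch; the bucket name is a placeholder for any publicly readable bucket). Anonymous clients can read public data without credentials:

>>> from google.cloud import storage
>>> client = storage.Client.create_anonymous_client()
>>> bucket = client.bucket("some-public-bucket")  # no API request yet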
create_bucket
create_bucket(bucket_or_name, requester_pays=None, project=None, user_project=None, location=None, data_locations=None, predefined_acl=None, predefined_default_object_acl=None, timeout=60, retry=<google.api_core.retry.Retry object>)
API call: create a new bucket via a POST request.
See https://cloud.google.com/storage/docs/json_api/v1/buckets/insert
Parameters

Name | Description
---|---
`bucket_or_name` | Union[Bucket, str]. The bucket resource to create, or the name of the bucket to create.
`requester_pays` | bool. DEPRECATED. Use `Bucket().requester_pays` instead. (Optional) Whether requester pays for API requests for this bucket and its blobs.
`project` | str. (Optional) The project under which the bucket is to be created. If not passed, uses the project set on the client.
`user_project` | str. (Optional) The project ID to be billed for API requests made via the created bucket.
`location` | str. (Optional) The location of the bucket. If not passed, the default location, US, will be used. If specifying a dual-region, `data_locations` should also be set.
`data_locations` | list of str. (Optional) The list of regional locations of a custom dual-region bucket. Dual-regions require exactly 2 regional locations. See: https://cloud.google.com/storage/docs/locations
`predefined_acl` | str. (Optional) Name of predefined ACL to apply to the bucket. See: https://cloud.google.com/storage/docs/access-control/lists#predefined-acl
`predefined_default_object_acl` | str. (Optional) Name of predefined ACL to apply to the bucket's objects. See: https://cloud.google.com/storage/docs/access-control/lists#predefined-acl
`timeout` | Optional[Union[float, Tuple[float, float]]]. The amount of time, in seconds, to wait for the server response. Can also be passed as a tuple (connect_timeout, read_timeout).
`retry` | Optional[Union[google.api_core.retry.Retry, google.cloud.storage.retry.ConditionalRetryPolicy]]. How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options. A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side effects) but become safe to retry if a condition such as if_metageneration_match is set. See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.
Exceptions

Type | Description
---|---
google.cloud.exceptions.Conflict | If the bucket already exists.

Examples

Create a bucket using a string:

.. literalinclude:: snippets.py
   :start-after: START create_bucket
   :end-before: END create_bucket
   :dedent: 4

Create a bucket using a resource:

>>> from google.cloud import storage
>>> client = storage.Client()
>>> # Set properties on a plain resource object.
>>> bucket = storage.Bucket("my-bucket-name")
>>> bucket.location = "europe-west6"
>>> bucket.storage_class = "COLDLINE"
>>> # Pass that resource object to the client.
>>> bucket = client.create_bucket(bucket)  # API request.
create_hmac_key
create_hmac_key(
service_account_email, project_id=None, user_project=None, timeout=60, retry=None
)
Create an HMAC key for a service account.
Parameters

Name | Description
---|---
`service_account_email` | str. E-mail address of the service account.
`project_id` | str. (Optional) Explicit project ID for the key. Defaults to the client's project.
`user_project` | str. (Optional) This parameter is currently ignored.
`timeout` | float or tuple. (Optional) The amount of time, in seconds, to wait for the server response.
`retry` | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy. (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options. A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side effects) but become safe to retry if a condition such as if_metageneration_match is set. See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

Returns

Type | Description
---|---
Tuple[HMACKeyMetadata, str] | Metadata for the created key, plus the bytes of the key's secret, which is a 40-character base64-encoded string.
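Example (a sketch; the service account e-mail is a placeholder, and the call makes an API request):

>>> metadata, secret = client.create_hmac_key(
...     service_account_email="svc-account@my-project.iam.gserviceaccount.com")
>>> metadata.access_id  # pair this with the secret for HMAC authentication
>>> len(secret)
40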
download_blob_to_file
download_blob_to_file(blob_or_uri, file_obj, start=None, end=None, raw_download=False, if_etag_match=None, if_etag_not_match=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, checksum='md5', retry=<google.api_core.retry.Retry object>)
Download the contents of a blob object or blob URI into a file-like object.
Parameters

Name | Description
---|---
`blob_or_uri` | Union[Blob, str]. The blob resource to download, or a URI ('gs://...') identifying it.
`file_obj` | file. A file handle to which to write the blob's data.
`start` | int. (Optional) The first byte in a range to be downloaded.
`end` | int. (Optional) The last byte in a range to be downloaded.
`raw_download` | bool. (Optional) If true, download the object without any expansion.
`if_etag_match` | Union[str, Set[str]]. (Optional) See :ref: …
`if_etag_not_match` | Union[str, Set[str]]. (Optional) See :ref: …
`if_generation_match` | long. (Optional) See :ref: …
`if_generation_not_match` | long. (Optional) See :ref: …
`if_metageneration_match` | long. (Optional) See :ref: …
`if_metageneration_not_match` | long. (Optional) See :ref: …
`timeout` | Optional[Union[float, Tuple[float, float]]]. (Optional) The amount of time, in seconds, to wait for the server response.
`checksum` | str. (Optional) The type of checksum to compute to verify the integrity of the object. The response headers must contain a checksum of the requested type. If the headers lack an appropriate checksum (for instance in the case of transcoded or ranged downloads where the remote service does not know the correct checksum, including downloads where chunk_size is set) an INFO-level log will be emitted. Supported values are "md5", "crc32c" and None. The default is "md5".
`retry` | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy. (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options. A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side effects) but become safe to retry if a condition such as if_metageneration_match is set. See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them. Media operations (downloads and uploads) do not support non-default predicates in a Retry object. The default will always be used. Other configuration changes for Retry objects such as delays and deadlines are respected.

Examples

Download a blob using a blob resource:

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket('my-bucket-name')
>>> blob = storage.Blob('path/to/blob', bucket)
>>> with open('file-to-download-to', 'wb') as file_obj:
...     client.download_blob_to_file(blob, file_obj)  # API request.

Download a blob using a URI:

>>> from google.cloud import storage
>>> client = storage.Client()
>>> with open('file-to-download-to', 'wb') as file_obj:
...     client.download_blob_to_file(
...         'gs://bucket_name/path/to/blob', file_obj)
generate_signed_post_policy_v4
generate_signed_post_policy_v4(
bucket_name,
blob_name,
expiration,
conditions=None,
fields=None,
credentials=None,
virtual_hosted_style=False,
bucket_bound_hostname=None,
scheme="http",
service_account_email=None,
access_token=None,
)
Generate a V4 signed policy object.
Parameters

Name | Description
---|---
`bucket_name` | str. Bucket name.
`blob_name` | str. Object name.
`expiration` | Union[Integer, datetime.datetime, datetime.timedelta]. Policy expiration time. If a …
`conditions` | list. (Optional) List of POST policy conditions, which are used to restrict what is allowed in the request.
`fields` | dict. (Optional) Additional elements to include in the request.
`credentials` | (Optional) Credentials object with an associated private key to sign text.
`virtual_hosted_style` | bool. (Optional) If True, construct the URL relative to the bucket virtual hostname, e.g., '…'.
`bucket_bound_hostname` | str. (Optional) If passed, construct the URL relative to the bucket-bound hostname. Value can be bare or with a scheme, e.g., 'example.com' or 'http://example.com'. See: https://cloud.google.com/storage/docs/request-endpoints#cname
`scheme` | str. (Optional) If `bucket_bound_hostname` is passed without a scheme, use this value as the scheme. Defaults to "http".
`service_account_email` | str. (Optional) E-mail address of the service account.
`access_token` | str. (Optional) Access token for a service account.

Returns

Type | Description
---|---
dict | Signed POST policy.

Example

Generate a signed POST policy and upload a file:

>>> import datetime
>>> import requests
>>> from google.cloud import storage
>>> client = storage.Client()
>>> tz = datetime.timezone(datetime.timedelta(hours=1), 'CET')
>>> policy = client.generate_signed_post_policy_v4(
...     "bucket-name",
...     "blob-name",
...     expiration=datetime.datetime(2020, 3, 17, tzinfo=tz),
...     conditions=[
...         ["content-length-range", 0, 255],
...     ],
...     fields={
...         "x-goog-meta-hello": "world",
...     },
... )
>>> with open("file-to-upload", "rb") as f:
...     files = {"file": ("file-to-upload", f)}
...     requests.post(policy["url"], data=policy["fields"], files=files)
get_bucket
get_bucket(bucket_or_name, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.api_core.retry.Retry object>)
API call: retrieve a bucket via a GET request.
See https://cloud.google.com/storage/docs/json_api/v1/buckets/get
Parameters

Name | Description
---|---
`bucket_or_name` | Union[Bucket, str]. The bucket resource to retrieve, or the name of the bucket to retrieve.
`timeout` | Optional[Union[float, Tuple[float, float]]]. The amount of time, in seconds, to wait for the server response. Can also be passed as a tuple (connect_timeout, read_timeout).
`if_metageneration_match` | Optional[long]. Make the operation conditional on whether the bucket's current metageneration matches the given value.
`if_metageneration_not_match` | Optional[long]. Make the operation conditional on whether the bucket's current metageneration does not match the given value.
`retry` | Optional[Union[google.api_core.retry.Retry, google.cloud.storage.retry.ConditionalRetryPolicy]]. How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options. A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side effects) but become safe to retry if a condition such as if_metageneration_match is set. See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.

Exceptions

Type | Description
---|---
google.cloud.exceptions.NotFound | If the bucket is not found.

Examples

Retrieve a bucket using a string:

.. literalinclude:: snippets.py
   :start-after: START get_bucket
   :end-before: END get_bucket
   :dedent: 4

Get a bucket using a resource:

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket("my-bucket-name")
>>> # Time passes. Another program may have modified the bucket
... # in the meantime, so you want to get the latest state.
>>> bucket = client.get_bucket(bucket)  # API request.
get_hmac_key_metadata
get_hmac_key_metadata(access_id, project_id=None, user_project=None, timeout=60)
Return a metadata instance for the given HMAC key.
Parameters

Name | Description
---|---
`access_id` | str. Unique ID of an existing key.
`project_id` | str. (Optional) Project ID of an existing key. Defaults to the client's project.
`timeout` | float or tuple. (Optional) The amount of time, in seconds, to wait for the server response.
`user_project` | str. (Optional) This parameter is currently ignored.
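Example (a sketch; the access ID is a placeholder, and the call makes an API request):

>>> metadata = client.get_hmac_key_metadata("GOOG1EXAMPLEACCESSID")
>>> metadata.state  # e.g. 'ACTIVE' or 'INACTIVE'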
get_service_account_email
get_service_account_email(project=None, timeout=60, retry=<google.api_core.retry.Retry object>)
Get the email address of the project's GCS service account.

Parameters

Name | Description
---|---
`project` | str. (Optional) Project ID to use for retrieving the GCS service account email address. Defaults to the client's project.
`timeout` | float or tuple. (Optional) The amount of time, in seconds, to wait for the server response.
`retry` | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy. (Optional) How to retry the RPC.

Returns

Type | Description
---|---
str | Service account email address.
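Example (a sketch; the address shown is illustrative only, not a real account):

>>> client.get_service_account_email()
'service-123456789@gs-project-accounts.iam.gserviceaccount.com'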
list_blobs
list_blobs(bucket_or_name, max_results=None, page_token=None, prefix=None, delimiter=None, start_offset=None, end_offset=None, include_trailing_delimiter=None, versions=None, projection='noAcl', fields=None, page_size=None, timeout=60, retry=<google.api_core.retry.Retry object>)
Return an iterator used to find blobs in the bucket.
If `user_project` is set, bills the API request to that project.
Parameters

Name | Description
---|---
`bucket_or_name` | Union[Bucket, str]. The bucket resource, or the name of the bucket, in which to list blobs.
`max_results` | int. (Optional) The maximum number of blobs to return.
`page_token` | str. (Optional) If present, return the next batch of blobs, using the value, which must correspond to the …
`prefix` | str. (Optional) Prefix used to filter blobs.
`delimiter` | str. (Optional) Delimiter, used with `prefix` to emulate hierarchy.
`start_offset` | str. (Optional) Filter results to objects whose names are lexicographically equal to or after `start_offset`.
`end_offset` | str. (Optional) Filter results to objects whose names are lexicographically before `end_offset`.
`include_trailing_delimiter` | bool. (Optional) If true, objects that end in exactly one instance of `delimiter` will have their metadata included in the listing in addition to the prefixes.
`versions` | bool. (Optional) Whether object versions should be returned as separate blobs.
`projection` | str. (Optional) If used, must be 'full' or 'noAcl'. Defaults to 'noAcl'.
`fields` | str. (Optional) Selector specifying which fields to include in a partial response. Must be a list of fields. For example, to get a partial response with just the next page token and the name and language of each blob returned: …
`page_size` | int. (Optional) Maximum number of blobs to return in each page. Defaults to a value set by the API.
`timeout` | Optional[Union[float, Tuple[float, float]]]. The amount of time, in seconds, to wait for the server response. Can also be passed as a tuple (connect_timeout, read_timeout).
`retry` | Optional[Union[google.api_core.retry.Retry, google.cloud.storage.retry.ConditionalRetryPolicy]]. How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options. A google.cloud.storage.retry.ConditionalRetryPolicy value wraps a Retry object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side effects) but become safe to retry if a condition such as if_metageneration_match is set. See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.
list_buckets
list_buckets(max_results=None, page_token=None, prefix=None, projection='noAcl', fields=None, project=None, page_size=None, timeout=60, retry=<google.api_core.retry.Retry object>)
Get all buckets in the project associated with the client.
This will not populate the list of blobs available in each bucket.
.. literalinclude:: snippets.py
   :start-after: START list_buckets
   :end-before: END list_buckets
   :dedent: 4
This implements "storage.buckets.list".
Parameters

Name | Description
---|---
`max_results` | int. (Optional) The maximum number of buckets to return.
`page_token` | str. (Optional) If present, return the next batch of buckets, using the value, which must correspond to the …
`prefix` | str. (Optional) Filter results to buckets whose names begin with this prefix.
`projection` | str. (Optional) Specifies the set of properties to return. If used, must be 'full' or 'noAcl'. Defaults to 'noAcl'.
`fields` | str. (Optional) Selector specifying which fields to include in a partial response. Must be a list of fields. For example, to get a partial response with just the next page token and the ID of each bucket returned: 'items/id,nextPageToken'.
`project` | str. (Optional) The project whose buckets are to be listed. If not passed, uses the project set on the client.
`page_size` | int. (Optional) Maximum number of buckets to return in each page. Defaults to a value set by the API.
`timeout` | float or tuple. (Optional) The amount of time, in seconds, to wait for the server response.
`retry` | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy. (Optional) How to retry the RPC.

Exceptions

Type | Description
---|---
ValueError | If both `project` is None and the client's project is also None.

Returns

Type | Description
---|---
Iterator of Bucket | Iterator of all Bucket instances belonging to this project.
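Example (a sketch, assuming the client has a project set; each iteration step may trigger an API request for the next page):

>>> for bucket in client.list_buckets():
...     print(bucket.name)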
list_hmac_keys
list_hmac_keys(max_results=None, service_account_email=None, show_deleted_keys=None, project_id=None, user_project=None, timeout=60, retry=<google.api_core.retry.Retry object>)
List HMAC keys for a project.
Parameters

Name | Description
---|---
`max_results` | int. (Optional) Max number of keys to return in a given page.
`service_account_email` | str. (Optional) Limit keys to those created by the given service account.
`show_deleted_keys` | bool. (Optional) Include deleted keys in the list. Default is to exclude them.
`project_id` | str. (Optional) Explicit project ID for the key. Defaults to the client's project.
`user_project` | str. (Optional) This parameter is currently ignored.
`timeout` | float or tuple. (Optional) The amount of time, in seconds, to wait for the server response.
`retry` | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy. (Optional) How to retry the RPC.

Returns

Type | Description
---|---
Iterator of HMACKeyMetadata | Metadata instances for the project's HMAC keys.
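Example (a sketch; listing returns metadata only, never the key secrets):

>>> for metadata in client.list_hmac_keys():
...     print(metadata.access_id, metadata.state)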
lookup_bucket
lookup_bucket(bucket_name, timeout=60, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.api_core.retry.Retry object>)
Get a bucket by name, returning None if not found.
You can use this if you would rather check for a None value than catching an exception:
.. literalinclude:: snippets.py
   :start-after: START lookup_bucket
   :end-before: END lookup_bucket
   :dedent: 4
Parameters

Name | Description
---|---
`bucket_name` | str. The name of the bucket to get.
`timeout` | float or tuple. (Optional) The amount of time, in seconds, to wait for the server response.
`if_metageneration_match` | long. (Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value.
`if_metageneration_not_match` | long. (Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value.
`retry` | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy. (Optional) How to retry the RPC.

Returns

Type | Description
---|---
Bucket | The bucket matching the name provided, or None if not found.
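Example (a sketch; the bucket name is a placeholder). A common pattern is to create the bucket when the lookup returns None:

>>> bucket = client.lookup_bucket("my-bucket-name")
>>> if bucket is None:
...     bucket = client.create_bucket("my-bucket-name")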