Module for file-like access of blobs, usually invoked via `Blob.open()`.

## Classes

### BlobReader

`BlobReader(blob, chunk_size=None, retry=<google.api_core.retry.retry_unary.Retry object>, **download_kwargs)`

A file-like object that reads from a blob.
Parameters

| Name | Type | Description |
|---|---|---|
| `blob` | `google.cloud.storage.blob.Blob` | The blob to download. |
| `chunk_size` | `long` | (Optional) The minimum number of bytes to read at a time. If fewer bytes than the chunk_size are requested, the remainder is buffered. The default is the chunk_size of the blob, or 40 MiB. |
| `retry` | `google.api_core.retry.Retry` or `google.cloud.storage.retry.ConditionalRetryPolicy` | (Optional) How to retry the RPC. A None value will disable retries. A `google.api_core.retry.Retry` value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options. A `google.cloud.storage.retry.ConditionalRetryPolicy` value wraps a `Retry` object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side effects) but become safe to retry if a condition such as `if_metageneration_match` is set. See the `retry.py` source code and docstrings in this package (`google.cloud.storage.retry`) for information on retry types and how to configure them. Media operations (downloads and uploads) do not support non-default predicates in a `Retry` object; the default will always be used. Other configuration changes for `Retry` objects, such as delays and deadlines, are respected. |
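A minimal usage sketch, assuming placeholder bucket and object names: opening a blob in binary read mode via `Blob.open()` returns a `BlobReader`.

```python
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("data/report.csv")  # placeholder names

# blob.open("rb") returns a BlobReader; chunk_size controls how many bytes
# are fetched (and buffered) per request.
with blob.open("rb", chunk_size=1024 * 1024) as reader:
    header = reader.read(64)   # served from the buffered chunk where possible
    reader.seek(0)             # BlobReader supports seek() and tell()
    data = reader.read()       # download the rest of the object
```

Constructing `BlobReader` directly is equivalent; `Blob.open()` is simply the usual entry point.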
### BlobWriter

`BlobWriter(blob, chunk_size=None, ignore_flush=False, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>, **upload_kwargs)`

A file-like object that writes to a blob.
Parameters

| Name | Type | Description |
|---|---|---|
| `blob` | `google.cloud.storage.blob.Blob` | The blob to which to write. |
| `chunk_size` | `long` | (Optional) The maximum number of bytes to buffer before sending data to the server, and the size of each request when data is sent. Writes are implemented as a "resumable upload", so chunk_size for writes must be exactly a multiple of 256 KiB, as with other resumable uploads. The default is the chunk_size of the blob, or 40 MiB. |
| `ignore_flush` | `bool` | Makes `flush()` do nothing instead of raising an error. `flush()` without closing is not supported by the remote service, so calling it on this class normally results in `io.UnsupportedOperation`. However, that behavior is incompatible with some consumers and wrappers of file objects in Python, such as `zipfile.ZipFile` or `io.TextIOWrapper`. Setting `ignore_flush` causes `flush()` to successfully do nothing, for compatibility with those contexts. The correct way to actually flush data to the remote server is to `close()` (using this object as a context manager is recommended). |
| `retry` | `google.api_core.retry.Retry` or `google.cloud.storage.retry.ConditionalRetryPolicy` | (Optional) How to retry the RPC. A None value will disable retries. A `google.api_core.retry.Retry` value will enable retries, and the object will define retriable response codes and errors and configure backoff and timeout options. A `google.cloud.storage.retry.ConditionalRetryPolicy` value wraps a `Retry` object and activates it only if certain conditions are met. This class exists to provide safe defaults for RPC calls that are not technically safe to retry normally (due to potential data duplication or other side effects) but become safe to retry if a condition such as `if_metageneration_match` is set. See the `retry.py` source code and docstrings in this package (`google.cloud.storage.retry`) for information on retry types and how to configure them. Media operations (downloads and uploads) do not support non-default predicates in a `Retry` object; the default will always be used. Other configuration changes for `Retry` objects, such as delays and deadlines, are respected. |
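A minimal usage sketch, again with placeholder names: opening a blob in binary write mode via `Blob.open()` returns a `BlobWriter`. `ignore_flush=True` is included only to illustrate the parameter; it is most useful when the writer is wrapped by something that calls `flush()`, such as `zipfile.ZipFile` or `io.TextIOWrapper`.

```python
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("data/output.bin")  # placeholder names

# blob.open("wb") returns a BlobWriter. chunk_size must be a multiple of
# 256 KiB; buffered data is sent as a resumable upload as the buffer fills.
with blob.open("wb", chunk_size=256 * 1024, ignore_flush=True) as writer:
    writer.write(b"first part\n")
    writer.write(b"second part\n")
# Leaving the context manager calls close(), which finalizes the upload.
```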
### SlidingBuffer

`SlidingBuffer()`

A non-rewindable buffer that frees memory of chunks already consumed.
This class is necessary because google-resumable-media-python expects `tell()` to work relative to the start of the file, not relative to a position in an intermediate buffer. Using this class, we present an external interface with consistent seek and tell behavior without having to actually store bytes already sent.
The behavior of this class differs from an ordinary `BytesIO` buffer: `write()` always appends to the end of the buffer and does not otherwise change the seek position; `flush()` deletes all data already read (data to the left of the seek position); and `tell()` reports the seek position of the buffer, including all deleted data. Additionally, the class implements `len()`, which reports the size of the actual underlying buffer.

This class does not attempt to implement the entire Python I/O interface.
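`SlidingBuffer` is an internal helper rather than part of the public surface, but a short sketch of the semantics described above (assuming `write()`, `read()`, `flush()`, `tell()`, and `len()` behave exactly as documented) may help:

```python
from google.cloud.storage.fileio import SlidingBuffer

buf = SlidingBuffer()
buf.write(b"x" * 1024)   # append only; the seek position does not move
chunk = buf.read(512)    # consume the first 512 bytes

buf.flush()              # discard everything to the left of the seek position
print(buf.tell())        # 512: absolute position, deleted data included
print(len(buf))          # 512: only the bytes still held in memory
```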