Class BigQueryReadClient

Client for interacting with BigQuery Storage API.

The BigQuery Storage API can be used to read data stored in BigQuery.

Inheritance

builtins.object > google.cloud.bigquery_storage_v1.services.big_query_read.client.BigQueryReadClient > BigQueryReadClient

Properties

transport

Returns the transport used by the client instance.

Returns
Type | Description
BigQueryReadTransport | The transport used by the client instance.

Methods

__exit__

__exit__(type, value, traceback)

Releases underlying transport's resources.

.. warning:: ONLY use as a context manager if the transport is NOT shared with other clients! Exiting the with block will CLOSE the transport and may cause errors in other clients!
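
For illustration, a minimal sketch of context-manager use, assuming default application credentials are available; the transport is released when the block exits:

    from google.cloud import bigquery_storage

    # Only do this when the transport is not shared with other clients;
    # the transport is closed when the with block exits.
    with bigquery_storage.BigQueryReadClient() as client:
        ...  # issue read requests here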

common_billing_account_path

common_billing_account_path(billing_account: str)

Returns a fully-qualified billing_account string.

Parameter
Name | Description
billing_account str

common_folder_path

common_folder_path(folder: str)

Returns a fully-qualified folder string.

Parameter
Name | Description
folder str

common_location_path

common_location_path(project: str, location: str)

Returns a fully-qualified location string.

Parameters
Name | Description
project str
location str

common_organization_path

common_organization_path(organization: str)

Returns a fully-qualified organization string.

Parameter
Name | Description
organization str

common_project_path

common_project_path(project: str)

Returns a fully-qualified project string.

Parameter
Name | Description
project str

create_read_session

create_read_session(request: Optional[Union[google.cloud.bigquery_storage_v1.types.storage.CreateReadSessionRequest, dict]] = None, *, parent: Optional[str] = None, read_session: Optional[google.cloud.bigquery_storage_v1.types.stream.ReadSession] = None, max_stream_count: Optional[int] = None, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())

Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.

A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.

Data is assigned to each stream such that roughly the same number of rows can be read from each stream. Because the server-side unit for assigning data is collections of rows, the API does not guarantee that each stream will return the same number of rows. Additionally, the limits are enforced based on the number of pre-filtered rows, so some filters can lead to lopsided assignments.

Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.
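
For illustration, a minimal sketch of creating a read session with a column projection and a push-down row filter; the project, dataset, table, column, and filter values are placeholders:

    from google.cloud import bigquery_storage
    from google.cloud.bigquery_storage_v1 import types

    client = bigquery_storage.BigQueryReadClient()

    requested_session = types.ReadSession(
        table="projects/your-project/datasets/your_dataset/tables/your_table",
        data_format=types.DataFormat.ARROW,
        read_options=types.ReadSession.TableReadOptions(
            selected_fields=["name", "state"],   # columns to read
            row_restriction='state = "CA"',      # push-down filter
        ),
    )

    session = client.create_read_session(
        parent="projects/your-billing-project",
        read_session=requested_session,
        max_stream_count=1,
    )

The streams listed on the returned session can then be passed to read_rows.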

Parameters
Name | Description
request Union[google.cloud.bigquery_storage_v1.types.CreateReadSessionRequest, dict]

The request object. Request message for CreateReadSession.

parent str

Required. The request project that owns the session, in the form of projects/{project_id}. This corresponds to the parent field on the request instance; if request is provided, this should not be set.

read_session google.cloud.bigquery_storage_v1.types.ReadSession

Required. Session to be created. This corresponds to the read_session field on the request instance; if request is provided, this should not be set.

max_stream_count int

Max initial number of streams. If unset or zero, the server will provide a value of streams so as to produce reasonable throughput. Must be non-negative. The number of streams may be lower than the requested number, depending on the amount of parallelism that is reasonable for the table. An error will be returned if the max count is greater than the current system max limit of 1,000. Streams must be read starting from offset 0. This corresponds to the max_stream_count field on the request instance; if request is provided, this should not be set.

retry google.api_core.retry.Retry

Designation of what errors, if any, should be retried.

timeout float

The timeout for this request.

metadata Sequence[Tuple[str, str]]

Strings which should be sent along with the request as metadata.

Returns
Type | Description
google.cloud.bigquery_storage_v1.types.ReadSession | Information about the ReadSession.

from_service_account_file

from_service_account_file(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

Parameters
Name | Description
filename str

The path to the service account private key json file.

args

Additional arguments to pass to the constructor.

kwargs

Additional arguments to pass to the constructor.

Returns
Type | Description
BigQueryReadClient | The constructed client.
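
For illustration, a minimal sketch; the key file path is a placeholder for a downloaded service account key:

    from google.cloud import bigquery_storage

    client = bigquery_storage.BigQueryReadClient.from_service_account_file(
        "/path/to/service-account-key.json"
    )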

from_service_account_info

from_service_account_info(info: dict, *args, **kwargs)

Creates an instance of this client using the provided credentials info.

Parameters
Name | Description
info dict

The service account private key info.

args

Additional arguments to pass to the constructor.

kwargs

Additional arguments to pass to the constructor.

Returns
Type | Description
BigQueryReadClient | The constructed client.
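
For illustration, a minimal sketch that loads the key material into a dict first (for example, after fetching it from a secret store); the path is a placeholder:

    import json

    from google.cloud import bigquery_storage

    with open("/path/to/service-account-key.json") as f:
        info = json.load(f)

    client = bigquery_storage.BigQueryReadClient.from_service_account_info(info)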

from_service_account_json

from_service_account_json(filename: str, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

Parameters
Name | Description
filename str

The path to the service account private key json file.

args

Additional arguments to pass to the constructor.

kwargs

Additional arguments to pass to the constructor.

Returns
Type | Description
BigQueryReadClient | The constructed client.

parse_common_billing_account_path

parse_common_billing_account_path(path: str)

Parse a billing_account path into its component segments.

Parameter
Name | Description
path str

parse_common_folder_path

parse_common_folder_path(path: str)

Parse a folder path into its component segments.

Parameter
Name | Description
path str

parse_common_location_path

parse_common_location_path(path: str)

Parse a location path into its component segments.

Parameter
Name | Description
path str

parse_common_organization_path

parse_common_organization_path(path: str)

Parse an organization path into its component segments.

Parameter
Name | Description
path str

parse_common_project_path

parse_common_project_path(path: str)

Parse a project path into its component segments.

Parameter
Name | Description
path str

parse_read_session_path

parse_read_session_path(path: str)

Parses a read_session path into its component segments.

Parameter
Name | Description
path str

parse_read_stream_path

parse_read_stream_path(path: str)

Parses a read_stream path into its component segments.

Parameter
Name | Description
path str

parse_table_path

parse_table_path(path: str)

Parses a table path into its component segments.

Parameter
Name | Description
path str

read_rows

read_rows(name, offset=0, retry=<_MethodDefault._DEFAULT_VALUE: <object object>>, timeout=<_MethodDefault._DEFAULT_VALUE: <object object>>, metadata=(), retry_delay_callback=None)

Reads rows from the table in the format prescribed by the read session. Each response contains one or more table rows, up to a maximum of 10 MiB per response; read requests which attempt to read individual rows larger than this will fail.

Each request also returns a set of stream statistics reflecting the estimated total number of rows in the read stream. This number is computed based on the total table size and the number of active streams in the read session, and may change as other streams continue to read data.

.. rubric:: Example

    from google.cloud import bigquery_storage

    client = bigquery_storage.BigQueryReadClient()

    # TODO: Initialize ``table``:
    table = "projects/{}/datasets/{}/tables/{}".format(
        "your-data-project-id",
        "your_dataset_id",
        "your_table_id",
    )

    # TODO: Initialize ``parent``:
    parent = "projects/your-billing-project-id"

    requested_session = bigquery_storage.types.ReadSession(
        table=table,
        data_format=bigquery_storage.types.DataFormat.AVRO,
    )
    session = client.create_read_session(
        parent=parent, read_session=requested_session
    )

    stream = session.streams[0]  # TODO: Also read any other streams.
    read_rows_stream = client.read_rows(stream.name)

    for element in read_rows_stream.rows(session):
        # process element
        pass

Parameters
Name | Description
name str

Required. Name of the stream to start reading from, of the form projects/{project_id}/locations/{location}/sessions/{session_id}/streams/{stream_id}

offset Optional[int]

The starting offset from which to begin reading rows from in the stream. The offset requested must be less than the last row read from ReadRows. Requesting a larger offset is undefined.

retry Optional[google.api_core.retry.Retry]

A retry object used to retry requests. If None is specified, requests will not be retried.

timeout Optional[float]

The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.

metadata Optional[Sequence[Tuple[str, str]]]

Additional metadata that is provided to the method.

retry_delay_callback Optional[Callable[[float], None]]

If the client receives a retryable error that asks it to delay its next attempt and retry_delay_callback is not None, BigQueryReadClient calls retry_delay_callback with the delay duration (in seconds) before it starts sleeping until the next attempt (see the sketch after the tables below).

Exceptions
Type | Description
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason.
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed.
ValueError | If the parameters are invalid.
Returns
Type | Description
ReadRowsStream | An iterable of ReadRowsResponse.
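
For illustration, a minimal sketch of the retry_delay_callback parameter described above; the stream name is a placeholder for a stream obtained from create_read_session:

    from google.cloud import bigquery_storage

    client = bigquery_storage.BigQueryReadClient()

    def on_retry_delay(delay_seconds):
        # Called with the delay (in seconds) before the client sleeps
        # and retries the ReadRows request.
        print(f"ReadRows retry in {delay_seconds:.1f}s")

    stream_name = (  # placeholder stream name
        "projects/your-project/locations/us/sessions/your-session/streams/your-stream"
    )
    read_rows_stream = client.read_rows(
        stream_name,
        retry_delay_callback=on_retry_delay,
    )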

read_session_path

read_session_path(project: str, location: str, session: str)

Returns a fully-qualified read_session string.

Parameters
Name | Description
project str
location str
session str

read_stream_path

read_stream_path(project: str, location: str, session: str, stream: str)

Returns a fully-qualified read_stream string.

Parameters
Name | Description
project str
location str
session str
stream str

split_read_stream

split_read_stream(request: Optional[Union[google.cloud.bigquery_storage_v1.types.storage.SplitReadStreamRequest, dict]] = None, *, retry: Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault] = <_MethodDefault._DEFAULT_VALUE: <object object>>, timeout: Optional[float] = None, metadata: Sequence[Tuple[str, str]] = ())

Splits a given ReadStream into two ReadStream objects. These ReadStream objects are referred to as the primary and the residual streams of the split. The original ReadStream can still be read from in the same manner as before. Both of the returned ReadStream objects can also be read from, and the rows returned by both child streams will be the same as the rows read from the original stream.

Moreover, the two child streams will be allocated back-to-back in the original ReadStream. Concretely, it is guaranteed that for streams original, primary, and residual, that original[0-j] = primary[0-j] and original[j-n] = residual[0-m] once the streams have been read to completion.
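
For illustration, a minimal sketch; the stream name and split fraction are placeholders, and in practice the name would come from a stream returned by create_read_session:

    from google.cloud import bigquery_storage
    from google.cloud.bigquery_storage_v1 import types

    client = bigquery_storage.BigQueryReadClient()

    stream_name = (  # placeholder: a stream name from an existing read session
        "projects/your-project/locations/us/sessions/your-session/streams/your-stream"
    )
    request = types.SplitReadStreamRequest(
        name=stream_name,
        fraction=0.5,  # split the remaining rows roughly in half
    )
    response = client.split_read_stream(request=request)

    primary = response.primary_stream      # continues from the original stream
    residual = response.remainder_stream   # covers the rows after the split point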

Parameters
Name | Description
request Union[google.cloud.bigquery_storage_v1.types.SplitReadStreamRequest, dict]

The request object. Request message for SplitReadStream.

retry google.api_core.retry.Retry

Designation of what errors, if any, should be retried.

timeout float

The timeout for this request.

metadata Sequence[Tuple[str, str]]

Strings which should be sent along with the request as metadata.

Returns
Type | Description
google.cloud.bigquery_storage_v1.types.SplitReadStreamResponse | Response message for SplitReadStream.

table_path

table_path(project: str, dataset: str, table: str)

Returns a fully-qualified table string.

Parameters
Name | Description
project str
dataset str
table str
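
For illustration, a minimal sketch of building and parsing a table resource name with these helpers; the project, dataset, and table IDs are placeholders:

    from google.cloud import bigquery_storage

    # Build the resource name expected by ReadSession.table.
    table = bigquery_storage.BigQueryReadClient.table_path(
        "your-project", "your_dataset", "your_table"
    )
    # -> "projects/your-project/datasets/your_dataset/tables/your_table"

    # Recover the components as a dict.
    parts = bigquery_storage.BigQueryReadClient.parse_table_path(table)
    # -> {"project": "your-project", "dataset": "your_dataset", "table": "your_table"}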