Class BigQueryReadClient (0.8.0)

BigQueryReadClient(
    transport=None,
    channel=None,
    credentials=None,
    client_config=None,
    client_info=None,
    client_options=None,
)

Client for interacting with BigQuery Storage API.

The BigQuery storage API can be used to read data stored in BigQuery.

Inheritance

builtins.object > google.cloud.bigquery_storage_v1.gapic.big_query_read_client.BigQueryReadClient > google.cloud.bigquery_storage_v1.client.BigQueryReadClient > BigQueryReadClient

Methods

BigQueryReadClient

BigQueryReadClient(
    transport=None,
    channel=None,
    credentials=None,
    client_config=None,
    client_info=None,
    client_options=None,
)

Constructor.

Parameters
Name | Description
channel grpc.Channel

DEPRECATED. A Channel instance through which to make calls. This argument is mutually exclusive with credentials; providing both will raise an exception.

credentials google.auth.credentials.Credentials

The authorization credentials to attach to requests. These credentials identify this application to the service. If none are specified, the client will attempt to ascertain the credentials from the environment. This argument is mutually exclusive with providing a transport instance to transport; doing so will raise an exception.

client_config dict

DEPRECATED. A dictionary of call options for each method. If not specified, the default configuration is used.

client_info google.api_core.gapic_v1.client_info.ClientInfo

The client info used to send a user-agent string along with API requests. If None, then default info will be used. Generally, you only need to set this if you're developing your own client library.

client_options Union[dict, google.api_core.client_options.ClientOptions]

Client options used to set user options on the client. The API endpoint should be set through client_options.
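
A minimal construction sketch, assuming credentials are discovered from the environment; the api_endpoint value passed to client_options is only an illustrative placeholder:

from google.cloud import bigquery_storage_v1

# Credentials are discovered from the environment (for example, GOOGLE_APPLICATION_CREDENTIALS).
client = bigquery_storage_v1.BigQueryReadClient()

# Optionally override the API endpoint through client_options (placeholder value shown).
client = bigquery_storage_v1.BigQueryReadClient(
    client_options={"api_endpoint": "bigquerystorage.googleapis.com"}
)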

create_read_session

create_read_session(parent=None, read_session=None, max_stream_count=None, retry=<object object>, timeout=<object object>, metadata=None)

Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.

A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.

Data is assigned to each stream such that roughly the same number of rows can be read from each stream. Because the server-side unit for assigning data is collections of rows, the API does not guarantee that each stream will return the same number of rows. Additionally, the limits are enforced based on the number of pre-filtered rows, so some filters can lead to lopsided assignments.

Read sessions automatically expire 24 hours after they are created and do not require manual clean-up by the caller.

.. rubric:: Example

from google.cloud import bigquery_storage_v1

client = bigquery_storage_v1.BigQueryReadClient()

# `parent` and `read_session` are required; the resource names below are placeholders.
requested_session = bigquery_storage_v1.types.ReadSession()
requested_session.table = "projects/your-project-id/datasets/your_dataset_id/tables/your_table_id"
requested_session.data_format = bigquery_storage_v1.enums.DataFormat.AVRO
response = client.create_read_session(
    parent="projects/your-billing-project-id", read_session=requested_session
)

Parameters
Name | Description
parent str

Required. The request project that owns the session, in the form of projects/{project_id}.

read_session Union[dict, ReadSession]

Required. Session to be created. If a dict is provided, it must be of the same form as the protobuf message ReadSession

max_stream_count int

Max initial number of streams. If unset or zero, the server will provide a value of streams so as to produce reasonable throughput. Must be non-negative. The number of streams may be lower than the requested number, depending on the amount of parallelism that is reasonable for the table. An error will be returned if the max count is greater than the current system max limit of 1,000. Streams must be read starting from offset 0.

retry Optional[google.api_core.retry.Retry]

A retry object used to retry requests. If None is specified, requests will be retried using a default configuration.

timeout Optional[float]

The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.

metadata Optional[Sequence[Tuple[str, str]]]

Additional metadata that is provided to the method.

Exceptions
Type | Description
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason.
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed.
ValueError | If the parameters are invalid.
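
The column list and push-down row filter mentioned above are configured on the session's read_options before the session is created. A minimal sketch, assuming this library version's types and enums module layout and using placeholder resource names and column names:

from google.cloud import bigquery_storage_v1

client = bigquery_storage_v1.BigQueryReadClient()

requested_session = bigquery_storage_v1.types.ReadSession()
requested_session.table = "projects/your-project-id/datasets/your_dataset_id/tables/your_table_id"
requested_session.data_format = bigquery_storage_v1.enums.DataFormat.AVRO
# Read only two columns and filter rows server-side.
requested_session.read_options.selected_fields.append("name")
requested_session.read_options.selected_fields.append("state")
requested_session.read_options.row_restriction = 'state = "WA"'

session = client.create_read_session(
    parent="projects/your-billing-project-id",
    read_session=requested_session,
    max_stream_count=1,
)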

from_service_account_file

from_service_account_file(filename, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

Parameter
Name | Description
filename str

The path to the service account private key json file.

Returns
Type | Description
BigQueryReadClient | The constructed client.
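
A short sketch with a placeholder key-file path; from_service_account_json below behaves identically:

from google.cloud import bigquery_storage_v1

# Placeholder path to a service account key file.
client = bigquery_storage_v1.BigQueryReadClient.from_service_account_file(
    "/path/to/service-account-key.json"
)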

from_service_account_json

from_service_account_json(filename, *args, **kwargs)

Creates an instance of this client using the provided credentials file.

Parameter
Name | Description
filename str

The path to the service account private key json file.

Returns
Type | Description
BigQueryReadClient | The constructed client.

read_rows

read_rows(name, offset=0, retry=<object object>, timeout=<object object>, metadata=None)

Reads rows from the table in the format prescribed by the read session. Each response contains one or more table rows, up to a maximum of 10 MiB per response; read requests which attempt to read individual rows larger than this will fail.

Each request also returns a set of stream statistics reflecting the estimated total number of rows in the read stream. This number is computed based on the total table size and the number of active streams in the read session, and may change as other streams continue to read data.

.. rubric:: Example

from google.cloud import bigquery_storage_v1

client = bigquery_storage_v1.BigQueryReadClient()

# TODO: Initialize `table` (placeholder identifiers shown):
table = "projects/{}/datasets/{}/tables/{}".format(
    "your-data-project-id", "your_dataset_id", "your_table_id"
)

# TODO: Initialize `parent`:
parent = "projects/your-billing-project-id"

requested_session = bigquery_storage_v1.types.ReadSession()
requested_session.table = table
requested_session.data_format = bigquery_storage_v1.enums.DataFormat.AVRO
session = client.create_read_session(
    parent=parent, read_session=requested_session
)
stream = session.streams[0]  # TODO: Read the other streams.

for element in client.read_rows(stream.name):
    # process element
    pass

Parameters
Name | Description
name str

Required. Name of the stream to start reading from, of the form projects/{project_id}/locations/{location}/sessions/{session_id}/streams/{stream_id}

offset Optional[int]

The starting offset from which to begin reading rows in the stream. The offset requested must be less than the last row read from ReadRows. Requesting a larger offset is undefined.

retry Optional[google.api_core.retry.Retry]

A retry object used to retry requests. If None is specified, requests will not be retried.

timeout Optional[float]

The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.

metadata Optional[Sequence[Tuple[str, str]]]

Additional metadata that is provided to the method.

Exceptions
Type | Description
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason.
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed.
ValueError | If the parameters are invalid.
Returns
Type | Description
ReadRowsStream | An iterable of ReadRowsResponse.
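
Continuing the example above (client and stream as defined there), a hedged sketch of using the documented offset parameter to skip rows that were already received after an interrupted read; row_count is the per-response row count reported in each ReadRowsResponse:

from google.api_core import exceptions

rows_received = 0
try:
    for response in client.read_rows(stream.name):
        rows_received += response.row_count  # rows delivered in this response
        # process response
except exceptions.GoogleAPICallError:
    # Reconnect and skip the rows that were already received.
    for response in client.read_rows(stream.name, offset=rows_received):
        pass  # process the remaining responses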

split_read_stream

split_read_stream(name=None, fraction=None, retry=<object object>, timeout=<object object>, metadata=None)

Splits a given ReadStream into two ReadStream objects. These ReadStream objects are referred to as the primary and the residual streams of the split. The original ReadStream can still be read from in the same manner as before. Both of the returned ReadStream objects can also be read from, and the rows returned by both child streams will be the same as the rows read from the original stream.

Moreover, the two child streams will be allocated back-to-back in the original ReadStream. Concretely, it is guaranteed that for streams original, primary, and residual, original[0-j] = primary[0-j] and original[j-n] = residual[0-m] once the streams have been read to completion.

.. rubric:: Example

from google.cloud import bigquery_storage_v1

client = bigquery_storage_v1.BigQueryReadClient()

# `name` is required; the stream resource name below is a placeholder.
response = client.split_read_stream(
    name="projects/your-project-id/locations/your-location/sessions/your-session-id/streams/your-stream-id",
    fraction=0.5,
)

Parameters
Name | Description
name str

Required. Name of the stream to split.

fraction float

A value in the range (0.0, 1.0) that specifies the fractional point at which the original stream should be split. The actual split point is evaluated on pre-filtered rows, so if a filter is provided, then there is no guarantee that the division of the rows between the new child streams will be proportional to this fractional value. Additionally, because the server-side unit for assigning data is collections of rows, this fraction will always map to a data storage boundary on the server side.

retry Optional[google.api_core.retry.Retry]

A retry object used to retry requests. If None is specified, requests will be retried using a default configuration.

timeout Optional[float]

The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.

metadata Optional[Sequence[Tuple[str, str]]]

Additional metadata that is provided to the method.

Exceptions
Type | Description
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason.
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed.
ValueError | If the parameters are invalid.
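
Continuing from the create_read_session example above (client and session as created there), a hedged sketch that splits the first stream and reads both halves; primary_stream and remainder_stream are the fields of the v1 SplitReadStreamResponse message:

# `session` is a previously created read session (placeholder from the example above).
split = client.split_read_stream(name=session.streams[0].name, fraction=0.5)

for response in client.read_rows(split.primary_stream.name):
    pass  # process the first half

for response in client.read_rows(split.remainder_stream.name):
    pass  # process the second half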