BigQueryReadClient(
transport=None,
channel=None,
credentials=None,
client_config=None,
client_info=None,
client_options=None,
)
Client for interacting with BigQuery Storage API.
The BigQuery Storage API can be used to read data stored in BigQuery.
Methods
BigQueryReadClient
BigQueryReadClient(
transport=None,
channel=None,
credentials=None,
client_config=None,
client_info=None,
client_options=None,
)
Constructor.
Name | Type | Description
channel | grpc.Channel | DEPRECATED. A Channel instance through which to make calls. This argument is mutually exclusive with credentials; providing both will raise an exception.
credentials | google.auth.credentials.Credentials | The authorization credentials to attach to requests. These credentials identify this application to the service. If none are specified, the client will attempt to ascertain the credentials from the environment. This argument is mutually exclusive with providing a transport instance; doing so will raise an exception.
client_config | dict | DEPRECATED. A dictionary of call options for each method. If not specified, the default configuration is used.
client_info | google.api_core.gapic_v1.client_info.ClientInfo | The client info used to send a user-agent string along with API requests. If None, then default info will be used. Generally, you only need to set this if you're developing your own client library.
client_options | Union[dict, google.api_core.client_options.ClientOptions] | Client options used to set user options on the client. The API endpoint should be set through client_options.
create_read_session
create_read_session(parent, read_session, max_stream_count=None, retry=gapic_v1.method.DEFAULT, timeout=gapic_v1.method.DEFAULT, metadata=None)
Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.
A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.
Data is assigned to each stream such that roughly the same number of rows can be read from each stream. Because the server-side unit for assigning data is collections of rows, the API does not guarantee that each stream will return the same number of rows. Additionally, the limits are enforced based on the number of pre-filtered rows, so some filters can lead to lopsided assignments.
Read sessions automatically expire 24 hours after they are created and do not require manual clean-up by the caller.
.. rubric:: Example
    from google.cloud import bigquery_storage_v1

    client = bigquery_storage_v1.BigQueryReadClient()

    parent = client.project_path('[PROJECT]')

    # TODO: Initialize `read_session`:
    read_session = {}

    response = client.create_read_session(parent, read_session)
Name | Type | Description
parent | str | Required. The request project that owns the session, in the form of projects/{project_id}.
read_session | Union[dict, ReadSession] | Required. Session to be created. If a dict is provided, it must be of the same form as the protobuf message ReadSession.
max_stream_count | int | Max initial number of streams. If unset or zero, the server will provide a value of streams so as to produce reasonable throughput. Must be non-negative. The number of streams may be lower than the requested number, depending on the amount of parallelism that is reasonable for the table. An error will be returned if the max count is greater than the current system max limit of 1,000. Streams must be read starting from offset 0.
retry | Optional[google.api_core.retry.Retry] | A retry object used to retry requests. If None is specified, requests will not be retried.
timeout | Optional[float] | The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata | Optional[Sequence[Tuple[str, str]]] | Additional metadata that is provided to the method.
Type | Description |
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason. |
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed. |
ValueError | If the parameters are invalid. |
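Beyond the bare-bones example above, a session can also project columns and push down a row filter via the ReadSession read_options field. A minimal sketch, assuming a table with hypothetical columns name and number and placeholder project identifiers:

    from google.cloud import bigquery_storage_v1

    client = bigquery_storage_v1.BigQueryReadClient()

    # Placeholder project, dataset, and table identifiers.
    table = client.table_path('your-data-project-id', 'your_dataset_id', 'your_table_id')
    parent = 'projects/your-billing-project-id'

    read_session = bigquery_storage_v1.types.ReadSession(
        table=table,
        data_format=bigquery_storage_v1.enums.DataFormat.AVRO,
        read_options=bigquery_storage_v1.types.ReadSession.TableReadOptions(
            selected_fields=['name', 'number'],  # column projection (hypothetical columns)
            row_restriction='number > 0',        # push-down row filter
        ),
    )
    session = client.create_read_session(parent, read_session, max_stream_count=2)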
from_service_account_file
from_service_account_file(filename, *args, **kwargs)
Creates an instance of this client using the provided credentials file.
Name | Type | Description
filename | str | The path to the service account private key JSON file.
Type | Description |
BigQueryReadClient | The constructed client. |
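A minimal usage sketch (the key-file path is a placeholder):

    from google.cloud import bigquery_storage_v1

    # Placeholder path to a service account key file.
    client = bigquery_storage_v1.BigQueryReadClient.from_service_account_file(
        'service-account.json'
    )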
from_service_account_json
from_service_account_json(filename, *args, **kwargs)
Creates an instance of this client using the provided credentials file.
Name | Type | Description
filename | str | The path to the service account private key JSON file.
Type | Description |
BigQueryReadClient | The constructed client. |
project_path
project_path(project)
Return a fully-qualified project string.
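For example (the project ID is a placeholder):

    from google.cloud import bigquery_storage_v1

    client = bigquery_storage_v1.BigQueryReadClient()
    parent = client.project_path('your-project-id')
    # parent == 'projects/your-project-id'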
read_rows
read_rows(name, offset=0, retry=gapic_v1.method.DEFAULT, timeout=gapic_v1.method.DEFAULT, metadata=None)
Reads rows from the table in the format prescribed by the read session. Each response contains one or more table rows, up to a maximum of 10 MiB per response; read requests which attempt to read individual rows larger than this will fail.
Each request also returns a set of stream statistics reflecting the estimated total number of rows in the read stream. This number is computed based on the total table size and the number of active streams in the read session, and may change as other streams continue to read data.
.. rubric:: Example
    from google.cloud import bigquery_storage_v1

    client = bigquery_storage_v1.BigQueryReadClient()

    # TODO: Initialize `table`:
    table = "projects/{}/datasets/{}/tables/{}".format(
        'your-data-project-id', 'your_dataset_id', 'your_table_id',
    )

    # TODO: Initialize `parent`:
    parent = 'projects/your-billing-project-id'

    requested_session = bigquery_storage_v1.types.ReadSession(
        table=table,
        data_format=bigquery_storage_v1.enums.DataFormat.AVRO,
    )
    session = client.create_read_session(parent, requested_session)

    stream = session.streams[0]  # TODO: Also read any other streams.
    read_rows_stream = client.read_rows(stream.name)

    for element in read_rows_stream.rows(session):
        # process element
        pass
Name | Type | Description
name | str | Required. Name of the stream to start reading from, of the form projects/{project_id}/locations/{location}/sessions/{session_id}/streams/{stream_id}.
offset | Optional[int] | The starting offset from which to begin reading rows in the stream. The offset requested must be less than the last row read from ReadRows. Requesting a larger offset is undefined.
retry | Optional[google.api_core.retry.Retry] | A retry object used to retry requests. If None is specified, requests will not be retried.
timeout | Optional[float] | The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata | Optional[Sequence[Tuple[str, str]]] | Additional metadata that is provided to the method.
Type | Description |
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason. |
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed. |
ValueError | If the parameters are invalid. |
Type | Description |
ReadRowsStream | An iterable of ReadRowsResponse. |
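In addition to iterating row by row, the returned ReadRowsStream can be materialized as a pandas DataFrame. A minimal sketch, assuming pandas (and fastavro, for AVRO sessions) is installed and that session and read_rows_stream come from the example above:

    # Materialize the whole stream into a pandas DataFrame.
    dataframe = read_rows_stream.to_dataframe(session)
    print(dataframe.head())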
read_session_path
read_session_path(project, location, session)
Return a fully-qualified read_session string.
read_stream_path
read_stream_path(project, location, session, stream)
Return a fully-qualified read_stream string.
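For example, both path helpers build resource names from their components (bracketed values are placeholders):

    from google.cloud import bigquery_storage_v1

    client = bigquery_storage_v1.BigQueryReadClient()
    session_name = client.read_session_path('[PROJECT]', '[LOCATION]', '[SESSION]')
    # -> 'projects/[PROJECT]/locations/[LOCATION]/sessions/[SESSION]'
    stream_name = client.read_stream_path('[PROJECT]', '[LOCATION]', '[SESSION]', '[STREAM]')
    # -> 'projects/[PROJECT]/locations/[LOCATION]/sessions/[SESSION]/streams/[STREAM]'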
split_read_stream
split_read_stream(name, fraction=None, retry=gapic_v1.method.DEFAULT, timeout=gapic_v1.method.DEFAULT, metadata=None)
Splits a given ReadStream into two ReadStream objects. These ReadStream objects are referred to as the primary and the residual streams of the split. The original ReadStream can still be read from in the same manner as before. Both of the returned ReadStream objects can also be read from, and the rows returned by both child streams will be the same as the rows read from the original stream.
Moreover, the two child streams will be allocated back-to-back in the original ReadStream. Concretely, it is guaranteed that for streams original, primary, and residual, that original[0-j] = primary[0-j] and original[j-n] = residual[0-m] once the streams have been read to completion.
.. rubric:: Example
    from google.cloud import bigquery_storage_v1

    client = bigquery_storage_v1.BigQueryReadClient()

    name = client.read_stream_path('[PROJECT]', '[LOCATION]', '[SESSION]', '[STREAM]')

    response = client.split_read_stream(name)
Name | Type | Description
name | str | Required. Name of the stream to split.
fraction | float | A value in the range (0.0, 1.0) that specifies the fractional point at which the original stream should be split. The actual split point is evaluated on pre-filtered rows, so if a filter is provided, then there is no guarantee that the division of the rows between the new child streams will be proportional to this fractional value. Additionally, because the server-side unit for assigning data is collections of rows, this fraction will always map to a data storage boundary on the server side.
retry | Optional[google.api_core.retry.Retry] | A retry object used to retry requests. If None is specified, requests will not be retried.
timeout | Optional[float] | The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata | Optional[Sequence[Tuple[str, str]]] | Additional metadata that is provided to the method.
Type | Description |
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason. |
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed. |
ValueError | If the parameters are invalid. |
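A sketch of splitting near the midpoint; it assumes the SplitReadStreamResponse message, whose primary_stream and remainder_stream fields hold the two child streams:

    from google.cloud import bigquery_storage_v1

    client = bigquery_storage_v1.BigQueryReadClient()
    name = client.read_stream_path('[PROJECT]', '[LOCATION]', '[SESSION]', '[STREAM]')

    # Ask the server to split near the midpoint; the actual boundary may differ,
    # since the fraction always maps to a server-side data storage boundary.
    response = client.split_read_stream(name, fraction=0.5)
    primary, residual = response.primary_stream, response.remainder_stream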
table_path
table_path(project, dataset, table)
Return a fully-qualified table string.
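For example (bracketed values are placeholders):

    from google.cloud import bigquery_storage_v1

    client = bigquery_storage_v1.BigQueryReadClient()
    table = client.table_path('[PROJECT]', '[DATASET]', '[TABLE]')
    # -> 'projects/[PROJECT]/datasets/[DATASET]/tables/[TABLE]'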