Class BatchSnapshot (3.23.0)

BatchSnapshot(database, read_timestamp=None, exact_staleness=None)

Wrapper for generating and processing read / query batches.

Parameters

Name | Description
database Database

database to use

read_timestamp datetime.datetime

Execute all reads at the given timestamp.

exact_staleness datetime.timedelta

Execute all reads at a timestamp that is exact_staleness old.

Inheritance

builtins.object > BatchSnapshot
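As a hedged illustration of how the class is typically used, the sketch below creates a `BatchSnapshot` via `Database.batch_snapshot()`, partitions a query, and consumes every batch in-process. The project, instance, database, and table names are hypothetical.

```python
# Sketch of a typical batch-query workflow; all identifiers are hypothetical.

def run_partitioned_query(project_id, instance_id, database_id):
    """Create a BatchSnapshot, partition a query, and consume every batch."""
    from google.cloud import spanner  # local import keeps the sketch loadable

    client = spanner.Client(project=project_id)
    database = client.instance(instance_id).database(database_id)

    # Database.batch_snapshot() wraps a fresh session in a BatchSnapshot.
    snapshot = database.batch_snapshot()
    try:
        rows = []
        for batch in snapshot.generate_query_batches(
            "SELECT SingerId, FirstName FROM Singers"
        ):
            # Each batch mapping could instead be shipped to another worker.
            for row in snapshot.process_query_batch(batch):
                rows.append(row)
        return rows
    finally:
        snapshot.close()  # clean up the underlying session
```

In production the batch mappings are usually distributed to separate workers rather than processed in one loop, which is what the partitioning exists for.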

Methods

close

close()

Clean up underlying session.

execute_sql

execute_sql(*args, **kw)

Convenience method: perform query operation via snapshot.

See Snapshot.execute_sql for argument details.

from_dict

from_dict(database, mapping)

Reconstruct an instance from a mapping.

Parameters
Name | Description
database Database

database to use

mapping mapping

serialized state of the instance

generate_query_batches

generate_query_batches(sql, params=None, param_types=None, partition_size_bytes=None, max_partitions=None, query_options=None, *, retry=gapic_v1.method.DEFAULT, timeout=gapic_v1.method.DEFAULT)

Start a partitioned query operation.

Uses the PartitionQuery API request to start a partitioned query operation. Returns a list of batch information needed to perform the actual queries.

Parameters
Name | Description
sql str

SQL query statement

params dict, {str -> column value}

values for parameter replacement. Keys must match the names used in sql.

param_types dict[str -> Union[dict, .types.Type]]

(Optional) maps explicit types for one or more param values; required if parameters are passed.

partition_size_bytes int

(Optional) desired size for each partition generated. The service uses this as a hint; the actual partition size may differ.

max_partitions int

(Optional) desired maximum number of partitions generated. The service uses this as a hint; the actual number of partitions may differ.

query_options QueryOptions or dict

(Optional) Query optimizer configuration to use for the given query. If a dict is provided, it must be of the same form as the protobuf message QueryOptions.

retry google.api_core.retry.Retry

(Optional) The retry settings for this request.

timeout float

(Optional) The timeout for this request.

Returns
Type | Description
iterable of dict | mappings of information used to perform actual partitioned queries via process_query_batch.
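Putting these parameters together, here is a hedged sketch of generating batches for a parameterized query; the table, column, and parameter value are hypothetical:

```python
def generate_singer_batches(database):
    """Return a BatchSnapshot plus batch mappings for a parameterized query."""
    from google.cloud import spanner  # local import keeps the sketch loadable

    snapshot = database.batch_snapshot()
    batches = list(
        snapshot.generate_query_batches(
            "SELECT SingerId FROM Singers WHERE LastName = @last",
            params={"last": "Garcia"},
            # param_types is required whenever params is passed.
            param_types={"last": spanner.param_types.STRING},
            max_partitions=8,  # a hint; the service may create fewer partitions
        )
    )
    return snapshot, batches
```

The snapshot is returned alongside the batches because the same (or a rehydrated) snapshot must later process them.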

generate_read_batches

generate_read_batches(table, columns, keyset, index='', partition_size_bytes=None, max_partitions=None, *, retry=gapic_v1.method.DEFAULT, timeout=gapic_v1.method.DEFAULT)

Start a partitioned batch read operation.

Uses the PartitionRead API request to initiate the partitioned read. Returns a list of batch information needed to perform the actual reads.

Parameters
Name | Description
table str

name of the table from which to fetch data

columns list of str

names of columns to be retrieved

keyset KeySet

keys / ranges identifying rows to be retrieved

index str

(Optional) name of index to use, rather than the table's primary key

partition_size_bytes int

(Optional) desired size for each partition generated. The service uses this as a hint; the actual partition size may differ.

max_partitions int

(Optional) desired maximum number of partitions generated. The service uses this as a hint; the actual number of partitions may differ.

retry google.api_core.retry.Retry

(Optional) The retry settings for this request.

timeout float

(Optional) The timeout for this request.

Returns
Type | Description
iterable of dict | mappings of information used to perform actual partitioned reads via process_read_batch.
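A hedged sketch of partitioning a full-table read; the table and column names are hypothetical:

```python
def generate_full_table_read_batches(database):
    """Partition a full-table read; each mapping feeds process_read_batch."""
    from google.cloud import spanner  # local import keeps the sketch loadable

    snapshot = database.batch_snapshot()
    batches = list(
        snapshot.generate_read_batches(
            table="Singers",
            columns=["SingerId", "FirstName"],
            keyset=spanner.KeySet(all_=True),  # every row in the table
            partition_size_bytes=100 << 20,    # ~100 MiB hint; actual sizes vary
        )
    )
    return snapshot, batches
```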

process

process(batch)

Process a single, partitioned query or read.

Parameter
Name | Description
batch mapping

one of the mappings returned from an earlier call to generate_query_batches or generate_read_batches.

Exceptions
Type | Description
ValueError | if batch does not contain either 'read' or 'query'
Returns
Type | Description
StreamedResultSet | a result set instance which can be used to consume rows.
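The dispatch rule can be pictured with a minimal, standalone sketch. This is not the library's implementation, only the contract described above: the presence of a 'query' or 'read' key in the batch mapping decides which processor handles it, and anything else raises ValueError.

```python
def dispatch_batch(batch, process_query_batch, process_read_batch):
    """Minimal sketch of process()'s dispatch rule.

    `batch` is a mapping; a 'query' key routes it to the query processor,
    a 'read' key to the read processor, anything else is rejected.
    """
    if "query" in batch:
        return process_query_batch(batch)
    if "read" in batch:
        return process_read_batch(batch)
    raise ValueError("Invalid batch: expected a 'read' or 'query' key")

# Usage with stand-in processors:
result = dispatch_batch(
    {"query": {"sql": "SELECT 1"}, "partition": b"token"},
    process_query_batch=lambda b: "query-result",
    process_read_batch=lambda b: "read-result",
)
```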

process_query_batch

process_query_batch(batch, *, retry=gapic_v1.method.DEFAULT, timeout=gapic_v1.method.DEFAULT)

Process a single, partitioned query.

Parameters
Name | Description
batch mapping

one of the mappings returned from an earlier call to generate_query_batches.

retry google.api_core.retry.Retry

(Optional) The retry settings for this request.

timeout float

(Optional) The timeout for this request.

Returns
Type | Description
StreamedResultSet | a result set instance which can be used to consume rows.

process_read_batch

process_read_batch(batch, *, retry=gapic_v1.method.DEFAULT, timeout=gapic_v1.method.DEFAULT)

Process a single, partitioned read.

Parameters
Name | Description
batch mapping

one of the mappings returned from an earlier call to generate_read_batches.

retry google.api_core.retry.Retry

(Optional) The retry settings for this request.

timeout float

(Optional) The timeout for this request.

Returns
Type | Description
StreamedResultSet | a result set instance which can be used to consume rows.

read

read(*args, **kw)

Convenience method: perform read operation via snapshot.

See Snapshot.read for argument details.

to_dict

to_dict()

Return state as a dictionary.

Result can be used to serialize the instance and reconstitute it later using from_dict.
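to_dict and from_dict exist so that a snapshot opened by a coordinator can be rehydrated in worker processes. A minimal sketch of that hand-off; the import path for BatchSnapshot is an assumption based on the library's module layout:

```python
def snapshot_state(snapshot):
    """Coordinator side: capture the snapshot's serialized state."""
    return snapshot.to_dict()

def rehydrate_and_process(database, state, batch):
    """Worker side: rebuild the snapshot from `state`, then run one batch."""
    from google.cloud.spanner_v1.database import BatchSnapshot  # local import
    snapshot = BatchSnapshot.from_dict(database, state)
    return list(snapshot.process(batch))
```

The state mapping is small (session and transaction identifiers), so it is cheap to ship to each worker alongside its batch mapping.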