ReadRowsQuery(
row_keys: typing.Optional[typing.Union[list[str | bytes], str, bytes]] = None,
row_ranges: typing.Optional[
typing.Union[
list[google.cloud.bigtable.data.read_rows_query.RowRange],
google.cloud.bigtable.data.read_rows_query.RowRange,
]
] = None,
limit: typing.Optional[int] = None,
row_filter: typing.Optional[
google.cloud.bigtable.data.row_filters.RowFilter
] = None,
)
Class to encapsulate the details of a read rows request
Methods
ReadRowsQuery
ReadRowsQuery(
row_keys: typing.Optional[typing.Union[list[str | bytes], str, bytes]] = None,
row_ranges: typing.Optional[
typing.Union[
list[google.cloud.bigtable.data.read_rows_query.RowRange],
google.cloud.bigtable.data.read_rows_query.RowRange,
]
] = None,
limit: typing.Optional[int] = None,
row_filter: typing.Optional[
google.cloud.bigtable.data.row_filters.RowFilter
] = None,
)
Create a new ReadRowsQuery
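For illustration, a minimal sketch of constructing a query, assuming `ReadRowsQuery` and `RowRange` are importable from `google.cloud.bigtable.data` and that the `row_filters` module provides `CellsColumnLimitFilter`; all key, range, and limit values are placeholders.

```python
from google.cloud.bigtable.data import ReadRowsQuery, RowRange
from google.cloud.bigtable.data import row_filters

# Read two explicit keys plus everything in a key range,
# keeping only the most recent cell per column and at most 50 rows total.
query = ReadRowsQuery(
    row_keys=[b"device#001", "device#002"],
    row_ranges=RowRange(start_key="user#100", end_key="user#200"),
    limit=50,
    row_filter=row_filters.CellsColumnLimitFilter(1),
)
```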
__eq__
__eq__(other)
Two queries are equal if they have the same row keys, row ranges, filter, and limit, or if they both represent a full scan with the same filter and limit
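A small sketch of the equality semantics described above, using placeholder keys:

```python
from google.cloud.bigtable.data import ReadRowsQuery

a = ReadRowsQuery(row_keys=["k1", "k2"], limit=5)
b = ReadRowsQuery(row_keys=["k1", "k2"], limit=5)
assert a == b                              # same keys, ranges, filter, and limit
assert ReadRowsQuery() == ReadRowsQuery()  # both full scans with no filter or limit
assert a != ReadRowsQuery(row_keys=["k1"])  # different row keys
```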
add_key
add_key(row_key: str | bytes)
Add a row key to this query
A query can contain multiple keys, but ranges should be preferred
Exceptions

| Type | Description |
| --- | --- |
| ValueError | If an input is not a string or bytes |
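A brief sketch of building up a query key by key; the key values are placeholders:

```python
from google.cloud.bigtable.data import ReadRowsQuery

query = ReadRowsQuery()
query.add_key("user#1001")   # str keys are accepted...
query.add_key(b"user#1002")  # ...as are raw bytes
# query.add_key(1003) would raise ValueError: not a string or bytes
```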
add_range
add_range(row_range: google.cloud.bigtable.data.read_rows_query.RowRange)
Add a range of row keys to this query.
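A short sketch of adding a range, assuming `RowRange` accepts the `start_key`/`end_key` keyword arguments shown in the constructor signature above and that the end bound is exclusive by default; the keys are placeholders:

```python
from google.cloud.bigtable.data import ReadRowsQuery, RowRange

query = ReadRowsQuery()
# Scan every row whose key falls between "user#" and "user$".
query.add_range(RowRange(start_key="user#", end_key="user$"))
```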
shard
shard(shard_keys: RowKeySamples) -> ShardedQuery
Split this query into multiple queries that can be evenly distributed across nodes and run in parallel
Exceptions

| Type | Description |
| --- | --- |
| AttributeError | If the query contains a limit |
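A hedged end-to-end sketch of sharding, assuming the async data client's `sample_row_keys()` and `read_rows_sharded()` methods as the usual way to obtain `RowKeySamples` and execute the resulting `ShardedQuery`; the project, instance, and table names are placeholders:

```python
import asyncio
from google.cloud.bigtable.data import BigtableDataClientAsync, ReadRowsQuery

async def read_in_parallel():
    async with BigtableDataClientAsync(project="my-project") as client:
        table = client.get_table("my-instance", "my-table")
        query = ReadRowsQuery()        # full scan; a query with a limit cannot be sharded
        samples = await table.sample_row_keys()
        shards = query.shard(samples)  # list of sub-queries split along the sampled keys
        return await table.read_rows_sharded(shards)

rows = asyncio.run(read_in_parallel())
```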