Table
User-friendly container for Google Cloud Bigtable Table.
class google.cloud.bigtable.table.ClusterState(replication_state)
Bases: object
Representation of a Cluster State.
Parameters
replication_state (int) – enum value for cluster state. Possible replication_state values are:
0 for STATE_NOT_KNOWN: The replication state of the table is unknown in this cluster.
1 for INITIALIZING: The cluster was recently created, and the table must finish copying over pre-existing data from other clusters before it can begin receiving live replication updates and serving Data API requests.
2 for PLANNED_MAINTENANCE: The table is temporarily unable to serve Data API requests from this cluster due to planned internal maintenance.
3 for UNPLANNED_MAINTENANCE: The table is temporarily unable to serve Data API requests from this cluster due to unplanned or emergency maintenance.
4 for READY: The table can serve Data API requests from this cluster. Depending on replication delay, reads may not immediately reflect the state of the table in other clusters.
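For example, a minimal sketch of checking replication readiness across clusters, using the integer values listed above and the get_cluster_states() method documented below (INSTANCE_ID and TABLE_ID are placeholders):
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
# get_cluster_states() maps cluster IDs to ClusterState instances.
states = table.get_cluster_states()
READY = 4  # replication_state value for READY, per the list above
all_ready = all(s.replication_state == READY for s in states.values())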
__eq__(other)
Checks if two ClusterState instances (self and other) are equal on the basis of the instance variable replication_state.
Parameters
other (ClusterState) – ClusterState instance to compare with.
Return type
bool
Returns
True if the two ClusterState instances have the same replication_state.
__ne__(other)
Checks if two ClusterState instances (self and other) are not equal.
Parameters
other (ClusterState) – ClusterState instance to compare with.
Return type
bool
Returns
True if the two ClusterState instances are not equal.
__repr__()
Representation of the cluster state instance as the string value for the cluster state.
Return type
str
Returns
The string representation of the cluster state.
google.cloud.bigtable.table.DEFAULT_RETRY = <google.api_core.retry.Retry object>
The default retry strategy to be used on retry-able errors. Used by mutate_rows().
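For example, a sketch deriving a stricter strategy from the default; with_deadline() returns a new Retry and leaves DEFAULT_RETRY unchanged:
from google.cloud.bigtable.table import DEFAULT_RETRY
# Give up retrying after 30 seconds overall.
custom_retry = DEFAULT_RETRY.with_deadline(30.0)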
class google.cloud.bigtable.table.Table(table_id, instance, mutation_timeout=None, app_profile_id=None)
Bases: object
Representation of a Google Cloud Bigtable Table.
NOTE: We don’t define any properties on a table other than the name. The only other fields are column_families and granularity. The column_families are not stored locally and granularity is an enum with only one value.
We can use a Table to:
create() the table
delete() the table
list_column_families() in the table
Parameters
table_id (str) – The ID of the table.
instance (Instance) – The instance that owns the table.
mutation_timeout (int) – (Optional) The default timeout, in seconds, to use for mutation operations on this table.
app_profile_id (str) – (Optional) The unique name of the AppProfile to associate with requests on this table.
append_row(row_key)
Create an AppendRow associated with this table.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
row_keys = [b"row_key_1", b"row_key_2"]
row1_obj = table.append_row(row_keys[0])
row2_obj = table.append_row(row_keys[1])
Parameters
row_key (bytes) – The key for the row being created.
Returns
A row owned by this table.
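The returned AppendRow only stages mutations locally until commit() is called. A sketch of typical use (the column family "cf1" and the column names are placeholders):
# Stage an append to one cell and an atomic increment of another,
# then send both mutations in a single commit.
row1_obj.append_cell_value("cf1", b"greeting", b", world")
row1_obj.increment_cell_value("cf1", b"counter", 1)
row1_obj.commit()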
column_family(column_family_id, gc_rule=None)
Factory to create a column family associated with this table.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
column_family_obj = table.column_family(COLUMN_FAMILY_ID)
Parameters
column_family_id (str) – The ID of the column family. Must be of the form [_a-zA-Z0-9][-_.a-zA-Z0-9]*.
gc_rule (GarbageCollectionRule) – (Optional) The garbage collection settings for this column family.
Return type
ColumnFamily
Returns
A column family owned by this table.
conditional_row(row_key, filter_)
Create a ConditionalRow associated with this table.
For example:
from google.cloud.bigtable import Client
from google.cloud.bigtable.row_filters import PassAllFilter
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
row_keys = [b"row_key_1", b"row_key_2"]
filter_ = PassAllFilter(True)
row1_obj = table.conditional_row(row_keys[0], filter_=filter_)
row2_obj = table.conditional_row(row_keys[1], filter_=filter_)
Parameters
row_key (bytes) – The key for the row being created.
filter_ (RowFilter) – (Optional) Filter to be used for conditional mutations. See ConditionalRow for more details.
Returns
A row owned by this table.
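Mutations staged on a ConditionalRow are applied according to whether filter_ matches the row; a sketch of a check-and-mutate write (the column family "cf1" and the values are placeholders):
# state=True mutations apply only if the filter matched the row;
# state=False mutations apply only if it did not.
row1_obj.set_cell("cf1", b"col1", b"value-if-match", state=True)
row1_obj.set_cell("cf1", b"col1", b"value-if-no-match", state=False)
row1_obj.commit()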
create(initial_split_keys=[], column_families={})
Creates this table.
For example:
from google.cloud.bigtable import Client
from google.cloud.bigtable import column_family
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
# Create table without Column families.
table1 = instance.table("table_id1")
table1.create()
# Create table with Column families.
table2 = instance.table("table_id2")
# Define the GC policy to retain only the most recent 2 versions.
max_versions_rule = column_family.MaxVersionsGCRule(2)
table2.create(column_families={"cf1": max_versions_rule})
NOTE: A create request returns a _generated.table_pb2.Table but we don’t use this response.
Parameters
initial_split_keys (list) – (Optional) List of row keys in bytes that will be used to initially split the table into several tablets.
column_families – (Optional) A map of column families to create. The key is the column family ID (str) and the value is a GarbageCollectionRule.
delete()
Delete this table.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table("table_id_del")
table.delete()
direct_row(row_key)
Create a DirectRow associated with this table.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
row_keys = [b"row_key_1", b"row_key_2"]
row1_obj = table.direct_row(row_keys[0])
row2_obj = table.direct_row(row_keys[1])
Parameters
row_key (bytes) – The key for the row being created.
Returns
A row owned by this table.
drop_by_prefix(row_key_prefix, timeout=None)
Drop all rows whose row keys start with the given prefix.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
row_key_prefix = b"row_key_2"
table.drop_by_prefix(row_key_prefix, timeout=200)
Parameters
row_key_prefix (bytes) – The prefix of the row keys; all rows whose keys begin with this prefix are dropped.
timeout (float) – (Optional) The amount of time, in seconds, to wait for the request to complete.
Raises
google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid.
exists()
Check whether the table exists.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
table_exists = table.exists()
Return type
bool
Returns
True if the table exists, else False.
get_cluster_states()
List the cluster states owned by this table.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
get_cluster_states = table.get_cluster_states()
Return type
dict
Returns
Dictionary of cluster states for this table. Keys are cluster IDs and values are ClusterState instances.
get_iam_policy()
Gets the IAM access control policy for this table.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table("table_id_iam_policy")
policy = table.get_iam_policy()
Return type
google.cloud.bigtable.policy.Policy
Returns
The current IAM policy of this table.
list_column_families()
List the column families owned by this table.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
column_family_list = table.list_column_families()
Return type
dict
Returns
Dictionary of column families attached to this table. Keys are strings (column family names) and values are ColumnFamily instances.
Raises
ValueError if the column family name from the response does not agree with the computed name from the column family ID.
mutate_rows(rows, retry=<google.api_core.retry.Retry object>)
Mutates multiple rows in bulk.
For example:
import datetime
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
row_keys = [
    b"row_key_1",
    b"row_key_2",
    b"row_key_3",
    b"row_key_4",
    b"row_key_20",
    b"row_key_22",
    b"row_key_200",
]
col_name = b"col-name1"
rows = []
for i, row_key in enumerate(row_keys):
    value = "value_{}".format(i).encode()
    row = table.row(row_key)
    row.set_cell(
        COLUMN_FAMILY_ID, col_name, value, timestamp=datetime.datetime.utcnow()
    )
    rows.append(row)
response = table.mutate_rows(rows)
# validate that all rows were written successfully
for i, status in enumerate(response):
    if status.code != 0:
        print("Row number {} failed to write".format(i))
The method tries to update all specified rows. If some rows are not updated, their mutations are not removed and can be applied to those rows separately; mutations that finish successfully are cleaned up.
Optionally, a retry strategy can be specified to re-attempt mutations on rows that return transient errors. This method will retry until all rows succeed or until the request deadline is reached. To specify a “do-nothing” retry strategy, a deadline of 0.0 can be specified.
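For example, a sketch that bounds total retry time by deriving a strategy from DEFAULT_RETRY (documented above):
from google.cloud.bigtable.table import DEFAULT_RETRY
# Retry transient per-row errors for at most 60 seconds;
# with_deadline(0.0) would disable retries entirely.
statuses = table.mutate_rows(rows, retry=DEFAULT_RETRY.with_deadline(60.0))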
Parameters
rows (list) – List or other iterable of DirectRow instances.
retry (Retry) – (Optional) Retry delay and deadline arguments. To override, the default value DEFAULT_RETRY can be used and modified with the with_delay() method or the with_deadline() method.
Return type
list
Returns
A list of response statuses (google.rpc.status_pb2.Status) corresponding to success or failure of each row mutation sent. These will be in the same order as the rows.
mutations_batcher(flush_count=1000, max_row_bytes=5242880)
Factory to create a mutation batcher associated with this instance.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
batcher = table.mutations_batcher()
Parameters
flush_count (int) – (Optional) Maximum number of rows per batch. When the batch reaches this many rows, finish_batch() is called to mutate the current row batch. Default is FLUSH_COUNT (1000 rows).
max_row_bytes (int) – (Optional) Maximum total size, in bytes, of buffered row mutations. When the batch reaches this size, finish_batch() is called to mutate the current row batch. Default is MAX_ROW_BYTES (5 MB).
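The batcher buffers rows and flushes automatically once flush_count rows or max_row_bytes bytes accumulate; a sketch of typical use, assuming the batcher's mutate() and flush() methods (COLUMN_FAMILY_ID is a placeholder):
row = table.direct_row(b"row_key_1")
row.set_cell(COLUMN_FAMILY_ID, b"col1", b"value1")
batcher.mutate(row)  # buffered, not yet sent
batcher.flush()      # send any remaining buffered mutations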
property name()
Table name used in requests.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
table_name = table.name
NOTE: This property will not change if table_id does not, but the return value is not cached.
The table name is of the form "projects/../instances/../tables/{table_id}"
Return type
str
Returns
The table name.
read_row(row_key, filter_=None)
Read a single row from this table.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
row_key = "row_key_1"
row = table.read_row(row_key)
Parameters
row_key (bytes) – The key of the row to read from.
filter_ (RowFilter) – (Optional) The filter to apply to the contents of the row. If unset, returns the entire row.
Return type
PartialRowData, NoneType
Returns
The contents of the row if any chunks were returned in the response, otherwise None.
Raises
ValueError if a commit row chunk is never encountered.
read_rows(start_key=None, end_key=None, limit=None, filter_=None, end_inclusive=False, row_set=None, retry=<google.api_core.retry.Retry object>)
Read rows from this table.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
# Read full table
partial_rows = table.read_rows()
read_rows = [row for row in partial_rows]
Parameters
start_key (bytes) – (Optional) The beginning of a range of row keys to read from. The range will include start_key. If left empty, will be interpreted as the empty string.
end_key (bytes) – (Optional) The end of a range of row keys to read from. The range will not include end_key. If left empty, will be interpreted as an infinite string.
limit (int) – (Optional) The read will terminate after committing to N rows’ worth of results. The default (zero) is to return all results.
filter_ (RowFilter) – (Optional) The filter to apply to the contents of the specified row(s). If unset, reads every column in each row.
end_inclusive (bool) – (Optional) Whether the end_key should be considered inclusive. The default is False (exclusive).
row_set (row_set.RowSet) – (Optional) The row set containing multiple row keys and row_ranges.
retry (Retry) – (Optional) Retry delay and deadline arguments. To override, the default value DEFAULT_RETRY_READ_ROWS can be used and modified with the with_delay() method or the with_deadline() method.
Return type
PartialRowsData
Returns
A PartialRowsData object, a generator for consuming the streamed results.
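A sketch of a bounded range read with a filter, combining the parameters above (the key range and filter choice are illustrative):
from google.cloud.bigtable.row_filters import CellsColumnLimitFilter
# Read [b"row_key_1", b"row_key_4"), keeping only the latest cell per column.
rows = table.read_rows(
    start_key=b"row_key_1",
    end_key=b"row_key_4",
    filter_=CellsColumnLimitFilter(1),
)
for row in rows:
    print(row.row_key)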
row(row_key, filter_=None, append=False)
Factory to create a row associated with this table.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
row_keys = [b"row_key_1", b"row_key_2"]
row1_obj = table.row(row_keys[0])
row2_obj = table.row(row_keys[1])
WARNING: At most one of filter_ and append can be used in a Row.
Parameters
row_key (bytes) – The key for the row being created.
filter_ (RowFilter) – (Optional) Filter to be used for conditional mutations. See ConditionalRow for more details.
append (bool) – (Optional) Flag to determine if the row should be used for append mutations.
Return type
DirectRow, ConditionalRow, or AppendRow
Returns
A row owned by this table.
Raises
ValueError if both filter_ and append are used.
sample_row_keys()
Read a sample of row keys in the table.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table("table_id1_samplerow")
data = table.sample_row_keys()
actual_keys, offset = zip(*[(rk.row_key, rk.offset_bytes) for rk in data])
The returned row keys will delimit contiguous sections of the table of approximately equal size, which can be used to break up the data for distributed tasks like mapreduces.
The elements in the iterator are a SampleRowKeys response and they have the properties offset_bytes and row_key. They occur in sorted order. The table might have contents before the first row key in the list and after the last one, but a key containing the empty string indicates “end of table” and will be the last response given, if present.
NOTE: Row keys in this list may not have ever been written to or read from, and users should therefore not make any assumptions about the row key structure that are specific to their use case.
The offset_bytes field on a response indicates the approximate total storage space used by all rows in the table which precede row_key. Buffering the contents of all rows between two subsequent samples would require space roughly equal to the difference in their offset_bytes fields.
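For example, a sketch estimating that buffer space from two consecutive samples, per the semantics above:
samples = list(table.sample_row_keys())
# Approximate bytes stored between the first two sampled row keys.
if len(samples) >= 2:
    approx_bytes = samples[1].offset_bytes - samples[0].offset_bytes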
Return type
GrpcRendezvous
Returns
A cancel-able iterator. Can be consumed by calling next() or by casting to a list, and can be cancelled by calling cancel().
set_iam_policy(policy)
Sets the IAM access control policy for this table. Replaces any existing policy.
For more information about policy, please see the documentation for google.cloud.bigtable.policy.Policy.
For example:
from google.cloud.bigtable import Client
from google.cloud.bigtable.policy import Policy
from google.cloud.bigtable.policy import BIGTABLE_ADMIN_ROLE
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table("table_id_iam_policy")
new_policy = Policy()
new_policy[BIGTABLE_ADMIN_ROLE] = [Policy.service_account(service_account_email)]
policy_latest = table.set_iam_policy(new_policy)
Parameters
policy (google.cloud.bigtable.policy.Policy) – A new IAM policy to replace the current IAM policy of this table.
Return type
google.cloud.bigtable.policy.Policy
Returns
The current IAM policy of this table.
test_iam_permissions(permissions)
Tests whether the caller has the given permissions for this table. Returns the permissions that the caller has.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table("table_id_iam_policy")
permissions = ["bigtable.tables.mutateRows", "bigtable.tables.readRows"]
permissions_allowed = table.test_iam_permissions(permissions)
Parameters
permissions (list) – The set of permissions to check for the resource. Permissions with wildcards (such as ‘*’ or ‘storage.*’) are not allowed. For more information see IAM Overview and Bigtable Permissions.
Return type
list
Returns
A list of permission strings allowed on the table.
truncate(timeout=None)
Truncate the table.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
table = instance.table(TABLE_ID)
table.truncate(timeout=200)
Parameters
timeout (float) – (Optional) The amount of time, in seconds, to wait for the request to complete.
Raises
google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid.
yield_rows(**kwargs)
Read rows from this table.
WARNING: This method will be removed in future releases. Please use read_rows instead.
Parameters
start_key (bytes) – (Optional) The beginning of a range of row keys to read from. The range will include start_key. If left empty, will be interpreted as the empty string.
end_key (bytes) – (Optional) The end of a range of row keys to read from. The range will not include end_key. If left empty, will be interpreted as an infinite string.
limit (int) – (Optional) The read will terminate after committing to N rows’ worth of results. The default (zero) is to return all results.
filter_ (RowFilter) – (Optional) The filter to apply to the contents of the specified row(s). If unset, reads every column in each row.
row_set (row_set.RowSet) – (Optional) The row set containing multiple row keys and row_ranges.
Return type
PartialRowData
Returns
A PartialRowData for each row returned.
exception google.cloud.bigtable.table.TableMismatchError()
Bases: ValueError
Row from another table.
exception google.cloud.bigtable.table.TooManyMutationsError()
Bases: ValueError
The number of mutations for bulk request is too big.