Table(table_id, instance, mutation_timeout=None, app_profile_id=None)
Representation of a Google Cloud Bigtable Table.
We can use a Table to:
create the table
delete the table
list_column_families in the table
Parameters
Name | Description |
table_id |
str
The ID of the table. |
instance |
Instance
The instance that owns the table. |
app_profile_id |
str
(Optional) The unique name of the AppProfile. |
Properties
name
Table name used in requests.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_table_name] :end-before: [END bigtable_table_name] :dedent: 4
"projects/../instances/../tables/{table_id}"
Type | Description |
str | The table name. |
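As a sketch, the name property is simply the fully qualified resource path assembled from the project, instance, and table IDs (the IDs below are hypothetical placeholders):

```python
# Hypothetical IDs; in practice these come from your Client and Instance.
project_id = "my-project"
instance_id = "my-instance"
table_id = "my-table"

# The table name follows the documented resource-path pattern.
table_name = f"projects/{project_id}/instances/{instance_id}/tables/{table_id}"
print(table_name)
```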
Methods
append_row
append_row(row_key)
Create an AppendRow associated with this table.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_table_append_row] :end-before: [END bigtable_table_append_row] :dedent: 4
Name | Description |
row_key |
bytes
The key for the row being created. |
backup
backup(backup_id, cluster_id=None, expire_time=None)
Factory to create a Backup linked to this Table.
Name | Description |
backup_id |
str
The ID of the Backup to be created. |
cluster_id |
str
(Optional) The ID of the Cluster. Required when calling methods such as 'delete' and 'exists'. |
expire_time |
(Optional) The expiration time of this new Backup. Required if the create method needs to be called. |
column_family
column_family(column_family_id, gc_rule=None)
Factory to create a column family associated with this table.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_table_column_family] :end-before: [END bigtable_table_column_family] :dedent: 4
Name | Description |
column_family_id |
str
The ID of the column family. Must be of the form [_a-zA-Z0-9][-_.a-zA-Z0-9]*. |
gc_rule |
(Optional) The garbage collection settings for this column family. |
Type | Description |
ColumnFamily | A column family owned by this table. |
conditional_row
conditional_row(row_key, filter_)
Create a ConditionalRow associated with this table.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_table_conditional_row] :end-before: [END bigtable_table_conditional_row] :dedent: 4
Name | Description |
row_key |
bytes
The key for the row being created. |
filter_ |
Filter to be used for conditional mutations. See ConditionalRow for more details. |
create
create(initial_split_keys=[], column_families={})
Creates this table.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_create_table] :end-before: [END bigtable_create_table] :dedent: 4
Name | Description |
initial_split_keys |
list
(Optional) list of row keys in bytes that will be used to initially split the table into several tablets. |
column_families |
dict
(Optional) A map of columns to create. The key is the column_id str and the value is a GarbageCollectionRule. |
delete
delete()
Delete this table.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_delete_table] :end-before: [END bigtable_delete_table] :dedent: 4
direct_row
direct_row(row_key)
Create a DirectRow associated with this table.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_table_direct_row] :end-before: [END bigtable_table_direct_row] :dedent: 4
Name | Description |
row_key |
bytes
The key for the row being created. |
drop_by_prefix
drop_by_prefix(row_key_prefix, timeout=None)
Delete all rows whose keys begin with the given row key prefix.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_drop_by_prefix] :end-before: [END bigtable_drop_by_prefix] :dedent: 4
Name | Description |
row_key_prefix |
bytes
Delete all rows that start with this row key prefix. Prefix cannot be zero length. |
timeout |
float
(Optional) The amount of time, in seconds, to wait for the request to complete. |
Type | Description |
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason. |
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed. |
ValueError | If the parameters are invalid. |
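A minimal sketch of the prefix semantics described above; matches_prefix is a hypothetical helper (not part of the client library) that tells whether a given row would be deleted:

```python
def matches_prefix(row_key: bytes, prefix: bytes) -> bool:
    """Return True if this row key would be deleted by drop_by_prefix."""
    if not prefix:
        # Mirrors the documented constraint: prefix cannot be zero length.
        raise ValueError("Prefix cannot be zero length.")
    return row_key.startswith(prefix)
```

For example, matches_prefix(b"user#42", b"user#") is True, while a key in a different keyspace is untouched.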
exists
exists()
Check whether the table exists.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_check_table_exists] :end-before: [END bigtable_check_table_exists] :dedent: 4
Type | Description |
bool | True if the table exists, else False. |
get_cluster_states
get_cluster_states()
List the cluster states owned by this table.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_get_cluster_states] :end-before: [END bigtable_get_cluster_states] :dedent: 4
Type | Description |
dict | Dictionary of cluster states for this table. Keys are cluster IDs and values are ClusterState instances. |
get_iam_policy
get_iam_policy()
Gets the IAM access control policy for this table.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_table_get_iam_policy] :end-before: [END bigtable_table_get_iam_policy] :dedent: 4
Type | Description |
Policy | The current IAM policy of this table. |
list_backups
list_backups(cluster_id=None, filter_=None, order_by=None, page_size=0)
List Backups for this Table.
Name | Description |
cluster_id |
str
(Optional) Specifies a single cluster to list Backups from. If none is specified, the returned list contains all the Backups in this Instance. |
filter_ |
str
(Optional) A filter expression that filters backups listed in the response. The expression must specify the field name, a comparison operator, and the value that you want to use for filtering. The value must be a string, a number, or a boolean. The comparison operator must be <, >, <=, >=, !=, =, or :. The colon ':' represents a HAS operator, which is roughly synonymous with equality. Filter rules are case insensitive. The fields eligible for filtering are: name, source_table, state, start_time, end_time, expire_time, and size_bytes. |
order_by |
str
(Optional) An expression for specifying the sort order of the results of the request. The string value should specify one or more fields of the Backup resource. |
page_size |
int
(Optional) The maximum number of resources contained in the underlying API response. If page streaming is performed per-resource, this parameter does not affect the return value. If page streaming is performed per-page, this determines the maximum number of resources in a page. |
Type | Description |
ValueError | If the parameters are invalid. |
Type | Description |
| Iterator of Backup resources within the current Instance. |
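The filter grammar above can be illustrated with a couple of assumed clauses; the field names are from the Backup resource, and combining parenthesized clauses with AND is an assumption about the usual filter syntax:

```python
# Illustrative filter clauses for list_backups(filter_=...).
clauses = [
    "state = READY",
    "size_bytes > 1000000",
]
# Assumed combination rule: parenthesize each clause and join with AND.
filter_ = " AND ".join(f"({c})" for c in clauses)
print(filter_)
```

The resulting string would then be passed as the filter_ argument.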
list_column_families
list_column_families()
List the column families owned by this table.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_list_column_families] :end-before: [END bigtable_list_column_families] :dedent: 4
Type | Description |
ValueError | If the parameters are invalid. |
Type | Description |
dict | Dictionary of column families attached to this table. Keys are strings (column family names) and values are ColumnFamily instances. |
mutate_rows
mutate_rows(rows, retry=<google.api_core.retry.Retry object>)
Mutates multiple rows in bulk.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_mutate_rows] :end-before: [END bigtable_mutate_rows] :dedent: 4
The method attempts to update all specified rows. Mutations for rows that fail to update are retained and can be reapplied to those rows separately; mutations for rows that update successfully are cleaned up.
Optionally, a retry strategy can be specified to re-attempt mutations on rows that return transient errors. This method will retry until all rows succeed or until the request deadline is reached. To specify a "do-nothing" retry strategy, a deadline of 0.0 can be specified.
Name | Description |
rows |
list
List or other iterable of DirectRow instances. |
retry |
(Optional) Retry delay and deadline arguments. To override, the default value DEFAULT_RETRY can be used and modified with the with_delay or with_deadline methods. |
Type | Description |
list | A list of response statuses (google.rpc.status_pb2.Status) corresponding to success or failure of each row mutation sent. These will be in the same order as the rows. |
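A small sketch of how the returned status list can be consumed; failed_rows is a hypothetical helper, and only the convention that status code 0 means gRPC OK is assumed:

```python
def failed_rows(rows, statuses):
    """Pair each submitted row with its status and keep only failures.

    A status code of 0 (gRPC OK) means the row's mutations succeeded.
    """
    return [
        (row, status)
        for row, status in zip(rows, statuses)
        if status.code != 0
    ]
```

Rows that come back from this helper still hold their retained mutations and can be reapplied separately.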
mutations_batcher
mutations_batcher(flush_count=1000, max_row_bytes=5242880)
Factory to create a mutation batcher associated with this instance.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_mutations_batcher] :end-before: [END bigtable_mutations_batcher] :dedent: 4
Name | Description |
flush_count |
int
(Optional) Maximum number of rows per batch. If it reaches the max number of rows it calls finish_batch() to mutate the current row batch. Default is FLUSH_COUNT (1000 rows). |
max_row_bytes |
int
(Optional) Max number of row mutations size to flush. If it reaches the max number of row mutations size it calls finish_batch() to mutate the current row batch. Default is MAX_ROW_BYTES (5 MB). |
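The two flush triggers described above can be sketched as a single predicate; should_flush is a hypothetical helper that mirrors the documented defaults:

```python
FLUSH_COUNT = 1000       # default maximum rows per batch
MAX_ROW_BYTES = 5242880  # default maximum mutation bytes per batch (5 MB)

def should_flush(row_count, total_bytes,
                 flush_count=FLUSH_COUNT, max_row_bytes=MAX_ROW_BYTES):
    """Return True when the batcher would mutate the current row batch."""
    return row_count >= flush_count or total_bytes >= max_row_bytes
```

Either limit being reached is enough to flush; the batcher does not wait for both.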
read_row
read_row(row_key, filter_=None)
Read a single row from this table.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_read_row] :end-before: [END bigtable_read_row] :dedent: 4
Name | Description |
row_key |
bytes
The key of the row to read from. |
filter_ |
(Optional) The filter to apply to the contents of the row. If unset, returns the entire row. |
Type | Description |
ValueError | If the parameters are invalid. |
Type | Description |
PartialRowData | The contents of the row if any chunks were returned in the response, otherwise None. |
read_rows
read_rows(start_key=None, end_key=None, limit=None, filter_=None, end_inclusive=False, row_set=None, retry=<google.api_core.retry.Retry object>)
Read rows from this table.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_read_rows] :end-before: [END bigtable_read_rows] :dedent: 4
Name | Description |
start_key |
bytes
(Optional) The beginning of a range of row keys to read from. The range will include start_key. |
end_key |
bytes
(Optional) The end of a range of row keys to read from. The range will not include end_key. |
limit |
int
(Optional) The read will terminate after committing to N rows' worth of results. The default (zero) is to return all results. |
filter_ |
(Optional) The filter to apply to the contents of the specified row(s). If unset, reads every column in each row. |
end_inclusive |
bool
(Optional) Whether the end_key should be considered inclusive. The default is False (exclusive). |
row_set |
(Optional) The row set containing multiple row keys and row_ranges. |
retry |
(Optional) Retry delay and deadline arguments. To override, the default value DEFAULT_RETRY can be used and modified with the with_delay or with_deadline methods. |
Type | Description |
PartialRowsData | A generator for consuming the streamed results. |
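The start/end key semantics above can be sketched as a membership test; key_in_range is a hypothetical helper, not part of the client:

```python
def key_in_range(key, start_key=None, end_key=None, end_inclusive=False):
    """Mirror the read_rows range semantics: start_key is inclusive,
    end_key is exclusive unless end_inclusive=True, and None on either
    side means the range is unbounded on that end."""
    if start_key is not None and key < start_key:
        return False
    if end_key is not None:
        return key <= end_key if end_inclusive else key < end_key
    return True
```

Row keys are compared as bytes, so the ordering is lexicographic over byte strings.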
restore
restore(new_table_id, cluster_id=None, backup_id=None, backup_name=None)
Creates a new Table by restoring from the Backup specified by either backup_id or backup_name. The returned long-running operation can be used to track the progress of the operation and to cancel it. The response type is Table, if successful.
Name | Description |
new_table_id |
str
The ID of the Table to create and restore to. This Table must not already exist. |
cluster_id |
str
The ID of the Cluster containing the Backup. This parameter is overridden by backup_name, if that is provided. |
backup_id |
str
The ID of the Backup to restore the Table from. This parameter is overridden by backup_name, if that is provided. |
backup_name |
str
(Optional) The full name of the Backup to restore from. If specified, it overrides the cluster_id and backup_id parameters. |
Type | Description |
google.api_core.exceptions.AlreadyExists | If the table already exists. |
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason. |
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed. |
ValueError | If the parameters are invalid. |
row
row(row_key, filter_=None, append=False)
Factory to create a row associated with this table.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_table_row] :end-before: [END bigtable_table_row] :dedent: 4
Name | Description |
row_key |
bytes
The key for the row being created. |
filter_ |
(Optional) Filter to be used for conditional mutations. See ConditionalRow for more details. |
append |
bool
(Optional) Flag to determine if the row should be used for append mutations. |
Type | Description |
ValueError | If both filter_ and append are used. |
Type | Description |
Row | A row owned by this table. |
sample_row_keys
sample_row_keys()
Read a sample of row keys in the table.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_sample_row_keys] :end-before: [END bigtable_sample_row_keys] :dedent: 4
The returned row keys will delimit contiguous sections of the table of approximately equal size, which can be used to break up the data for distributed tasks like mapreduces.
The elements in the iterator are a SampleRowKeys response and they have the properties offset_bytes and row_key. They occur in sorted order. The table might have contents before the first row key in the list and after the last one, but a key containing the empty string indicates "end of table" and will be the last response given, if present.
Type | Description |
GrpcRendezvous | A cancel-able iterator. Can be consumed by calling next() or by casting to a list and can be cancelled by calling cancel() . |
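One common use of the sample is to split the table into contiguous shards for distributed scans; key_ranges_from_samples is a hypothetical helper built on the end-of-table convention described above:

```python
def key_ranges_from_samples(sample_keys):
    """Turn sorted sample row keys into contiguous (start, end) ranges.

    b"" marks "end of table", so it becomes an open-ended final range;
    b"" as the first start means "from the beginning of the table".
    """
    ranges, start = [], b""
    for key in sample_keys:
        end = key if key else None  # b"" -> unbounded high end
        ranges.append((start, end))
        start = key
    return ranges
```

Each (start, end) pair could then be handed to a worker as a read_rows range of roughly equal size.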
set_iam_policy
set_iam_policy(policy)
Sets the IAM access control policy for this table. Replaces any existing policy.
For more information about policy, please see the documentation of google.cloud.bigtable.policy.Policy.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_table_set_iam_policy] :end-before: [END bigtable_table_set_iam_policy] :dedent: 4
Name | Description |
policy |
Policy
A new IAM policy to replace the current IAM policy of this table. |
Type | Description |
Policy | The current IAM policy of this table. |
test_iam_permissions
test_iam_permissions(permissions)
Tests whether the caller has the given permissions for this table. Returns the permissions that the caller has.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_table_test_iam_permissions] :end-before: [END bigtable_table_test_iam_permissions] :dedent: 4
Name | Description |
permissions |
list
The set of permissions to check for the resource. Permissions with wildcards (such as '*' or 'storage.*') are not allowed. |
Type | Description |
list | A List(string) of permissions allowed on the table. |
truncate
truncate(timeout=None)
Truncate the table.
For example:
.. literalinclude:: snippets_table.py :start-after: [START bigtable_truncate_table] :end-before: [END bigtable_truncate_table] :dedent: 4
Name | Description |
timeout |
float
(Optional) The amount of time, in seconds, to wait for the request to complete. |
Type | Description |
google.api_core.exceptions.GoogleAPICallError | If the request failed for any reason. |
google.api_core.exceptions.RetryError | If the request failed due to a retryable error and retry attempts failed. |
ValueError | If the parameters are invalid. |
yield_rows
yield_rows(**kwargs)
Read rows from this table. Deprecated: use read_rows instead.
Name | Description |
start_key |
bytes
(Optional) The beginning of a range of row keys to read from. The range will include |
end_key |
bytes
(Optional) The end of a range of row keys to read from. The range will not include |
limit |
int
(Optional) The read will terminate after committing to N rows' worth of results. The default (zero) is to return all results. |
filter_ |
(Optional) The filter to apply to the contents of the specified row(s). If unset, reads every column in each row. |
row_set |
(Optional) The row set containing multiple row keys and row_ranges. |
Type | Description |
PartialRowData | A PartialRowData for each row returned. |