Cluster
User-friendly container for a Google Cloud Bigtable Cluster.
class google.cloud.bigtable.cluster.Cluster(cluster_id, instance, location_id=None, serve_nodes=None, default_storage_type=None, _state=None)
Bases: object
Representation of a Google Cloud Bigtable Cluster.
We can use a Cluster to:
reload() itself
create() itself
update() itself
delete() itself
Parameters
cluster_id (str) – The ID of the cluster.
instance (Instance) – The instance where the cluster resides.
location_id (str) – (Creation Only) The location where this cluster’s nodes and storage reside. For best performance, clients should be located as close as possible to this cluster. For a list of supported locations, refer to https://cloud.google.com/bigtable/docs/locations
serve_nodes (int) – (Optional) The number of nodes in the cluster.
default_storage_type (int) – (Optional) The type of storage. Possible values are represented by the following constants:
google.cloud.bigtable.enums.StorageType.SSD
google.cloud.bigtable.enums.StorageType.HDD
Defaults to google.cloud.bigtable.enums.StorageType.UNSPECIFIED.
_state (int) – (OutputOnly) The current state of the cluster. Possible values are represented by the following constants:
google.cloud.bigtable.enums.Cluster.State.NOT_KNOWN
google.cloud.bigtable.enums.Cluster.State.READY
google.cloud.bigtable.enums.Cluster.State.CREATING
google.cloud.bigtable.enums.Cluster.State.RESIZING
google.cloud.bigtable.enums.Cluster.State.DISABLED
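As a sketch of how these constructor parameters fit together (placeholders INSTANCE_ID and CLUSTER_ID as in the examples below, and a location chosen arbitrarily), a Cluster can also be constructed directly, although the instance.cluster() factory shown throughout this page is the more common route:
from google.cloud.bigtable import Client
from google.cloud.bigtable import enums
from google.cloud.bigtable.cluster import Cluster
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
# Direct construction mirrors the signature above; location_id,
# serve_nodes and default_storage_type are optional keyword arguments.
cluster = Cluster(
    CLUSTER_ID,
    instance,
    location_id="us-east1-b",
    serve_nodes=3,
    default_storage_type=enums.StorageType.SSD,
)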
create()
Create this cluster.
For example:
from google.cloud.bigtable import Client
from google.cloud.bigtable import enums
# Assuming that there is an existing instance with `INSTANCE_ID`
# on the server already.
# to create an instance see
# 'https://cloud.google.com/bigtable/docs/creating-instance'
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
cluster_id = "clus-my-" + UNIQUE_SUFFIX
location_id = "us-central1-a"
serve_nodes = 1
storage_type = enums.StorageType.SSD
cluster = instance.cluster(
cluster_id,
location_id=location_id,
serve_nodes=serve_nodes,
default_storage_type=storage_type,
)
operation = cluster.create()
# We want to make sure the operation completes.
operation.result(timeout=100)
NOTE: Uses the project, instance and cluster_id on the current Cluster in addition to the serve_nodes. To change them before creating, reset the values via
cluster.serve_nodes = 8
cluster.cluster_id = 'i-changed-my-mind'
before calling create().
Return type
Operation
Returns
The long-running operation corresponding to the create operation.
delete()
Delete this cluster.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
cluster_to_delete = instance.cluster(cluster_id)
cluster_to_delete.delete()
Marks a cluster and all of its tables for permanent deletion in 7 days.
Immediately upon completion of the request:
- Billing will cease for all of the cluster’s reserved resources.
- The cluster’s delete_time field will be set 7 days in the future.
Soon afterward:
- All tables within the cluster will become unavailable.
At the cluster’s delete_time:
- The cluster and all of its tables will immediately and irrevocably disappear from the API, and their data will be permanently deleted.
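Because the deletion is irrevocable once the delete_time is reached, a minimal sketch (using the same INSTANCE_ID and CLUSTER_ID placeholders as above) is to confirm the cluster exists before issuing the request:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
cluster = instance.cluster(CLUSTER_ID)
# Guard the irreversible delete behind an existence check.
if cluster.exists():
    cluster.delete()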
exists()
Check whether the cluster already exists.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
cluster = instance.cluster(CLUSTER_ID)
cluster_exists = cluster.exists()
Return type
bool
Returns
True if the cluster exists, else False.
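A common pattern, sketched here with the same placeholder constants (INSTANCE_ID, CLUSTER_ID, LOCATION_ID) and an arbitrary node count, is to combine exists() with create() so the cluster is only provisioned when it is not already present:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
cluster = instance.cluster(
    CLUSTER_ID,
    location_id=LOCATION_ID,
    serve_nodes=3,
)
# Only issue the create request if the cluster is missing.
if not cluster.exists():
    operation = cluster.create()
    operation.result(timeout=100)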
classmethod from_pb(cluster_pb, instance)
Creates a cluster instance from a protobuf.
For example:
from google.cloud.bigtable import Client
from google.cloud.bigtable_admin_v2.types import instance as data_v2_pb2
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
cluster = instance.cluster(CLUSTER_ID)
name = cluster.name
cluster_state = cluster.state
serve_nodes = 1
cluster_pb = data_v2_pb2.Cluster(
name=name,
location=LOCATION_ID,
state=cluster_state,
serve_nodes=serve_nodes,
default_storage_type=STORAGE_TYPE,
)
cluster2 = cluster.from_pb(cluster_pb, instance)
Parameters
cluster_pb (instance.Cluster) – An instance protobuf object.
instance (google.cloud.bigtable.instance.Instance) – The instance that owns the cluster.
Return type
Cluster
Returns
The Cluster parsed from the protobuf response.
Raises
ValueError
If the cluster name does not match projects/{project}/instances/{instance_id}/clusters/{cluster_id}, or if the parsed instance ID does not match the instance ID on the client, or if the parsed project ID does not match the project ID on the client.
property name
Cluster name used in requests.
NOTE: This property will not change if _instance and cluster_id do not, but the return value is not cached.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
cluster = instance.cluster(CLUSTER_ID)
cluster_name = cluster.name
The cluster name is of the form
"projects/{project}/instances/{instance}/clusters/{cluster_id}"
Return type
str
Returns
The cluster name.
reload()
Reload the metadata for this cluster.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
cluster = instance.cluster(CLUSTER_ID)
cluster.reload()
property state
The current state of the cluster.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
cluster = instance.cluster(CLUSTER_ID)
cluster_state = cluster.state
Type
google.cloud.bigtable.enums.Cluster.State
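The value can be compared against the enums.Cluster.State constants listed above; a minimal sketch (assuming reload() has been called so the state reflects the server):
from google.cloud.bigtable import Client
from google.cloud.bigtable import enums
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
cluster = instance.cluster(CLUSTER_ID)
cluster.reload()  # refresh metadata so the reported state is current
if cluster.state == enums.Cluster.State.READY:
    print("cluster is ready to serve requests")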
update()
Update this cluster.
For example:
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
cluster = instance.cluster(CLUSTER_ID)
cluster.serve_nodes = 4
cluster.update()
NOTE: Updates the serve_nodes. If you’d like to change it before updating, reset the value via
cluster.serve_nodes = 8
before calling update().
Return type
Operation
Returns
The long-running operation corresponding to the update operation.
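As with create(), the returned operation can be waited on. A minimal sketch (placeholders as above, with the node increment and timeout chosen arbitrarily):
from google.cloud.bigtable import Client
client = Client(admin=True)
instance = client.instance(INSTANCE_ID)
cluster = instance.cluster(CLUSTER_ID)
cluster.reload()                 # fetch the current serve_nodes value
cluster.serve_nodes += 2         # request two additional nodes
operation = cluster.update()
operation.result(timeout=480)    # block until the resize completes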