Reference documentation and code samples for the Cloud Bigtable API class Google::Cloud::Bigtable::Cluster.
Cluster
A resizable group of nodes in a particular cloud location, capable of serving all the tables in the parent instance.
Inherits
- Object
Example
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" # Update cluster.nodes = 3 cluster.save # Delete cluster.delete
Methods
#backup
def backup(backup_id) -> Google::Cloud::Bigtable::Backup, nil
Gets a backup in the cluster.
- backup_id (String) — The unique ID of the requested backup.
- (Google::Cloud::Bigtable::Backup, nil) — The backup object, or nil if not found in the service.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" backup = cluster.backup "my-backup" if backup puts backup.backup_id end
#backups
def backups() -> Array<Google::Cloud::Bigtable::Backup>
Lists all backups in the cluster.
- (Array<Google::Cloud::Bigtable::Backup>) — (See Backup::List)
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" cluster.backups.all do |backup| puts backup.backup_id end
#cluster_id
def cluster_id() -> String
The unique identifier for the cluster.
- (String)
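For illustration only (not part of the reference), a minimal sketch that reads the identifier, assuming the same "my-instance" and "my-cluster" resources used in the examples above:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

# Prints the cluster's unique identifier, e.g. "my-cluster".
puts cluster.cluster_id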
#create_backup
def create_backup(source_table, backup_id, expire_time) -> Google::Cloud::Bigtable::Backup::Job
Creates a new Cloud Bigtable Backup.
- source_table (Table, String) — The table object, or the name of the table, from which the backup is to be created. The table needs to be in the same instance as the backup. Required.
- backup_id (String) — The ID of the backup to be created. This string must be between 1 and 50 characters in length and match the regex [_a-zA-Z0-9][-_.a-zA-Z0-9]*. Required.
- expire_time (Time) — The expiration time of the backup, with microseconds granularity, which must be at least 6 hours and at most 30 days from the time the request is received. Once the expire_time has passed, Cloud Bigtable will delete the backup and free the resources used by the backup. Required.
- (Google::Cloud::Bigtable::Backup::Job) — The job representing the long-running, asynchronous processing of a backup create operation.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" table = instance.table "my-table" expire_time = Time.now + 60 * 60 * 7 job = cluster.create_backup table, "my-backup", expire_time job.wait_until_done! job.done? #=> true if job.error? status = job.error else backup = job.backup end
#creating?
def creating?() -> Boolean
The cluster is currently being created, and may be destroyed if the creation process encounters an error.
- (Boolean)
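As a rough sketch (not taken from the reference), #creating? can be combined with #reload! and #ready? to wait for provisioning to finish; the polling interval below is arbitrary:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

# Re-fetch the cluster data until creation has completed.
while cluster.creating?
  sleep 5
  cluster.reload!
end

puts "Cluster is ready" if cluster.ready?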
#delete
def delete() -> Boolean
Permanently deletes the cluster.
- (Boolean) — Returns true if the cluster was deleted.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" cluster.delete
#disabled?
def disabled?() -> Boolean
The cluster has no backing nodes. The data (tables) still exist, but no operations can be performed on the cluster.
- (Boolean)
#instance_id
def instance_id() -> String
The unique identifier for the instance to which the cluster belongs.
- (String)
#kms_key
def kms_key() -> String, nil
The full name of the Cloud KMS encryption key for the cluster, if it is CMEK-protected, in the format projects/{key_project_id}/locations/{location}/keyRings/{ring_name}/cryptoKeys/{key_name}.
The requirements for this key are:
- The Cloud Bigtable service account associated with the project that contains this cluster must be granted the cloudkms.cryptoKeyEncrypterDecrypter role on the CMEK key.
- Only regional keys can be used and the region of the CMEK key must match the region of the cluster.
- All clusters within an instance must use the same CMEK key.
- (String, nil) — The full name of the Cloud KMS encryption key, or nil if the cluster is not CMEK-protected.
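A minimal sketch (assuming the same example resources used above) that branches on whether the cluster is CMEK-protected:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

if cluster.kms_key
  puts "CMEK-protected with key: #{cluster.kms_key}"
else
  puts "Not CMEK-protected"
end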
#location
def location() -> String
The cluster location. For example, "us-east1-b".
- (String)
#location_path
def location_path() -> String
The cluster location path, in the form projects/<project_id>/locations/<zone>.
- (String)
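For illustration, a short sketch printing both #location and #location_path for the example cluster; the values shown in the comments are hypothetical:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

puts cluster.location      # e.g. "us-east1-b"
puts cluster.location_path # e.g. "projects/my-project/locations/us-east1-b"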
#nodes
def nodes() -> Integer
The number of nodes allocated to this cluster.
- (Integer)
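A brief sketch (same example resources as above) that reads the current node count:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

# Prints the number of nodes currently allocated, e.g. 3.
puts cluster.nodes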
#nodes=
def nodes=(serve_nodes)
The number of nodes allocated to this cluster. More nodes enable higher throughput and more consistent performance.
- serve_nodes (Integer) — The number of nodes to allocate to the cluster.
#path
def path() -> String
The unique name of the cluster. Value in the form projects/<project_id>/instances/<instance_id>/clusters/<cluster_id>.
- (String)
#project_id
def project_id() -> String
The unique identifier for the project to which the cluster belongs.
- (String)
#ready?
def ready?() -> Boolean
The cluster has been successfully created and is ready to serve requests.
- (Boolean)
#reload!
def reload!() -> Google::Cloud::Bigtable::Cluster
Reloads cluster data.
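As a brief sketch (not from the reference), refreshing the local cluster object before reading its current state:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

# Re-fetch the latest cluster data from the service.
cluster.reload!
puts cluster.state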
#resizing?
def resizing?() -> Boolean
The cluster is currently being resized, and may revert to its previous node count if the process encounters an error. A cluster is still capable of serving requests while being resized, but may perform as if its number of allocated nodes is between the starting and requested states.
- (Boolean)
#save
def save() -> Google::Cloud::Bigtable::Cluster::Job
Updates the cluster. serve_nodes is the only updatable field.
- (Google::Cloud::Bigtable::Cluster::Job) — The job representing the long-running, asynchronous processing of an update cluster operation.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" cluster.nodes = 3 job = cluster.save job.done? #=> false # To block until the operation completes. job.wait_until_done! job.done? #=> true if job.error? status = job.error else cluster = job.cluster end
#state
def state() -> Symbol
The current state of the cluster.
Possible values are :CREATING, :READY, :STATE_NOT_KNOWN, :RESIZING, and :DISABLED.
- (Symbol)
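For illustration, a sketch (using the example resources above) that branches on the reported state:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

case cluster.state
when :READY
  puts "Cluster is serving requests"
when :CREATING, :RESIZING
  puts "Cluster is being provisioned or resized"
when :DISABLED
  puts "Cluster has no backing nodes"
else
  puts "Cluster state: #{cluster.state}"
end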
#storage_type
def storage_type() -> Symbol
The type of storage used by this cluster to serve its parent instance's tables, unless explicitly overridden. Valid values are:
- :SSD - Flash (SSD) storage should be used.
- :HDD - Magnetic drive (HDD) storage should be used.
- (Symbol)
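A minimal sketch (same example resources as above) that reports the storage type:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

# Prints :SSD or :HDD.
puts cluster.storage_type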
#update
def update() -> Google::Cloud::Bigtable::Cluster::Job
Updates the cluster. serve_nodes is the only updatable field.
- (Google::Cloud::Bigtable::Cluster::Job) — The job representing the long-running, asynchronous processing of an update cluster operation.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" cluster.nodes = 3 job = cluster.save job.done? #=> false # To block until the operation completes. job.wait_until_done! job.done? #=> true if job.error? status = job.error else cluster = job.cluster end