Cloud Bigtable API - Class Google::Cloud::Bigtable::Backup (v2.11.1)

Reference documentation and code samples for the Cloud Bigtable API class Google::Cloud::Bigtable::Backup.

Backup

A backup of a Cloud Bigtable table. See Cluster#create_backup, Cluster#backup and Cluster#backups.

Inherits

  • Object

Example

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

# Update
backup.expire_time = Time.now + 60 * 60 * 7
backup.save

# Delete
backup.delete

Methods

#backup_id

def backup_id() -> String

The unique identifier for the backup.

Returns
  • (String)

#cluster_id

def cluster_id() -> String

The unique identifier for the cluster to which the backup belongs.

Returns
  • (String)

#copy

def copy(dest_project_id:, dest_instance_id:, dest_cluster_id:, new_backup_id:, expire_time:) -> Google::Cloud::Bigtable::Backup::Job

Creates a copy of the backup at the desired location. A copy cannot be created from a backup that is itself a copy.

Parameters
  • dest_project_id (String) — The ID of an existing project in which the copy will be created. Required.
  • dest_instance_id (Instance, String) — The instance, or the ID of an existing instance, in which the copy will be created. Required.
  • dest_cluster_id (String) — The ID of an existing cluster in which the copy will be created. Required.
  • new_backup_id (String) — The ID of the new backup. Must be between 1 and 50 characters in length and match the regex [_a-zA-Z0-9][-_.a-zA-Z0-9]*. Required.
  • expire_time (Time) — The expiration time of the copy, with microsecond granularity, which must be at least 6 hours and at most 30 days from the time the request is received. Once expire_time has passed, Cloud Bigtable will delete the backup and free the resources used by it. Required.
Returns
  • (Google::Cloud::Bigtable::Backup::Job) — The job representing the long-running, asynchronous processing of the copy operation.
Example

Create a copy of the backup at a specific location

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

job = backup.copy dest_project_id: "my-project-2",
                  dest_instance_id: "my-instance-2",
                  dest_cluster_id: "my-cluster-2",
                  new_backup_id: "my-backup-2",
                  expire_time: Time.now + 60 * 60 * 7

job.wait_until_done!
job.done? #=> true

if job.error?
  status = job.error
else
  backup = job.backup
end

#creating?

def creating?() -> Boolean

The backup is currently being created, and may be destroyed if the creation process encounters an error.

Returns
  • (Boolean)

#delete

def delete() -> Boolean

Permanently deletes the backup.

Returns
  • (Boolean) — Returns true if the backup was deleted.
Example
require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

backup.delete

#encryption_info

def encryption_info() -> Google::Cloud::Bigtable::EncryptionInfo

The encryption information for the backup. See also Instance::ClusterMap#add.

Returns
  • (Google::Cloud::Bigtable::EncryptionInfo)
Example
require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

encryption_info = backup.encryption_info
encryption_info.encryption_type #=> :GOOGLE_DEFAULT_ENCRYPTION

#end_time

def end_time() -> Time

The time that the backup was finished. The row data in the backup will be no newer than this timestamp.

Returns
  • (Time)

#expire_time

def expire_time() -> Time

The expiration time of the backup, with microsecond granularity, which must be at least 6 hours and at most 30 days from the time the request is received. Once the expire time has passed, Cloud Bigtable will delete the backup and free the resources used by the backup.

Returns
  • (Time)

#expire_time=

def expire_time=(new_expire_time)

Sets the expiration time of the backup, with microsecond granularity, which must be at least 6 hours and at most 30 days from the time the request is received. Once the #expire_time has passed, Cloud Bigtable will delete the backup and free the resources used by the backup.

Parameter
  • new_expire_time (Time) — The new expiration time of the backup.
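Example

A minimal sketch, following the class-level example: set a new expiration (which must remain between 6 hours and 30 days out) and persist it with #save.

```ruby
require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

# Extend the expiration to 8 hours from now.
backup.expire_time = Time.now + 60 * 60 * 8
backup.save
```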

#instance_id

def instance_id() -> String

The unique identifier for the instance to which the backup belongs.

Returns
  • (String)

#path

def path() -> String

The unique name of the backup. Value in the form projects/<project>/instances/<instance>/clusters/<cluster>/backups/<backup>.

Returns
  • (String)

#policy

def policy() { |policy| ... } -> Policy

Gets the Cloud IAM access control policy for the backup.

Yields
  • (policy) — A block for updating the policy. The latest policy will be read from the Bigtable service and passed to the block. After the block completes, the modified policy will be written to the service.
Yield Parameter
  • policy (Policy) — the current Cloud IAM Policy for this backup.
Returns
  • (Policy) — The current Cloud IAM Policy for the backup.
Examples
require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

policy = backup.policy

Update the policy by passing a block.

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

backup.policy do |p|
  p.add "roles/owner", "user:owner@example.com"
end # 2 API calls

#policy=

def policy=(new_policy) -> Policy
Alias Of: #update_policy

Updates the Cloud IAM access control policy for the backup. The policy should be read from #policy. See Policy for an explanation of the policy etag property and how to modify policies.

You can also update the policy by passing a block to #policy, which will call this method internally after the block completes.

Parameter
  • new_policy (Policy) — a new or modified Cloud IAM Policy for this backup
Returns
  • (Policy) — The policy returned by the API update operation.
Example
require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

policy = backup.policy
policy.add "roles/owner", "user:owner@example.com"
updated_policy = backup.update_policy policy

puts updated_policy.roles

#project_id

def project_id() -> String

The unique identifier for the project to which the backup belongs.

Returns
  • (String)

#ready?

def ready?() -> Boolean

The backup has been successfully created and is ready to serve requests.

Returns
  • (Boolean)

#reload!

def reload!() -> Google::Cloud::Bigtable::Backup

Reloads the backup data.

Returns
  • (Google::Cloud::Bigtable::Backup)

#restore

def restore(table_id, instance: nil) -> Google::Cloud::Bigtable::Table::RestoreJob

Creates a new table by restoring data from a completed backup. The new table may be created in an instance different than that of the backup.

Parameters
  • table_id (String) — The table ID for the new table. This table must not yet exist. Required.
  • instance (Instance, String) (defaults to: nil) — The instance or the ID of the instance for the new table, if different from the instance of the backup. Optional.
Returns
  • (Google::Cloud::Bigtable::Table::RestoreJob) — The job representing the long-running, asynchronous processing of the restore operation.
Examples
require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

job = backup.restore "my-new-table"

job.wait_until_done!
job.done? #=> true

if job.error?
  status = job.error
else
  table = job.table
  optimized = job.optimize_table_operation_name
end

Create the table in a different instance.

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

table_instance = bigtable.instance "my-other-instance"
job = backup.restore "my-new-table", instance: table_instance

job.wait_until_done!
job.done? #=> true

if job.error?
  status = job.error
else
  table = job.table
  optimized = job.optimize_table_operation_name
end

#save

def save() -> Boolean
Aliases
  • #update

Updates the backup.

expire_time is the only updatable field.

Returns
  • (Boolean) — Returns true if the update succeeded.
Example
require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

# Update
backup.expire_time = Time.now + 60 * 60 * 7
backup.save

#size_bytes

def size_bytes() -> Integer

The size of the backup in bytes.

Returns
  • (Integer)

#source_backup

def source_backup() -> String

The name of the backup from which this backup was copied. The value is empty if this is not a copied backup; otherwise it takes the form projects/<project>/instances/<instance>/clusters/<cluster>/backups/<source_backup>.

Returns
  • (String)
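Example

As a sketch, assuming the documented empty-string value for non-copied backups, the field distinguishes copies from originals:

```ruby
require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

# Empty for an original backup; the source backup's full name for a copy.
if backup.source_backup.empty?
  puts "Original backup"
else
  puts "Copied from #{backup.source_backup}"
end
```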

#source_table

def source_table(perform_lookup: nil, view: nil) -> Table

The table from which this backup was created.

Parameters
  • perform_lookup (Boolean) (defaults to: nil) — When false (the default), creates the table object without verifying that the table resource exists, which helps to reduce admin API calls; calls made on that object will raise errors if the table does not exist. Optional.
  • view (Symbol) (defaults to: nil)

    Table view type. Default view type is :SCHEMA_VIEW. Valid view types are:

    • :NAME_ONLY - Only populates name.
    • :SCHEMA_VIEW - Only populates name and fields related to the table's schema.
    • :REPLICATION_VIEW - Only populates name and fields related to the table's replication state.
    • :FULL - Populates all fields.
Returns
  • (Table)
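Example

A sketch of fetching the source table with a full view, using the same setup as the other examples on this page:

```ruby
require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

# Populate all table fields rather than the default :SCHEMA_VIEW.
table = backup.source_table view: :FULL
table.name
```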

#start_time

def start_time() -> Time

The time that the backup was started (i.e. approximately the time the CreateBackup request was received). The row data in this backup will be no older than this timestamp.

Returns
  • (Time)

#state

def state() -> Symbol

The current state of the backup. Possible values are :CREATING and :READY.

Returns
  • (Symbol)
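Example

A sketch of waiting for a newly created backup to become ready; #creating? and #ready? are convenience checks for the same :CREATING and :READY states:

```ruby
require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

# Poll until creation finishes; reload! refreshes the cached state.
until backup.ready?
  sleep 1
  backup.reload!
end
backup.state # now :READY
```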

#test_iam_permissions

def test_iam_permissions(*permissions) -> Array<String>

Tests the specified permissions against the Cloud IAM access control policy.

Parameter
  • permissions (String, Array<String>) — The set of permissions to check access for. Permissions with wildcards (such as * or bigtable.*) are not allowed. See Access Control.
Returns
  • (Array<String>) — The permissions that are configured for the policy.
Example
require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

permissions = backup.test_iam_permissions(
  "bigtable.backups.delete",
  "bigtable.backups.get"
)
permissions.include? "bigtable.backups.delete" #=> false
permissions.include? "bigtable.backups.get" #=> true

#update

def update() -> Boolean
Alias Of: #save

Updates the backup.

expire_time is the only updatable field.

Returns
  • (Boolean) — Returns true if the update succeeded.
Example
require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

# Update
backup.expire_time = Time.now + 60 * 60 * 7
backup.save

#update_policy

def update_policy(new_policy) -> Policy
Aliases
  • #policy=

Updates the Cloud IAM access control policy for the backup. The policy should be read from #policy. See Policy for an explanation of the policy etag property and how to modify policies.

You can also update the policy by passing a block to #policy, which will call this method internally after the block completes.

Parameter
  • new_policy (Policy) — a new or modified Cloud IAM Policy for this backup
Returns
  • (Policy) — The policy returned by the API update operation.
Example
require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"

backup = cluster.backup "my-backup"

policy = backup.policy
policy.add "roles/owner", "user:owner@example.com"
updated_policy = backup.update_policy policy

puts updated_policy.roles