Reference documentation and code samples for the google-cloud-bigtable class Google::Cloud::Bigtable::Backup.
Backup
A backup of a Cloud Bigtable table. See Cluster#create_backup, Cluster#backup and Cluster#backups.
Inherits
- Object
Example
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" backup = cluster.backup "my-backup" # Update backup.expire_time = Time.now + 60 * 60 * 7 backup.save # Delete backup.delete
Methods
#backup_id
def backup_id() -> String
The unique identifier for the backup.
- (String)
#cluster_id
def cluster_id() -> String
The unique identifier for the cluster to which the backup belongs.
- (String)
#creating?
def creating?() -> Boolean
The backup is currently being created, and may be destroyed if the creation process encounters an error.
- (Boolean)
#delete
def delete() -> Boolean
Permanently deletes the backup.
- (Boolean) — Returns true if the backup was deleted.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" backup = cluster.backup "my-backup" backup.delete
#encryption_info
def encryption_info() -> Google::Cloud::Bigtable::EncryptionInfo
The encryption information for the backup. See also Instance::ClusterMap#add.
- (Google::Cloud::Bigtable::EncryptionInfo) — The encryption information for the backup.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" backup = cluster.backup "my-backup" encryption_info = backup.encryption_info encryption_info.encryption_type #=> :GOOGLE_DEFAULT_ENCRYPTION
#end_time
def end_time() -> Time
The time that the backup was finished. The row data in the backup will be no newer than this timestamp.
- (Time)
#expire_time
def expire_time() -> Time
The expiration time of the backup, with microsecond granularity. It must be at least 6 hours and at most 30 days from the time the backup request is received. Once the expire time has passed, Cloud Bigtable will delete the backup and free the resources used by the backup.
- (Time)
#expire_time=
def expire_time=(new_expire_time)
Sets the expiration time of the backup, with microsecond granularity. It must be at least 6 hours and at most 30 days from the time the backup request is received. Once the #expire_time has passed, Cloud Bigtable will delete the backup and free the resources used by the backup.
- new_expire_time (Time) — The new expiration time of the backup.
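A minimal sketch of extending a backup's expiration, reusing the identifiers from the examples above. The new value is only persisted to the service after calling #save:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"
backup = cluster.backup "my-backup"

# Extend the expiration to 14 days from now, then persist the change.
backup.expire_time = Time.now + 60 * 60 * 24 * 14
backup.save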
#instance_id
def instance_id() -> String
The unique identifier for the instance to which the backup belongs.
- (String)
#path
def path() -> String
The unique name of the backup. Values are of the form projects/<project>/instances/<instance>/clusters/<cluster>/backups/<backup>.
- (String)
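A minimal sketch, assuming a project ID of my-project and the backup used in the examples above:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"
backup = cluster.backup "my-backup"

backup.path
#=> "projects/my-project/instances/my-instance/clusters/my-cluster/backups/my-backup"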
#policy
def policy() { |policy| ... } -> Policy
Gets the Cloud IAM access control policy for the backup.
- (policy) — A block for updating the policy. The latest policy will be read from the Bigtable service and passed to the block. After the block completes, the modified policy will be written to the service.
- policy (Policy) — the current Cloud IAM Policy for this backup.
- (Policy) — The current Cloud IAM Policy for the backup.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" backup = cluster.backup "my-backup" policy = backup.policy
Update the policy by passing a block.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" backup = cluster.backup "my-backup" backup.policy do |p| p.add "roles/owner", "user:owner@example.com" end # 2 API calls
#policy=
def policy=(new_policy) -> Policy
Updates the Cloud IAM access control policy for the backup. The policy should be read from #policy. See Policy for an explanation of the policy etag property and how to modify policies.
You can also update the policy by passing a block to #policy, which will call this method internally after the block completes.
- new_policy (Policy) — a new or modified Cloud IAM Policy for this backup
- (Policy) — The policy returned by the API update operation.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" backup = cluster.backup "my-backup" policy = backup.policy policy.add "roles/owner", "user:owner@example.com" updated_policy = backup.update_policy policy puts updated_policy.roles
#project_id
def project_id() -> String
The unique identifier for the project to which the backup belongs.
- (String)
#ready?
def ready?() -> Boolean
The backup has been successfully created and is ready to serve requests.
- (Boolean)
#reload!
def reload!() -> Google::Cloud::Bigtable::Backup
Reloads backup data.
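A minimal sketch, re-fetching the backup resource to pick up the latest state reported by the service (identifiers follow the examples above):

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"
backup = cluster.backup "my-backup"

# Refresh the backup's fields and inspect its current state.
backup.reload!
backup.state #=> :READY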
#restore
def restore(table_id, instance: nil) -> Google::Cloud::Bigtable::Table::RestoreJob
Creates a new table by restoring data from a completed backup. The new table may be created in an instance different than that of the backup.
- table_id (String) — The table ID for the new table. This table must not yet exist. Required.
- instance (Instance, String) (defaults to: nil) — The instance or the ID of the instance for the new table, if different from the instance of the backup. Optional.
- (Google::Cloud::Bigtable::Table::RestoreJob) — The job representing the long-running, asynchronous processing of a backup restore table operation.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" backup = cluster.backup "my-backup" job = backup.restore "my-new-table" job.wait_until_done! job.done? #=> true if job.error? status = job.error else table = job.table optimized = job.optimize_table_operation_name end
Create the table in a different instance.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" backup = cluster.backup "my-backup" table_instance = bigtable.instance "my-other-instance" job = backup.restore "my-new-table", instance: table_instance job.wait_until_done! job.done? #=> true if job.error? status = job.error else table = job.table optimized = job.optimize_table_operation_name end
#save
def save() -> Boolean
Updates the backup. expire_time is the only updatable field.
- (Boolean) — Returns true if the update succeeded.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" backup = cluster.backup "my-backup" # Update backup.expire_time = Time.now + 60 * 60 * 7 backup.save
#size_bytes
def size_bytes() -> Integer
The size of the backup in bytes.
- (Integer)
#source_table
def source_table(perform_lookup: nil, view: nil) -> Table
The table from which this backup was created.
- perform_lookup (Boolean) (defaults to: nil) — Creates table object without verifying that the table resource exists. Calls made on this object will raise errors if the table does not exist. Default value is false. Optional. Helps to reduce admin API calls.
- view (Symbol) (defaults to: nil) — Table view type. Default view type is :SCHEMA_VIEW. Valid view types are:
  - :NAME_ONLY — Only populates name.
  - :SCHEMA_VIEW — Only populates name and fields related to the table's schema.
  - :REPLICATION_VIEW — Only populates name and fields related to the table's replication state.
  - :FULL — Populates all fields.
- (Table)
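A minimal sketch, fetching the source table with a lookup and a :FULL view (identifiers follow the examples above):

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"
backup = cluster.backup "my-backup"

# Verify the table exists and populate all of its fields.
table = backup.source_table perform_lookup: true, view: :FULL
table.name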
#start_time
def start_time() -> Time
The time that the backup was started (i.e. approximately the time the CreateBackup request is received). The row data in this backup will be no older than this timestamp.
- (Time)
#state
def state() -> Symbol
The current state of the backup. Possible values are :CREATING and :READY.
- (Symbol)
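A minimal sketch, checking the backup's state alongside the #creating? and #ready? helpers documented above:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

instance = bigtable.instance "my-instance"
cluster = instance.cluster "my-cluster"
backup = cluster.backup "my-backup"

backup.state     #=> :READY
backup.creating? #=> false
backup.ready?    #=> true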
#test_iam_permissions
def test_iam_permissions(*permissions) -> Array<String>
Tests the specified permissions against the Cloud IAM access control policy.
- permissions (String, Array<String>) — The set of permissions to check access for. Permissions with wildcards (such as * or bigtable.*) are not allowed. See Access Control.
- (Array<String>) — The permissions that are configured for the policy.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" backup = cluster.backup "my-backup" permissions = backup.test_iam_permissions( "bigtable.backups.delete", "bigtable.backups.get" ) permissions.include? "bigtable.backups.delete" #=> false permissions.include? "bigtable.backups.get" #=> true
#update
def update() -> Boolean
Updates the backup. expire_time is the only updatable field.
- (Boolean) — Returns true if the update succeeded.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" backup = cluster.backup "my-backup" # Update backup.expire_time = Time.now + 60 * 60 * 7 backup.save
#update_policy
def update_policy(new_policy) -> Policy
Updates the Cloud IAM access control policy for the backup. The policy should be read from #policy. See Policy for an explanation of the policy etag property and how to modify policies.
You can also update the policy by passing a block to #policy, which will call this method internally after the block completes.
- new_policy (Policy) — a new or modified Cloud IAM Policy for this backup
- (Policy) — The policy returned by the API update operation.
require "google/cloud/bigtable" bigtable = Google::Cloud::Bigtable.new instance = bigtable.instance "my-instance" cluster = instance.cluster "my-cluster" backup = cluster.backup "my-backup" policy = backup.policy policy.add "roles/owner", "user:owner@example.com" updated_policy = backup.update_policy policy puts updated_policy.roles