Manage backups

This page provides information about Spanner backup operations. To learn more about backups, see the backups overview.

Before you begin

  • To get the permissions that you need to manage backups, ask your administrator to grant you the following IAM roles on the instance:

  • The gcloud CLI examples on this page make the following assumptions:

Copy a backup

Console

  1. In the Google Cloud console, go to the Spanner Instances page.

    Go to Instances

  2. Click the instance that contains the database whose backup you want to copy.

  3. Click the database.

  4. In the navigation pane, click Backup/Restore.

  5. In the Backups table, select Actions for your backup, then click Copy.

  6. Fill in the form by choosing a destination instance, providing a name, and selecting an expiration date for the backup copy.

  7. Click Copy.

To check the progress of a copy operation, see Check operation progress.

If the operation takes too long, you can cancel it. For more information, see Cancel a long-running instance operation.

gcloud

You can copy a backup to another instance in the same project, or to an instance in a different project.

Copy a backup within the same project

If you choose to copy the backup to another instance in the same project, you need to create an instance (or have one ready) for the copied backup. You can't create an instance as part of the backup copy operation. In addition, the backup's expiration time must be at least 6 hours from the time the copy request is processed, and at most 366 days after the source backup's create_time.
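
To make the expiration window concrete, the following sketch (Python, not part of the official samples; the helper name check_expire_time and the example values are illustrative) computes an expire_time 14 days out and verifies it against the bounds described above before you issue the copy request:

from datetime import datetime, timedelta, timezone

def check_expire_time(source_create_time: datetime, expire_time: datetime) -> None:
    """Raise ValueError if expire_time violates the copy-backup constraints."""
    now = datetime.now(timezone.utc)
    earliest = now + timedelta(hours=6)                # at least 6 hours from now
    latest = source_create_time + timedelta(days=366)  # at most 366 days after create_time
    if not earliest <= expire_time <= latest:
        raise ValueError(f"expire_time must fall between {earliest} and {latest}")

# Illustrative values: a source backup created yesterday, with the copy expiring in 14 days.
source_create_time = datetime.now(timezone.utc) - timedelta(days=1)
expire_time = datetime.now(timezone.utc) + timedelta(days=14)
check_expire_time(source_create_time, expire_time)
print(expire_time.strftime("%Y-%m-%dT%H:%M:%SZ"))  # an RFC 3339 value you can pass as EXPIRATION_DATE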

Before using any of the command data below, make the following replacements:

  • PROJECT_ID: the project ID.
  • SOURCE_INSTANCE_ID: the ID of the source Spanner instance.
  • SOURCE_DATABASE_ID: the ID of the source Spanner database.
  • SOURCE_BACKUP_NAME: the name of the source Spanner backup.
  • DESTINATION_INSTANCE_ID: the ID of the destination Spanner instance.
  • DESTINATION_BACKUP_NAME: the name of the destination Spanner backup.
  • EXPIRATION_DATE: the expiration date and time.
  • ENCRYPTION_TYPE: the encryption type of the backup to be created. Valid values are USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION, GOOGLE_DEFAULT_ENCRYPTION, or CUSTOMER_MANAGED_ENCRYPTION. If you use CUSTOMER_MANAGED_ENCRYPTION, you must specify a kmsKeyName.

Execute the following command:

Linux, macOS, or Cloud Shell

gcloud spanner backups copy \
--source-instance=SOURCE_INSTANCE_ID \
--source-backup=SOURCE_BACKUP_NAME \
--destination-instance=DESTINATION_INSTANCE_ID \
--destination-backup=DESTINATION_BACKUP_NAME \
--expiration-date=EXPIRATION_DATE \
--encryption-type=ENCRYPTION_TYPE

Windows (PowerShell)

gcloud spanner backups copy `
--source-instance=SOURCE_INSTANCE_ID `
--source-backup=SOURCE_BACKUP_NAME `
--destination-instance=DESTINATION_INSTANCE_ID `
--destination-backup=DESTINATION_BACKUP_NAME `
--expiration-date=EXPIRATION_DATE `
--encryption-type=ENCRYPTION_TYPE

Windows (cmd.exe)

gcloud spanner backups copy ^
--source-instance=SOURCE_INSTANCE_ID ^
--source-backup=SOURCE_BACKUP_NAME ^
--destination-instance=DESTINATION_INSTANCE_ID ^
--destination-backup=DESTINATION_BACKUP_NAME ^
--expiration-date=EXPIRATION_DATE ^
--encryption-type=ENCRYPTION_TYPE

You should receive a response similar to the following:

createTime: '2022-03-29T22:06:05.905823Z'
database: projects/PROJECT_ID/instances/SOURCE_INSTANCE_ID/databases/SOURCE_DATABASE_ID
databaseDialect: GOOGLE_STANDARD_SQL
encryptionInfo:
encryptionType: GOOGLE_DEFAULT_ENCRYPTION
expireTime: '2022-03-30T10:49:41Z'
maxExpireTime: '2023-03-17T20:46:33.479336Z'
name: projects/PROJECT_ID/instances/DESTINATION_INSTANCE_ID/backups/DESTINATION_BACKUP_NAME
sizeBytes: '7957667'
state: READY
versionTime: '2022-03-16T20:46:33.479336Z'

Copy a backup to a different project

If you choose to copy the backup to a different project, you need another project with its own instance ready for the copied backup. You can't create a project as part of the backup copy operation. In addition, the backup's expiration time must be at least 6 hours from the time the copy request is processed, and at most 366 days after the source backup's create_time.
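
If you use the client libraries for a cross-project copy, the request is the same as in the Python sample in the Client libraries section below; only the resource names change. The following sketch is illustrative rather than an official sample: the function name copy_backup_cross_project, the placeholder arguments, and the 14-day expiration are assumptions, and the caller needs the appropriate IAM permissions in both projects.

from datetime import datetime, timedelta, timezone

from google.cloud import spanner
from google.cloud.spanner_admin_database_v1.types import backup as backup_pb

def copy_backup_cross_project(
    source_project, source_instance, source_backup,
    destination_project, destination_instance, destination_backup,
):
    """Copies a backup into an instance that lives in a different project."""
    database_admin_api = spanner.Client().database_admin_api

    request = backup_pb.CopyBackupRequest(
        # The parent is the destination instance, in the destination project.
        parent=f"projects/{destination_project}/instances/{destination_instance}",
        backup_id=destination_backup,
        # The source backup is addressed by its fully qualified resource name.
        source_backup=(
            f"projects/{source_project}/instances/{source_instance}"
            f"/backups/{source_backup}"
        ),
        # At least 6 hours from now, at most 366 days after the source backup's create_time.
        expire_time=datetime.now(timezone.utc) + timedelta(days=14),
    )
    operation = database_admin_api.copy_backup(request)
    return operation.result(2100)  # block until the long-running operation completes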

Before using any of the command data below, make the following replacements:

  • SOURCE_PROJECT_ID: the ID of the source project.
  • SOURCE_INSTANCE_ID: the ID of the source Spanner instance.
  • SOURCE_DATABASE_ID: the ID of the source Spanner database.
  • SOURCE_BACKUP_NAME: the name of the source Spanner backup.
  • DESTINATION_PROJECT_ID: the ID of the destination project.
  • DESTINATION_INSTANCE_ID: the ID of the destination Spanner instance.
  • DESTINATION_BACKUP_NAME: the name of the destination Spanner backup.
  • EXPIRATION_DATE: the expiration date and time.
  • ENCRYPTION_TYPE: the encryption type of the backup to be created. Valid values are USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION, GOOGLE_DEFAULT_ENCRYPTION, or CUSTOMER_MANAGED_ENCRYPTION. If you use CUSTOMER_MANAGED_ENCRYPTION, you must specify a kmsKeyName.

Execute the following command:

Linux, macOS, or Cloud Shell

gcloud spanner backups copy \
--source-backup=projects/SOURCE_PROJECT_ID/instances/SOURCE_INSTANCE_ID/backups/SOURCE_BACKUP_NAME \
--destination-backup=projects/DESTINATION_PROJECT_ID/instances/DESTINATION_INSTANCE_ID/backups/DESTINATION_BACKUP_NAME \
--expiration-date=EXPIRATION_DATE \
--encryption-type=ENCRYPTION_TYPE

Windows (PowerShell)

gcloud spanner backups copy `
--source-backup=projects/SOURCE_PROJECT_ID/instances/SOURCE_INSTANCE_ID/backups/SOURCE_BACKUP_NAME `
--destination-backup=projects/DESTINATION_PROJECT_ID/instances/DESTINATION_INSTANCE_ID/backups/DESTINATION_BACKUP_NAME `
--expiration-date=EXPIRATION_DATE `
--encryption-type=ENCRYPTION_TYPE

Windows (cmd.exe)

gcloud spanner backups copy ^
--source-backup=projects/SOURCE_PROJECT_ID/instances/SOURCE_INSTANCE_ID/backups/SOURCE_BACKUP_NAME ^
--destination-backup=projects/DESTINATION_PROJECT_ID/instances/DESTINATION_INSTANCE_ID/backups/DESTINATION_BACKUP_NAME ^
--expiration-date=EXPIRATION_DATE ^
--encryption-type=ENCRYPTION_TYPE

You should receive a response similar to the following:

createTime: '2022-03-29T22:06:05.905823Z'
database: projects/SOURCE_PROJECT_ID/instances/SOURCE_INSTANCE_ID/databases/SOURCE_DATABASE_ID
databaseDialect: GOOGLE_STANDARD_SQL
encryptionInfo:
encryptionType: GOOGLE_DEFAULT_ENCRYPTION
expireTime: '2022-03-30T10:49:41Z'
maxExpireTime: '2023-03-17T20:46:33.479336Z'
name: projects/DESTINATION_PROJECT_ID/instances/DESTINATION_INSTANCE_ID/backups/DESTINATION_BACKUP_NAME
sizeBytes: '7957667'
state: READY
versionTime: '2022-03-16T20:46:33.479336Z'

To check the progress of a copy operation, see Check operation progress.

Client libraries

The following code sample copies an existing backup. You can copy the backup to an instance in a different region or in a different project. After the operation completes, the sample retrieves and prints information about the newly created backup copy, such as its name, size, backup state, and version_time.

C++

void CopyBackup(google::cloud::spanner_admin::DatabaseAdminClient client,
                std::string const& src_project_id,
                std::string const& src_instance_id,
                std::string const& src_backup_id,
                std::string const& dst_project_id,
                std::string const& dst_instance_id,
                std::string const& dst_backup_id,
                google::cloud::spanner::Timestamp expire_time) {
  google::cloud::spanner::Backup source(
      google::cloud::spanner::Instance(src_project_id, src_instance_id),
      src_backup_id);
  google::cloud::spanner::Instance dst_in(dst_project_id, dst_instance_id);
  auto copy_backup =
      client
          .CopyBackup(dst_in.FullName(), dst_backup_id, source.FullName(),
                      expire_time.get<google::protobuf::Timestamp>().value())
          .get();
  if (!copy_backup) throw std::move(copy_backup).status();
  std::cout << "Copy Backup " << copy_backup->name()  //
            << " of " << source.FullName()            //
            << " of size " << copy_backup->size_bytes() << " bytes as of "
            << google::cloud::spanner::MakeTimestamp(
                   copy_backup->version_time())
                   .value()
            << " was created at "
            << google::cloud::spanner::MakeTimestamp(copy_backup->create_time())
                   .value()
            << ".\n";
}

C#


using Google.Api.Gax;
using Google.Cloud.Spanner.Admin.Database.V1;
using Google.Cloud.Spanner.Common.V1;
using Google.Protobuf.WellKnownTypes;
using System;

public class CopyBackupSample
{
    public Backup CopyBackup(string sourceInstanceId, string sourceProjectId, string sourceBackupId, 
        string targetInstanceId, string targetProjectId, string targetBackupId, 
        DateTimeOffset expireTime)
    {
        DatabaseAdminClient databaseAdminClient = DatabaseAdminClient.Create();

        var request = new CopyBackupRequest
        {
            SourceBackupAsBackupName = new BackupName(sourceProjectId, sourceInstanceId, sourceBackupId), 
            ParentAsInstanceName = new InstanceName(targetProjectId, targetInstanceId),
            BackupId = targetBackupId,
            ExpireTime = Timestamp.FromDateTimeOffset(expireTime) 
        };

        var response = databaseAdminClient.CopyBackup(request);
        Console.WriteLine("Waiting for the operation to finish.");
        var completedResponse = response.PollUntilCompleted(new PollSettings(Expiration.FromTimeout(TimeSpan.FromMinutes(15)), TimeSpan.FromMinutes(2)));

        if (completedResponse.IsFaulted)
        {
            Console.WriteLine($"Error while creating backup: {completedResponse.Exception}");
            throw completedResponse.Exception;
        }

        Backup backup = completedResponse.Result;

        Console.WriteLine($"Backup created successfully.");
        Console.WriteLine($"Backup with Id {sourceBackupId} has been copied from {sourceProjectId}/{sourceInstanceId} to {targetProjectId}/{targetInstanceId} Backup {targetBackupId}");
        Console.WriteLine($"Backup {backup.Name} of size {backup.SizeBytes} bytes was created at {backup.CreateTime} from {backup.Database} and is in state {backup.State} and has version time {backup.VersionTime.ToDateTime()}");

        return backup;
    }
}

Go


import (
	"context"
	"fmt"
	"io"
	"time"

	database "cloud.google.com/go/spanner/admin/database/apiv1"
	adminpb "cloud.google.com/go/spanner/admin/database/apiv1/databasepb"
	pbt "github.com/golang/protobuf/ptypes/timestamp"
)

// copyBackup copies an existing backup to a given instance in same or different region, or in same or different project.
func copyBackup(w io.Writer, instancePath string, copyBackupId string, sourceBackupPath string) error {
	// instancePath := "projects/my-project/instances/my-instance"
	// copyBackupId := "my-copy-backup"
	// sourceBackupPath := "projects/my-project/instances/my-instance/backups/my-source-backup"

	// Add timeout to context.
	ctx, cancel := context.WithTimeout(context.Background(), time.Hour)
	defer cancel()

	// Instantiate database admin client.
	adminClient, err := database.NewDatabaseAdminClient(ctx)
	if err != nil {
		return fmt.Errorf("database.NewDatabaseAdminClient: %w", err)
	}
	defer adminClient.Close()

	expireTime := time.Now().AddDate(0, 0, 14)

	// Instantiate the request for performing copy backup operation.
	copyBackupReq := adminpb.CopyBackupRequest{
		Parent:       instancePath,
		BackupId:     copyBackupId,
		SourceBackup: sourceBackupPath,
		ExpireTime:   &pbt.Timestamp{Seconds: expireTime.Unix(), Nanos: int32(expireTime.Nanosecond())},
	}

	// Start copying the backup.
	copyBackupOp, err := adminClient.CopyBackup(ctx, &copyBackupReq)
	if err != nil {
		return fmt.Errorf("adminClient.CopyBackup: %w", err)
	}

	// Wait for copy backup operation to complete.
	fmt.Fprintf(w, "Waiting for backup copy %s/backups/%s to complete...\n", instancePath, copyBackupId)
	copyBackup, err := copyBackupOp.Wait(ctx)
	if err != nil {
		return fmt.Errorf("copyBackup.Wait: %w", err)
	}

	// Check if long-running copyBackup operation is completed.
	if !copyBackupOp.Done() {
		return fmt.Errorf("backup %v could not be copied to %v", sourceBackupPath, copyBackupId)
	}

	// Get the name, create time, version time and backup size.
	copyBackupCreateTime := time.Unix(copyBackup.CreateTime.Seconds, int64(copyBackup.CreateTime.Nanos))
	copyBackupVersionTime := time.Unix(copyBackup.VersionTime.Seconds, int64(copyBackup.VersionTime.Nanos))
	fmt.Fprintf(w,
		"Backup %s of size %d bytes was created at %s with version time %s\n",
		copyBackup.Name,
		copyBackup.SizeBytes,
		copyBackupCreateTime.Format(time.RFC3339),
		copyBackupVersionTime.Format(time.RFC3339))

	return nil
}

Java


import com.google.cloud.Timestamp;
import com.google.cloud.spanner.Spanner;
import com.google.cloud.spanner.SpannerException;
import com.google.cloud.spanner.SpannerExceptionFactory;
import com.google.cloud.spanner.SpannerOptions;
import com.google.cloud.spanner.admin.database.v1.DatabaseAdminClient;
import com.google.spanner.admin.database.v1.Backup;
import com.google.spanner.admin.database.v1.BackupName;
import com.google.spanner.admin.database.v1.InstanceName;
import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.ZoneId;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

public class CopyBackupSample {

  static void copyBackup() {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "my-project";
    String instanceId = "my-instance";
    String sourceBackupId = "my-backup";
    String destinationBackupId = "my-destination-backup";

    try (Spanner spanner =
        SpannerOptions.newBuilder().setProjectId(projectId).build().getService();
        DatabaseAdminClient databaseAdminClient = spanner.createDatabaseAdminClient()) {
      copyBackup(databaseAdminClient, projectId, instanceId, sourceBackupId, destinationBackupId);
    }
  }

  static void copyBackup(
      DatabaseAdminClient databaseAdminClient,
      String projectId,
      String instanceId,
      String sourceBackupId,
      String destinationBackupId) {

    Timestamp expireTime =
        Timestamp.ofTimeMicroseconds(
            TimeUnit.MICROSECONDS.convert(
                System.currentTimeMillis() + TimeUnit.DAYS.toMillis(14),
                TimeUnit.MILLISECONDS));

    // Initiate the request which returns an OperationFuture.
    System.out.println("Copying backup [" + destinationBackupId + "]...");
    Backup destinationBackup;
    try {
      // Creates a copy of an existing backup.
      // Wait for the backup operation to complete.
      destinationBackup = databaseAdminClient.copyBackupAsync(
          InstanceName.of(projectId, instanceId), destinationBackupId,
          BackupName.of(projectId, instanceId, sourceBackupId), expireTime.toProto()).get();
      System.out.println("Copied backup [" + destinationBackup.getName() + "]");
    } catch (ExecutionException e) {
      throw (SpannerException) e.getCause();
    } catch (InterruptedException e) {
      throw SpannerExceptionFactory.propagateInterrupt(e);
    }
    // Load the metadata of the new backup from the server.
    destinationBackup = databaseAdminClient.getBackup(destinationBackup.getName());
    System.out.println(
        String.format(
            "Backup %s of size %d bytes was copied at %s for version of database at %s",
            destinationBackup.getName(),
            destinationBackup.getSizeBytes(),
            OffsetDateTime.ofInstant(
                Instant.ofEpochSecond(destinationBackup.getCreateTime().getSeconds(),
                    destinationBackup.getCreateTime().getNanos()),
                ZoneId.systemDefault()),
            OffsetDateTime.ofInstant(
                Instant.ofEpochSecond(destinationBackup.getVersionTime().getSeconds(),
                    destinationBackup.getVersionTime().getNanos()),
                ZoneId.systemDefault())));
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.
 */
// const instanceId = 'my-instance';
// const backupId = 'my-backup',
// const sourceBackupPath = 'projects/my-project-id/instances/my-source-instance/backups/my-source-backup',
// const projectId = 'my-project-id';

// Imports the Google Cloud Spanner client library
const {Spanner} = require('@google-cloud/spanner');
const {PreciseDate} = require('@google-cloud/precise-date');

// Creates a client
const spanner = new Spanner({
  projectId: projectId,
});

// Gets a reference to a Cloud Spanner Database Admin Client object
const databaseAdminClient = spanner.getDatabaseAdminClient();

async function spannerCopyBackup() {
  // Expire copy backup 14 days in the future
  const expireTime = Spanner.timestamp(
    Date.now() + 1000 * 60 * 60 * 24 * 14
  ).toStruct();

  // Copy the source backup
  try {
    console.log(`Creating copy of the source backup ${sourceBackupPath}.`);
    const [operation] = await databaseAdminClient.copyBackup({
      parent: databaseAdminClient.instancePath(projectId, instanceId),
      sourceBackup: sourceBackupPath,
      backupId: backupId,
      expireTime: expireTime,
    });

    console.log(
      `Waiting for backup copy ${databaseAdminClient.backupPath(
        projectId,
        instanceId,
        backupId
      )} to complete...`
    );
    await operation.promise();

    // Verify the copy backup is ready
    const [copyBackup] = await databaseAdminClient.getBackup({
      name: databaseAdminClient.backupPath(projectId, instanceId, backupId),
    });

    if (copyBackup.state === 'READY') {
      console.log(
        `Backup copy ${copyBackup.name} of size ` +
          `${copyBackup.sizeBytes} bytes was created at ` +
          `${new PreciseDate(copyBackup.createTime).toISOString()} ` +
          'with version time ' +
          `${new PreciseDate(copyBackup.versionTime).toISOString()}`
      );
    } else {
      console.error('ERROR: Copy of backup is not ready.');
    }
  } catch (err) {
    console.error('ERROR:', err);
  }
}
spannerCopyBackup();

PHP

use Google\Cloud\Spanner\Admin\Database\V1\CopyBackupRequest;
use Google\Cloud\Spanner\Admin\Database\V1\Client\DatabaseAdminClient;
use Google\Protobuf\Timestamp;

/**
 * Create a copy backup from another source backup.
 * Example:
 * ```
 * copy_backup($projectId, $destInstanceId, $destBackupId, $sourceInstanceId, $sourceBackupId);
 * ```
 *
 * @param string $projectId The Google Cloud project ID.
 * @param string $destInstanceId The Spanner instance ID where the copy backup will reside.
 * @param string $destBackupId The Spanner backup ID of the new backup to be created.
 * @param string $sourceInstanceId The Spanner instance ID of the source backup.
 * @param string $sourceBackupId The Spanner backup ID of the source.
 */
function copy_backup(
    string $projectId,
    string $destInstanceId,
    string $destBackupId,
    string $sourceInstanceId,
    string $sourceBackupId
): void {
    $databaseAdminClient = new DatabaseAdminClient();

    $destInstanceFullName = DatabaseAdminClient::instanceName($projectId, $destInstanceId);
    $expireTime = new Timestamp();
    $expireTime->setSeconds((new \DateTime('+8 hours'))->getTimestamp());
    $sourceBackupFullName = DatabaseAdminClient::backupName($projectId, $sourceInstanceId, $sourceBackupId);
    $request = new CopyBackupRequest([
        'source_backup' => $sourceBackupFullName,
        'parent' => $destInstanceFullName,
        'backup_id' => $destBackupId,
        'expire_time' => $expireTime
    ]);

    $operationResponse = $databaseAdminClient->copyBackup($request);
    $operationResponse->pollUntilComplete();

    if ($operationResponse->operationSucceeded()) {
        $destBackupInfo = $operationResponse->getResult();
        printf(
            'Backup %s of size %d bytes was copied at %d from the source backup %s' . PHP_EOL,
            basename($destBackupInfo->getName()),
            $destBackupInfo->getSizeBytes(),
            $destBackupInfo->getCreateTime()->getSeconds(),
            $sourceBackupId
        );
        printf('Version time of the copied backup: %d' . PHP_EOL, $destBackupInfo->getVersionTime()->getSeconds());
    } else {
        $error = $operationResponse->getError();
        printf('Backup not created due to error: %s.' . PHP_EOL, $error->getMessage());
    }
}

Python

def copy_backup(instance_id, backup_id, source_backup_path):
    """Copies a backup."""

    from datetime import datetime, timedelta

    from google.cloud import spanner
    from google.cloud.spanner_admin_database_v1.types import \
        backup as backup_pb

    spanner_client = spanner.Client()
    database_admin_api = spanner_client.database_admin_api

    # Create a backup object and wait for copy backup operation to complete.
    expire_time = datetime.utcnow() + timedelta(days=14)
    request = backup_pb.CopyBackupRequest(
        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
        backup_id=backup_id,
        source_backup=source_backup_path,
        expire_time=expire_time,
    )

    operation = database_admin_api.copy_backup(request)

    # Wait for backup operation to complete.
    copy_backup = operation.result(2100)

    # Verify that the copy backup is ready.
    assert copy_backup.state == backup_pb.Backup.State.READY

    print(
        "Backup {} of size {} bytes was created at {} with version time {}".format(
            copy_backup.name,
            copy_backup.size_bytes,
            copy_backup.create_time,
            copy_backup.version_time,
        )
    )

Ruby

# project_id  = "Your Google Cloud project ID"
# instance_id = "The ID of the destination instance that will contain the backup copy"
# backup_id = "The ID of the backup copy"
# source_backup_id = "The ID of the source backup to be copied"

require "google/cloud/spanner"
require "google/cloud/spanner/admin/database"

database_admin_client = Google::Cloud::Spanner::Admin::Database.database_admin

instance_path = database_admin_client.instance_path project: project_id, instance: instance_id
backup_path = database_admin_client.backup_path project: project_id,
                                                instance: instance_id,
                                                backup: backup_id
source_backup = database_admin_client.backup_path project: project_id,
                                                  instance: instance_id,
                                                  backup: source_backup_id

expire_time = Time.now + (14 * 24 * 3600) # 14 days from now

job = database_admin_client.copy_backup parent: instance_path,
                                        backup_id: backup_id,
                                        source_backup: source_backup,
                                        expire_time: expire_time

puts "Copy backup operation in progress"

job.wait_until_done!

backup = database_admin_client.get_backup name: backup_path
puts "Backup #{backup_id} of size #{backup.size_bytes} bytes was copied at #{backup.create_time} from #{source_backup} for version #{backup.version_time}"

Check operation progress

Console

  1. In the Google Cloud console, go to the Spanner Instances page.

    Go to Instances

  2. Click the instance that contains the database whose backup operation you want to view.

  3. Click the database.

  4. In the navigation pane, click Operations. The Operations page shows a list of currently running operations.

gcloud

Use gcloud spanner operations describe to check the progress of an operation.

  1. Get the operation ID:

    Before using any of the command data below, make the following replacements:

    • INSTANCE_NAME: the name of the Spanner instance.
    • DATABASE_NAME: the name of the Spanner database.

    Execute the following command:

    Linux, macOS, or Cloud Shell

    gcloud spanner operations list --instance=INSTANCE_NAME \
    --database=DATABASE_NAME --type=backup

    Windows (PowerShell)

    gcloud spanner operations list --instance=INSTANCE_NAME `
    --database=DATABASE_NAME --type=backup

    Windows (cmd.exe)

    gcloud spanner operations list --instance=INSTANCE_NAME ^
    --database=DATABASE_NAME --type=backup

    You should receive a response similar to the following:

    OPERATION_ID     DONE  @TYPE                 BACKUP               SOURCE_DATABASE  START_TIME                   END_TIME
    _auto_op_123456  True  CreateBackupMetadata  example-db-backup-7  example-db       2020-02-04T02:12:38.075515Z  2020-02-04T02:22:40.581170Z
    _auto_op_234567  True  CreateBackupMetadata  example-db-backup-6  example-db       2020-02-04T02:05:43.920377Z  2020-02-04T02:07:59.089820Z
    

    Usage notes:

    • To limit the list, specify the --filter flag. For example:

      • --filter="metadata.name:example-db" lists only the operations on a specific database.
      • --filter="error:*" lists only the backup operations that failed.

      For more information about filter syntax, see gcloud topic filters. For information about filtering backup operations, see the filter field in ListBackupOperationsRequest.

    • The --type flag is not case-sensitive.

  2. Run gcloud spanner operations describe:

    Before using any of the command data below, make the following replacements:

    • OPERATION_ID: the operation ID.
    • INSTANCE_NAME: the name of the Spanner instance.
    • BACKUP_NAME: the name of the Spanner backup.

    Execute the following command:

    Linux, macOS, or Cloud Shell

    gcloud spanner operations describe OPERATION_ID \
    --instance=INSTANCE_NAME \
    --backup=BACKUP_NAME

    Windows (PowerShell)

    gcloud spanner operations describe OPERATION_ID `
    --instance=INSTANCE_NAME `
    --backup=BACKUP_NAME

    Windows (cmd.exe)

    gcloud spanner operations describe OPERATION_ID ^
    --instance=INSTANCE_NAME ^
    --backup=BACKUP_NAME

    You should receive a response similar to the following:

    done: true
    metadata:
    ...
    progress:
    - endTime: '2022-03-01T00:28:06.691403Z'
      progressPercent: 100
      startTime: '2022-03-01T00:28:04.221401Z'
    - endTime: '2022-03-01T00:28:17.624588Z'
      startTime: '2022-03-01T00:28:06.691403Z'
      progressPercent: 100
    ...
    
    The progress section in the output shows the percentage of the operation that is complete.

    If the operation takes too long, you can cancel it. For more information, see Cancel a long-running backup operation.

Client libraries

The following code sample lists all in-progress backup create operations (operations with CreateBackupMetadata) and backup copy operations (operations with CopyBackupMetadata), filtered for a given database.

For more information about filter syntax, see the filter parameter in backupOperations.list.

C++

void ListBackupOperations(
    google::cloud::spanner_admin::DatabaseAdminClient client,
    std::string const& project_id, std::string const& instance_id,
    std::string const& database_id, std::string const& backup_id) {
  google::cloud::spanner::Instance in(project_id, instance_id);
  google::cloud::spanner::Database database(in, database_id);
  google::cloud::spanner::Backup backup(in, backup_id);

  google::spanner::admin::database::v1::ListBackupOperationsRequest request;
  request.set_parent(in.FullName());

  request.set_filter(std::string("(metadata.@type=type.googleapis.com/") +
                     "google.spanner.admin.database.v1.CreateBackupMetadata)" +
                     " AND (metadata.database=" + database.FullName() + ")");
  for (auto& operation : client.ListBackupOperations(request)) {
    if (!operation) throw std::move(operation).status();
    google::spanner::admin::database::v1::CreateBackupMetadata metadata;
    operation->metadata().UnpackTo(&metadata);
    std::cout << "Backup " << metadata.name() << " of database "
              << metadata.database() << " is "
              << metadata.progress().progress_percent() << "% complete.\n";
  }

  request.set_filter(std::string("(metadata.@type:type.googleapis.com/") +
                     "google.spanner.admin.database.v1.CopyBackupMetadata)" +
                     " AND (metadata.source_backup=" + backup.FullName() + ")");
  for (auto& operation : client.ListBackupOperations(request)) {
    if (!operation) throw std::move(operation).status();
    google::spanner::admin::database::v1::CopyBackupMetadata metadata;
    operation->metadata().UnpackTo(&metadata);
    std::cout << "Copy " << metadata.name() << " of backup "
              << metadata.source_backup() << " is "
              << metadata.progress().progress_percent() << "% complete.\n";
  }
}

C#

To list all create backup operations:


using Google.Cloud.Spanner.Admin.Database.V1;
using Google.Cloud.Spanner.Common.V1;
using Google.LongRunning;
using System;
using System.Collections.Generic;

public class ListBackupOperationsSample
{
    public IEnumerable<Operation> ListBackupOperations(string projectId, string instanceId, string databaseId)
    {
        // Create the DatabaseAdminClient instance.
        DatabaseAdminClient databaseAdminClient = DatabaseAdminClient.Create();

        var filter = $"(metadata.@type:type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata) AND (metadata.database:{databaseId})";
        ListBackupOperationsRequest request = new ListBackupOperationsRequest
        {
            ParentAsInstanceName = InstanceName.FromProjectInstance(projectId, instanceId),
            Filter = filter
        };

        // List the create backup operations on the database.
        var backupOperations = databaseAdminClient.ListBackupOperations(request);

        foreach (var operation in backupOperations)
        {
            CreateBackupMetadata metadata = operation.Metadata.Unpack<CreateBackupMetadata>();
            Console.WriteLine($"Backup {metadata.Name} on " + $"database {metadata.Database} is " + $"{metadata.Progress.ProgressPercent}% complete");
        }

        return backupOperations;
    }
}

To list all copy backup operations:


using Google.Cloud.Spanner.Admin.Database.V1;
using Google.Cloud.Spanner.Common.V1;
using Google.LongRunning;
using System;
using System.Collections.Generic;

public class ListCopyBackupOperationsSample
{
    public IEnumerable<Operation> ListCopyBackupOperations(string projectId, string instanceId, string databaseId, string backupId)
    {
        // Create the DatabaseAdminClient instance.
        DatabaseAdminClient databaseAdminClient = DatabaseAdminClient.Create();

        var filter = $"(metadata.@type:type.googleapis.com/google.spanner.admin.database.v1.CopyBackupMetadata) AND (metadata.source_backup:{backupId})";
        ListBackupOperationsRequest request = new ListBackupOperationsRequest
        {
            ParentAsInstanceName = InstanceName.FromProjectInstance(projectId, instanceId),
            Filter = filter
        };

        // List the copy backup operations on the database.
        var backupOperations = databaseAdminClient.ListBackupOperations(request);

        foreach (var operation in backupOperations)
        {
            CopyBackupMetadata metadata = operation.Metadata.Unpack<CopyBackupMetadata>();
            Console.WriteLine($"Backup {metadata.Name} from source backup {metadata.SourceBackup} is {metadata.Progress.ProgressPercent}% complete");
        }

        return backupOperations;
    }
}

Go


import (
	"context"
	"fmt"
	"io"
	"regexp"

	database "cloud.google.com/go/spanner/admin/database/apiv1"
	adminpb "cloud.google.com/go/spanner/admin/database/apiv1/databasepb"
	"github.com/golang/protobuf/ptypes"
	"google.golang.org/api/iterator"
)

// listBackupOperations lists the backup operations that are pending or have completed/failed/cancelled within the last 7 days.
func listBackupOperations(w io.Writer, db string, backupId string) error {
	// db := "projects/my-project/instances/my-instance/databases/my-database"
	// backupID := "my-backup"

	ctx := context.Background()

	adminClient, err := database.NewDatabaseAdminClient(ctx)
	if err != nil {
		return err
	}
	defer adminClient.Close()

	matches := regexp.MustCompile("^(.*)/databases/(.*)$").FindStringSubmatch(db)
	if matches == nil || len(matches) != 3 {
		return fmt.Errorf("Invalid database id %s", db)
	}
	instanceName := matches[1]
	// List the CreateBackup operations.
	filter := fmt.Sprintf("(metadata.@type:type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata) AND (metadata.database:%s)", db)
	iter := adminClient.ListBackupOperations(ctx, &adminpb.ListBackupOperationsRequest{
		Parent: instanceName,
		Filter: filter,
	})
	for {
		resp, err := iter.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return err
		}
		metadata := &adminpb.CreateBackupMetadata{}
		if err := ptypes.UnmarshalAny(resp.Metadata, metadata); err != nil {
			return err
		}
		fmt.Fprintf(w, "Backup %s on database %s is %d%% complete.\n",
			metadata.Name,
			metadata.Database,
			metadata.Progress.ProgressPercent,
		)
	}

	// List the CopyBackup operations.
	filter = fmt.Sprintf("(metadata.@type:type.googleapis.com/google.spanner.admin.database.v1.CopyBackupMetadata) AND (metadata.source_backup:%s)", backupId)
	iter = adminClient.ListBackupOperations(ctx, &adminpb.ListBackupOperationsRequest{
		Parent: instanceName,
		Filter: filter,
	})
	for {
		resp, err := iter.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return err
		}
		metadata := &adminpb.CopyBackupMetadata{}
		if err := ptypes.UnmarshalAny(resp.Metadata, metadata); err != nil {
			return err
		}
		fmt.Fprintf(w, "Backup %s copied from %s is %d%% complete.\n",
			metadata.Name,
			metadata.SourceBackup,
			metadata.Progress.ProgressPercent,
		)
	}

	return nil
}

Java

static void listBackupOperations(
    DatabaseAdminClient databaseAdminClient,
    String projectId, String instanceId,
    String databaseId, String backupId) {
  InstanceName instanceName = InstanceName.of(projectId, instanceId);
  // Get 'CreateBackup' operations for the sample database.
  String filter =
      String.format(
          "(metadata.@type:type.googleapis.com/"
              + "google.spanner.admin.database.v1.CreateBackupMetadata) "
              + "AND (metadata.database:%s)",
          DatabaseName.of(projectId, instanceId, databaseId).toString());
  ListBackupOperationsRequest listBackupOperationsRequest =
      ListBackupOperationsRequest.newBuilder()
          .setParent(instanceName.toString()).setFilter(filter).build();
  ListBackupOperationsPagedResponse createBackupOperations
      = databaseAdminClient.listBackupOperations(listBackupOperationsRequest);
  System.out.println("Create Backup Operations:");
  for (Operation op : createBackupOperations.iterateAll()) {
    try {
      CreateBackupMetadata metadata = op.getMetadata().unpack(CreateBackupMetadata.class);
      System.out.println(
          String.format(
              "Backup %s on database %s pending: %d%% complete",
              metadata.getName(),
              metadata.getDatabase(),
              metadata.getProgress().getProgressPercent()));
    } catch (InvalidProtocolBufferException e) {
      // The returned operation does not contain CreateBackupMetadata.
      System.err.println(e.getMessage());
    }
  }
  // Get copy backup operations for the sample database.
  filter = String.format(
      "(metadata.@type:type.googleapis.com/"
          + "google.spanner.admin.database.v1.CopyBackupMetadata) "
          + "AND (metadata.source_backup:%s)",
      BackupName.of(projectId, instanceId, backupId).toString());
  listBackupOperationsRequest =
      ListBackupOperationsRequest.newBuilder()
          .setParent(instanceName.toString()).setFilter(filter).build();
  ListBackupOperationsPagedResponse copyBackupOperations =
      databaseAdminClient.listBackupOperations(listBackupOperationsRequest);
  System.out.println("Copy Backup Operations:");
  for (Operation op : copyBackupOperations.iterateAll()) {
    try {
      CopyBackupMetadata copyBackupMetadata =
          op.getMetadata().unpack(CopyBackupMetadata.class);
      System.out.println(
          String.format(
              "Copy Backup %s on backup %s pending: %d%% complete",
              copyBackupMetadata.getName(),
              copyBackupMetadata.getSourceBackup(),
              copyBackupMetadata.getProgress().getProgressPercent()));
    } catch (InvalidProtocolBufferException e) {
      // The returned operation does not contain CopyBackupMetadata.
      System.err.println(e.getMessage());
    }
  }
}

Node.js


// Imports the Google Cloud client library
const {Spanner, protos} = require('@google-cloud/spanner');

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const projectId = 'my-project-id';
// const databaseId = 'my-database';
// const backupId = 'my-backup';
// const instanceId = 'my-instance';

// Creates a client
const spanner = new Spanner({
  projectId: projectId,
});

// Gets a reference to a Cloud Spanner Database Admin Client object
const databaseAdminClient = spanner.getDatabaseAdminClient();

// List create backup operations
try {
  const [backupOperations] = await databaseAdminClient.listBackupOperations({
    parent: databaseAdminClient.instancePath(projectId, instanceId),
    filter:
      '(metadata.@type:type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata) ' +
      `AND (metadata.database:${databaseId})`,
  });
  console.log('Create Backup Operations:');
  backupOperations.forEach(backupOperation => {
    const metadata =
      protos.google.spanner.admin.database.v1.CreateBackupMetadata.decode(
        backupOperation.metadata.value
      );
    console.log(
      `Backup ${metadata.name} on database ${metadata.database} is ` +
        `${metadata.progress.progressPercent}% complete.`
    );
  });
} catch (err) {
  console.error('ERROR:', err);
}

// List copy backup operations
try {
  console.log(
    '(metadata.@type:type.googleapis.com/google.spanner.admin.database.v1.CopyBackupMetadata) ' +
      `AND (metadata.source_backup:${backupId})`
  );
  const [backupOperations] = await databaseAdminClient.listBackupOperations({
    parent: databaseAdminClient.instancePath(projectId, instanceId),
    filter:
      '(metadata.@type:type.googleapis.com/google.spanner.admin.database.v1.CopyBackupMetadata) ' +
      `AND (metadata.source_backup:${backupId})`,
  });
  console.log('Copy Backup Operations:');
  backupOperations.forEach(backupOperation => {
    const metadata =
      protos.google.spanner.admin.database.v1.CopyBackupMetadata.decode(
        backupOperation.metadata.value
      );
    console.log(
      `Backup ${metadata.name} copied from source backup ${metadata.sourceBackup} is ` +
        `${metadata.progress.progressPercent}% complete.`
    );
  });
} catch (err) {
  console.error('ERROR:', err);
}

PHP

use Google\Cloud\Spanner\Admin\Database\V1\Client\DatabaseAdminClient;
use Google\Cloud\Spanner\Admin\Database\V1\CreateBackupMetadata;
use Google\Cloud\Spanner\Admin\Database\V1\CopyBackupMetadata;
use Google\Cloud\Spanner\Admin\Database\V1\ListBackupOperationsRequest;

/**
 * List all create backup operations in an instance.
 * Optionally passing the backupId will also list the
 * copy backup operations on the backup.
 *
 * @param string $projectId The Google Cloud project ID.
 * @param string $instanceId The Spanner instance ID.
 * @param string $databaseId The Spanner database ID.
 * @param string $backupId The Spanner backup ID whose copy operations need to be listed.
 */
function list_backup_operations(
    string $projectId,
    string $instanceId,
    string $databaseId,
    string $backupId
): void {
    $databaseAdminClient = new DatabaseAdminClient();

    $parent = DatabaseAdminClient::instanceName($projectId, $instanceId);

    // List the CreateBackup operations.
    $filterCreateBackup = '(metadata.@type:type.googleapis.com/' .
        'google.spanner.admin.database.v1.CreateBackupMetadata) AND ' . "(metadata.database:$databaseId)";

    // See https://cloud.google.com/spanner/docs/reference/rpc/google.spanner.admin.database.v1#listbackupoperationsrequest
    // for the possible filter values
    $filterCopyBackup = sprintf('(metadata.@type:type.googleapis.com/' .
        'google.spanner.admin.database.v1.CopyBackupMetadata) AND ' . "(metadata.source_backup:$backupId)");
    $operations = $databaseAdminClient->listBackupOperations(
        new ListBackupOperationsRequest([
            'parent' => $parent,
            'filter' => $filterCreateBackup
        ])
    );

    foreach ($operations->iterateAllElements() as $operation) {
        $obj = new CreateBackupMetadata();
        $meta = $operation->getMetadata()->unpack($obj);
        $backupName = basename($meta->getName());
        $dbName = basename($meta->getDatabase());
        $progress = $meta->getProgress()->getProgressPercent();
        printf('Backup %s on database %s is %d%% complete.' . PHP_EOL, $backupName, $dbName, $progress);
    }

    $operations = $databaseAdminClient->listBackupOperations(
        new ListBackupOperationsRequest([
            'parent' => $parent,
            'filter' => $filterCopyBackup
        ])
    );

    foreach ($operations->iterateAllElements() as $operation) {
        $obj = new CopyBackupMetadata();
        $meta = $operation->getMetadata()->unpack($obj);
        $backupName = basename($meta->getName());
        $progress = $meta->getProgress()->getProgressPercent();
        printf('Copy Backup %s on source backup %s is %d%% complete.' . PHP_EOL, $backupName, $backupId, $progress);
    }
}

Python

def list_backup_operations(instance_id, database_id, backup_id):
    from google.api_core import protobuf_helpers
    from google.cloud import spanner
    from google.cloud.spanner_admin_database_v1.types import \
        backup as backup_pb

    spanner_client = spanner.Client()
    database_admin_api = spanner_client.database_admin_api

    # List the CreateBackup operations.
    filter_ = (
        "(metadata.@type:type.googleapis.com/"
        "google.spanner.admin.database.v1.CreateBackupMetadata) "
        "AND (metadata.database:{})"
    ).format(database_id)
    request = backup_pb.ListBackupOperationsRequest(
        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
        filter=filter_,
    )
    operations = database_admin_api.list_backup_operations(request)
    for op in operations:
        metadata = protobuf_helpers.from_any_pb(
            backup_pb.CreateBackupMetadata, op.metadata
        )
        print(
            "Backup {} on database {}: {}% complete.".format(
                metadata.name, metadata.database, metadata.progress.progress_percent
            )
        )

    # List the CopyBackup operations.
    filter_ = (
        "(metadata.@type:type.googleapis.com/google.spanner.admin.database.v1.CopyBackupMetadata) "
        "AND (metadata.source_backup:{})"
    ).format(backup_id)
    request = backup_pb.ListBackupOperationsRequest(
        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
        filter=filter_,
    )
    operations = database_admin_api.list_backup_operations(request)
    for op in operations:
        metadata = protobuf_helpers.from_any_pb(
            backup_pb.CopyBackupMetadata, op.metadata
        )
        print(
            "Backup {} on source backup {}: {}% complete.".format(
                metadata.name,
                metadata.source_backup,
                metadata.progress.progress_percent,
            )
        )

Ruby

To list all create backup operations:

# project_id  = "Your Google Cloud project ID"
# instance_id = "Your Spanner instance ID"
# database_id = "Your Spanner database ID"

require "google/cloud/spanner"
require "google/cloud/spanner/admin/database"

database_admin_client = Google::Cloud::Spanner::Admin::Database.database_admin
instance_path = database_admin_client.instance_path project: project_id, instance: instance_id

jobs = database_admin_client.list_backup_operations parent: instance_path,
                                                    filter: "metadata.@type:type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata"
jobs.each do |job|
  if job.error?
    puts job.error
  else
    puts "Backup #{job.results.name} on database #{database_id} is #{job.metadata.progress.progress_percent}% complete"
  end
end

To list all copy backup operations:

# project_id  = "Your Google Cloud project ID"
# instance_id = "Your Spanner instance ID"
# backup_id = "You Spanner backup ID"

require "google/cloud/spanner"
require "google/cloud/spanner/admin/database"

database_admin_client = Google::Cloud::Spanner::Admin::Database.database_admin
instance_path = database_admin_client.instance_path project: project_id, instance: instance_id

filter = "(metadata.@type:type.googleapis.com/google.spanner.admin.database.v1.CopyBackupMetadata) AND (metadata.source_backup:#{backup_id})"

jobs = database_admin_client.list_backup_operations parent: instance_path,
                                                    filter: filter
jobs.each do |job|
  if job.error?
    puts job.error
  else
    puts "Backup #{job.results.name} on source backup #{backup_id} is #{job.metadata.progress.progress_percent}% complete"
  end
end

Cancel a backup operation

Console

You can't cancel a backup operation by using the Google Cloud console. However, you can cancel operations that are taking too long by using the Google Cloud CLI, the REST API, or the RPC API. For more information, see Cancel a long-running instance operation.

gcloud

  1. Get the operation ID:

    Before using any of the command data below, make the following replacements:

    • INSTANCE_NAME: the name of the Spanner instance.
    • DATABASE_NAME: the name of the Spanner database.

    Execute the following command:

    Linux, macOS, or Cloud Shell

    gcloud spanner operations list --instance=INSTANCE_NAME \
    --database=DATABASE_NAME --type=backup

    Windows (PowerShell)

    gcloud spanner operations list --instance=INSTANCE_NAME `
    --database=DATABASE_NAME --type=backup

    Windows (cmd.exe)

    gcloud spanner operations list --instance=INSTANCE_NAME ^
    --database=DATABASE_NAME --type=backup

    You should receive a response similar to the following:

    OPERATION_ID     DONE  @TYPE                 BACKUP               SOURCE_DATABASE  START_TIME                   END_TIME
    _auto_op_123456  True  CreateBackupMetadata  example-db-backup-7  example-db       2020-02-04T02:12:38.075515Z  2020-02-04T02:22:40.581170Z
    _auto_op_234567  True  CreateBackupMetadata  example-db-backup-6  example-db       2020-02-04T02:05:43.920377Z  2020-02-04T02:07:59.089820Z
    

    Usage notes:

    • To limit the list, specify the --filter flag. For example:

      • --filter="metadata.name:example-db" lists only the operations on a specific database.
      • --filter="error:*" lists only the backup operations that failed.

      For more information about filter syntax, see gcloud topic filters. For information about filtering backup operations, see the filter field in ListBackupOperationsRequest.

    • The --type flag is not case-sensitive.

  2. Use gcloud spanner operations cancel to cancel a backup operation.

    Before using any of the command data below, make the following replacements:

    • OPERATION_ID: the operation ID.
    • INSTANCE_NAME: the name of the Spanner instance.
    • DATABASE_NAME: the name of the Spanner database.
    • BACKUP_NAME: the name of the Spanner backup.

    Execute the following command:

    Linux, macOS, or Cloud Shell

    gcloud spanner operations cancel OPERATION_ID --instance=INSTANCE_NAME \
     --database=DATABASE_NAME --backup=BACKUP_NAME

    Windows (PowerShell)

    gcloud spanner operations cancel OPERATION_ID --instance=INSTANCE_NAME `
     --database=DATABASE_NAME --backup=BACKUP_NAME

    Windows (cmd.exe)

    gcloud spanner operations cancel OPERATION_ID --instance=INSTANCE_NAME ^
     --database=DATABASE_NAME --backup=BACKUP_NAME

Client libraries

The following code sample creates a backup, cancels the backup operation, and waits until the backup operation is done. If the operation was successfully cancelled, it returns a cancelTime and an error message. If the backup operation completed before it could be cancelled, the backup exists and you can delete it.

C++

void CreateBackupAndCancel(
    google::cloud::spanner_admin::DatabaseAdminClient client,
    std::string const& project_id, std::string const& instance_id,
    std::string const& database_id, std::string const& backup_id,
    google::cloud::spanner::Timestamp expire_time) {
  google::cloud::spanner::Database database(project_id, instance_id,
                                            database_id);
  google::spanner::admin::database::v1::CreateBackupRequest request;
  request.set_parent(database.instance().FullName());
  request.set_backup_id(backup_id);
  request.mutable_backup()->set_database(database.FullName());
  *request.mutable_backup()->mutable_expire_time() =
      expire_time.get<google::protobuf::Timestamp>().value();
  auto f = client.CreateBackup(request);
  f.cancel();
  auto backup = f.get();
  if (backup) {
    auto status = client.DeleteBackup(backup->name());
    if (!status.ok()) throw std::move(status);
    std::cout << "Backup " << backup->name() << " was deleted.\n";
  } else {
    std::cout << "CreateBackup operation was cancelled with the message '"
              << backup.status().message() << "'.\n";
  }
}

C#


using Google.Cloud.Spanner.Admin.Database.V1;
using Google.Cloud.Spanner.Common.V1;
using Google.LongRunning;
using Google.Protobuf.WellKnownTypes;
using System;

public class CancelBackupOperationSample
{
    public Operation<Backup, CreateBackupMetadata> CancelBackupOperation(string projectId, string instanceId, string databaseId, string backupId)
    {
        // Create the DatabaseAdminClient instance.
        DatabaseAdminClient databaseAdminClient = DatabaseAdminClient.Create();

        // Initialize backup request parameters.
        Backup backup = new Backup
        {
            DatabaseAsDatabaseName = DatabaseName.FromProjectInstanceDatabase(projectId, instanceId, databaseId),
            ExpireTime = DateTime.UtcNow.AddDays(14).ToTimestamp()
        };
        InstanceName parentAsInstanceName = InstanceName.FromProjectInstance(projectId, instanceId);

        // Make the CreateBackup request.
        Operation<Backup, CreateBackupMetadata> operation = databaseAdminClient.CreateBackup(parentAsInstanceName, backup, backupId);

        // Cancel the operation.
        operation.Cancel();

        // Poll until the long-running operation is completed in case the backup was
        // created before the operation was cancelled.
        Console.WriteLine("Waiting for the operation to finish.");
        Operation<Backup, CreateBackupMetadata> completedOperation = operation.PollUntilCompleted();

        if (completedOperation.IsFaulted)
        {
            Console.WriteLine($"Create backup operation cancelled: {operation.Name}");
        }
        else
        {
            Console.WriteLine("The backup was created before the operation was cancelled. Backup needs to be deleted.");
            BackupName backupAsBackupName = BackupName.FromProjectInstanceBackup(projectId, instanceId, backupId);
            databaseAdminClient.DeleteBackup(backupAsBackupName);
        }

        return completedOperation;
    }
}

Go


import (
	"context"
	"fmt"
	"io"
	"regexp"
	"time"

	longrunning "cloud.google.com/go/longrunning/autogen/longrunningpb"
	database "cloud.google.com/go/spanner/admin/database/apiv1"
	adminpb "cloud.google.com/go/spanner/admin/database/apiv1/databasepb"
	pbt "github.com/golang/protobuf/ptypes/timestamp"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func cancelBackup(ctx context.Context, w io.Writer, db, backupID string) error {
	matches := regexp.MustCompile("^(.+)/databases/(.+)$").FindStringSubmatch(db)
	if matches == nil || len(matches) != 3 {
		return fmt.Errorf("cancelBackup: invalid database id %q", db)
	}

	adminClient, err := database.NewDatabaseAdminClient(ctx)
	if err != nil {
		return fmt.Errorf("cancelBackup.NewDatabaseAdminClient: %w", err)
	}
	defer adminClient.Close()

	expireTime := time.Now().AddDate(0, 0, 14)
	// Create a backup.
	req := adminpb.CreateBackupRequest{
		Parent:   matches[1],
		BackupId: backupID,
		Backup: &adminpb.Backup{
			Database:   db,
			ExpireTime: &pbt.Timestamp{Seconds: expireTime.Unix(), Nanos: int32(expireTime.Nanosecond())},
		},
	}
	op, err := adminClient.CreateBackup(ctx, &req)
	if err != nil {
		return fmt.Errorf("cancelBackup.CreateBackup: %w", err)
	}

	// Cancel backup creation.
	err = adminClient.LROClient.CancelOperation(ctx, &longrunning.CancelOperationRequest{Name: op.Name()})
	if err != nil {
		return fmt.Errorf("cancelBackup.CancelOperation: %w", err)
	}

	// Cancel operations are best effort so either it will complete or be
	// cancelled.
	backup, err := op.Wait(ctx)
	if err != nil {
		if waitStatus, ok := status.FromError(err); !ok || waitStatus.Code() != codes.Canceled {
			return fmt.Errorf("cancelBackup.Wait: %w", err)
		}
	} else {
		// Backup was completed before it could be cancelled so delete the
		// unwanted backup.
		err = adminClient.DeleteBackup(ctx, &adminpb.DeleteBackupRequest{Name: backup.Name})
		if err != nil {
			return fmt.Errorf("cancelBackup.DeleteBackup: %w", err)
		}
	}

	fmt.Fprintf(w, "Backup cancelled.\n")
	return nil
}

Java

static void cancelCreateBackup(
    DatabaseAdminClient dbAdminClient, String projectId, String instanceId,
    String databaseId, String backupId) {
  // Set expire time to 14 days from now.
  Timestamp expireTime =
      Timestamp.newBuilder().setSeconds(TimeUnit.MILLISECONDS.toSeconds((
          System.currentTimeMillis() + TimeUnit.DAYS.toMillis(14)))).build();
  BackupName backupName = BackupName.of(projectId, instanceId, backupId);
  Backup backup = Backup.newBuilder()
      .setName(backupName.toString())
      .setDatabase(DatabaseName.of(projectId, instanceId, databaseId).toString())
      .setExpireTime(expireTime).build();

  try {
    // Start the creation of a backup.
    System.out.println("Creating backup [" + backupId + "]...");
    OperationFuture<Backup, CreateBackupMetadata> op = dbAdminClient.createBackupAsync(
        InstanceName.of(projectId, instanceId), backup, backupId);

    // Try to cancel the backup operation.
    System.out.println("Cancelling create backup operation for [" + backupId + "]...");
    dbAdminClient.getOperationsClient().cancelOperation(op.getName());

    // Get a polling future for the running operation. This future will regularly poll the server
    // for the current status of the backup operation.
    RetryingFuture<OperationSnapshot> pollingFuture = op.getPollingFuture();

    // Wait for the operation to finish.
    // isDone will return true when the operation is complete, regardless of whether it was
    // successful or not.
    while (!pollingFuture.get().isDone()) {
      System.out.println("Waiting for the cancelled backup operation to finish...");
      Thread.sleep(TimeUnit.MILLISECONDS.convert(5, TimeUnit.SECONDS));
    }
    if (pollingFuture.get().getErrorCode() == null) {
      // Backup was created before it could be cancelled. Delete the backup.
      dbAdminClient.deleteBackup(backupName);
      System.out.println("Backup operation for [" + backupId
          + "] successfully finished before it could be cancelled");
    } else if (pollingFuture.get().getErrorCode().getCode() == StatusCode.Code.CANCELLED) {
      System.out.println("Backup operation for [" + backupId + "] successfully cancelled");
    }
  } catch (ExecutionException e) {
    throw SpannerExceptionFactory.newSpannerException(e.getCause());
  } catch (InterruptedException e) {
    throw SpannerExceptionFactory.propagateInterrupt(e);
  }
}

Node.js


// Imports the Google Cloud client library and precise date library
const {Spanner, protos} = require('@google-cloud/spanner');
/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const projectId = 'my-project-id';
// const instanceId = 'my-instance';
// const databaseId = 'my-database';
// const backupId = 'my-backup';

// Creates a client
const spanner = new Spanner({
  projectId: projectId,
});

// Gets a reference to a Cloud Spanner Database Admin Client object
const databaseAdminClient = spanner.getDatabaseAdminClient();

// Creates a new backup of the database
try {
  console.log(
    `Creating backup of database ${databaseAdminClient.databasePath(
      projectId,
      instanceId,
      databaseId
    )}.`
  );

  // Expire backup one day in the future
  const expireTime = Date.now() + 1000 * 60 * 60 * 24;
  const [operation] = await databaseAdminClient.createBackup({
    parent: databaseAdminClient.instancePath(projectId, instanceId),
    backupId: backupId,
    backup: (protos.google.spanner.admin.database.v1.Backup = {
      database: databaseAdminClient.databasePath(
        projectId,
        instanceId,
        databaseId
      ),
      expireTime: Spanner.timestamp(expireTime).toStruct(),
      name: databaseAdminClient.backupPath(projectId, instanceId, backupId),
    }),
  });

  // Cancel the backup
  await operation.cancel();

  console.log('Backup cancelled.');
} catch (err) {
  console.error('ERROR:', err);
} finally {
  // Delete backup in case it got created before the cancel operation
  await databaseAdminClient.deleteBackup({
    name: databaseAdminClient.backupPath(projectId, instanceId, backupId),
  });
  // Close the spanner client when finished.
  // The databaseAdminClient does not require explicit closure. The closure of the Spanner client will automatically close the databaseAdminClient.
  spanner.close();
}

PHP

use Google\ApiCore\ApiException;
use Google\Cloud\Spanner\Admin\Database\V1\Backup;
use Google\Cloud\Spanner\Admin\Database\V1\Client\DatabaseAdminClient;
use Google\Cloud\Spanner\Admin\Database\V1\CreateBackupRequest;
use Google\Cloud\Spanner\Admin\Database\V1\DeleteBackupRequest;
use Google\Cloud\Spanner\Admin\Database\V1\GetBackupRequest;
use Google\Protobuf\Timestamp;

/**
 * Cancel a backup operation.
 * Example:
 * ```
 * cancel_backup($projectId, $instanceId, $databaseId);
 * ```
 *
 * @param string $projectId The Google Cloud project ID.
 * @param string $instanceId The Spanner instance ID.
 * @param string $databaseId The Spanner database ID.
 */
function cancel_backup(string $projectId, string $instanceId, string $databaseId): void
{
    $databaseAdminClient = new DatabaseAdminClient();
    $databaseFullName = DatabaseAdminClient::databaseName($projectId, $instanceId, $databaseId);
    $instanceFullName = DatabaseAdminClient::instanceName($projectId, $instanceId);
    $expireTime = new Timestamp();
    $expireTime->setSeconds((new \DateTime('+14 days'))->getTimestamp());
    $backupId = uniqid('backup-' . $databaseId . '-cancel');
    $request = new CreateBackupRequest([
        'parent' => $instanceFullName,
        'backup_id' => $backupId,
        'backup' => new Backup([
            'database' => $databaseFullName,
            'expire_time' => $expireTime
        ])
    ]);

    $operation = $databaseAdminClient->createBackup($request);
    $operation->cancel();

    // Cancel operations are always successful regardless of whether the operation is
    // still in progress or is complete.
    printf('Cancel backup operation complete.' . PHP_EOL);

    // The operation may succeed before cancel() has been called, so we need to clean up the created backup.
    try {
        $request = new GetBackupRequest();
        $request->setName($databaseAdminClient->backupName($projectId, $instanceId, $backupId));
        $info = $databaseAdminClient->getBackup($request);
    } catch (ApiException $ex) {
        return;
    }
    $databaseAdminClient->deleteBackup(new DeleteBackupRequest([
        'name' => $databaseAdminClient->backupName($projectId, $instanceId, $backupId)
    ]));
}

Python

def cancel_backup(instance_id, database_id, backup_id):
    from google.cloud.spanner_admin_database_v1.types import \
        backup as backup_pb

    spanner_client = spanner.Client()
    database_admin_api = spanner_client.database_admin_api

    expire_time = datetime.utcnow() + timedelta(days=30)

    # Create a backup.
    request = backup_pb.CreateBackupRequest(
        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
        backup_id=backup_id,
        backup=backup_pb.Backup(
            database=database_admin_api.database_path(
                spanner_client.project, instance_id, database_id
            ),
            expire_time=expire_time,
        ),
    )

    operation = database_admin_api.create_backup(request)
    # Cancel backup creation.
    operation.cancel()

    # Cancel operations are best effort, so the backup will either complete
    # or be cancelled.
    while not operation.done():
        time.sleep(300)  # 5 mins

    try:
        database_admin_api.get_backup(
            backup_pb.GetBackupRequest(
                name=database_admin_api.backup_path(
                    spanner_client.project, instance_id, backup_id
                ),
            )
        )
    except NotFound:
        print("Backup creation was successfully cancelled.")
        return
    print("Backup was created before the cancel completed.")
    database_admin_api.delete_backup(
        backup_pb.DeleteBackupRequest(
            name=database_admin_api.backup_path(
                spanner_client.project, instance_id, backup_id
            ),
        )
    )
    print("Backup deleted.")

Ruby

# project_id  = "Your Google Cloud project ID"
# instance_id = "Your Spanner instance ID"
# database_id = "Your Spanner database ID"
# backup_id = "Your Spanner backup ID"

require "google/cloud/spanner"
require "google/cloud/spanner/admin/database"

database_admin_client = Google::Cloud::Spanner::Admin::Database.database_admin

instance_path = database_admin_client.instance_path project: project_id, instance: instance_id

db_path = database_admin_client.database_path project: project_id,
                                              instance: instance_id,
                                              database: database_id

backup_path = database_admin_client.backup_path project: project_id,
                                                instance: instance_id,
                                                backup: backup_id

expire_time = Time.now + (14 * 24 * 3600) # 14 days from now

job = database_admin_client.create_backup parent: instance_path,
                                          backup_id: backup_id,
                                          backup: {
                                            database: db_path,
                                            expire_time: expire_time
                                          }

puts "Backup operation in progress"

job.cancel
job.wait_until_done!

begin
  backup = database_admin_client.get_backup name: backup_path
  database_admin_client.delete_backup name: backup_path if backup
rescue StandardError
  nil # no cleanup needed when a backup is not created
end
puts "#{backup_id} creation job cancelled"

Get backup information

Console

  1. In the Google Cloud console, go to the Spanner Instances page.

    Go to the Instances page

  2. Click the instance that contains the database whose backup information you want to view.

  3. Click the database to open its Overview page.

  4. In the navigation pane, click Backup/Restore. You can view the backup information for the selected database.

gcloud

To get information about a backup, use gcloud spanner backups describe.

Before using any of the command data below, make the following replacements:

  • PROJECT_ID: the project ID.
  • INSTANCE_ID: the Spanner instance ID.
  • DATABASE_ID: the Spanner database ID.
  • BACKUP_NAME: the Spanner backup name.

Execute the following command:

Linux, macOS, or Cloud Shell

gcloud spanner backups describe BACKUP_NAME --instance=INSTANCE_ID

Windows (PowerShell)

gcloud spanner backups describe BACKUP_NAME --instance=INSTANCE_ID

Windows (cmd.exe)

gcloud spanner backups describe BACKUP_NAME --instance=INSTANCE_ID

You should receive a response similar to the following:

createTime: '2020-02-04T02:05:43.920377Z'
database: projects/PROJECT_ID/instances/INSTANCE_ID/databases/DATABASE_ID
expireTime: '2021-02-04T02:05:43.268327Z'
name: projects/PROJECT_ID/instances/INSTANCE_ID/backups/BACKUP_NAME
sizeBytes: '1000000000'
state: READY
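
If you only need a single field from this response, you can extract it with gcloud's generic --format flag; for example (an illustrative command using the same BACKUP_NAME and INSTANCE_ID placeholders):

gcloud spanner backups describe BACKUP_NAME --instance=INSTANCE_ID --format="value(state)"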

Client libraries

The client libraries don't support getting backup information for a single backup. However, you can list all the backups in an instance along with their information. For more information, see List backups in an instance.
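
If you need the details of just one backup, one workaround is to list backups with a name filter, as in the following minimal Python sketch. It assumes that instance_id and backup_id variables are already defined, and it reuses the same admin API calls as the samples in the next section.

# Minimal sketch (assumed variables: instance_id, backup_id): narrow the
# backup list to a single backup by filtering on its name.
from google.cloud import spanner
from google.cloud.spanner_admin_database_v1.types import backup as backup_pb

spanner_client = spanner.Client()
database_admin_api = spanner_client.database_admin_api

request = backup_pb.ListBackupsRequest(
    parent=database_admin_api.instance_path(spanner_client.project, instance_id),
    filter="name:{}".format(backup_id),
)
for backup in database_admin_api.list_backups(request):
    # Each result exposes the same fields as the gcloud output above
    # (name, database, expire_time, size_bytes, state).
    print(backup.name, backup.state, backup.size_bytes)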

List backups in an instance

Console

  1. In the Google Cloud console, go to the Spanner Instances page.

    Go to the Instances page

  2. Click your instance to view all the available backups and their information.

  3. In the navigation pane, click Backup/Restore.

gcloud

To list all the backups in an instance, use gcloud spanner backups list.

Before using any of the command data below, make the following replacements:

  • INSTANCE_ID: the Spanner instance ID.

Execute the following command:

Linux, macOS, or Cloud Shell

gcloud spanner backups list --instance=INSTANCE_ID

Windows (PowerShell)

gcloud spanner backups list --instance=INSTANCE_ID

Windows (cmd.exe)

gcloud spanner backups list --instance=INSTANCE_ID

You should receive a response similar to the following:

  BACKUP               SOURCE_DATABASE  CREATION_TIME                EXPIRATION_TIME              STATE  BACKUP_SIZE_IN_BYTES  IN_USE_BY
  example-db-backup-6  example-db       2020-02-04T02:05:43.920377Z  2021-02-04T02:05:43.268327Z  CREATING
  example-db-backup-4  example-db       2020-02-04T01:21:20.873839Z  2021-02-04T01:21:20.530151Z  READY  32
  example-db-backup-3  example-db       2020-02-03T23:59:18.936433Z  2021-02-03T23:59:18.203083Z  READY  32
  example-db-backup-5  example-db       2020-02-03T23:48:06.259296Z  2021-02-03T23:48:05.830937Z  READY  32
  example-db-backup-2  example-db       2020-01-30T19:49:00.616338Z  2021-01-30T19:49:00.283917Z  READY  32
  example-db-backup-1  example-db       2020-01-30T19:47:09.492551Z  2021-01-30T19:47:09.097804Z  READY  32

To limit the list, specify the --filter flag. For example, to restrict the list to backups that are still being created, add --filter="state:creating", as in the command below. For more information about filter syntax, see gcloud topic filters. For details about filtering backups, see the filter field in ListBackupsRequest.
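
The following command is the same gcloud spanner backups list invocation as above, with that filter applied (illustrative only):

gcloud spanner backups list --instance=INSTANCE_ID --filter="state:creating"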

Client libraries

The following code example lists the backups in a given instance.

You can filter the list of returned backups (for example, by backup name, database, or expiration time) by providing a filter expression. For more information about the filter syntax, see the filter parameter in ListBackups.

C++

void ListBackups(google::cloud::spanner_admin::DatabaseAdminClient client,
                 std::string const& project_id,
                 std::string const& instance_id) {
  google::cloud::spanner::Instance in(project_id, instance_id);
  std::cout << "All backups:\n";
  for (auto& backup : client.ListBackups(in.FullName())) {
    if (!backup) throw std::move(backup).status();
    std::cout << "Backup " << backup->name() << " on database "
              << backup->database() << " with size : " << backup->size_bytes()
              << " bytes.\n";
  }
}

C#


using Google.Cloud.Spanner.Admin.Database.V1;
using Google.Cloud.Spanner.Common.V1;
using System;
using System.Collections.Generic;
using System.Linq;

public class ListBackupsSample
{
    public IEnumerable<Backup> ListBackups(string projectId, string instanceId, string databaseId, string backupId)
    {
        // Create the DatabaseAdminClient instance.
        DatabaseAdminClient databaseAdminClient = DatabaseAdminClient.Create();

        InstanceName parentAsInstanceName = InstanceName.FromProjectInstance(projectId, instanceId);

        // List all backups.
        Console.WriteLine("All backups:");
        var allBackups = databaseAdminClient.ListBackups(parentAsInstanceName);
        PrintBackups(allBackups);

        ListBackupsRequest request = new ListBackupsRequest
        {
            ParentAsInstanceName = parentAsInstanceName,
        };

        // List backups containing backup name.
        Console.WriteLine($"Backups with backup name containing {backupId}:");
        request.Filter = $"name:{backupId}";
        var backupsWithName = databaseAdminClient.ListBackups(request);
        PrintBackups(backupsWithName);

        // List backups on a database containing name.
        Console.WriteLine($"Backups with database name containing {databaseId}:");
        request.Filter = $"database:{databaseId}";
        var backupsWithDatabaseName = databaseAdminClient.ListBackups(request);
        PrintBackups(backupsWithDatabaseName);

        // List backups that expire within 30 days.
        Console.WriteLine("Backups expiring within 30 days:");
        string expireTime = DateTime.UtcNow.AddDays(30).ToString("O");
        request.Filter = $"expire_time < \"{expireTime}\"";
        var expiringBackups = databaseAdminClient.ListBackups(request);
        PrintBackups(expiringBackups);

        // List backups with a size greater than 100 bytes.
        Console.WriteLine("Backups with size > 100 bytes:");
        request.Filter = "size_bytes > 100";
        var backupsWithSize = databaseAdminClient.ListBackups(request);
        PrintBackups(backupsWithSize);

        // List backups created in the last day that are ready.
        Console.WriteLine("Backups created within last day that are ready:");
        string createTime = DateTime.UtcNow.AddDays(-1).ToString("O");
        request.Filter = $"create_time >= \"{createTime}\" AND state:READY";
        var recentReadyBackups = databaseAdminClient.ListBackups(request);
        PrintBackups(recentReadyBackups);

        // List backups in pages of 500 elements each
        foreach (var page in databaseAdminClient.ListBackups(parentAsInstanceName, pageSize: 500).AsRawResponses())
        {
            PrintBackups(page);
        }

        return allBackups;
    }

    private static void PrintBackups(IEnumerable<Backup> backups)
    {
        // We print the first 5 elements each time for demonstration purposes.
        // You can print all backups in the sequence by removing the call to Take(5).
        // If the sequence has been returned by a paginated operation it will lazily
        // fetch elements in pages as needed.
        foreach (Backup backup in backups.Take(5))
        {
            Console.WriteLine($"Backup Name : {backup.Name}");
        };
    }
}

Go


import (
	"context"
	"fmt"
	"io"
	"regexp"
	"time"

	database "cloud.google.com/go/spanner/admin/database/apiv1"
	adminpb "cloud.google.com/go/spanner/admin/database/apiv1/databasepb"
	"google.golang.org/api/iterator"
)

func listBackups(ctx context.Context, w io.Writer, db, backupID string) error {
	adminClient, err := database.NewDatabaseAdminClient(ctx)
	if err != nil {
		return err
	}
	defer adminClient.Close()

	matches := regexp.MustCompile("^(.*)/databases/(.*)$").FindStringSubmatch(db)
	if matches == nil || len(matches) != 3 {
		return fmt.Errorf("invalid database id %s", db)
	}
	instanceName := matches[1]

	printBackups := func(iter *database.BackupIterator) error {
		for {
			resp, err := iter.Next()
			if err == iterator.Done {
				return nil
			}
			if err != nil {
				return err
			}
			fmt.Fprintf(w, "Backup %s\n", resp.Name)
		}
	}

	var iter *database.BackupIterator
	var filter string
	// List all backups.
	iter = adminClient.ListBackups(ctx, &adminpb.ListBackupsRequest{
		Parent: instanceName,
	})
	if err := printBackups(iter); err != nil {
		return err
	}

	// List all backups that contain a name.
	iter = adminClient.ListBackups(ctx, &adminpb.ListBackupsRequest{
		Parent: instanceName,
		Filter: "name:" + backupID,
	})
	if err := printBackups(iter); err != nil {
		return err
	}

	// List all backups that expire before a timestamp.
	expireTime := time.Now().AddDate(0, 0, 30)
	filter = fmt.Sprintf(`expire_time < "%s"`, expireTime.Format(time.RFC3339))
	iter = adminClient.ListBackups(ctx, &adminpb.ListBackupsRequest{
		Parent: instanceName,
		Filter: filter,
	})
	if err := printBackups(iter); err != nil {
		return err
	}

	// List all backups for a database that contains a name.
	iter = adminClient.ListBackups(ctx, &adminpb.ListBackupsRequest{
		Parent: instanceName,
		Filter: "database:" + db,
	})
	if err := printBackups(iter); err != nil {
		return err
	}

	// List all backups with a size greater than some bytes.
	iter = adminClient.ListBackups(ctx, &adminpb.ListBackupsRequest{
		Parent: instanceName,
		Filter: "size_bytes > 100",
	})
	if err := printBackups(iter); err != nil {
		return err
	}

	// List backups that were created after a timestamp that are also ready.
	createTime := time.Now().AddDate(0, 0, -1)
	filter = fmt.Sprintf(
		`create_time >= "%s" AND state:READY`,
		createTime.Format(time.RFC3339),
	)
	iter = adminClient.ListBackups(ctx, &adminpb.ListBackupsRequest{
		Parent: instanceName,
		Filter: filter,
	})
	if err := printBackups(iter); err != nil {
		return err
	}

	// List backups with pagination.
	request := &adminpb.ListBackupsRequest{
		Parent:   instanceName,
		PageSize: 10,
	}
	for {
		iter = adminClient.ListBackups(ctx, request)
		if err := printBackups(iter); err != nil {
			return err
		}
		pageToken := iter.PageInfo().Token
		if pageToken == "" {
			break
		} else {
			request.PageToken = pageToken
		}
	}

	fmt.Fprintf(w, "Backups listed.\n")
	return nil
}

Java

static void listBackups(
    DatabaseAdminClient dbAdminClient, String projectId,
    String instanceId, String databaseId, String backupId) {
  InstanceName instanceName = InstanceName.of(projectId, instanceId);
  // List all backups.
  System.out.println("All backups:");
  for (Backup backup : dbAdminClient.listBackups(
      instanceName.toString()).iterateAll()) {
    System.out.println(backup);
  }

  // List all backups with a specific name.
  System.out.println(
      String.format("All backups with backup name containing \"%s\":", backupId));
  ListBackupsRequest listBackupsRequest =
      ListBackupsRequest.newBuilder().setParent(instanceName.toString())
          .setFilter(String.format("name:%s", backupId)).build();
  for (Backup backup : dbAdminClient.listBackups(listBackupsRequest).iterateAll()) {
    System.out.println(backup);
  }

  // List all backups for databases whose name contains a certain text.
  System.out.println(
      String.format(
          "All backups for databases with a name containing \"%s\":", databaseId));
  listBackupsRequest =
      ListBackupsRequest.newBuilder().setParent(instanceName.toString())
          .setFilter(String.format("database:%s", databaseId)).build();
  for (Backup backup : dbAdminClient.listBackups(listBackupsRequest).iterateAll()) {
    System.out.println(backup);
  }

  // List all backups that expire before a certain time.
  com.google.cloud.Timestamp expireTime = com.google.cloud.Timestamp.ofTimeMicroseconds(
      TimeUnit.MICROSECONDS.convert(
          System.currentTimeMillis() + TimeUnit.DAYS.toMillis(30), TimeUnit.MILLISECONDS));

  System.out.println(String.format("All backups that expire before %s:", expireTime));
  listBackupsRequest =
      ListBackupsRequest.newBuilder().setParent(instanceName.toString())
          .setFilter(String.format("expire_time < \"%s\"", expireTime)).build();

  for (Backup backup : dbAdminClient.listBackups(listBackupsRequest).iterateAll()) {
    System.out.println(backup);
  }

  // List all backups with size greater than a certain number of bytes.
  listBackupsRequest =
      ListBackupsRequest.newBuilder().setParent(instanceName.toString())
          .setFilter("size_bytes > 100").build();

  System.out.println("All backups with size greater than 100 bytes:");
  for (Backup backup : dbAdminClient.listBackups(listBackupsRequest).iterateAll()) {
    System.out.println(backup);
  }

  // List all backups with a create time after a certain timestamp and that are also ready.
  com.google.cloud.Timestamp createTime = com.google.cloud.Timestamp.ofTimeMicroseconds(
      TimeUnit.MICROSECONDS.convert(
          System.currentTimeMillis() - TimeUnit.DAYS.toMillis(1), TimeUnit.MILLISECONDS));

  System.out.println(
      String.format(
          "All databases created after %s and that are ready:", createTime.toString()));
  listBackupsRequest =
      ListBackupsRequest.newBuilder().setParent(instanceName.toString())
          .setFilter(String.format(
              "create_time >= \"%s\" AND state:READY", createTime.toString())).build();
  for (Backup backup : dbAdminClient.listBackups(listBackupsRequest).iterateAll()) {
    System.out.println(backup);
  }

  // List backups using pagination.
  System.out.println("All backups, listed using pagination:");
  listBackupsRequest =
      ListBackupsRequest.newBuilder().setParent(instanceName.toString()).setPageSize(10).build();
  while (true) {
    ListBackupsPagedResponse response = dbAdminClient.listBackups(listBackupsRequest);
    for (Backup backup : response.getPage().iterateAll()) {
      System.out.println(backup);
    }
    String nextPageToken = response.getNextPageToken();
    if (!Strings.isNullOrEmpty(nextPageToken)) {
      listBackupsRequest = listBackupsRequest.toBuilder().setPageToken(nextPageToken).build();
    } else {
      break;
    }
  }
}

Node.js


// Imports the Google Cloud client library
const {Spanner} = require('@google-cloud/spanner');

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const projectId = 'my-project-id';
// const instanceId = 'my-instance';
// const databaseId = 'my-database';
// const backupId = 'my-backup';

// Creates a client
const spanner = new Spanner({
  projectId: projectId,
});

// Gets a reference to a Cloud Spanner Database Admin Client object
const databaseAdminClient = spanner.getDatabaseAdminClient();

try {
  // Get the parent(instance) of the database
  const parent = databaseAdminClient.instancePath(projectId, instanceId);

  // List all backups
  const [allBackups] = await databaseAdminClient.listBackups({
    parent: parent,
  });

  console.log('All backups:');
  allBackups.forEach(backups => {
    if (backups.name) {
      const backup = backups.name;
      const delimiter =
        'projects/' + projectId + '/instances/' + instanceId + '/backups/';
      const result = backup.substring(delimiter.length);
      console.log(result);
    }
  });

  // List backups filtered by backup name
  const [backupsByName] = await databaseAdminClient.listBackups({
    parent: parent,
    filter: `Name:${backupId}`,
  });
  console.log('Backups matching backup name:');
  backupsByName.forEach(backup => {
    if (backup.name) {
      const backupName = backup.name;
      const delimiter =
        'projects/' + projectId + '/instances/' + instanceId + '/backups/';
      const result = backupName.substring(delimiter.length);
      console.log(result);
    }
  });

  // List backups expiring within 30 days
  const expireTime = new Date();
  expireTime.setDate(expireTime.getDate() + 30);
  const [backupsByExpiry] = await databaseAdminClient.listBackups({
    parent: parent,
    filter: `expire_time < "${expireTime.toISOString()}"`,
  });
  console.log('Backups expiring within 30 days:');
  backupsByExpiry.forEach(backup => {
    if (backup.name) {
      const backupName = backup.name;
      const delimiter =
        'projects/' + projectId + '/instances/' + instanceId + '/backups/';
      const result = backupName.substring(delimiter.length);
      console.log(result);
    }
  });

  // List backups filtered by database name
  const [backupsByDbName] = await databaseAdminClient.listBackups({
    parent: parent,
    filter: `Database:${databaseId}`,
  });
  console.log('Backups matching database name:');
  backupsByDbName.forEach(backup => {
    if (backup.name) {
      const backupName = backup.name;
      const delimiter =
        'projects/' + projectId + '/instances/' + instanceId + '/backups/';
      const result = backupName.substring(delimiter.length);
      console.log(result);
    }
  });

  // List backups filtered by backup size
  const [backupsBySize] = await databaseAdminClient.listBackups({
    parent: parent,
    filter: 'size_bytes > 100',
  });
  console.log('Backups filtered by size:');
  backupsBySize.forEach(backup => {
    if (backup.name) {
      const backupName = backup.name;
      const delimiter =
        'projects/' + projectId + '/instances/' + instanceId + '/backups/';
      const result = backupName.substring(delimiter.length);
      console.log(result);
    }
  });

  // List backups that are ready that were created after a certain time
  const createTime = new Date();
  createTime.setDate(createTime.getDate() - 1);
  const [backupsByCreateTime] = await databaseAdminClient.listBackups({
    parent: parent,
    filter: `(state:READY) AND (create_time >= "${createTime.toISOString()}")`,
  });
  console.log('Ready backups filtered by create time:');
  backupsByCreateTime.forEach(backup => {
    if (backup.name) {
      const backupName = backup.name;
      const delimiter =
        'projects/' + projectId + '/instances/' + instanceId + '/backups/';
      const result = backupName.substring(delimiter.length);
      console.log(result);
    }
  });

  // List backups using pagination
  console.log('Get backups paginated:');
  const [backups] = await databaseAdminClient.listBackups({
    parent: parent,
    pageSize: 3,
  });
  backups.forEach(backup => {
    if (backup.name) {
      const backupName = backup.name;
      const delimiter =
        'projects/' + projectId + '/instances/' + instanceId + '/backups/';
      const result = backupName.substring(delimiter.length);
      console.log(result);
    }
  });
} catch (err) {
  console.error('ERROR:', err);
}

PHP

use Google\Cloud\Spanner\Admin\Database\V1\Client\DatabaseAdminClient;
use Google\Cloud\Spanner\Admin\Database\V1\ListBackupsRequest;

/**
 * List backups in an instance.
 * Example:
 * ```
 * list_backups($projectId, $instanceId);
 * ```
 *
 * @param string $projectId The Google Cloud project ID.
 * @param string $instanceId The Spanner instance ID.
 */
function list_backups(string $projectId, string $instanceId): void
{
    $databaseAdminClient = new DatabaseAdminClient();
    $parent = DatabaseAdminClient::instanceName($projectId, $instanceId);

    // List all backups.
    print('All backups:' . PHP_EOL);
    $request = new ListBackupsRequest([
        'parent' => $parent
    ]);
    $backups = $databaseAdminClient->listBackups($request)->iterateAllElements();
    foreach ($backups as $backup) {
        print('  ' . basename($backup->getName()) . PHP_EOL);
    }

    // List all backups that contain a name.
    $backupName = 'backup-test-';
    print("All backups with name containing \"$backupName\":" . PHP_EOL);
    $filter = "name:$backupName";
    $request = new ListBackupsRequest([
        'parent' => $parent,
        'filter' => $filter
    ]);
    $backups = $databaseAdminClient->listBackups($request)->iterateAllElements();
    foreach ($backups as $backup) {
        print('  ' . basename($backup->getName()) . PHP_EOL);
    }

    // List all backups for a database that contains a name.
    $databaseId = 'test-';
    print("All backups for a database which name contains \"$databaseId\":" . PHP_EOL);
    $filter = "database:$databaseId";
    $request = new ListBackupsRequest([
        'parent' => $parent,
        'filter' => $filter
    ]);
    $backups = $databaseAdminClient->listBackups($request)->iterateAllElements();
    foreach ($backups as $backup) {
        print('  ' . basename($backup->getName()) . PHP_EOL);
    }

    // List all backups that expire before a timestamp.
    $expireTime = (new \DateTime('+30 days'))->format('c');
    print("All backups that expire before $expireTime:" . PHP_EOL);
    $filter = "expire_time < \"$expireTime\"";
    $request = new ListBackupsRequest([
        'parent' => $parent,
        'filter' => $filter
    ]);
    $backups = $databaseAdminClient->listBackups($request)->iterateAllElements();
    foreach ($backups as $backup) {
        print('  ' . basename($backup->getName()) . PHP_EOL);
    }

    // List all backups with a size greater than some bytes.
    $size = 500;
    print("All backups with size greater than $size bytes:" . PHP_EOL);
    $filter = "size_bytes > $size";
    $request = new ListBackupsRequest([
        'parent' => $parent,
        'filter' => $filter
    ]);
    $backups = $databaseAdminClient->listBackups($request)->iterateAllElements();
    foreach ($backups as $backup) {
        print('  ' . basename($backup->getName()) . PHP_EOL);
    }

    // List backups that were created after a timestamp that are also ready.
    $createTime = (new \DateTime('-1 day'))->format('c');
    print("All backups created after $createTime:" . PHP_EOL);
    $filter = "create_time >= \"$createTime\" AND state:READY";
    $request = new ListBackupsRequest([
        'parent' => $parent,
        'filter' => $filter
    ]);
    $backups = $databaseAdminClient->listBackups($request)->iterateAllElements();
    foreach ($backups as $backup) {
        print('  ' . basename($backup->getName()) . PHP_EOL);
    }

    // List backups with pagination.
    print('All backups with pagination:' . PHP_EOL);
    $request = new ListBackupsRequest([
        'parent' => $parent,
        'page_size' => 2
    ]);
    $pages = $databaseAdminClient->listBackups($request)->iteratePages();
    foreach ($pages as $pageNumber => $page) {
        print("All backups, page $pageNumber:" . PHP_EOL);
        foreach ($page as $backup) {
            print('  ' . basename($backup->getName()) . PHP_EOL);
        }
    }
}

Python

def list_backups(instance_id, database_id, backup_id):
    from google.cloud.spanner_admin_database_v1.types import \
        backup as backup_pb

    spanner_client = spanner.Client()
    database_admin_api = spanner_client.database_admin_api

    # List all backups.
    print("All backups:")
    request = backup_pb.ListBackupsRequest(
        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
        filter="",
    )
    operations = database_admin_api.list_backups(request)
    for backup in operations:
        print(backup.name)

    # List all backups that contain a name.
    print('All backups with backup name containing "{}":'.format(backup_id))
    request = backup_pb.ListBackupsRequest(
        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
        filter="name:{}".format(backup_id),
    )
    operations = database_admin_api.list_backups(request)
    for backup in operations:
        print(backup.name)

    # List all backups for a database that contains a name.
    print('All backups with database name containing "{}":'.format(database_id))
    request = backup_pb.ListBackupsRequest(
        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
        filter="database:{}".format(database_id),
    )
    operations = database_admin_api.list_backups(request)
    for backup in operations:
        print(backup.name)

    # List all backups that expire before a timestamp.
    expire_time = datetime.utcnow().replace(microsecond=0) + timedelta(days=30)
    print(
        'All backups with expire_time before "{}-{}-{}T{}:{}:{}Z":'.format(
            *expire_time.timetuple()
        )
    )
    request = backup_pb.ListBackupsRequest(
        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
        filter='expire_time < "{}-{}-{}T{}:{}:{}Z"'.format(*expire_time.timetuple()),
    )
    operations = database_admin_api.list_backups(request)
    for backup in operations:
        print(backup.name)

    # List all backups with a size greater than some bytes.
    print("All backups with backup size more than 100 bytes:")
    request = backup_pb.ListBackupsRequest(
        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
        filter="size_bytes > 100",
    )
    operations = database_admin_api.list_backups(request)
    for backup in operations:
        print(backup.name)

    # List backups that were created after a timestamp that are also ready.
    create_time = datetime.utcnow().replace(microsecond=0) - timedelta(days=1)
    print(
        'All backups created after "{}-{}-{}T{}:{}:{}Z" and are READY:'.format(
            *create_time.timetuple()
        )
    )
    request = backup_pb.ListBackupsRequest(
        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
        filter='create_time >= "{}-{}-{}T{}:{}:{}Z" AND state:READY'.format(
            *create_time.timetuple()
        ),
    )
    operations = database_admin_api.list_backups(request)
    for backup in operations:
        print(backup.name)

    print("All backups with pagination")
    # If there are multiple pages, additional ``ListBackup``
    # requests will be made as needed while iterating.
    paged_backups = set()
    request = backup_pb.ListBackupsRequest(
        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
        page_size=2,
    )
    operations = database_admin_api.list_backups(request)
    for backup in operations:
        paged_backups.add(backup.name)
    for backup in paged_backups:
        print(backup)

Ruby

# project_id  = "Your Google Cloud project ID"
# instance_id = "Your Spanner instance ID"
# backup_id = "Your Spanner database backup ID"
# database_id = "Your Spanner databaseID"

require "google/cloud/spanner"
require "google/cloud/spanner/admin/database"

database_admin_client = Google::Cloud::Spanner::Admin::Database.database_admin
instance_path = database_admin_client.instance_path project: project_id, instance: instance_id

puts "All backups"
database_admin_client.list_backups(parent: instance_path).each do |backup|
  puts backup.name
end

puts "All backups with backup name containing \"#{backup_id}\":"
database_admin_client.list_backups(parent: instance_path, filter: "name:#{backup_id}").each do |backup|
  puts backup.name
end

puts "All backups for databases with a name containing \"#{database_id}\":"
database_admin_client.list_backups(parent: instance_path, filter: "database:#{database_id}").each do |backup|
  puts backup.name
end

puts "All backups that expire before a timestamp:"
expire_time = Time.now + (30 * 24 * 3600) # 30 days from now
database_admin_client.list_backups(parent: instance_path, filter: "expire_time < \"#{expire_time.iso8601}\"").each do |backup|
  puts backup.name
end

puts "All backups with a size greater than 500 bytes:"
database_admin_client.list_backups(parent: instance_path, filter: "size_bytes >= 500").each do |backup|
  puts backup.name
end

puts "All backups that were created after a timestamp that are also ready:"
create_time = Time.now - (24 * 3600) # From 1 day ago
database_admin_client.list_backups(parent: instance_path, filter: "create_time >= \"#{create_time.iso8601}\" AND state:READY").each do |backup|
  puts backup.name
end

puts "All backups with pagination:"
list = database_admin_client.list_backups parent: instance_path, page_size: 5
list.each do |backup|
  puts backup.name
end

Update the backup expiration time

Console

  1. Go to the Spanner Instances page in the Google Cloud console.

    Go to the Spanner Instances page

  2. Click the instance that contains the database to open the instance's Overview page.

  3. Click the database to open its Overview page.

  4. In the navigation pane, click Backup/Restore.

  5. Click the Actions button for the selected backup, then select Update metadata.

  6. Select the new expiration date.

  7. Click Update.

gcloud

To update a backup's expiration date, use gcloud spanner backups update-metadata:

Before using any of the command data below, make the following replacements:

  • PROJECT_ID: the project ID.
  • BACKUP_ID: the Spanner backup ID.
  • INSTANCE_ID: the Spanner instance ID.
  • EXPIRATION_DATE: the expiration date and time.
  • DATABASE_ID: the Spanner database ID.

Execute the following command:

Linux, macOS, or Cloud Shell

gcloud spanner backups update-metadata BACKUP_ID \
--instance=INSTANCE_ID \
--expiration-date=EXPIRATION_DATE

Windows (PowerShell)

gcloud spanner backups update-metadata BACKUP_ID `
--instance=INSTANCE_ID `
--expiration-date=EXPIRATION_DATE

Windows (cmd.exe)

gcloud spanner backups update-metadata BACKUP_ID ^
--instance=INSTANCE_ID ^
--expiration-date=EXPIRATION_DATE

You should receive a response similar to the following:

createTime: '2020-02-04T02:05:43.920377Z'
database: projects/PROJECT_ID/instances/INSTANCE_ID/databases/DATABASE_ID
expireTime: '2020-05-05T00:00:00Z'
name: projects/PROJECT_ID/instances/INSTANCE_ID/backups/BACKUP_ID
sizeBytes: '1000000000'
state: READY
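
As an illustration (assuming EXPIRATION_DATE accepts an ISO 8601 timestamp, matching the expireTime value shown in the output above), a concrete invocation might look like this:

gcloud spanner backups update-metadata BACKUP_ID \
--instance=INSTANCE_ID \
--expiration-date=2020-05-05T00:00:00Z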

Client libraries

The following code example gets a backup's expiration time and extends it.

C++

void UpdateBackup(google::cloud::spanner_admin::DatabaseAdminClient client,
                  std::string const& project_id, std::string const& instance_id,
                  std::string const& backup_id,
                  absl::Duration expiry_extension) {
  google::cloud::spanner::Backup backup_name(
      google::cloud::spanner::Instance(project_id, instance_id), backup_id);
  auto backup = client.GetBackup(backup_name.FullName());
  if (!backup) throw std::move(backup).status();
  auto expire_time =
      google::cloud::spanner::MakeTimestamp(backup->expire_time())
          .value()
          .get<absl::Time>()
          .value();
  expire_time += expiry_extension;
  auto max_expire_time =
      google::cloud::spanner::MakeTimestamp(backup->max_expire_time())
          .value()
          .get<absl::Time>()
          .value();
  if (expire_time > max_expire_time) expire_time = max_expire_time;
  google::spanner::admin::database::v1::UpdateBackupRequest request;
  request.mutable_backup()->set_name(backup_name.FullName());
  *request.mutable_backup()->mutable_expire_time() =
      google::cloud::spanner::MakeTimestamp(expire_time)
          .value()
          .get<google::protobuf::Timestamp>()
          .value();
  request.mutable_update_mask()->add_paths("expire_time");
  backup = client.UpdateBackup(request);
  if (!backup) throw std::move(backup).status();
  std::cout
      << "Backup " << backup->name() << " updated to expire at "
      << google::cloud::spanner::MakeTimestamp(backup->expire_time()).value()
      << ".\n";
}

C#


using Google.Cloud.Spanner.Admin.Database.V1;
using Google.Protobuf.WellKnownTypes;
using System;

public class UpdateBackupSample
{
    public Backup UpdateBackup(string projectId, string instanceId, string backupId)
    {
        // Create the DatabaseAdminClient instance.
        DatabaseAdminClient databaseAdminClient = DatabaseAdminClient.Create();

        // Retrieve existing backup.
        BackupName backupName = BackupName.FromProjectInstanceBackup(projectId, instanceId, backupId);
        Backup backup = databaseAdminClient.GetBackup(backupName);

        // Add 1 hour to the existing ExpireTime.
        backup.ExpireTime = backup.ExpireTime.ToDateTime().AddHours(1).ToTimestamp();

        UpdateBackupRequest backupUpdateRequest = new UpdateBackupRequest
        {
            UpdateMask = new FieldMask
            {
                Paths = { "expire_time" }
            },
            Backup = backup
        };

        // Make the UpdateBackup requests.
        var updatedBackup = databaseAdminClient.UpdateBackup(backupUpdateRequest);

        Console.WriteLine($"Updated Backup ExpireTime: {updatedBackup.ExpireTime}");

        return updatedBackup;
    }
}

Go


import (
	"context"
	"fmt"
	"io"
	"regexp"
	"time"

	database "cloud.google.com/go/spanner/admin/database/apiv1"
	adminpb "cloud.google.com/go/spanner/admin/database/apiv1/databasepb"
	"google.golang.org/genproto/protobuf/field_mask"
	"google.golang.org/protobuf/types/known/timestamppb"
)

// updateBackup updates the expiration time of a pending or completed backup.
func updateBackup(w io.Writer, db string, backupID string) error {
	// db := "projects/my-project/instances/my-instance/databases/my-database"
	// backupID := "my-backup"

	// Add timeout to context.
	ctx, cancel := context.WithTimeout(context.Background(), time.Hour)
	defer cancel()

	adminClient, err := database.NewDatabaseAdminClient(ctx)
	if err != nil {
		return err
	}
	defer adminClient.Close()

	matches := regexp.MustCompile("^(.*)/databases/(.*)$").FindStringSubmatch(db)
	if matches == nil || len(matches) != 3 {
		return fmt.Errorf("invalid database id %s", db)
	}
	backupName := matches[1] + "/backups/" + backupID

	// Get the backup instance.
	backup, err := adminClient.GetBackup(ctx, &adminpb.GetBackupRequest{Name: backupName})
	if err != nil {
		return err
	}

	// Expire time must be within 366 days of the create time of the backup.
	maxExpireTime := time.Unix(backup.MaxExpireTime.Seconds, int64(backup.MaxExpireTime.Nanos))
	expireTime := time.Unix(backup.ExpireTime.Seconds, int64(backup.ExpireTime.Nanos)).AddDate(0, 0, 30)

	// Ensure that new expire time is less than the max expire time.
	if expireTime.After(maxExpireTime) {
		expireTime = maxExpireTime
	}
	expireTimepb := timestamppb.New(expireTime)

	// Make the update backup request.
	_, err = adminClient.UpdateBackup(ctx, &adminpb.UpdateBackupRequest{
		Backup: &adminpb.Backup{
			Name:       backupName,
			ExpireTime: expireTimepb,
		},
		UpdateMask: &field_mask.FieldMask{Paths: []string{"expire_time"}},
	})
	if err != nil {
		return err
	}

	fmt.Fprintf(w, "Updated backup %s with expire time %s\n", backupName, expireTime)

	return nil
}

Java

static void updateBackup(DatabaseAdminClient dbAdminClient, String projectId,
    String instanceId, String backupId) {
  BackupName backupName = BackupName.of(projectId, instanceId, backupId);

  // Get current backup metadata.
  Backup backup = dbAdminClient.getBackup(backupName);
  // Add 30 days to the expire time.
  // Expire time must be within 366 days of the create time of the backup.
  Timestamp currentExpireTime = backup.getExpireTime();
  com.google.cloud.Timestamp newExpireTime =
      com.google.cloud.Timestamp.ofTimeMicroseconds(
          TimeUnit.SECONDS.toMicros(currentExpireTime.getSeconds())
              + TimeUnit.NANOSECONDS.toMicros(currentExpireTime.getNanos())
              + TimeUnit.DAYS.toMicros(30L));

  // New Expire Time must be less than Max Expire Time
  newExpireTime =
      newExpireTime.compareTo(com.google.cloud.Timestamp.fromProto(backup.getMaxExpireTime()))
          < 0 ? newExpireTime : com.google.cloud.Timestamp.fromProto(backup.getMaxExpireTime());

  System.out.println(String.format(
      "Updating expire time of backup [%s] to %s...",
      backupId.toString(),
      java.time.OffsetDateTime.ofInstant(
          Instant.ofEpochSecond(newExpireTime.getSeconds(),
              newExpireTime.getNanos()), ZoneId.systemDefault())));

  // Update expire time.
  backup = backup.toBuilder().setExpireTime(newExpireTime.toProto()).build();
  dbAdminClient.updateBackup(backup,
      FieldMask.newBuilder().addAllPaths(Lists.newArrayList("expire_time")).build());
  System.out.println("Updated backup [" + backupId + "]");
}

Node.js


// Imports the Google Cloud client library and precise date library
const {Spanner, protos} = require('@google-cloud/spanner');
const {PreciseDate} = require('@google-cloud/precise-date');

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const projectId = 'my-project-id';
// const instanceId = 'my-instance';
// const backupId = 'my-backup';

// Creates a client
const spanner = new Spanner({
  projectId: projectId,
});

// Gets a reference to a Cloud Spanner Database Admin Client object
const databaseAdminClient = spanner.getDatabaseAdminClient();

// Read backup metadata and update expiry time
try {
  const [metadata] = await databaseAdminClient.getBackup({
    name: databaseAdminClient.backupPath(projectId, instanceId, backupId),
  });

  const currentExpireTime = metadata.expireTime;
  const maxExpireTime = metadata.maxExpireTime;
  const wantExpireTime = new PreciseDate(currentExpireTime);
  wantExpireTime.setDate(wantExpireTime.getDate() + 1);

  // New expire time should be less than the max expire time
  const min = (currentExpireTime, maxExpireTime) =>
    currentExpireTime < maxExpireTime ? currentExpireTime : maxExpireTime;
  const newExpireTime = new PreciseDate(min(wantExpireTime, maxExpireTime));
  console.log(
    `Backup ${backupId} current expire time: ${Spanner.timestamp(
      currentExpireTime
    ).toISOString()}`
  );
  console.log(
    `Updating expire time to ${Spanner.timestamp(
      newExpireTime
    ).toISOString()}`
  );

  await databaseAdminClient.updateBackup({
    backup: {
      name: databaseAdminClient.backupPath(projectId, instanceId, backupId),
      expireTime: Spanner.timestamp(newExpireTime).toStruct(),
    },
    updateMask: (protos.google.protobuf.FieldMask = {
      paths: ['expire_time'],
    }),
  });
  console.log('Expire time updated.');
} catch (err) {
  console.error('ERROR:', err);
}

PHP

use Google\Cloud\Spanner\Admin\Database\V1\Backup;
use Google\Cloud\Spanner\Admin\Database\V1\UpdateBackupRequest;
use Google\Cloud\Spanner\Admin\Database\V1\Client\DatabaseAdminClient;
use Google\Protobuf\Timestamp;

/**
 * Update the backup expire time.
 * Example:
 * ```
 * update_backup($projectId, $instanceId, $backupId);
 * ```
 * @param string $projectId The Google Cloud project ID.
 * @param string $instanceId The Spanner instance ID.
 * @param string $backupId The Spanner backup ID.
 */
function update_backup(string $projectId, string $instanceId, string $backupId): void
{
    $databaseAdminClient = new DatabaseAdminClient();
    $backupName = DatabaseAdminClient::backupName($projectId, $instanceId, $backupId);
    $newExpireTime = new Timestamp();
    $newExpireTime->setSeconds((new \DateTime('+30 days'))->getTimestamp());
    $request = new UpdateBackupRequest([
        'backup' => new Backup([
            'name' => $backupName,
            'expire_time' => $newExpireTime
        ]),
        'update_mask' => new \Google\Protobuf\FieldMask(['paths' => ['expire_time']])
    ]);

    $info = $databaseAdminClient->updateBackup($request);
    printf('Backup %s new expire time: %d' . PHP_EOL, basename($info->getName()), $info->getExpireTime()->getSeconds());
}

Python

def update_backup(instance_id, backup_id):
    from google.cloud.spanner_admin_database_v1.types import \
        backup as backup_pb

    spanner_client = spanner.Client()
    database_admin_api = spanner_client.database_admin_api

    backup = database_admin_api.get_backup(
        backup_pb.GetBackupRequest(
            name=database_admin_api.backup_path(
                spanner_client.project, instance_id, backup_id
            ),
        )
    )

    # Expire time must be within 366 days of the create time of the backup.
    old_expire_time = backup.expire_time
    # New expire time should be less than the max expire time
    new_expire_time = min(backup.max_expire_time, old_expire_time + timedelta(days=30))
    database_admin_api.update_backup(
        backup_pb.UpdateBackupRequest(
            backup=backup_pb.Backup(name=backup.name, expire_time=new_expire_time),
            update_mask={"paths": ["expire_time"]},
        )
    )
    print(
        "Backup {} expire time was updated from {} to {}.".format(
            backup.name, old_expire_time, new_expire_time
        )
    )

Ruby

# project_id  = "Your Google Cloud project ID"
# instance_id = "Your Spanner instance ID"
# backup_id = "Your Spanner backup ID"

require "google/cloud/spanner"
require "google/cloud/spanner/admin/database"

database_admin_client = Google::Cloud::Spanner::Admin::Database.database_admin
instance_path = database_admin_client.instance_path project: project_id, instance: instance_id
backup_path = database_admin_client.backup_path project: project_id,
                                                instance: instance_id,
                                                backup: backup_id
backup = database_admin_client.get_backup name: backup_path
backup.expire_time = Time.now + (60 * 24 * 3600) # Set the expire time to 60 days from now.
database_admin_client.update_backup backup: backup,
                                    update_mask: { paths: ["expire_time"] }

puts "Expiration time updated: #{backup.expire_time}"

Delete a backup

When you delete a backup, Spanner frees the storage and all other resources associated with that backup.

If you delete a backup that is still being created, Spanner also cancels the long-running backup operation.

Deleting an incremental backup might not free storage if a younger incremental backup depends on it. For more information, see the backups overview.

Console

  1. Go to the Spanner Instances page in the Google Cloud console.

    Go to the Spanner Instances page

  2. Click the instance that contains the database to open the instance's Overview page.

  3. Click the database to open its Overview page.

  4. In the navigation pane, click Backup/Restore.

  5. Click the Actions button for the selected backup, then select Delete.

  6. Enter the backup ID.

  7. Click Delete.

gcloud

To delete a backup, use gcloud spanner backups delete.

Before using any of the command data below, make the following replacements:

  • INSTANCE_ID: the Spanner instance ID.
  • BACKUP_NAME: the Spanner backup name.

Execute the following command:

Linux, macOS, or Cloud Shell

gcloud spanner backups delete BACKUP_NAME --instance=INSTANCE_ID

Windows (PowerShell)

gcloud spanner backups delete BACKUP_NAME --instance=INSTANCE_ID

Windows (cmd.exe)

gcloud spanner backups delete BACKUP_NAME --instance=INSTANCE_ID

You should receive a response similar to the following:

You are about to delete backup BACKUP_NAME

Do you want to continue (Y/n)?  Y

Deleted backup BACKUP_NAME.

Client libraries

The following code example deletes a backup and verifies that it was deleted.

C++

void DeleteBackup(google::cloud::spanner_admin::DatabaseAdminClient client,
                  std::string const& project_id, std::string const& instance_id,
                  std::string const& backup_id) {
  google::cloud::spanner::Backup backup(
      google::cloud::spanner::Instance(project_id, instance_id), backup_id);
  auto status = client.DeleteBackup(backup.FullName());
  if (!status.ok()) throw std::move(status);
  std::cout << "Backup " << backup.FullName() << " was deleted.\n";
}

C#


using Google.Cloud.Spanner.Admin.Database.V1;
using System;

public class DeleteBackupSample
{
    public void DeleteBackup(string projectId, string instanceId, string backupId)
    {
        // Create the DatabaseAdminClient instance.
        DatabaseAdminClient databaseAdminClient = DatabaseAdminClient.Create();

        // Make the DeleteBackup request.
        BackupName backupName = BackupName.FromProjectInstanceBackup(projectId, instanceId, backupId);
        databaseAdminClient.DeleteBackup(backupName);

        Console.WriteLine("Backup deleted successfully.");
    }
}

Go


import (
	"context"
	"fmt"
	"io"
	"regexp"

	database "cloud.google.com/go/spanner/admin/database/apiv1"
	adminpb "cloud.google.com/go/spanner/admin/database/apiv1/databasepb"
)

func deleteBackup(ctx context.Context, w io.Writer, db, backupID string) error {
	adminClient, err := database.NewDatabaseAdminClient(ctx)
	if err != nil {
		return err
	}
	defer adminClient.Close()

	matches := regexp.MustCompile("^(.*)/databases/(.*)$").FindStringSubmatch(db)
	if matches == nil || len(matches) != 3 {
		return fmt.Errorf("invalid database id %s", db)
	}
	backupName := matches[1] + "/backups/" + backupID
	// Delete the backup.
	err = adminClient.DeleteBackup(ctx, &adminpb.DeleteBackupRequest{Name: backupName})
	if err != nil {
		return err
	}
	fmt.Fprintf(w, "Deleted backup %s\n", backupID)
	return nil
}

Java

static void deleteBackup(DatabaseAdminClient dbAdminClient,
    String project, String instance, String backupId) {
  BackupName backupName = BackupName.of(project, instance, backupId);

  // Delete the backup.
  System.out.println("Deleting backup [" + backupId + "]...");
  dbAdminClient.deleteBackup(backupName);
  // Verify that the backup is deleted.
  try {
    dbAdminClient.getBackup(backupName);
  } catch (NotFoundException e) {
    if (e.getStatusCode().getCode() == Code.NOT_FOUND) {
      System.out.println("Deleted backup [" + backupId + "]");
    } else {
      System.out.println("Delete backup [" + backupId + "] failed");
      throw new RuntimeException("Delete backup [" + backupId + "] failed", e);
    }
  }
}

Node.js


// Imports the Google Cloud client library
const {Spanner} = require('@google-cloud/spanner');

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const projectId = 'my-project-id';
// const instanceId = 'my-instance';
// const databaseId = 'my-database';
// const backupId = 'my-backup';

// Creates a client
const spanner = new Spanner({
  projectId: projectId,
});

// Gets a reference to a Cloud Spanner Database Admin Client object
const databaseAdminClient = spanner.getDatabaseAdminClient();

// Delete the backup
console.log(`Deleting backup ${backupId}.`);
await databaseAdminClient.deleteBackup({
  name: databaseAdminClient.backupPath(projectId, instanceId, backupId),
});
console.log('Backup deleted.');

// Verify backup no longer exists
try {
  await databaseAdminClient.getBackup({
    name: databaseAdminClient.backupPath(projectId, instanceId, backupId),
  });
  console.error('Error: backup still exists.');
} catch (err) {
  console.log('Backup deleted.');
}

PHP

use Google\Cloud\Spanner\Admin\Database\V1\Client\DatabaseAdminClient;
use Google\Cloud\Spanner\Admin\Database\V1\DeleteBackupRequest;

/**
 * Delete a backup.
 * Example:
 * ```
 * delete_backup($projectId, $instanceId, $backupId);
 * ```
 * @param string $projectId The Google Cloud project ID.
 * @param string $instanceId The Spanner instance ID.
 * @param string $backupId The Spanner backup ID.
 */
function delete_backup(string $projectId, string $instanceId, string $backupId): void
{
    $databaseAdminClient = new DatabaseAdminClient();

    $backupName = DatabaseAdminClient::backupName($projectId, $instanceId, $backupId);

    $request = new DeleteBackupRequest();
    $request->setName($backupName);
    $databaseAdminClient->deleteBackup($request);

    print("Backup $backupName deleted" . PHP_EOL);
}

Python

def delete_backup(instance_id, backup_id):
    from google.cloud.spanner_admin_database_v1.types import \
        backup as backup_pb

    spanner_client = spanner.Client()
    database_admin_api = spanner_client.database_admin_api
    backup = database_admin_api.get_backup(
        backup_pb.GetBackupRequest(
            name=database_admin_api.backup_path(
                spanner_client.project, instance_id, backup_id
            ),
        )
    )

    # Wait for databases that reference this backup to finish optimizing.
    while backup.referencing_databases:
        time.sleep(30)
        backup = database_admin_api.get_backup(
            backup_pb.GetBackupRequest(
                name=database_admin_api.backup_path(
                    spanner_client.project, instance_id, backup_id
                ),
            )
        )

    # Delete the backup.
    database_admin_api.delete_backup(backup_pb.DeleteBackupRequest(name=backup.name))

    # Verify that the backup is deleted.
    try:
        backup = database_admin_api.get_backup(
            backup_pb.GetBackupRequest(name=backup.name)
        )
    except NotFound:
        print("Backup {} has been deleted.".format(backup.name))
        return

Ruby

# project_id  = "Your Google Cloud project ID"
# instance_id = "Your Spanner instance ID"
# backup_id = "Your Spanner backup ID"

require "google/cloud/spanner"
require "google/cloud/spanner/admin/database"

database_admin_client = Google::Cloud::Spanner::Admin::Database.database_admin
instance_path = database_admin_client.instance_path project: project_id, instance: instance_id
backup_path = database_admin_client.backup_path project: project_id,
                                                instance: instance_id,
                                                backup: backup_id

database_admin_client.delete_backup name: backup_path
puts "Backup #{backup_id} deleted"

What's next