Last updated (UTC): 2025-09-04.

# Overview

This page provides an overview of disaster recovery operations in Google Distributed Cloud (GDC) air-gapped.

- GDC air-gapped provides backup capabilities for customer workloads to compatible object storage buckets, using incremental backups after the initial full backup.
- Disaster recovery in GDC requires sufficient compute and storage capacity on separate instances to run failed workloads, potentially involving the temporary shutdown of less critical applications to free up resources.
- Restoring user workloads in GDC requires custom logic in workloads, such as using `init` containers and self-validation to ensure dependencies are met before starting long-running processes.
- GDC clusters must run the same version for disaster recovery. Backward compatibility is supported, but not forward compatibility: older backups can be restored on newer versions, but not the reverse.
- GDC supports backup and restoration of Kubernetes cluster workloads, Harbor registry instances, and virtual machine (VM) instances to S3-compatible object storage, with customizable backup plans and pre-configured recovery scenarios.

Disaster recovery planning
--------------------------

You must have sufficient capacity to back up workload data, given the size of the data and its retention period.
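The capacity sizing described in this section can be sketched as simple arithmetic. This is illustrative only; the function name, parameters, and the growth headroom factor are assumptions for the example, not a GDC API:

```python
def estimated_backup_storage_gb(
    full_backup_gb: float,
    delta_gb_per_backup: float,
    backups_retained: int,
    growth_factor: float = 1.2,
) -> float:
    """Rough storage estimate for an incremental backup scheme:
    one full backup plus the retained incremental deltas, padded
    by a headroom factor for future data growth."""
    raw = full_backup_gb + delta_gb_per_backup * backups_retained
    return raw * growth_factor

# Example: a 500 GB full backup, 20 GB of changes per daily backup,
# 30 daily backups retained, 20% headroom for growth.
print(estimated_backup_storage_gb(500, 20, 30))  # roughly 1320 GB
```

Plugging in your own backup frequency and retention period this way helps confirm that the target bucket's capacity covers the full backup plus all retained deltas, with room for growth.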
GDC provides backup capabilities for customer workloads to compatible object storage buckets.

These backups are incremental: the first backup is a full backup, and each subsequent backup captures only the changes made since the last one. The storage holding the backups must have enough capacity for the initial full backup plus the accumulated changes (deltas) for the given backup frequency and retention periods. You must also account for growth of the data being backed up so that the backups don't exceed the storage capacity.

If you plan to use object storage on a remote GDC instance, you must create the storage buckets within the same organization on the target instance as the organization where the workloads run on the source instance, so that billing functions correctly. The organization control plane on one GDC instance must be backed up into buckets hosted in the same organization on another instance. You can also create buckets for backing up workloads in the same organization on the control plane backup instance.

You must also have sufficient capacity, such as compute and storage, to run any failed workloads on a separate GDC instance that you want to restore to in the case of a disaster. For example, if you have two GDC instances and you want to be able to run all workloads while only one of the instances is available, the capacity of each instance individually must be greater than or equal to the overall capacity needed to run all of the workloads. However, if only some workloads must be recovered during a system failure, your planning can include temporarily shutting down less critical applications to free up resources until the failing systems are functioning again.

Restore user workloads
----------------------

User workloads are a collection of services that interact together based on business logic that you define.
GDC does not provide complete out-of-the-box automation to restore user workloads. However, the `Backup4GDC` service can recover an entire cluster at once, or recover one namespace at a time if more granularity is needed.

To automate the restoration of a workload, you can design pods with `init` containers, which are specialized containers that run before app containers in a pod. A pod then validates its dependencies before starting its long-running container, resulting in a self-orchestrated structure.

Add your startup logic directly into your workloads as code so that you can restore an entire cluster at once and have your application self-check and validate prerequisites before it starts. For example, start the web server only after confirming that your database and credit card servers are serving traffic.

Manage version differences
--------------------------

The GDC versions of different clusters can differ. Disaster recovery requires both sites to run the same GDC version for all corresponding clusters. You must manually control the synchronization process.

The clusters being restored might have different versions when a disaster occurs, so you must consider version differences when transferring workloads. GDC is backward compatible, but not forward compatible.

A backup created with an older version can be restored in a cluster running a newer version. However, a backup created with a newer version can't be restored in a cluster running an older version. You must find compatible backups when you transfer workloads.

Back up and restore
-------------------

Backup and restore in GDC enables the backup and restoration of Kubernetes cluster workloads, Harbor registry instances, and virtual machine (VM) instances to S3-compatible object storage buckets.

### Cluster backup overview

Kubernetes cluster backups safeguard your data by capturing the state of your applications, providing both crash consistency and application consistency.
You can customize the backup process using pre- and post-execution hooks and multiple protected application strategies.

Backups are stored in S3-compatible repositories and managed through backup plans, which define their scope and schedule. Restore plans offer pre-configured recovery scenarios, allowing for quick and efficient cluster restoration.

For more information, see [Cluster backup overview](/distributed-cloud/hosted/docs/latest/gdch/platform-application/pa-ao-operations/cluster-backup/cluster-backup-overview).

### VM backup overview

GDC VM backups let you back up virtual machine workloads, including their configurations, disk images, and persistent volumes. Manage backups through backup plans, scheduling them regularly or creating them on demand. Restore VMs to a previous state or recover individual disk snapshots.

For more information, see [VM backup overview](/distributed-cloud/hosted/docs/latest/gdch/platform-application/pa-ao-operations/vm-backup/vm-backup-overview).

### Harbor backup overview

Harbor backups provide comprehensive protection for your Harbor registry instances, safeguarding against data loss and ensuring business continuity. Schedule automatic backups or create them manually.

Define retention policies for long-term data management. In the event of a disaster, restore your Harbor instance, including all artifacts and metadata, from a previously created backup.

For more information, see [Harbor backup overview](/distributed-cloud/hosted/docs/latest/gdch/platform-application/pa-ao-operations/harbor-backup/harbor-backup-overview).
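As an illustration of the self-validating startup pattern described in the *Restore user workloads* section, the following is a minimal Kubernetes Pod sketch with an `init` container. The service name `db-service`, the port, and the images are illustrative assumptions, not values defined by GDC:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  # The init container blocks Pod startup until the database dependency
  # (hypothetical service "db-service") answers on its port.
  initContainers:
  - name: wait-for-db
    image: busybox:1.36
    command:
    - sh
    - -c
    - until nc -z db-service 5432; do echo waiting for db; sleep 2; done
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
```

Because the `init` container keeps the app container from starting until its dependency is reachable, pods restored in any order converge on a working state without external orchestration.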