A `ClusterBackupRepository` resource includes the following fields (an example manifest follows this list):

- `secretReference`: the NamespacedName of the secret that contains the access credentials for the endpoint.
- `endpoint`: the fully qualified domain name of the storage system.
- `type`: the type of the backup repository. Only the `S3` type is supported.
- `s3Options`: the configuration of the S3 endpoint. Required when `type` is `S3`.
  - `bucket`: the fully qualified name of the bucket, which you can find in the status of the GDC bucket custom resource.
  - `region`: the region of the given endpoint. The region depends on the storage system.
  - `forcePathStyle`: whether to force path-style URLs for objects.
- `importPolicy`: set to one of the following:
  - `ReadWrite`: you can use this repository to schedule or create backups, backup plans, and restores.
  - `ReadOnly`: you can only use this repository to import and view backups. You can't create new backups or resources in this repository, but restores can use its read-only backups. There is no limit to the number of times a backup repository can be used as `ReadOnly`.
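For reference, a minimal `ClusterBackupRepository` manifest that sets these fields might look like the following sketch. The metadata, secret reference, endpoint, bucket, and region are illustrative values; substitute the ones for your storage system.

```yaml
apiVersion: backup.gdc.goog/v1
kind: ClusterBackupRepository
metadata:
  name: user-1-user
  namespace: user-1-user-cluster
spec:
  secretReference:
    # Secret that holds the access credentials for the endpoint.
    namespace: "object-storage-secret-ns"
    name: "object-storage-secret"
  # Fully qualified domain name of the storage system.
  endpoint: "https://objectstorage.google.gdch.test"
  # Only the S3 type is supported.
  type: "S3"
  s3Options:
    bucket: "fully-qualified-bucket-name"
    region: "us-east-1"
    forcePathStyle: true
  # ReadWrite or ReadOnly.
  importPolicy: "ReadWrite"
```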
Backup repository import policies
---------------------------------

All clusters must have at least one `ReadWrite` repository to use the backup and restore features. `ReadOnly` repositories are optional, have no limit, and are used to gain visibility into other clusters' backups for cross-cluster restores.

`ReadOnly` repositories can't be used as storage locations for additional backups, or for backup plans within the cluster they were imported into.

Importing a repository as `ReadWrite` claims the repository for that cluster, preventing other clusters from importing the same repository as `ReadWrite`. After you import a `ReadWrite` repository, all records of previous backups, backup plans, and restores in that repository are imported into the target cluster as local custom resources.

Importing a repository as `ReadOnly` doesn't claim the repository; it only imports the backups, backup plans, restores, and restore plans. Backup plans in a read-only repository don't schedule backups; they exist to show which backup plans exist in the cluster you are importing from. Removing a `ReadOnly` repository cleans up the imported resources from the cluster and has no effect on the resources in the storage location, because no write operations are performed against object storage for read-only repositories.
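As an illustration, a second cluster could import the same storage location as `ReadOnly` to browse its backups without claiming it. This is a sketch only; the resource name and namespace of the importing cluster are assumed:

```yaml
apiVersion: backup.gdc.goog/v1
kind: ClusterBackupRepository
metadata:
  # Hypothetical resource name and namespace in the importing cluster.
  name: user-1-user-imported
  namespace: user-2-user-cluster
spec:
  secretReference:
    namespace: "object-storage-secret-ns"
    name: "object-storage-secret"
  endpoint: "https://objectstorage.google.gdch.test"
  type: "S3"
  s3Options:
    # Same bucket that another cluster may already have claimed as ReadWrite.
    bucket: "fully-qualified-bucket-name"
    region: "us-east-1"
    forcePathStyle: true
  # Imports existing backups, backup plans, restores, and restore plans only;
  # no claim is taken and no new backups are written to this repository.
  importPolicy: "ReadOnly"
```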
When a `ReadWrite` repository is removed from a cluster, the following applies:

- All local custom resources associated with that repository, such as backups and restores, are removed from the current cluster.
- That cluster's claim on the repository is released, so another cluster can use the repository as `ReadWrite`. However, these resources are not removed from the storage endpoint.
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-04(UTC)"],[[["\u003cp\u003eThis guide details the process of creating a backup repository in Google Distributed Cloud (GDC) air-gapped environments for storing cluster workload backups and related records.\u003c/p\u003e\n"],["\u003cp\u003eCreating a backup repository requires a compatible endpoint, a pre-existing object storage bucket, granted access to the bucket, and the appropriate identity and access role, like the Organization Backup Admin.\u003c/p\u003e\n"],["\u003cp\u003eRepositories can be created through either the GDC console, involving the input of the main cluster, linked cluster(s), fully qualified domain name, bucket name, region, access key ID and access key; or via API, where a \u003ccode\u003eClusterBackupRepository\u003c/code\u003e custom resource is defined with the relevant credentials and storage details.\u003c/p\u003e\n"],["\u003cp\u003eBackup repositories have two import policies, \u003ccode\u003eReadWrite\u003c/code\u003e for creating new backups and resources, and \u003ccode\u003eReadOnly\u003c/code\u003e for viewing backups without the ability to create new ones, with \u003ccode\u003eReadWrite\u003c/code\u003e repositories being unique to a single cluster, and \u003ccode\u003eReadOnly\u003c/code\u003e available to many.\u003c/p\u003e\n"],["\u003cp\u003eRemoving a \u003ccode\u003eReadWrite\u003c/code\u003e repository from a cluster removes the associated custom resources from the cluster and releases the claim on the repository, while removing a \u003ccode\u003eReadOnly\u003c/code\u003e repository only removes imported resources without affecting the storage location.\u003c/p\u003e\n"]]],[],null,["# Add a backup repository\n\nThis page describes how to create a backup repository for cluster workloads in Google Distributed Cloud (GDC) air-gapped.\n\nA backup repository represents a compatible storage location for your\nbackups. A backup repository is also used to store records of\nbackups, backup plans, restore plans, and restores.\n\nBefore you begin\n----------------\n\nTo create a backup repository, you must have the following:\n\n\u003cbr /\u003e\n\n- A compatible endpoint available.\n- A previously [created bucket](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/create-storage-buckets) to use as the backup repository.\n- You have [granted access](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/grant-obtain-storage-access) for the object storage bucket.\n- The necessary identity and access role:\n\n - Organization Backup Admin: manages backup resources such as backup and restore plans in user clusters. Ask your Organization IAM Admin to grant you the Organization Backup Admin (`organization-backup-admin`) role. For more information, see [Role\n definitions](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/iam/role-definitions).\n\nCreate a repository\n-------------------\n\nCreate a repository by using the GDC console or the API. \n\n### Console\n\n1. Sign in to the GDC console.\n2. In the navigation menu, click **Backup for Clusters**. Ensure no project is selected in the project selector.\n3. Click **Create repository**.\n4. 
Enter a repository name. The description is optional.\n5. In the **Main cluster (read/write)** list, choose a cluster.\n6. In the **Linked clusters (read only)** list, choose the linked clusters.\n7. In the **S3 URI endpoint** field, enter an endpoint containing the fully-qualified domain name of your object storage site.\n8. In the **Bucket name** field, enter the name of the fully qualified name of the bucket, which can be found from the status of the GDC bucket custom resource.\n9. In the **Bucket region** field, enter the region where the bucket was created.\n10. In the **Access Key ID** list, enter the access key ID.\n11. In the **Access key** field, enter the access key.\n12. Click **Create**.\n\n### API\n\n\nTo use the backup and restore APIs, you must configure a valid\n`ClusterBackupRepository` custom resource to be the location of your\nbackups, and supply the required credentials.\n\n1. Fetch the secret generated in [Grant and obtain storage bucket access](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/grant-obtain-storage-access#getting_bucket_access_credentials).\n\n2. Add a `ClusterBackupRepository` custom resource to use these credentials\n and apply the new resource to the Management API server.\n Backup repositories are cluster-scoped:\n\n apiVersion: backup.gdc.goog/v1\n kind: ClusterBackupRepository\n metadata:\n name: user-1-user\n namespace: user-1-user-cluster\n spec:\n secretReference:\n namespace: \"object-storage-secret-ns\"\n name: \"object-storage-secret\"\n endpoint: \"https://objectstorage.google.gdch.test\"\n type: \"S3\"\n s3Options:\n bucket: \"fully-qualified-bucket-name\"\n region: \"us-east-1\"\n forcePathStyle: true\n importPolicy: \"ReadWrite\"\n\n This example includes the following values:\n\nBackup repository import policies\n---------------------------------\n\nAll clusters must have at least one `ReadWrite` repository to successfully use backup and restore features. `ReadOnly` repositories are optional, have no\nlimit, and are used to gain visibility into other cluster backups for\ncross-cluster restores.\n\n`ReadOnly` repositories cannot be used as storage locations for additional\nbackups or for backup plans within the cluster they were imported.\n\nImporting a repository as `ReadWrite` claims the repository for that cluster,\npreventing other clusters from importing the same repository as\n`ReadWrite`. After importing a `ReadWrite` repository, all records of previous\nbackups, backup plans, and restores in that repository are imported into the\ntarget cluster as local custom resources.\n\nImporting a repository as `ReadOnly` does not claim the repository, it only\nimports the backups, backup plans, restores, and restore plans. Backup plans in read-only repositories don't schedule backups,\nthey exist to provide visibility into what backup plans exist in the cluster you are importing from. Removing a `ReadOnly` repository cleans up any imported resources from\nthe cluster and has no effect on the resources in the storage location as no write operations occur to object storage for read-only repositories.\n\nWhen a `ReadWrite` repository is removed from the cluster:\n\n- All local custom resources associated with that repository, such as backups and restores, are removed from the current cluster.\n- That cluster's claim on the repository is removed, allowing the repository to be used as `ReadWrite` by another cluster. 
What's next
-----------

- [Customize backup and restore for an application](/distributed-cloud/hosted/docs/latest/gdch/platform-application/pa-ao-operations/cluster-backup/customize-backup-restore)
- [Plan a set of backups](/distributed-cloud/hosted/docs/latest/gdch/platform-application/pa-ao-operations/cluster-backup/plan-backups)