[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-04(UTC)"],[[["\u003cp\u003eGoogle Distributed Cloud (GDC) air-gapped offers persistent block storage for VM and container workloads within a sovereign, air-gapped environment.\u003c/p\u003e\n"],["\u003cp\u003eGDC utilizes Kubernetes \u003ccode\u003ePersistentVolumeClaim\u003c/code\u003e (PVC) objects to manage persistent storage, which are dynamically provisioned and persist independently of pods.\u003c/p\u003e\n"],["\u003cp\u003eTwo pre-installed \u003ccode\u003eStorageClass\u003c/code\u003e options are available in GDC: \u003ccode\u003estandard-rwo\u003c/code\u003e with 3 IOPS per GiB, and \u003ccode\u003esystem-performance-rwo\u003c/code\u003e with 30 IOPS per GiB, both being \u003ccode\u003eReadWriteOnce\u003c/code\u003e block storage.\u003c/p\u003e\n"],["\u003cp\u003eTo use persistent volumes, users need to obtain the Namespace Admin role and must configure their workloads to refer to a defined \u003ccode\u003ePersistentVolumeClaim\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eThe capacity of existing \u003ccode\u003ePersistentVolumeClaim\u003c/code\u003e objects can be expanded by updating the \u003ccode\u003espec.resources.storage\u003c/code\u003e field, with a maximum supported volume size of 14.5 Ti.\u003c/p\u003e\n"]]],[],null,["# Access persistent storage\n\nThis page explains how to create and manage persistent storage for container\nworkloads in your Google Distributed Cloud (GDC) air-gapped sovereign universe. Persistent\nstorage provides your application with consistent identities and stable\nhostnames, regardless of where its workloads are scheduled.\n\nThis page is for developers within the application operator group, who are\nresponsible for creating application workloads for their organization. For more\ninformation, see\n[Audiences for GDC air-gapped documentation](/distributed-cloud/hosted/docs/latest/gdch/resources/audiences).\n\nBefore you begin\n----------------\n\nTo run commands against a\n[Kubernetes cluster](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/clusters#cluster-architecture),\nmake sure you have the following resources:\n\n1. Locate the Kubernetes cluster name, or ask your Platform Administrator what\n the cluster name is.\n\n2. [Sign in and generate](/distributed-cloud/hosted/docs/latest/gdch/application/ao-user/iam/sign-in#zonal-cluster-kubeconfig)\n the kubeconfig file for the Kubernetes cluster if you don't have one.\n\n3. Use the kubeconfig path of the Kubernetes cluster to replace\n \u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_KUBECONFIG\u003c/var\u003e in these instructions.\n\nTo get the required permissions to create a persistent volume, ask your\nOrganization IAM Admin to grant you the Namespace Admin role (`namespace-admin`)\nin your project namespace.\n\nCreate a persistent volume\n--------------------------\n\nThe following instructions show how to create a volume using the\nGDC `standard-rwo` `StorageClass`. 
For more information on the available `StorageClass` resources in GDC, see
[Persistent storage for containers](/distributed-cloud/hosted/docs/latest/gdch/application/ao-user/containers/containers-intro#persistent-storage).

1.  Create a `PersistentVolumeClaim` and configure it with a `ReadWriteOnce`
    access mode and a `standard-rwo` storage class:

        kubectl --kubeconfig KUBERNETES_CLUSTER_KUBECONFIG \
            --namespace NAMESPACE apply -f - <<EOF
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: PVC_NAME
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi
          storageClassName: standard-rwo
        EOF

    Replace the following:

    - `KUBERNETES_CLUSTER_KUBECONFIG`: the kubeconfig file for the cluster.

    - `NAMESPACE`: the project namespace in which to create the PVC.

    - `PVC_NAME`: the name of the `PersistentVolumeClaim` object.

2.  The `PersistentVolume` (PV) objects are dynamically provisioned. Check the
    status of the new PVs in your Kubernetes cluster:

        kubectl get pv --kubeconfig KUBERNETES_CLUSTER_KUBECONFIG

    The output is similar to the following:

        NAME       CAPACITY   ACCESS MODES   STATUS   CLAIM      STORAGECLASS   AGE
        pvc-uuid   10Gi       RWO            Bound    pvc-name   standard-rwo   60s

3.  Configure your container workloads to use the PVC. The following is an
    example `nginx` pod that uses a `standard-rwo` PVC:

        kubectl --kubeconfig KUBERNETES_CLUSTER_KUBECONFIG \
            --namespace NAMESPACE apply -f - <<EOF
        apiVersion: v1
        kind: Pod
        metadata:
          name: web-server-deployment
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: data
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: PVC_NAME
        EOF

    Replace `PVC_NAME` with the name of the PVC you created.
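To verify that the pod mounted the volume and can write to it, you can write a
file through the pod and read it back. These are generic kubectl commands, not
GDC-specific steps, and they assume the example pod and mount path shown above:

    # Write a file onto the mounted persistent volume
    kubectl --kubeconfig KUBERNETES_CLUSTER_KUBECONFIG \
        --namespace NAMESPACE exec web-server-deployment -- \
        sh -c 'echo "data on the persistent volume" > /usr/share/nginx/html/index.html'

    # Read the file back to confirm the write succeeded
    kubectl --kubeconfig KUBERNETES_CLUSTER_KUBECONFIG \
        --namespace NAMESPACE exec web-server-deployment -- \
        cat /usr/share/nginx/html/index.html

Because the file lives on the persistent volume rather than in the container's
filesystem, it survives pod deletion and rescheduling.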
Expand volume capacity
----------------------

To increase the capacity of a `PersistentVolumeClaim` object, update the
`spec.resources.storage` field to the new capacity. The maximum supported
volume size is 14.5 Ti.

1.  Update the volume to a larger size in the manifest file of the
    `PersistentVolumeClaim` object:

        kubectl --kubeconfig KUBERNETES_CLUSTER_KUBECONFIG \
            --namespace NAMESPACE apply -f - <<EOF
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: PVC_NAME
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: VOLUME_STORAGE_SIZE
        EOF

    Replace the following:

    - `KUBERNETES_CLUSTER_KUBECONFIG`: the kubeconfig file for the cluster.

    - `NAMESPACE`: the project namespace in which the PVC resource exists.

    - `PVC_NAME`: the name of the PVC whose storage size you are increasing.

    - `VOLUME_STORAGE_SIZE`: the new total storage size, such as `50Gi`. The
      new size must be larger than the current size; Kubernetes does not
      support shrinking volumes.

2.  Check the status of the updated PVs in your cluster:

        kubectl get pv --kubeconfig KUBERNETES_CLUSTER_KUBECONFIG

    The PV list shows the new capacity after the resize completes.

What's next
-----------

- [Container workloads overview](/distributed-cloud/hosted/docs/latest/gdch/application/ao-user/containers/containers-intro)
- [Create stateful workloads](/distributed-cloud/hosted/docs/latest/gdch/application/ao-user/containers/create-stateful-workloads)
- [Create volume snapshots](/distributed-cloud/hosted/docs/latest/gdch/application/ao-user/containers/create-volume-snapshots)
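One closing tip for volume expansion: if the new capacity does not appear after
a few minutes, you can inspect the claim itself. `kubectl describe` is a
generic Kubernetes check rather than a GDC-specific command, and the exact
events and conditions it reports depend on the storage driver:

    # Show events and resize conditions for the claim
    kubectl --kubeconfig KUBERNETES_CLUSTER_KUBECONFIG \
        --namespace NAMESPACE describe pvc PVC_NAME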