# Export logs to a remote bucket

This page describes how to export the audit and operational logs in Google Distributed Cloud (GDC) air-gapped appliance to a remote bucket by using the storage transfer tool.

Obtain IAM roles
----------------

To get the permissions that you need to export logs, ask your Organization IAM Admin to grant you the Logs Transfer Admin (`logs-transfer-admin`) role in the `obs-system` namespace in the infra cluster and the Logs Bucket Viewer (`logs-bucket-viewer`) role in the `obs-system` namespace in the management plane.

For more information about these roles, see [Prepare IAM permissions](/distributed-cloud/hosted/docs/latest/appliance/platform/pa-user/obs-iam-permissions).
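As an optional sanity check before you start, you can verify the bindings with `kubectl auth can-i`. This is generic Kubernetes tooling rather than a documented step, and it assumes that the Logs Bucket Viewer role allows reading `bucket` resources in the management plane and that the Logs Transfer Admin role allows creating the Job and Secret objects used later on the infra cluster:

    # Against the Org Management API (Logs Bucket Viewer):
    kubectl --kubeconfig MANAGEMENT_API_KUBECONFIG_PATH auth can-i get buckets -n obs-system

    # Against the Org Infra cluster (Logs Transfer Admin):
    kubectl --kubeconfig INFRA_CLUSTER_KUBECONFIG_PATH auth can-i create jobs -n obs-system

Both commands print `yes` when the corresponding role binding is in place.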
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-04(UTC)"],[],[],null,["# Export logs to a remote bucket\n\nThis page describes how to export the audit and operational logs in Google Distributed Cloud (GDC) air-gapped appliance to a remote bucket by using the storage transfer tool.\n\nObtain IAM roles\n----------------\n\nTo get the permissions that you need to export logs, ask your Organization IAM Admin to grant you the Logs Transfer Admin (`logs-transfer-admin`) role in the `obs-system` namespace in the infra cluster and the Logs Bucket Viewer (`logs-bucket-viewer)` role in the `obs-system` namespace in the management plane.\n\nFor more information about these roles, see [Prepare IAM permissions](/distributed-cloud/hosted/docs/latest/appliance/platform/pa-user/obs-iam-permissions).\n\nObtain the endpoint and fully qualified name of the source bucket\n-----------------------------------------------------------------\n\n1. Set `KUBECONFIG` to the Org Management API:\n\n export KUBECONFIG=\u003cvar translate=\"no\"\u003eMANAGEMENT_API_KUBECONFIG_PATH\u003c/var\u003e\n\n2. Get the endpoint of the source bucket:\n\n - For audit logs:\n\n kubectl get bucket audit-logs-loki-all -n obs-system -o json | jq '.status.endpoint'\n\n - For operational logs:\n\n kubectl get bucket ops-logs-loki-all -n obs-system -o json | jq '.status.endpoint'\n\n3. Get the fully qualified name of the source bucket:\n\n - For audit logs:\n\n kubectl get bucket audit-logs-loki-all -n obs-system -o json | jq '.status.fullyQualifiedName'\n\n - For operational logs:\n\n kubectl get bucket ops-logs-loki-all -n obs-system -o json | jq '.status.fullyQualifiedName'\n\nObtain source bucket's access credentials\n-----------------------------------------\n\n1. Set KUBECONFIG to the Org Infra cluster:\n\n export KUBECONFIG=\u003cvar translate=\"no\"\u003eINFRA_CLUSTER_KUBECONFIG_PATH\u003c/var\u003e\n\n2. Obtain the access key ID of the source bucket:\n\n - For audit logs:\n\n kubectl get secret audit-logs-loki-all-s3-auth -n obs-system -o json | jq -r '.data.\"access-key-id\"' | base64 -di\n\n - For operational logs:\n\n kubectl get secret ops-logs-loki-all-s3-auth -n obs-system -o json | jq -r '.data.\"access-key-id\"' | base64 -di\n\n3. Get the secret access key of the source bucket:\n\n - For audit logs:\n\n kubectl get secret audit-logs-loki-all-s3-auth -n obs-system -o json | jq -r '.data.\"secret-access-key\"' | base64 -di\n\n - For operational logs:\n\n kubectl get secret ops-logs-loki-all-s3-auth -n obs-system -o json | jq -r '.data.\"secret-access-key\"' | base64 -di\n\nTransfer logs\n-------------\n\n1. Set `KUBECONFIG` to the Org Infra cluster:\n\n export KUBECONFIG=\u003cvar translate=\"no\"\u003eINFRA_CLUSTER_KUBECONFIG_PATH\u003c/var\u003e\n\n2. Create a secret with the access credentials of the source bucket:\n\n kubectl create secret generic -n obs-system \u003cvar translate=\"no\"\u003eSRC_BUCKET_SECRET_NAME\u003c/var\u003e\n --from-literal=access-key-id=\u003cvar translate=\"no\"\u003eSRC_BUCKET_ACCESS_KEY_ID\u003c/var\u003e\n --from-literal=secret-access-key=\u003cvar translate=\"no\"\u003eSRC_BUCKET_SECRET_ACCESS_KEY\u003c/var\u003e\n\n3. 
Transfer logs
-------------

1. Set `KUBECONFIG` to the Org Infra cluster:

        export KUBECONFIG=INFRA_CLUSTER_KUBECONFIG_PATH

2. Create a secret with the access credentials of the source bucket:

        kubectl create secret generic -n obs-system SRC_BUCKET_SECRET_NAME \
          --from-literal=access-key-id=SRC_BUCKET_ACCESS_KEY_ID \
          --from-literal=secret-access-key=SRC_BUCKET_SECRET_ACCESS_KEY

3. Create a secret with the access credentials of the destination bucket:

        kubectl create secret generic -n obs-system DST_BUCKET_SECRET_NAME \
          --from-literal=access-key-id=DST_BUCKET_ACCESS_KEY_ID \
          --from-literal=secret-access-key=DST_BUCKET_SECRET_ACCESS_KEY

4. Create a secret with the certificate authority for authenticating the endpoint of the destination bucket:

        kubectl create secret generic -n obs-system DST_BUCKET_CA_SECRET_NAME \
          --from-file="ca.crt"=CA_FILE

5. Create a log transfer job:

        apiVersion: batch/v1
        kind: Job
        metadata:
          name: JOB_NAME
          namespace: obs-system
        spec:
          template:
            spec:
              serviceAccountName: logs-transfer-sa
              containers:
              - name: storage-transfer-pod
                image: gcr.io/private-cloud-staging/storage-transfer:latest
                imagePullPolicy: Always
                command:
                - /storage-transfer
                args:
                - '--src_endpoint=SRC_BUCKET_ENDPOINT'
                - '--dst_endpoint=DST_BUCKET_ENDPOINT'
                - '--src_path=SRC_BUCKET_FULLY_QUALIFIED_NAME'
                - '--dst_path=DST_BUCKET_FULLY_QUALIFIED_NAME'
                - '--src_credentials=obs-system/SRC_BUCKET_SECRET_NAME'
                - '--dst_credentials=obs-system/DST_BUCKET_SECRET_NAME'
                - '--dst_ca_certificate_reference=obs-system/DST_BUCKET_CA_SECRET_NAME'
                - '--src_ca_certificate_reference=obs-system/trust-store-root-ext'
                - '--src_type=s3'
                - '--dst_type=s3'
                - '--bandwidth_limit=1G'
              restartPolicy: OnFailure

6. Wait for the transfer job to complete:

        kubectl wait --for=condition=complete job/JOB_NAME -n obs-system
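If the job does not reach the `Complete` condition, you can inspect the transfer pod's output with standard kubectl commands. This is generic troubleshooting rather than a documented step of the procedure:

    # Stream the logs of the pod created by the transfer job.
    kubectl logs -f job/JOB_NAME -n obs-system

    # Check the job's status and recent events.
    kubectl describe job/JOB_NAME -n obs-system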