This page describes how to export the audit and operational logs in Google Distributed Cloud (GDC) air-gapped appliance to a remote bucket by using the storage transfer tool.
Obtain IAM roles
To get the permissions that you need to export logs, ask your Organization IAM Admin to grant you the Logs Transfer Admin (logs-transfer-admin) role in the obs-system namespace in the infra cluster and the Logs Bucket Viewer (logs-bucket-viewer) role in the obs-system namespace in the management plane.
For more information about these roles, see Prepare IAM permissions.
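Granting these roles is the IAM Admin's task. For reference, the following is a minimal sketch of the kind of RoleBinding an admin might create in the obs-system namespace, assuming the predefined role surfaces as a standard Kubernetes Role (it might be a ClusterRole in your environment); the binding name and USER_EMAIL are placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  # Hypothetical binding name; any unique name works.
  name: logs-transfer-admin-binding
  namespace: obs-system
subjects:
- kind: User
  name: USER_EMAIL
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: logs-transfer-admin
  apiGroup: rbac.authorization.k8s.io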
Obtain the endpoint and fully qualified name of the source bucket
Set KUBECONFIG to the Org Management API:

export KUBECONFIG=MANAGEMENT_API_KUBECONFIG_PATH
Get the endpoint of the source bucket:
For audit logs:
kubectl get bucket audit-logs-loki-all -n obs-system -o json | jq '.status.endpoint'
For operational logs:
kubectl get bucket ops-logs-loki-all -n obs-system -o json | jq '.status.endpoint'
Get the fully qualified name of the source bucket:
For audit logs:
kubectl get bucket audit-logs-loki-all -n obs-system -o json | jq '.status.fullyQualifiedName'
For operational logs:
kubectl get bucket ops-logs-loki-all -n obs-system -o json | jq '.status.fullyQualifiedName'
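If you are scripting these steps, you can capture both values in shell variables for use in the transfer job later. A minimal sketch for the audit-logs bucket, using jq -r to strip the surrounding quotation marks:

SRC_BUCKET_ENDPOINT=$(kubectl get bucket audit-logs-loki-all -n obs-system -o json | jq -r '.status.endpoint')
SRC_BUCKET_FULLY_QUALIFIED_NAME=$(kubectl get bucket audit-logs-loki-all -n obs-system -o json | jq -r '.status.fullyQualifiedName')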
Obtain the source bucket's access credentials
Set KUBECONFIG to the Org Infra cluster:
export KUBECONFIG=INFRA_CLUSTER_KUBECONFIG_PATH
Get the access key ID of the source bucket:
For audit logs:
kubectl get secret audit-logs-loki-all-s3-auth -n obs-system -o json | jq -r '.data."access-key-id"' | base64 -di
For operational logs:
kubectl get secret ops-logs-loki-all-s3-auth -n obs-system -o json | jq -r '.data."access-key-id"' | base64 -di
Get the secret access key of the source bucket:
For audit logs:
kubectl get secret audit-logs-loki-all-s3-auth -n obs-system -o json | jq -r '.data."secret-access-key"' | base64 -di
For operational logs:
kubectl get secret ops-logs-loki-all-s3-auth -n obs-system -o json | jq -r '.data."secret-access-key"' | base64 -di
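If you are scripting the transfer, you can capture both credentials in shell variables and reference them when creating the secrets in the next section. A minimal sketch for the audit-logs bucket:

SRC_BUCKET_ACCESS_KEY_ID=$(kubectl get secret audit-logs-loki-all-s3-auth -n obs-system -o json | jq -r '.data."access-key-id"' | base64 -di)
SRC_BUCKET_SECRET_ACCESS_KEY=$(kubectl get secret audit-logs-loki-all-s3-auth -n obs-system -o json | jq -r '.data."secret-access-key"' | base64 -di)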
Transfer logs
Set KUBECONFIG to the Org Infra cluster:

export KUBECONFIG=INFRA_CLUSTER_KUBECONFIG_PATH
Create a secret with the access credentials of the source bucket:
kubectl create secret generic -n obs-system SRC_BUCKET_SECRET_NAME --from-literal=access-key-id=SRC_BUCKET_ACCESS_KEY_ID --from-literal=secret-access-key=SRC_BUCKET_SECRET_ACCESS_KEY
Create a secret with the access credentials of the destination bucket:
kubectl create secret generic -n obs-system DST_BUCKET_SECRET_NAME --from-literal=access-key-id=DST_BUCKET_ACCESS_KEY_ID --from-literal=secret-access-key=DST_BUCKET_SECRET_ACCESS_KEY
Create a secret with the certificate authority (CA) certificate for authenticating the endpoint of the destination bucket:
kubectl create secret generic -n obs-system DST_BUCKET_CA_SECRET_NAME --from-file="ca.crt"=CA_FILE
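Optionally, verify that all three secrets exist before creating the transfer job:

kubectl get secret SRC_BUCKET_SECRET_NAME DST_BUCKET_SECRET_NAME DST_BUCKET_CA_SECRET_NAME -n obs-system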
Create a log transfer job:
apiVersion: batch/v1
kind: Job
metadata:
  name: JOB_NAME
  namespace: obs-system
spec:
  template:
    spec:
      serviceAccountName: logs-transfer-sa
      containers:
      - name: storage-transfer-pod
        image: gcr.io/private-cloud-staging/storage-transfer:latest
        imagePullPolicy: Always
        command:
        - /storage-transfer
        args:
        - '--src_endpoint=SRC_BUCKET_ENDPOINT'
        - '--dst_endpoint=DST_BUCKET_ENDPOINT'
        - '--src_path=SRC_BUCKET_FULLY_QUALIFIED_NAME'
        - '--dst_path=DST_BUCKET_FULLY_QUALIFIED_NAME'
        - '--src_credentials=obs-system/SRC_BUCKET_SECRET_NAME'
        - '--dst_credentials=obs-system/DST_BUCKET_SECRET_NAME'
        - '--dst_ca_certificate_reference=obs-system/DST_BUCKET_CA_SECRET_NAME'
        - '--src_ca_certificate_reference=obs-system/trust-store-root-ext'
        - '--src_type=s3'
        - '--dst_type=s3'
        - '--bandwidth_limit=1G'
      restartPolicy: OnFailure
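To create the job, save the manifest to a file and apply it with kubectl; the file name transfer-job.yaml here is a placeholder:

kubectl apply -f transfer-job.yaml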
Wait for the transfer job to complete:
kubectl wait --for=condition=complete job/JOB_NAME -n obs-system
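Note that kubectl wait gives up after 30 seconds by default; for a large transfer, pass a longer --timeout value, for example --timeout=1h. To follow the transfer's progress or debug a failed run, you can stream the job's logs:

kubectl logs -f job/JOB_NAME -n obs-system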