# Export logs to a remote bucket

This page describes how to export the audit and operational logs in the Google Distributed Cloud (GDC) air-gapped appliance to a remote bucket by using the storage transfer tool.
Obtain IAM roles
----------------

To get the permissions that you need to export logs, ask your Organization IAM Admin to grant you the Logs Transfer Admin (`logs-transfer-admin`) role in the `obs-system` namespace in the infra cluster and the Logs Bucket Viewer (`logs-bucket-viewer`) role in the `obs-system` namespace in the management plane.

For more information about these roles, see [Prepare IAM permissions](/distributed-cloud/hosted/docs/latest/appliance/platform/pa-user/obs-iam-permissions).
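Once the roles are granted, you can optionally confirm that they took effect before you continue. The following is a minimal sketch rather than a required step: it assumes you have `kubectl` access to the infra cluster and only smoke-tests two of the permissions that the transfer procedure below relies on.

    # Optional smoke test: verify you can read secrets and create jobs in obs-system,
    # which the transfer steps below depend on. Assumes kubectl access to the infra cluster.
    export KUBECONFIG=INFRA_CLUSTER_KUBECONFIG_PATH
    kubectl auth can-i get secrets -n obs-system
    kubectl auth can-i create jobs -n obs-system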
[[["Facile à comprendre","easyToUnderstand","thumb-up"],["J'ai pu résoudre mon problème","solvedMyProblem","thumb-up"],["Autre","otherUp","thumb-up"]],[["Difficile à comprendre","hardToUnderstand","thumb-down"],["Informations ou exemple de code incorrects","incorrectInformationOrSampleCode","thumb-down"],["Il n'y a pas l'information/les exemples dont j'ai besoin","missingTheInformationSamplesINeed","thumb-down"],["Problème de traduction","translationIssue","thumb-down"],["Autre","otherDown","thumb-down"]],["Dernière mise à jour le 2025/09/04 (UTC)."],[],[],null,["# Export logs to a remote bucket\n\nThis page describes how to export the audit and operational logs in Google Distributed Cloud (GDC) air-gapped appliance to a remote bucket by using the storage transfer tool.\n\nObtain IAM roles\n----------------\n\nTo get the permissions that you need to export logs, ask your Organization IAM Admin to grant you the Logs Transfer Admin (`logs-transfer-admin`) role in the `obs-system` namespace in the infra cluster and the Logs Bucket Viewer (`logs-bucket-viewer)` role in the `obs-system` namespace in the management plane.\n\nFor more information about these roles, see [Prepare IAM permissions](/distributed-cloud/hosted/docs/latest/appliance/platform/pa-user/obs-iam-permissions).\n\nObtain the endpoint and fully qualified name of the source bucket\n-----------------------------------------------------------------\n\n1. Set `KUBECONFIG` to the Org Management API:\n\n export KUBECONFIG=\u003cvar translate=\"no\"\u003eMANAGEMENT_API_KUBECONFIG_PATH\u003c/var\u003e\n\n2. Get the endpoint of the source bucket:\n\n - For audit logs:\n\n kubectl get bucket audit-logs-loki-all -n obs-system -o json | jq '.status.endpoint'\n\n - For operational logs:\n\n kubectl get bucket ops-logs-loki-all -n obs-system -o json | jq '.status.endpoint'\n\n3. Get the fully qualified name of the source bucket:\n\n - For audit logs:\n\n kubectl get bucket audit-logs-loki-all -n obs-system -o json | jq '.status.fullyQualifiedName'\n\n - For operational logs:\n\n kubectl get bucket ops-logs-loki-all -n obs-system -o json | jq '.status.fullyQualifiedName'\n\nObtain source bucket's access credentials\n-----------------------------------------\n\n1. Set KUBECONFIG to the Org Infra cluster:\n\n export KUBECONFIG=\u003cvar translate=\"no\"\u003eINFRA_CLUSTER_KUBECONFIG_PATH\u003c/var\u003e\n\n2. Obtain the access key ID of the source bucket:\n\n - For audit logs:\n\n kubectl get secret audit-logs-loki-all-s3-auth -n obs-system -o json | jq -r '.data.\"access-key-id\"' | base64 -di\n\n - For operational logs:\n\n kubectl get secret ops-logs-loki-all-s3-auth -n obs-system -o json | jq -r '.data.\"access-key-id\"' | base64 -di\n\n3. Get the secret access key of the source bucket:\n\n - For audit logs:\n\n kubectl get secret audit-logs-loki-all-s3-auth -n obs-system -o json | jq -r '.data.\"secret-access-key\"' | base64 -di\n\n - For operational logs:\n\n kubectl get secret ops-logs-loki-all-s3-auth -n obs-system -o json | jq -r '.data.\"secret-access-key\"' | base64 -di\n\nTransfer logs\n-------------\n\n1. Set `KUBECONFIG` to the Org Infra cluster:\n\n export KUBECONFIG=\u003cvar translate=\"no\"\u003eINFRA_CLUSTER_KUBECONFIG_PATH\u003c/var\u003e\n\n2. 
Transfer logs
-------------

1. Set `KUBECONFIG` to the Org Infra cluster:

       export KUBECONFIG=INFRA_CLUSTER_KUBECONFIG_PATH

2. Create a secret with the access credentials of the source bucket:

       kubectl create secret generic -n obs-system SRC_BUCKET_SECRET_NAME \
         --from-literal=access-key-id=SRC_BUCKET_ACCESS_KEY_ID \
         --from-literal=secret-access-key=SRC_BUCKET_SECRET_ACCESS_KEY

3. Create a secret with the access credentials of the destination bucket:

       kubectl create secret generic -n obs-system DST_BUCKET_SECRET_NAME \
         --from-literal=access-key-id=DST_BUCKET_ACCESS_KEY_ID \
         --from-literal=secret-access-key=DST_BUCKET_SECRET_ACCESS_KEY

4. Create a secret with the certificate authority for authenticating the endpoint of the destination bucket:

       kubectl create secret generic -n obs-system DST_BUCKET_CA_SECRET_NAME \
         --from-file="ca.crt"=CA_FILE

5. Create a log transfer job:

       apiVersion: batch/v1
       kind: Job
       metadata:
         name: JOB_NAME
         namespace: obs-system
       spec:
         template:
           spec:
             serviceAccountName: logs-transfer-sa
             containers:
             - name: storage-transfer-pod
               image: gcr.io/private-cloud-staging/storage-transfer:latest
               imagePullPolicy: Always
               command:
               - /storage-transfer
               args:
               - '--src_endpoint=SRC_BUCKET_ENDPOINT'
               - '--dst_endpoint=DST_BUCKET_ENDPOINT'
               - '--src_path=SRC_BUCKET_FULLY_QUALIFIED_NAME'
               - '--dst_path=DST_BUCKET_FULLY_QUALIFIED_NAME'
               - '--src_credentials=obs-system/SRC_BUCKET_SECRET_NAME'
               - '--dst_credentials=obs-system/DST_BUCKET_SECRET_NAME'
               - '--dst_ca_certificate_reference=obs-system/DST_BUCKET_CA_SECRET_NAME'
               - '--src_ca_certificate_reference=obs-system/trust-store-root-ext'
               - '--src_type=s3'
               - '--dst_type=s3'
               - '--bandwidth_limit=1G'
             restartPolicy: OnFailure

6. Wait for the transfer job to complete:

       kubectl wait --for=condition=complete job/JOB_NAME -n obs-system
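If the transfer takes a while or does not complete, the standard Kubernetes Job tooling applies. For example, the following commands are a convenience sketch, not a required step: they stream the transfer container's output and show the job's status.

    # Follow the transfer job's output and check its status.
    kubectl logs -f job/JOB_NAME -n obs-system
    kubectl get job JOB_NAME -n obs-system
    kubectl describe job JOB_NAME -n obs-system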