This page explains how to set up a central logging server for a Google Distributed Cloud (GDC) air-gapped appliance through a Google Distributed Cloud air-gapped data center organization.
To create a central logging location, the GDC appliance needs the following components in the GDC data center organization:
- A project
- A bucket for audit logs
- A bucket for operational logs
Create a project
You must perform the following steps in the GDC data center organization to which you export logs.
Set KUBECONFIG to the org Management API:
export KUBECONFIG=ORG_MANAGEMENT_API_KUBECONFIG_PATH
To get the permissions you need to export logs, ask your Organization IAM Admin to grant you the Project Creator (project-creator) ClusterRole. For more information about these roles, see Prepare IAM permissions. Apply the Project custom resource to create a dedicated project for the GDC appliance whose logs you export:
kubectl apply -f - <<EOF
apiVersion: resourcemanager.gdc.goog/v1
kind: Project
metadata:
  namespace: platform
  name: APPLIANCE_PROJECT_NAME
  labels:
    object.gdc.goog/tenant-category: user
EOF
Verify that the new project is available for the GDC appliance:
kubectl get namespace APPLIANCE_PROJECT_NAME
Link the new project to a billing account. To track charges for project resources, you must link an associated billing account to the project.
To get the permissions you need to export logs, ask your Organization IAM Admin to grant you the Project IAM Admin (project-iam-admin) role in the APPLIANCE_PROJECT_NAME namespace.
Create buckets
The Platform Administrator (PA) must perform the following steps in the GDC data center organization to which logs are exported.
Set KUBECONFIG to the org Management API:
export KUBECONFIG=ORG_MANAGEMENT_API_KUBECONFIG_PATH
To get the permissions you need to export logs, ask your Organization IAM Admin to grant you the Project Bucket Admin (project-bucket-admin) role in the APPLIANCE_PROJECT_NAME namespace. Apply the Bucket custom resource to create a bucket:
apiVersion: object.gdc.goog/v1
kind: Bucket
metadata:
  name: BUCKET_NAME
  namespace: APPLIANCE_PROJECT_NAME
  labels:
    object.gdc.goog/bucket-type: normal
    object.gdc.goog/encryption-version: v2
    object.gdc.goog/tenant-category: user
spec:
  description: Bucket for storing appliance xyz audit logs
  location: zone1
  storageClass: Standard
After you create the bucket, run the following command to verify it and inspect its details:
kubectl describe buckets BUCKET_NAME -n APPLIANCE_PROJECT_NAME
Create a ProjectServiceAccount to access the objects in the bucket:
kubectl apply -f - <<EOF
---
apiVersion: resourcemanager.gdc.goog/v1
kind: ProjectServiceAccount
metadata:
  name: BUCKET_NAME-read-write-sa
  namespace: APPLIANCE_PROJECT_NAME
spec: {}
EOF
Verify that the ProjectServiceAccount has propagated:
kubectl get projectserviceaccount BUCKET_NAME-read-write-sa -n APPLIANCE_PROJECT_NAME -o json | jq '.status'
Grant the ServiceAccount read and write permissions on the bucket:
kubectl apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: BUCKET_NAME-read-write-role
  namespace: APPLIANCE_PROJECT_NAME
rules:
- apiGroups:
  - object.gdc.goog
  resourceNames:
  - BUCKET_NAME
  resources:
  - buckets
  verbs:
  - read-object
  - write-object
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: BUCKET_NAME-read-write-rolebinding
  namespace: APPLIANCE_PROJECT_NAME
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: BUCKET_NAME-read-write-role
subjects:
- kind: ServiceAccount
  name: BUCKET_NAME-read-write-sa
  namespace: APPLIANCE_PROJECT_NAME
EOF
Get the secret that contains the bucket access credentials:
kubectl get secret -n APPLIANCE_PROJECT_NAME -o json | jq --arg jq_src BUCKET_NAME-read-write-sa '.items[].metadata|select(.annotations."object.gdc.goog/subject"==$jq_src)|.name'
The output must show the secret name for the bucket, as in the following example:
"object-storage-key-sysstd-sa-olxv4dnwrwul4bshu37ikebgovrnvl773owaw3arx225rfi56swa"
Export the value to a variable:
export BUCKET_RW_SECRET_NAME=BUCKET_RW_SECRET_NAME
Get the access key ID for bucket access:
kubectl get secret $BUCKET_RW_SECRET_NAME -n APPLIANCE_PROJECT_NAME -o json | jq -r '.data."access-key-id"' | base64 -di
The output must look like the following example:
PCEW2HU47Y8ACUWQO4SK
Get the secret access key for the bucket:
kubectl get secret $BUCKET_RW_SECRET_NAME -n APPLIANCE_PROJECT_NAME -o json | jq -r '.data."secret-access-key"' | base64 -di
The output must look like the following example:
TzGdAbgp4h2i5UeiYa9k09rNPFQ2tkYADs67+65E
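Both credentials are stored base64-encoded in the secret's .data fields, which is why the commands above pipe through base64 -di. The decode step can be checked offline; the encoded string below is the base64 form of the example access key ID shown earlier:

```shell
# Base64-encoded form of the example access key ID "PCEW2HU47Y8ACUWQO4SK",
# as it would appear in the secret's .data."access-key-id" field.
ENCODED="UENFVzJIVTQ3WThBQ1VXUU80U0s="
# Decoding recovers the plaintext key, exactly as the kubectl pipelines do.
echo "${ENCODED}" | base64 -d
```

Running this prints PCEW2HU47Y8ACUWQO4SK, matching the example output above.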
Get the bucket's endpoint:
kubectl get bucket BUCKET_NAME -n APPLIANCE_PROJECT_NAME -o json | jq '.status.endpoint'
The output must look like the following example:
https://objectstorage.org-1.zone1.google.gdch.test
Get the bucket's fully qualified name:
kubectl get bucket BUCKET_NAME -n APPLIANCE_PROJECT_NAME -o json | jq '.status.fullyQualifiedName'
The output must look like the following example:
aaaoa9a-logs-bucket
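Taken together, this section yields four values — access key ID, secret access key, endpoint, and fully qualified name — that the later Loki setup consumes. A consolidated sketch using the example values shown above (in practice, substitute the output of your own kubectl queries):

```shell
# Example values copied from the sample outputs in this section; each one
# normally comes from the corresponding kubectl query against the org
# Management API.
export APPLIANCE_LOGS_BUCKET_ACCESS_KEY_ID="PCEW2HU47Y8ACUWQO4SK"
export APPLIANCE_LOGS_BUCKET_SECRET_ACCESS_KEY="TzGdAbgp4h2i5UeiYa9k09rNPFQ2tkYADs67+65E"
export APPLIANCE_LOGS_BUCKET_ENDPOINT="https://objectstorage.org-1.zone1.google.gdch.test"
export APPLIANCE_LOGS_BUCKET_FULLY_QUALIFIED_NAME="aaaoa9a-logs-bucket"
# The endpoint plus fully qualified name identify the bucket to S3 clients.
echo "${APPLIANCE_LOGS_BUCKET_ENDPOINT}/${APPLIANCE_LOGS_BUCKET_FULLY_QUALIFIED_NAME}"
```

The variable names mirror the placeholders used in the Loki secret and ConfigMap later on this page.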
Transfer data from GDC
Follow the instructions in Export logs to a remote bucket, using the bucket's endpoint, fully qualified name, access key ID, and secret access key, to transfer logs from the GDC appliance to the bucket created earlier in the GDC air-gapped data center.
Set up Loki and Grafana in the Google Distributed Cloud air-gapped data center
The following steps must be performed by the Infrastructure Operator (IO) of the GDC air-gapped data center organization to which the logs are exported.
Get IAM roles
To get the permissions you need to export logs, ask your Organization IAM Admin to grant you the Logs Restore Admin (logs-restore-admin) role in the obs-system namespace of the infrastructure cluster, and the DataSource Viewer (datasource-viewer) and DataSource Editor (datasource-editor) roles in the obs-system namespace of the management plane.
Set up Loki
Set KUBECONFIG to the org infra cluster:
export KUBECONFIG=ORG_INFRA_CLUSTER_KUBECONFIG_PATH
Get the access key ID and secret access key of the appliance logs bucket from the PA, and create a secret with the credentials in the obs-system namespace:
kubectl create secret generic -n obs-system APPLIANCE_LOGS_BUCKET_SECRET_NAME \
  --from-literal=access-key-id=APPLIANCE_LOGS_BUCKET_ACCESS_KEY_ID \
  --from-literal=secret-access-key=APPLIANCE_LOGS_BUCKET_SECRET_ACCESS_KEY
Get the endpoint and fully qualified name of the appliance logs bucket from the PA, and create the Loki ConfigMap. Note that the \${S3_ACCESS_KEY_ID} and \${S3_SECRET_ACCESS_KEY} references are escaped so that they reach Loki literally and are expanded at runtime from the container environment (-config.expand-env=true):
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: CONFIGMAP_NAME
  namespace: obs-system
data:
  loki.yaml: |-
    auth_enabled: true
    common:
      ring:
        kvstore:
          store: inmemory
    compactor:
      working_directory: /data/loki/compactor
      compaction_interval: 10m
      retention_enabled: true
      retention_delete_delay: 2h
      retention_delete_worker_count: 150
      delete_request_store: s3
    ingester:
      chunk_target_size: 1572864
      chunk_encoding: snappy
      max_chunk_age: 2h
      chunk_idle_period: 90m
      chunk_retain_period: 30s
      autoforget_unhealthy: true
      lifecycler:
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1
          heartbeat_timeout: 10m
      wal:
        enabled: false
    limits_config:
      discover_service_name: []
      retention_period: 48h
      reject_old_samples: false
      ingestion_rate_mb: 256
      ingestion_burst_size_mb: 256
      max_streams_per_user: 20000
      max_global_streams_per_user: 20000
      max_line_size: 0
      per_stream_rate_limit: 256MB
      per_stream_rate_limit_burst: 256MB
      shard_streams:
        enabled: false
        desired_rate: 3MB
    schema_config:
      configs:
      - from: "2020-10-24"
        index:
          period: 24h
          prefix: index_
        object_store: s3
        schema: v13
        store: tsdb
    server:
      http_listen_port: 3100
      grpc_server_max_recv_msg_size: 104857600
      grpc_server_max_send_msg_size: 104857600
      graceful_shutdown_timeout: 60s
    analytics:
      reporting_enabled: false
    storage_config:
      tsdb_shipper:
        active_index_directory: /tsdb/index
        cache_location: /tsdb/index-cache
        cache_ttl: 24h
      aws:
        endpoint: APPLIANCE_LOGS_BUCKET_ENDPOINT
        bucketnames: APPLIANCE_LOGS_BUCKET_FULLY_QUALIFIED_NAME
        access_key_id: \${S3_ACCESS_KEY_ID}
        secret_access_key: \${S3_SECRET_ACCESS_KEY}
        s3forcepathstyle: true
---
EOF
Create the Loki StatefulSet and Service:
kubectl apply -f - <<EOF
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: STATEFULSET_NAME
  name: STATEFULSET_NAME
  namespace: obs-system
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain
    whenScaled: Retain
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: STATEFULSET_NAME
  serviceName: STATEFULSET_NAME
  template:
    metadata:
      labels:
        app: STATEFULSET_NAME
        istio.io/rev: default
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - preference:
              matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: DoesNotExist
              - key: node-role.kubernetes.io/master
                operator: DoesNotExist
            weight: 1
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - STATEFULSET_NAME
              topologyKey: kubernetes.io/hostname
            weight: 100
      containers:
      - args:
        - -config.file=/etc/loki/loki.yaml
        - -config.expand-env=true
        - -target=all
        env:
        - name: S3_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              key: access-key-id
              name: APPLIANCE_LOGS_BUCKET_SECRET_NAME
              optional: false
        - name: S3_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: secret-access-key
              name: APPLIANCE_LOGS_BUCKET_SECRET_NAME
              optional: false
        image: gcr.io/private-cloud-staging/loki:v3.0.1-gke.1
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: loki-server
            scheme: HTTP
          initialDelaySeconds: 330
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: STATEFULSET_NAME
        ports:
        - containerPort: 3100
          name: loki-server
          protocol: TCP
        - containerPort: 7946
          name: gossip-ring
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: loki-server
            scheme: HTTP
          initialDelaySeconds: 45
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            ephemeral-storage: 2000Mi
            memory: 8000Mi
          requests:
            cpu: 300m
            ephemeral-storage: 2000Mi
            memory: 1000Mi
        securityContext:
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/loki
          name: config
        - mountPath: /data
          name: loki-storage
        - mountPath: /tsdb
          name: loki-tsdb-storage
        - mountPath: /tmp
          name: temp
        - mountPath: /tmp/loki/rules-temp
          name: tmprulepath
        - mountPath: /etc/ssl/certs
          name: trust-bundle
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 10001
        runAsGroup: 10001
        runAsUser: 10001
      terminationGracePeriodSeconds: 4800
      volumes:
      - emptyDir: {}
        name: temp
      - emptyDir: {}
        name: tmprulepath
      - configMap:
          defaultMode: 420
          name: trust-store-root-ext
          optional: true
        name: trust-bundle
      - configMap:
          defaultMode: 420
          name: CONFIGMAP_NAME
        name: config
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      name: loki-storage
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: standard-rwo
      volumeMode: Filesystem
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      name: loki-tsdb-storage
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: standard-rwo
      volumeMode: Filesystem
---
apiVersion: v1
kind: Service
metadata:
  name: STATEFULSET_NAME
  namespace: obs-system
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: loki-server
    port: 3100
    protocol: TCP
    targetPort: loki-server
  selector:
    app: STATEFULSET_NAME
  sessionAffinity: None
  type: ClusterIP
---
EOF
Set up the Grafana DataSource
Set KUBECONFIG to the org Management API:
export KUBECONFIG=ORG_MANAGEMENT_API_KUBECONFIG_PATH
Create DataSources for the infrastructure and platform logs:
kubectl apply -f - <<EOF
---
apiVersion: monitoring.private.gdc.goog/v1alpha1
kind: Datasource
metadata:
  name: INFRA_DATASOURCE_NAME
  namespace: APPLIANCE_PROJECT_NAME-obs-system
spec:
  datasource:
    access: proxy
    isDefault: false
    jsonData:
      httpHeaderName1: X-Scope-OrgID
    name: UI_FRIENDLY_NAME
    orgId: 1
    readOnly: true
    secureJsonData:
      httpHeaderValue1: infra-obs
    type: loki
    uid: INFRA_DATASOURCE_NAME
    url: http://STATEFULSET_NAME.obs-system.svc:3100
    version: 1
    withCredentials: false
---
apiVersion: monitoring.private.gdc.goog/v1alpha1
kind: Datasource
metadata:
  name: PLATFORM_DATASOURCE_NAME
  namespace: APPLIANCE_PROJECT_NAME-obs-system
spec:
  datasource:
    access: proxy
    isDefault: false
    jsonData:
      httpHeaderName1: X-Scope-OrgID
    name: UI_FRIENDLY_NAME
    orgId: 1
    readOnly: true
    secureJsonData:
      httpHeaderValue1: platform-obs
    type: loki
    uid: PLATFORM_DATASOURCE_NAME
    url: http://STATEFULSET_NAME.obs-system.svc:3100
    version: 1
    withCredentials: false
---
EOF
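Each Datasource points Grafana at the Loki Service over HTTP and selects a tenant with the X-Scope-OrgID header (infra-obs or platform-obs). A sketch of the equivalent direct request — only assembled and printed here, since executing it requires access to the cluster network; /loki/api/v1/labels is the standard Loki HTTP API path:

```shell
# Assemble the request Grafana effectively makes through the Datasource:
# the Loki Service URL from the manifests above, plus the tenant header
# taken from httpHeaderValue1 of the infra Datasource.
LOKI_URL="http://STATEFULSET_NAME.obs-system.svc:3100"
TENANT="infra-obs"
echo "curl -s -H 'X-Scope-OrgID: ${TENANT}' ${LOKI_URL}/loki/api/v1/labels"
```

Running the printed curl command from inside the cluster should list the log labels visible to that tenant, which is a quick way to confirm Loki can read the bucket.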
View logs in Grafana in the Google Distributed Cloud air-gapped data center
Logs exported to the Google Distributed Cloud air-gapped data center bucket are viewable in the Grafana instance of the GDC appliance project.