This page describes how to set up a central log server for a Google Distributed Cloud (GDC) air-gapped appliance through a Google Distributed Cloud air-gapped data center organization.
To create a central logging location, the GDC appliance must have the following components in the GDC data center organization:
- a unique project
- a bucket for audit logs
- a bucket for operational logs
Create a project
The following steps must be performed in the GDC data center organization where the logs will be exported.
Set `KUBECONFIG` to the Org Management API:

```shell
export KUBECONFIG=ORG_MANAGEMENT_API_KUBECONFIG_PATH
```
To get the permissions you need to export logs, ask your Organization IAM Admin to grant you the Project Creator role (`project-creator` ClusterRole). For more information about this role, see Set up IAM permissions.

Apply the project custom resource to create a unique project for the GDC appliance that will export logs:

```shell
kubectl apply -f - <<EOF
apiVersion: resourcemanager.gdc.goog/v1
kind: Project
metadata:
  namespace: platform
  name: APPLIANCE_PROJECT_NAME
  labels:
    object.gdc.goog/tenant-category: user
EOF
```
Verify that the new project for the GDC appliance is available:

```shell
kubectl get namespace APPLIANCE_PROJECT_NAME
```
Link your new project to a billing account. To track the project's resource costs, you must have an associated billing account linked to the project.
To get the permissions you need to export logs, ask your Organization IAM Admin to grant you the Project IAM Admin role (`project-iam-admin`) in the `APPLIANCE_PROJECT_NAME` namespace.
Create buckets
The following steps must be performed by the Platform Administrator (PA) in the GDC data center organization where the logs will be exported.
Set `KUBECONFIG` to the Org Management API:

```shell
export KUBECONFIG=ORG_MANAGEMENT_API_KUBECONFIG_PATH
```
To get the permissions you need to export logs, ask your Organization IAM Admin to grant you the Project Bucket Admin role (`project-bucket-admin`) in the `APPLIANCE_PROJECT_NAME` namespace.

Apply the bucket custom resource to create a bucket:

```yaml
apiVersion: object.gdc.goog/v1
kind: Bucket
metadata:
  name: BUCKET_NAME
  namespace: APPLIANCE_PROJECT_NAME
  labels:
    object.gdc.goog/bucket-type: normal
    object.gdc.goog/encryption-version: v2
    object.gdc.goog/tenant-category: user
spec:
  description: Bucket for storing appliance xyz audit logs
  location: zone1
  storageClass: Standard
```
After the bucket is created, run the following command to confirm it exists and inspect its details:

```shell
kubectl describe buckets BUCKET_NAME -n APPLIANCE_PROJECT_NAME
```
Create a `ProjectServiceAccount` to access objects in the bucket:

```shell
kubectl apply -f - <<EOF
---
apiVersion: resourcemanager.gdc.goog/v1
kind: ProjectServiceAccount
metadata:
  name: BUCKET_NAME-read-write-sa
  namespace: APPLIANCE_PROJECT_NAME
spec: {}
EOF
```
Verify that the `ProjectServiceAccount` has been propagated:

```shell
kubectl get projectserviceaccount BUCKET_NAME-read-write-sa -n APPLIANCE_PROJECT_NAME -o json | jq '.status'
```
Make sure `read` and `write` permissions for the `ServiceAccount` are added to the bucket:

```shell
kubectl apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: BUCKET_NAME-read-write-role
  namespace: APPLIANCE_PROJECT_NAME
rules:
- apiGroups:
  - object.gdc.goog
  resourceNames:
  - BUCKET_NAME
  resources:
  - buckets
  verbs:
  - read-object
  - write-object
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: BUCKET_NAME-read-write-rolebinding
  namespace: APPLIANCE_PROJECT_NAME
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: BUCKET_NAME-read-write-role
subjects:
- kind: ServiceAccount
  name: BUCKET_NAME-read-write-sa
  namespace: APPLIANCE_PROJECT_NAME
EOF
```
Get the secret that contains the access credentials for the bucket:

```shell
kubectl get secret -n APPLIANCE_PROJECT_NAME -o json | jq --arg jq_src BUCKET_NAME-read-write-sa '.items[].metadata|select(.annotations."object.gdc.goog/subject"==$jq_src)|.name'
```

The output should look like the following example, showing the bucket's secret name:

```
"object-storage-key-sysstd-sa-olxv4dnwrwul4bshu37ikebgovrnvl773owaw3arx225rfi56swa"
```
Export the secret name from the previous output to a variable:

```shell
export BUCKET_RW_SECRET_NAME=BUCKET_RW_SECRET_NAME
```
Get the key ID for bucket access:

```shell
kubectl get secret $BUCKET_RW_SECRET_NAME -n APPLIANCE_PROJECT_NAME -o json | jq -r '.data."access-key-id"' | base64 -di
```

The output should look like the following example:

```
PCEW2HU47Y8ACUWQO4SK
```
Get the secret access key for the bucket:

```shell
kubectl get secret $BUCKET_RW_SECRET_NAME -n APPLIANCE_PROJECT_NAME -o json | jq -r '.data."secret-access-key"' | base64 -di
```

The output should look like the following example:

```
TzGdAbgp4h2i5UeiYa9k09rNPFQ2tkYADs67+65E
```
Get the bucket endpoint:

```shell
kubectl get bucket BUCKET_NAME -n APPLIANCE_PROJECT_NAME -o json | jq '.status.endpoint'
```

The output should look like the following example:

```
https://objectstorage.org-1.zone1.google.gdch.test
```
Get the fully qualified bucket name:

```shell
kubectl get bucket BUCKET_NAME -n APPLIANCE_PROJECT_NAME -o json | jq '.status.fullyQualifiedName'
```

The output should look like the following example:

```
aaaoa9a-logs-bucket
```
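With the endpoint, fully qualified name, key ID, and secret access key gathered, you can optionally sanity-check bucket access from a workstation before wiring up the log transfer. A minimal sketch, assuming the `aws` CLI (or any S3-compatible client) is available; the values below are the example outputs from the steps above and must be replaced with your own:

```shell
#!/bin/sh
# Example values from the outputs above; replace with your own.
BUCKET_ENDPOINT="https://objectstorage.org-1.zone1.google.gdch.test"
BUCKET_FQN="aaaoa9a-logs-bucket"
export AWS_ACCESS_KEY_ID="PCEW2HU47Y8ACUWQO4SK"
export AWS_SECRET_ACCESS_KEY="TzGdAbgp4h2i5UeiYa9k09rNPFQ2tkYADs67+65E"

# List the bucket through the S3-compatible endpoint. The aws CLI is
# only one option and may not be installed, so the check is guarded.
if command -v aws >/dev/null 2>&1; then
  aws s3 ls "s3://${BUCKET_FQN}" --endpoint-url "${BUCKET_ENDPOINT}" \
    || echo "bucket not reachable from this machine"
else
  echo "aws CLI not found; skipping check for s3://${BUCKET_FQN}"
fi
```

A successful listing confirms the credentials and endpoint are usable by any S3-compatible log shipper.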
Transfer data from GDC

Follow Export logs to a remote bucket to transfer logs from the GDC appliance to the bucket created earlier in the GDC air-gapped data center, using the bucket endpoint, fully qualified name, access key ID, and secret access key.
Set up Loki and Grafana in the Google Distributed Cloud air-gapped data center

The following steps must be performed by the Infrastructure Operator (IO) in the GDC air-gapped data center organization where the logs have been exported.
Get IAM roles

To get the permissions you need to export logs, ask your Organization IAM Admin to grant you the Logs Restore Admin role (`logs-restore-admin`) in the `obs-system` namespace on the infrastructure cluster, and the Datasource Viewer (`datasource-viewer`) and Datasource Editor (`datasource-editor`) roles in the `obs-system` namespace on the management plane.
Set up Loki

Set `KUBECONFIG` to the org infra cluster:

```shell
export KUBECONFIG=ORG_INFRA_CLUSTER_KUBECONFIG_PATH
```
Get the access key ID and secret access key for the appliance logs bucket from the PA, and create a secret containing the credentials in the `obs-system` namespace:

```shell
kubectl create secret generic -n obs-system APPLIANCE_LOGS_BUCKET_SECRET_NAME \
  --from-literal=access-key-id=APPLIANCE_LOGS_BUCKET_ACCESS_KEY_ID \
  --from-literal=secret-access-key=APPLIANCE_LOGS_BUCKET_SECRET_ACCESS_KEY
```
Get the endpoint and fully qualified name of the appliance logs bucket from the PA, then create the Loki `configmap`. Note that `\${S3_ACCESS_KEY_ID}` and `\${S3_SECRET_ACCESS_KEY}` are escaped so they reach Loki literally and are expanded by its `-config.expand-env` flag, not by your shell:

```shell
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: CONFIGMAP_NAME
  namespace: obs-system
data:
  loki.yaml: |-
    auth_enabled: true
    common:
      ring:
        kvstore:
          store: inmemory
    compactor:
      working_directory: /data/loki/compactor
      compaction_interval: 10m
      retention_enabled: true
      retention_delete_delay: 2h
      retention_delete_worker_count: 150
      delete_request_store: s3
    ingester:
      chunk_target_size: 1572864
      chunk_encoding: snappy
      max_chunk_age: 2h
      chunk_idle_period: 90m
      chunk_retain_period: 30s
      autoforget_unhealthy: true
      lifecycler:
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1
        heartbeat_timeout: 10m
      wal:
        enabled: false
    limits_config:
      discover_service_name: []
      retention_period: 48h
      reject_old_samples: false
      ingestion_rate_mb: 256
      ingestion_burst_size_mb: 256
      max_streams_per_user: 20000
      max_global_streams_per_user: 20000
      max_line_size: 0
      per_stream_rate_limit: 256MB
      per_stream_rate_limit_burst: 256MB
      shard_streams:
        enabled: false
        desired_rate: 3MB
    schema_config:
      configs:
      - from: "2020-10-24"
        index:
          period: 24h
          prefix: index_
        object_store: s3
        schema: v13
        store: tsdb
    server:
      http_listen_port: 3100
      grpc_server_max_recv_msg_size: 104857600
      grpc_server_max_send_msg_size: 104857600
      graceful_shutdown_timeout: 60s
    analytics:
      reporting_enabled: false
    storage_config:
      tsdb_shipper:
        active_index_directory: /tsdb/index
        cache_location: /tsdb/index-cache
        cache_ttl: 24h
      aws:
        endpoint: APPLIANCE_LOGS_BUCKET_ENDPOINT
        bucketnames: APPLIANCE_LOGS_BUCKET_FULLY_QUALIFIED_NAME
        access_key_id: \${S3_ACCESS_KEY_ID}
        secret_access_key: \${S3_SECRET_ACCESS_KEY}
        s3forcepathstyle: true
---
EOF
```
Create the Loki `statefulset` and service. The `secretKeyRef` keys must match the keys of the secret created earlier (`access-key-id` and `secret-access-key`):

```shell
kubectl apply -f - <<EOF
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: STATEFULSET_NAME
  name: STATEFULSET_NAME
  namespace: obs-system
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain
    whenScaled: Retain
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: STATEFULSET_NAME
  serviceName: STATEFULSET_NAME
  template:
    metadata:
      labels:
        app: STATEFULSET_NAME
        istio.io/rev: default
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - preference:
              matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: DoesNotExist
              - key: node-role.kubernetes.io/master
                operator: DoesNotExist
            weight: 1
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - STATEFULSET_NAME
              topologyKey: kubernetes.io/hostname
            weight: 100
      containers:
      - args:
        - -config.file=/etc/loki/loki.yaml
        - -config.expand-env=true
        - -target=all
        env:
        - name: S3_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              key: access-key-id
              name: APPLIANCE_LOGS_BUCKET_SECRET_NAME
              optional: false
        - name: S3_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: secret-access-key
              name: APPLIANCE_LOGS_BUCKET_SECRET_NAME
              optional: false
        image: gcr.io/private-cloud-staging/loki:v3.0.1-gke.1
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: loki-server
            scheme: HTTP
          initialDelaySeconds: 330
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: STATEFULSET_NAME
        ports:
        - containerPort: 3100
          name: loki-server
          protocol: TCP
        - containerPort: 7946
          name: gossip-ring
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: loki-server
            scheme: HTTP
          initialDelaySeconds: 45
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            ephemeral-storage: 2000Mi
            memory: 8000Mi
          requests:
            cpu: 300m
            ephemeral-storage: 2000Mi
            memory: 1000Mi
        securityContext:
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/loki
          name: config
        - mountPath: /data
          name: loki-storage
        - mountPath: /tsdb
          name: loki-tsdb-storage
        - mountPath: /tmp
          name: temp
        - mountPath: /tmp/loki/rules-temp
          name: tmprulepath
        - mountPath: /etc/ssl/certs
          name: trust-bundle
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 10001
        runAsGroup: 10001
        runAsUser: 10001
      terminationGracePeriodSeconds: 4800
      volumes:
      - emptyDir: {}
        name: temp
      - emptyDir: {}
        name: tmprulepath
      - configMap:
          defaultMode: 420
          name: trust-store-root-ext
          optional: true
        name: trust-bundle
      - configMap:
          defaultMode: 420
          name: CONFIGMAP_NAME
        name: config
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      name: loki-storage
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: standard-rwo
      volumeMode: Filesystem
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      name: loki-tsdb-storage
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: standard-rwo
      volumeMode: Filesystem
---
apiVersion: v1
kind: Service
metadata:
  name: STATEFULSET_NAME
  namespace: obs-system
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: loki-server
    port: 3100
    protocol: TCP
    targetPort: loki-server
  selector:
    app: STATEFULSET_NAME
  sessionAffinity: None
  type: ClusterIP
---
EOF
```
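Before creating the Grafana datasources, it can help to confirm that Loki is actually rolled out and serving. A minimal check, assuming `kubectl` access to the org infra cluster; `STATEFULSET_NAME` is the placeholder used above and the `/ready` path and port 3100 come from the probes and Service defined in that manifest:

```shell
#!/bin/sh
LOKI_SVC="STATEFULSET_NAME"   # replace with your StatefulSet/Service name

# Guarded so the sketch is a no-op on a machine without cluster access.
if command -v kubectl >/dev/null 2>&1; then
  # Wait for the single replica to roll out.
  kubectl rollout status "statefulset/${LOKI_SVC}" -n obs-system --timeout=300s || true

  # Probe Loki's readiness endpoint through a temporary port-forward.
  kubectl port-forward "svc/${LOKI_SVC}" 3100:3100 -n obs-system &
  PF_PID=$!
  sleep 2
  curl -s "http://localhost:3100/ready" || true
  kill "$PF_PID" 2>/dev/null
else
  echo "kubectl not found; run this from a machine with access to the org infra cluster"
fi
```

A ready Loki responds to `/ready` with `ready`; if the probe fails, check the pod logs in `obs-system` before proceeding.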
Set up the Grafana DataSource

Set `KUBECONFIG` to the Org Management API:

```shell
export KUBECONFIG=ORG_MANAGEMENT_API_KUBECONFIG_PATH
```
Create `DataSources` for the Infrastructure and Platform logs:

```shell
kubectl apply -f - <<EOF
---
apiVersion: monitoring.private.gdc.goog/v1alpha1
kind: Datasource
metadata:
  name: INFRA_DATASOURCE_NAME
  namespace: APPLIANCE_PROJECT_NAME-obs-system
spec:
  datasource:
    access: proxy
    isDefault: false
    jsonData:
      httpHeaderName1: X-Scope-OrgID
    name: UI_FRIENDLY_NAME
    orgId: 1
    readOnly: true
    secureJsonData:
      httpHeaderValue1: infra-obs
    type: loki
    uid: INFRA_DATASOURCE_NAME
    url: http://STATEFULSET_NAME.obs-system.svc:3100
    version: 1
    withCredentials: false
---
apiVersion: monitoring.private.gdc.goog/v1alpha1
kind: Datasource
metadata:
  name: PLATFORM_DATASOURCE_NAME
  namespace: APPLIANCE_PROJECT_NAME-obs-system
spec:
  datasource:
    access: proxy
    isDefault: false
    jsonData:
      httpHeaderName1: X-Scope-OrgID
    name: UI_FRIENDLY_NAME
    orgId: 1
    readOnly: true
    secureJsonData:
      httpHeaderValue1: platform-obs
    type: loki
    uid: PLATFORM_DATASOURCE_NAME
    url: http://STATEFULSET_NAME.obs-system.svc:3100
    version: 1
    withCredentials: false
---
EOF
```
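Once applied, you can confirm that both `Datasource` resources exist in the appliance project's observability namespace before looking for them in Grafana. A quick check, assuming the same `KUBECONFIG` as the step above and that the Datasource CRD's resource name is queryable as `datasource`:

```shell
#!/bin/sh
NS="APPLIANCE_PROJECT_NAME-obs-system"   # replace with your project namespace

# Guarded so the sketch is a no-op on a machine without cluster access.
if command -v kubectl >/dev/null 2>&1; then
  # Both INFRA_DATASOURCE_NAME and PLATFORM_DATASOURCE_NAME should be listed.
  kubectl get datasource -n "$NS" || true
else
  echo "kubectl not found; expected namespace: $NS"
fi
```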
View logs in the Google Distributed Cloud air-gapped data center Grafana

Logs exported to the Google Distributed Cloud air-gapped data center bucket can be viewed in the GDC appliance project's Grafana instance.