Follow the instructions in this section to set up your Central logs server.
Set up storage bucket
Create a new storage bucket to store the appliance logs:
gcloud storage buckets create gs://BUCKET_NAME
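For example, with a hypothetical bucket name and location (substitute your own values):
gcloud storage buckets create gs://example-appliance-logs --location=us-east1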
Enable the IAM API if not already enabled:
gcloud services enable iam.googleapis.com
Create a service account for accessing the bucket:
gcloud iam service-accounts create GSA_NAME --description="DESCRIPTION" --display-name="DISPLAY_NAME"
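For example, with hypothetical names:
gcloud iam service-accounts create central-logs-gsa --description="Accesses the central logs bucket" --display-name="Central logs GSA"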
Transfer data from GDC to a storage bucket
Follow the transfer data procedure to export the appliance logs to the storage bucket you created.
Copy Grafana and Loki Helm Charts from GDC air-gapped appliance to a local computer
On the GDC air-gapped appliance bootstrapper, the Helm charts for Grafana and Loki are located in RELEASE_DIR/appliance/observability. Copy them to the local computer on which you are running this setup:
scp USERNAME@BOOTSTRAPPER:/RELEASE_DIR/appliance/observability/grafana.tgz WORKING_DIR/grafana.tgz
scp USERNAME@BOOTSTRAPPER:/RELEASE_DIR/appliance/observability/loki.tgz WORKING_DIR/loki.tgz
Set up Grafana and Loki in a GKE cluster
Create a new GKE cluster: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create. If you are using an existing cluster for this setup, proceed to the next step.
Install and configure kubectl to interact with the cluster: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl
Enable Workload Identity on the cluster: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#enable-existing-cluster
Follow https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/cloud-storage-fuse-csi-driver#enable to enable the Cloud Storage FUSE CSI driver in your GKE cluster.
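If you are creating a new cluster, the cluster creation, Workload Identity, and Cloud Storage FUSE CSI driver steps can be combined into one command. This is a sketch with hypothetical cluster, location, and project values:
gcloud container clusters create central-logs-cluster --location=us-east1 --workload-pool=PROJECT_ID.svc.id.goog --addons=GcsFuseCsiDriver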
Follow https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/cloud-storage-fuse-csi-driver#authentication to configure access to Cloud Storage buckets using Workload Identity Federation for GKE. Choose roles/storage.objectAdmin when setting up the IAM policy binding in step 5 (an example binding is shown after the next step).
Follow https://cloud.google.com/artifact-registry/docs/repositories/remote-repo to create an Artifact Registry remote repository that acts as a proxy for Docker Hub, the external registry that contains the container images used by the Grafana and Loki Helm charts.
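For reference, the IAM policy binding from step 5 of the authentication guide, using roles/storage.objectAdmin, looks like the following; PROJECT_NUMBER, PROJECT_ID, NAMESPACE, and KSA_NAME are placeholders for your project number, project ID, Kubernetes namespace, and Kubernetes service account name:
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
    --member "principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/PROJECT_ID.svc.id.goog/subject/ns/NAMESPACE/sa/KSA_NAME" \
    --role "roles/storage.objectAdmin"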
Untar the Grafana and Loki Helm charts:
tar -xzf WORKING_DIR/grafana.tgz -C WORKING_DIR
tar -xzf WORKING_DIR/loki.tgz -C WORKING_DIR
Set the Loki helm chart values in WORKING_DIR/loki/values.yaml.in and install the helm chart in the cluster:
helm install LOKI_RELEASE_NAME WORKING_DIR/loki --namespace NAMESPACE
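For example, assuming the illustrative namespace used in the values example that follows:
kubectl create namespace my-loki-namespace
helm install loki WORKING_DIR/loki --namespace my-loki-namespace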
Set the Grafana helm chart values in WORKING_DIR/grafana/values.yaml.in and install the helm chart in the cluster:
helm install GRAFANA_RELEASE_NAME WORKING_DIR/grafana --namespace NAMESPACE
For example:
app:
  # name is the name that will be used for creating kubernetes resources
  # like deployment, service, etc. associated with this grafana app.
  name: grafana
  # artifactRegistryRepository is the full name of the artifact registry remote
  # repository that proxies dockerhub.
  artifactRegistryRepository: us-east1-docker.pkg.dev/my-gcp-project/dockerhub
loki:
  # serviceName is the name of the kubernetes service that exposes
  # the loki server.
  serviceName: loki
  # serviceNamespace is the namespace in which the loki service is present.
  serviceNamespace: my-loki-namespace
  # tenantID is the tenant ID of the logs.
  tenantID: infra-obs
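After both installs, you can optionally confirm that the releases and their pods are healthy:
helm list --namespace NAMESPACE
kubectl get pods --namespace NAMESPACE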
Access the Grafana UI
Access the Grafana UI by setting up port forwarding between your computer and the Grafana service from the Kubernetes cluster:
kubectl port-forward service/GRAFANA_APP_NAME -n NAMESPACE 3000:3000
After running the preceding command, you can access the Grafana UI at http://localhost:3000. If you need to expose the Grafana UI to multiple users in your organization, consider setting up GKE Ingress: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
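As a quick check, you can query Grafana's health endpoint through the port-forward before exposing the UI more broadly:
curl http://localhost:3000/api/health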
Export logs to external storage
By default, the Loki instance running in a cluster aggregates and stores all logs. However, you can configure an additional Fluent Bit output to export logs to other destinations apart from that Loki instance on the root admin cluster.
This section contains the steps to configure an additional sink to route and export logs to external storage. It provides instructions for the following log types according to your use case:
For a complete list of the supported Fluent Bit destinations, see https://docs.fluentbit.io/manual/pipeline/outputs.
Export operational logs
Work through the following steps to export operational logs to external storage:
Create a ConfigMap object in the obs-system namespace with the logmon: system_logs label. Add the additional output configuration in the output.conf file of the data section. It must have the same syntax as the Fluent Bit output plugins.
When creating the ConfigMap object, make sure to meet the following requirements:
- Keep the name you assign to the ConfigMap object because it must match a value specified in a future step.
- Add the customized Fluent Bit output plugin configurations in the Output block section of the object.
The following YAML file shows a template of the ConfigMap object to illustrate the previous requirements:
apiVersion: v1
kind: ConfigMap
metadata:
  # The name should match the configmap name specified in step 3.
  name: <descriptive configmap name>
  namespace: obs-system
  labels:
    # This label is required and must be system_logs for system logs.
    logmon: system_logs
data:
  # The file name must be output.conf
  output.conf: |
    # ===== Output block =====
    ### Add customized fluent-bit output plugin configurations here
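For illustration, the following sketch creates such a ConfigMap that sends system logs to a hypothetical external HTTP endpoint using Fluent Bit's http output plugin; the ConfigMap name, host, and port are example values, not part of this guide:
kubectl --kubeconfig=ROOT_ADMIN_CLUSTER_KUBECONFIG apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  # Hypothetical name; reuse it in step 3.
  name: system-logs-http-output
  namespace: obs-system
  labels:
    logmon: system_logs
data:
  output.conf: |
    [OUTPUT]
        Name   http
        Match  *
        Host   logs.example.com
        Port   443
        tls    On
EOF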
Open your ObservabilityPipeline custom resource in a command-line editor:
kubectl --kubeconfig=ROOT_ADMIN_CLUSTER_KUBECONFIG -n obs-system edit observabilitypipelines.observability.gdc.goog default
Replace ROOT_ADMIN_CLUSTER_KUBECONFIG with the path of the kubeconfig file for the root admin cluster.
In the ObservabilityPipeline custom resource, add the fluentbitConfigMaps array to the additionalSink field in the logging section of the spec. The entry in the fluentbitConfigMaps array must match the name you previously assigned to the ConfigMap object in step 1.
apiVersion: observability.gdc.goog/v1alpha1
kind: ObservabilityPipeline
metadata:
  # Don't change anything in this section.
  ...
spec:
  logging:
    # This section is for system logs and only needs to be edited if system logs have a custom output.
    additionalSink:
      fluentbitConfigMaps:
        # The name should match the configmap name created in step 1.
        - "<system logs output configmap name>"
      # Scheme: []v1.VolumeMount. Add volumeMounts if necessary.
      volumeMounts:
        - ...
        - ...
      # Scheme: []v1.Volume. Add volumes if necessary.
      volumes:
        - ...
        - ...
To apply the changes to the ObservabilityPipeline custom resource, save and exit your command-line editor.
Completing these steps starts a rollout of your changes and restarts the anthos-log-forwarder DaemonSet.
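You can optionally watch the restart complete with a rollout status check on the DaemonSet named above:
kubectl --kubeconfig=ROOT_ADMIN_CLUSTER_KUBECONFIG -n obs-system rollout status daemonset/anthos-log-forwarder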
Export audit logs
Work through the following steps to export audit logs to external storage:
Create a ConfigMap object in the obs-system namespace with the logmon: audit_logs label. Add the additional output configuration in the output.conf file of the data section. It must have the same syntax as the Fluent Bit output plugins.
When creating the ConfigMap object, make sure to meet the following requirements:
- Keep the name you assign to the ConfigMap object because it must match a value specified in a future step.
- Add the customized Fluent Bit output plugin configurations in the Output block section of the object.
The following YAML file shows a template of the ConfigMap object to illustrate the previous requirements:
apiVersion: v1
kind: ConfigMap
metadata:
  # The name should match the configmap name specified in step 3.
  name: <descriptive configmap name>
  namespace: obs-system
  labels:
    # This label is required and must be audit_logs for audit logs.
    logmon: audit_logs
data:
  # The file name must be output.conf
  output.conf: |
    # ===== Output block =====
    ### Add a customized fluent-bit output plugin configuration here
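For illustration, the following sketch creates such a ConfigMap that forwards audit logs to a hypothetical external log aggregator using Fluent Bit's forward output plugin; the ConfigMap name, host, and port are example values, not part of this guide:
kubectl --kubeconfig=ORG_ADMIN_CLUSTER_KUBECONFIG apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  # Hypothetical name; reuse it in step 3.
  name: audit-logs-forward-output
  namespace: obs-system
  labels:
    logmon: audit_logs
data:
  output.conf: |
    [OUTPUT]
        Name   forward
        Match  *
        Host   aggregator.example.com
        Port   24224
EOF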
Open your ObservabilityPipeline custom resource in a command-line editor:
kubectl --kubeconfig=ORG_ADMIN_CLUSTER_KUBECONFIG -n obs-system edit observabilitypipelines.observability.gdc.goog default
Replace ORG_ADMIN_CLUSTER_KUBECONFIG with the path of the kubeconfig file for the admin cluster.
In the ObservabilityPipeline custom resource, add the fluentbitConfigMaps array to the additionalSink field in the auditLogging section of the spec. The entry in the fluentbitConfigMaps array must match the name you previously assigned to the ConfigMap object in step 1.
apiVersion: observability.gdc.goog/v1alpha1
kind: ObservabilityPipeline
metadata:
  # Don't change anything in this section.
  ...
spec:
  auditLogging:
    # This section is for audit logs and only needs to be edited if audit logs have a custom output.
    additionalSink:
      fluentbitConfigMaps:
        # The name should match the configmap name created in step 1.
        - "<audit logs output configmap name>"
      # Scheme: []v1.VolumeMount. Add volumeMounts if necessary.
      volumeMounts:
        - ...
        - ...
      # Scheme: []v1.Volume. Add volumes if necessary.
      volumes:
        - ...
        - ...
To apply the changes to the ObservabilityPipeline custom resource, save and exit your command-line editor.
Completing these steps starts a rollout of your changes and restarts the anthos-log-forwarder DaemonSet.
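As with operational logs, you can optionally watch the restart complete:
kubectl --kubeconfig=ORG_ADMIN_CLUSTER_KUBECONFIG -n obs-system rollout status daemonset/anthos-log-forwarder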