This section describes how to collect operational logs from a service in Google Distributed Cloud (GDC) air-gapped appliance for system logging and data observability.
The Logging platform provides a custom Kubernetes API for collecting the project-level logs that your services generate through system logging targets. You must deploy a LoggingTarget custom resource to your project namespace in the org admin cluster. Based on this resource, the Logging platform starts collecting your system logging data. Access those logs using the user interface (UI) of the system monitoring tool or the GDC Logging API, following the steps in Query and view logs.
For best practices on implementing logging for Kubernetes components, see https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md.
Before you begin
To get the permissions that you need to collect or view operational logs, ask your Project IAM Admin to grant you one of the following roles in your project namespace:
- Logging Target Creator: creates LoggingTarget custom resources. Request the Logging Target Creator (loggingtarget-creator) role.
- Logging Target Editor: edits or modifies LoggingTarget custom resources. Request the Logging Target Editor (loggingtarget-editor) role.
- Logging Target Viewer: views LoggingTarget custom resources. Request the Logging Target Viewer (loggingtarget-viewer) role.
Configure the collection of operational logs from a service
Operational logs record conditions, changes, and actions as you manage ongoing operations in applications and services on GDC. Deploy the LoggingTarget custom resource to the org admin cluster to configure the system logging pipeline for collecting operational logs from specific services at the project level.
Complete the following steps to collect operational logs from a service:
1. Configure the LoggingTarget custom resource, specifying the pods to collect your operational logs from, the project namespace, and any additional settings. For more information, see Configure the LoggingTarget custom resource.
2. Deploy the LoggingTarget custom resource to your project namespace in the org admin cluster, as shown in the sketch after this list. The pipeline then starts collecting logs from the selected components of your project.
3. Query your logs from the system monitoring instance of your project. For more information, see Query and view logs.
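The following minimal sketch shows what such a manifest can look like, assuming a hypothetical project namespace my-project and a workload whose pod names start with my-service; apart from metadata, only the serviceName field is required:

apiVersion: logging.gdc.goog/v1
kind: LoggingTarget
metadata:
  # Must match the project namespace of the workload pods
  namespace: my-project
  name: my-service-logging-target
spec:
  selector:
    # Collect logs only from pods whose names start with this prefix
    matchPodNames:
    - my-service
  # Required: applied as a label to every collected log entry
  serviceName: my-service

You can apply this manifest with a standard Kubernetes client such as kubectl while your context targets the org admin cluster.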
Use the built-in color-coding feature to distinguish the different log levels of your service. For more information about setting log-level values, see https://grafana.com/docs/grafana/latest/explore/logs-integration/.
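For instance, when a workload emits structured JSON log lines and the LoggingTarget specifies parser: json, a hypothetical entry such as the following carries a level field that the monitoring UI can typically use to color-code the line:

{"level": "error", "ts": "2024-01-01T00:00:00Z", "msg": "upstream connection refused"}

The level values themselves (for example debug, info, warn, error) are whatever your workload writes; the parser only extracts them.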
Configure the LoggingTarget custom resource
The LoggingTarget custom resource instructs the system logging pipeline to collect logs from specific services of your project and provide data observability. You must deploy this resource into the namespace from which you want to collect logs.
The following YAML file shows a LoggingTarget configuration example:
# Configures a log scraping job
apiVersion: logging.gdc.goog/v1
kind: LoggingTarget
metadata:
  # Choose a namespace that matches the namespace of the workload pods
  namespace: PROJECT_NAMESPACE
  name: my-service-logging-target
spec:
  # Choose a matching pattern that identifies the pods for this job
  # Optional
  # Relationship between different selectors: 'AND'
  selector:
    # The clusters to collect logs from.
    # The default configuration is to collect logs from all clusters.
    # The relationship between different clusters is an 'OR' relationship.
    # For example, the value '["admin", "system"]' indicates to consider
    # the admin cluster 'OR' the system cluster.
    # Optional
    matchClusters:
    - CLUSTER_NAME
    - CLUSTER_NAME

    # The pod name prefixes to collect logs from.
    # The Observability platform scrapes all pods with names
    # that start with the specified prefixes.
    # The values must contain '[a-z0-9-]' characters only.
    # The relationship between different list elements is an 'OR' relationship.
    # Optional
    matchPodNames:
    - POD_NAME
    - POD_NAME

    # The container name prefixes to collect logs from.
    # The Observability platform scrapes all containers with names
    # that start with the specified prefixes.
    # The values must contain '[a-z0-9-]' characters only.
    # The relationship between different list elements is an 'OR' relationship.
    # Optional
    matchContainerNames:
    - CONTAINER_NAME
    - CONTAINER_NAME

  # Choose the predefined parser for log entries.
  # Use parsers to map the log output to labels and extract fields.
  # Specify the log format.
  # Optional
  # Options: klog_text, klog_json, klogr, gdch_json, json
  parser: klog_text

  # Specify an access level for log entries.
  # The default value is 'ao'.
  # Optional
  # Options: ao, pa, io
  logAccessLevel: ao

  # Specify a service name to be applied as a label
  # For user workloads consider this field as a workload name
  # Required
  serviceName: SERVICE_NAME

  # The additional static fields to apply to log entries.
  # The field is a key-value pair, where the field name is the key and
  # the field value is the value.
  # Optional
  additionalFields:
    app: workload2
    key: value
Replace the following:
- PROJECT_NAMESPACE: the namespace of your project
- CLUSTER_NAME: the name of the cluster
- POD_NAME: the pod name prefix
- CONTAINER_NAME: the container name prefix
- SERVICE_NAME: the name of the service
The parser, logAccessLevel, and additionalFields fields contain example values that you can modify.
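As a concrete illustration, here is a filled-in sketch of the template using hypothetical values, where logs are collected from pods whose names start with checkout- and each entry is parsed as JSON and labeled with two static fields:

apiVersion: logging.gdc.goog/v1
kind: LoggingTarget
metadata:
  namespace: my-project            # hypothetical project namespace
  name: checkout-logging-target
spec:
  selector:
    # Scrape only pods whose names start with this prefix
    matchPodNames:
    - checkout-
  # The workload writes structured JSON log lines
  parser: json
  # Applied as a label to every collected log entry
  serviceName: checkout
  # Static key-value fields attached to every log entry
  additionalFields:
    app: checkout
    tier: backend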
By default, all operational log entries are saved for the tenant ID of the project namespace. To override this behavior, provide a logAccessLevel value in the LoggingTarget custom resource.
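For example, the following sketch changes the access level from the default ao to pa for a hypothetical workload; the available values (ao, pa, and io) are listed in the template above:

apiVersion: logging.gdc.goog/v1
kind: LoggingTarget
metadata:
  namespace: my-project      # hypothetical project namespace
  name: restricted-logging-target
spec:
  serviceName: my-service    # hypothetical workload name
  # Override the default access level ('ao') for these log entries
  logAccessLevel: pa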