This page explains how to enable verbose operating system audit logs on Google Kubernetes Engine nodes running Container-Optimized OS. This page also explains how to configure a fluent-bit logging agent to send logs to Cloud Logging. Enabling verbose logs can provide valuable information about the state of your cluster and workloads, such as error messages, login attempts, and binary executions. You can use this information to debug problems or investigate security incidents.
Enabling Linux auditd logging is not supported in GKE Autopilot clusters, because Google manages the nodes and underlying virtual machines (VMs).
This page is for Security specialists who review and analyze security logs. Use this information to understand the requirements and limitations of verbose OS logs and guide your implementation when enabling them on your GKE nodes. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE Enterprise user roles and tasks.
Before reading this page, ensure that you're familiar with Linux operating system audit logs.
Operating system audit logging is distinct from Cloud Audit Logs and Kubernetes Audit Logs.
Overview
To collect logs from each node in a cluster, use a DaemonSet, which runs exactly one Pod on each cluster node where the DaemonSet is eligible to be scheduled. This Pod configures the auditd logging daemon on the host and configures the logging agent to send the logs to Logging or any other log-ingestion service.
By definition, auditing occurs after an event and is a retrospective security measure. auditd logs alone are probably not sufficient for conducting forensics on your cluster. Consider how to best use auditd logging as part of your overall security strategy.
Limitations
The logging mechanisms described on this page work only on nodes running Container-Optimized OS in GKE Standard clusters.
How the logging DaemonSet works
This section describes how the example logging DaemonSet works so that you can configure it to suit your needs. The next section explains how to deploy the DaemonSet.
The example manifest defines a DaemonSet, a ConfigMap, and a Namespace to contain them.
The DaemonSet deploys a Pod to each node in the cluster. The Pod contains two containers. The first is an init container that starts the cloud-audit-setup systemd service available on Container-Optimized OS nodes. The second container, cos-auditd-fluent-bit, contains an instance of fluent-bit, which is configured to collect the Linux audit logs from the node journal and export them to Cloud Logging.
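The two-container pattern described above can be sketched as follows. This is an illustrative outline only, not the full manifest from cos-auditd-logging.yaml; the images, volume mounts, and exact commands are placeholders:

```yaml
# Illustrative sketch of the DaemonSet structure described above.
# The real manifest is in cos-auditd-logging.yaml; images and
# commands here are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cos-auditd-logging
  namespace: cos-auditd
spec:
  selector:
    matchLabels:
      name: cos-auditd-logging
  template:
    metadata:
      labels:
        name: cos-auditd-logging
    spec:
      initContainers:
      - name: cos-auditd-setup              # starts the cloud-audit-setup
        image: INIT_IMAGE                   # placeholder image
        securityContext:
          privileged: true
      containers:
      - name: cos-auditd-fluent-bit         # reads audit logs from the journal
        image: FLUENT_BIT_IMAGE             # placeholder; pin by SHA-256 digest
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```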
The example logging DaemonSet logs the following events:
- auditd system configuration modifications
- AppArmor permission checks
- execve(), socket(), setsockopt(), and mmap() executions
- network connections
- user logins
- SSH sessions and all other TTYs (including kubectl exec -t sessions)
Configuring the logging DaemonSet
You configure the logging DaemonSet using a ConfigMap, cos-auditd-fluent-bit-config. The example provided sends audit logs to Logging, but you can configure it to send logs to other destinations.
The volume of logs produced by auditd can be very large and may incur additional costs because it consumes system resources and sends more logs than the default logging configuration. You can set up filters to manage the logging volume:
- You can set up filters in the cos-auditd-fluent-bit-config ConfigMap so that certain data isn't logged. Refer to the fluent-bit documentation for the Grep, Modify, Record Modifier, and other filters.
- You can also configure Logging to filter incoming logs. For more details, see Configure and manage sinks.
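As an illustrative sketch (not part of the shipped ConfigMap), a fluent-bit Grep filter that drops records matching a pattern could look like the following. The MESSAGE key is an assumption here; the actual key depends on how the input stage parses journal entries:

```
# Hypothetical Grep filter: exclude records whose MESSAGE field
# matches the given regex (e.g., mmap() syscall records).
[FILTER]
    Name     grep
    Match    *
    Exclude  MESSAGE  syscall=mmap
```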
Deploying the logging DaemonSet
You can use an existing cluster or create a new one.
Download the example manifests:
curl https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-tools/master/os-audit/cos-auditd-logging.yaml > cos-auditd-logging.yaml
Edit the example manifests to suit your needs. Refer to the previous section for details about how the DaemonSet works. Note that the fluent-bit image used in this sample manifest is for demonstration purposes only. As a best practice, replace the image with an image from a controlled source, pinned by its SHA-256 digest.

Initialize common variables:
export CLUSTER_NAME=CLUSTER_NAME
export CLUSTER_LOCATION=COMPUTE_REGION
Replace the following:
- CLUSTER_NAME: the name of your cluster.
- COMPUTE_REGION: the Compute Engine region for your cluster. For zonal clusters, use the zone instead.
Deploy the logging Namespace, DaemonSet, and ConfigMap:
envsubst '$CLUSTER_NAME,$CLUSTER_LOCATION' < cos-auditd-logging.yaml \
    | kubectl apply -f -
Verify that the logging Pods have started. If you defined a different Namespace in your manifests, replace cos-auditd with the name of the namespace you're using.
kubectl get pods --namespace=cos-auditd
If the Pods are running, the output looks like this:
NAME                       READY   STATUS    RESTARTS   AGE
cos-auditd-logging-g5sbq   1/1     Running   0          27s
cos-auditd-logging-l5p8m   1/1     Running   0          27s
cos-auditd-logging-tgwz6   1/1     Running   0          27s
One Pod is deployed on each node in the cluster; in this case, the cluster has three nodes.
You can now access the audit logs in Logging. In the Logs Explorer, filter the results using the following query:
LOG_ID("linux-auditd")
resource.labels.cluster_name = "CLUSTER_NAME"
resource.labels.location = "COMPUTE_REGION"
Alternatively, you can use the gcloud CLI (use --limit because the result set can be very large):

gcloud logging read --limit=100 "LOG_ID(\"linux-auditd\") AND resource.labels.cluster_name = \"${CLUSTER_NAME}\" AND resource.labels.location = \"${CLUSTER_LOCATION}\""
Exporting logs
To learn how to route your logs to supported destinations, see Configure and manage sinks.
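For example, a sink that routes these audit logs to a Cloud Storage bucket might look like the following. The sink name and bucket name are placeholders, and this is a sketch rather than a complete routing setup (you must also grant the sink's service account write access to the destination):

```
# Hypothetical example: route linux-auditd logs to a Cloud Storage bucket.
# auditd-sink and BUCKET_NAME are placeholders.
gcloud logging sinks create auditd-sink \
    storage.googleapis.com/BUCKET_NAME \
    --log-filter='LOG_ID("linux-auditd")'
```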
Cleanup
To disable auditd logging, delete the logging DaemonSet and reboot the nodes. The audit configuration is locked once enabled and can only be changed by recreating the node.
Delete the DaemonSet, ConfigMap, and their Namespace from the cluster:
kubectl delete -f cos-auditd-logging.yaml
Reboot your cluster's nodes. First, get the instance group they belong to:
instance_group=$(gcloud compute instance-groups managed list \
    --format="value(name)" \
    --filter=${CLUSTER_NAME})
Then get the instances themselves:
instances=$(gcloud compute instance-groups managed list-instances ${instance_group} \
    --format="csv(instance)[no-heading][terminator=',']")
Finally, recreate the instances:
gcloud compute instance-groups managed recreate-instances ${instance_group} \
    --instances=${instances}
What's next
- Watch Cloud Forensics 101 to get started with cloud forensics.
- Learn about Kubernetes Audit Logging and audit policy.
- Read the Kubernetes Engine Security Overview.
- Learn about Cloud Audit Logs.