Enabling Linux auditd logs on GKE nodes

This page explains how to enable verbose operating system audit logs on Google Kubernetes Engine nodes running Container-Optimized OS. This page also explains how to configure the logging agent to send logs to Stackdriver.

Operating system audit logging is distinct from Cloud Audit Logs and Kubernetes Audit Logs.


Operating system logs on your nodes provide valuable information about the state of your cluster and workloads, such as error messages, login attempts, and binary executions. You can use this information to debug problems or investigate security incidents.

To collect logs from each node in a cluster, use a DaemonSet, which runs exactly one Pod on each cluster node where it is eligible to be scheduled. The Pod configures the auditd logging daemon on the host and configures the logging agent to send the logs to Stackdriver or another log-ingestion service.

By definition, auditing occurs after an event and is a postmortem security measure. auditd logs alone are probably not sufficient for conducting forensics on your cluster. Consider how to best use auditd logging as part of your overall security strategy.


The logging mechanisms described on this page work only on nodes running Container-Optimized OS.

How the logging DaemonSet works

This section describes how the example logging DaemonSet works so that you can configure it to suit your needs. The next section explains how to deploy the DaemonSet.

The example manifest defines a DaemonSet, a ConfigMap, and a Namespace to contain them.

The DaemonSet deploys a Pod to each node in the cluster. The Pod contains two containers. The first is an init container that starts the cloud-audit-setup systemd service on the host. The second container, fluentd-gcp-cos-auditd, runs the Stackdriver logging agent, an application based on fluentd.
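In outline, the Pod template in the manifest has roughly the following shape. This is a sketch only: the container images and mount path shown here are placeholders, not the values in the actual manifest.

```yaml
spec:
  initContainers:
  - name: cos-auditd-setup          # init container
    image: INIT_IMAGE               # placeholder; starts the cloud-audit-setup systemd service
    securityContext:
      privileged: true              # needs host access to configure auditd
  containers:
  - name: fluentd-gcp-cos-auditd
    image: LOGGING_AGENT_IMAGE      # placeholder; Stackdriver logging agent (fluentd-based)
    volumeMounts:
    - name: config
      mountPath: /etc/google-fluentd/config.d   # placeholder path
  volumes:
  - name: config
    configMap:
      name: fluentd-gcp-config-cos-auditd       # ConfigMap described below
```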

The example logging DaemonSet logs the following events:

  • auditd system configuration modifications
  • AppArmor permission checks
  • execve(), socket(), setsockopt(), and mmap() system calls
  • network connections
  • user logins
  • SSH session and all other TTYs (including kubectl exec -t sessions)
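As a rough illustration, rules for events like these look as follows in auditd's audit.rules syntax. This is a sketch only; the actual rules are installed on the host by the cloud-audit-setup service and may differ.

```
# Watch auditd configuration files for writes and attribute changes
-w /etc/audit/ -p wa -k auditconfig

# Record binary executions (64-bit syscall table)
-a always,exit -F arch=b64 -S execve -k binaries

# Record network-related system calls
-a always,exit -F arch=b64 -S socket,setsockopt -k network
```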

Configuring the logging DaemonSet

You configure the logging DaemonSet using a ConfigMap, fluentd-gcp-config-cos-auditd. The example provided sends audit logs to Stackdriver, but you can configure it to send logs to non-Stackdriver destinations.

The volume of logs produced by auditd can be large. Because auditd logging consumes additional system resources and sends more logs than the default logging configuration, it may incur additional costs. You can configure filters in the ConfigMap to manage the logging volume.
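For example, fluentd's grep filter plugin can drop records before they are sent. A minimal sketch, assuming the audit records are tagged linux-auditd and carry their text in a message field (both are assumptions; match them to the tag and field names in the actual ConfigMap):

```
<filter linux-auditd>
  @type grep
  <exclude>
    key message
    pattern /pattern-to-drop/   # placeholder regex for records to discard
  </exclude>
</filter>
```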

Deploying the logging DaemonSet

  1. You can use an existing cluster or create a new one.

  2. Download the example manifest:

    curl https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-tools/master/os-audit/cos-auditd-logging.yaml > cos-auditd-logging.yaml
  3. Edit the example manifests to suit your needs. Refer to the previous section for details about how the DaemonSet works.

  4. Deploy the logging DaemonSet and ConfigMap:

    kubectl apply -f cos-auditd-logging.yaml
  5. Verify that the logging Pods have started. If you defined a different Namespace in your manifests, replace cos-auditd with the name of the namespace you're using.

    kubectl get pods --namespace=cos-auditd

    If the Pods are running, the output looks like this:

    NAME                                             READY   STATUS    RESTARTS   AGE
    cos-auditd-logging-g5sbq                         1/1     Running   0          27s
    cos-auditd-logging-l5p8m                         1/1     Running   0          27s
    cos-auditd-logging-tgwz6                         1/1     Running   0          27s

    One Pod is deployed on each node in the cluster; in this example, the cluster has three nodes.

  6. You can now access the audit logs in Stackdriver.
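The collected logs can also be queried from the command line. A sketch follows; the log name below is an assumption, so check the Logs Viewer for the exact name your agent uses:

```shell
# Build a Stackdriver Logging filter for node audit logs.
# Assumption: the agent writes them under a log name containing "linux-auditd".
filter='resource.type="gce_instance" AND logName:"linux-auditd"'
echo "${filter}"

# Then, for example:
# gcloud logging read "${filter}" --limit=5 --format=json
```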


Disabling auditd logging

To disable auditd logging, delete the logging DaemonSet and reboot the nodes. Once enabled, the audit configuration is locked and can only be changed by recreating the node.

  1. Delete the DaemonSet, ConfigMap, and their Namespace from the cluster:

    kubectl delete -f cos-auditd-logging.yaml
  2. Reboot your cluster's nodes. First, get the instance group they belong to, replacing CLUSTER_NAME with your cluster's name:

    instance_group=$(gcloud compute instance-groups managed list \
                        --format="value(name)" \
                        --filter="CLUSTER_NAME")

    Then get the instances themselves (for a zonal instance group, also pass --zone):

    instances=$(gcloud compute instance-groups managed list-instances ${instance_group} \
                    --format="value(instance)" | paste -s -d, -)

    Finally, recreate the instances:

    gcloud compute instance-groups managed recreate-instances ${instance_group} \
        --instances=${instances}
