Use Audit Logging

This document describes how to use Cloud Audit Logs for GKE on Bare Metal. GKE on Bare Metal uses Kubernetes Audit Logging, which keeps a chronological record of calls made to a cluster's Kubernetes API server. Audit logs are useful for investigating suspicious API requests and for collecting statistics.

About Cloud Audit Logs

Audit logs are written to Cloud Audit Logs in your Google Cloud project. Writing to Cloud Audit Logs has several benefits over writing to disk or capturing logs in an on-premises logging system:

  • Audit logs for all Anthos clusters can be centralized.
  • Log entries written to Cloud Audit Logs are immutable.
  • Cloud Audit Logs entries are retained for 400 days.
  • The Cloud Audit Logs feature is included in the price of Anthos.

You can configure GKE on Bare Metal to write audit logs to disk or to Cloud Audit Logs.

Disk-based audit logging

If Cloud Audit Logs is explicitly disabled, GKE on Bare Metal writes audit logs to a persistent disk so that cluster restarts and upgrades don't cause the logs to be lost. GKE on Bare Metal retains up to 1 GiB of audit log entries.

Access the disk-based audit logs by logging in to the control plane nodes. The logs are located in the /var/log/apiserver/ directory.
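For example, after connecting to a control plane node, you might inspect the logs as follows. The audit.log file name is an assumption (the document only specifies the directory), so list the directory first to confirm what is present:

```shell
# List the audit log files retained on this node (up to 1 GiB total).
sudo ls -lh /var/log/apiserver/

# View the most recent entries. The file name audit.log is an
# assumption; substitute whatever the listing above shows.
sudo tail -n 20 /var/log/apiserver/audit.log
```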

Cloud Audit Logs

Admin Activity audit log entries from all Kubernetes API servers are sent to Google Cloud, using the project and location that you specify when you create a user cluster. To buffer and write log entries to Cloud Audit Logs, GKE on Bare Metal deploys an audit-proxy DaemonSet that runs on the control plane nodes.
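You can confirm that the audit-proxy DaemonSet is running with kubectl. The namespace is not specified in this document, so the sketch below searches all namespaces; CLUSTER_KUBECONFIG stands in for your cluster's kubeconfig path:

```shell
# Look for the audit-proxy DaemonSet across all namespaces; the
# namespace it runs in is an assumption, hence the broad search.
kubectl --kubeconfig CLUSTER_KUBECONFIG get daemonsets \
    --all-namespaces | grep audit-proxy
```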


Cloud Audit Logs for GKE on Bare Metal has the following limitations:

  • Data access logging is not supported.
  • Modifying the Kubernetes audit policy is not supported.
  • Cloud Audit Logs is not resilient to extended network outages. If the log entries cannot be exported to Google Cloud, they are cached in a 10 GiB disk buffer. If that buffer fills, then the oldest entries are dropped.

Creating a service account for Cloud Audit Logs

Before you can use Cloud Logging and Cloud Monitoring with GKE on Bare Metal, complete the following steps:

  1. Create a Cloud Monitoring Workspace within the Google Cloud project, if you don't have one already.

    In the Google Cloud console, click the following button and follow the workflow.

    Go to Monitoring

  2. Click the following buttons to enable the required APIs:

    Enable the Anthos Audit API

    Enable the Stackdriver API

    Enable the Monitoring API

    Enable the Logging API

  3. Assign the following IAM roles to the service account used by the Stackdriver agents:

    • logging.logWriter
    • monitoring.metricWriter
    • stackdriver.resourceMetadata.writer
    • monitoring.dashboardEditor
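Steps 2 and 3 can also be performed from the command line. The API service names below are assumptions mapped from the console buttons listed above, and SERVICE_ACCOUNT_EMAIL is a placeholder for the Stackdriver agents' service account; verify both against your project before running:

```shell
# Enable the required APIs (service names assumed from the console
# buttons in step 2).
gcloud services enable \
    anthosaudit.googleapis.com \
    stackdriver.googleapis.com \
    monitoring.googleapis.com \
    logging.googleapis.com \
    --project=PROJECT_ID

# Grant the IAM roles from step 3 to the agents' service account.
for role in roles/logging.logWriter \
            roles/monitoring.metricWriter \
            roles/stackdriver.resourceMetadata.writer \
            roles/monitoring.dashboardEditor; do
  gcloud projects add-iam-policy-binding PROJECT_ID \
      --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
      --role="$role"
done
```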

Accessing Cloud Audit Logs


  1. In the Google Cloud console, go to the Logs Explorer page in the Logging menu.

    Go to the Logs Explorer

    If the Legacy Logs Viewer page opens, choose Upgrade to the new Logs Explorer from the Upgrade drop-down menu.

  2. Click Query to access the text box for submitting queries.

  3. Fill the text box with the following query:


    Replace PROJECT_ID with your project ID.

  4. Click Run query to display all audit logs from GKE on Bare Metal clusters that were configured to send logs to this project.
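The query for step 3 is not reproduced above. As a minimal sketch, a filter restricted to the Kubernetes cluster resource type (the same resource.type used in the gcloud example later in this document) looks like this; you can narrow it further with the logName and protoPayload.serviceName fields from that example:

```
resource.type="k8s_cluster"
```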


You can also read audit logs from the command line. For example, list the first two log entries in your project's Admin Activity log that apply to the k8s_cluster resource type:

gcloud logging read \
    'logName="projects/PROJECT_ID/logs/"
    AND resource.type="k8s_cluster"
    AND protoPayload.serviceName=""' \
    --limit 2 \
    --freshness 300d

Replace PROJECT_ID with your project ID.

The output shows two log entries. Notice that for each log entry, the logName field begins with projects/PROJECT_ID/logs/ and protoPayload.serviceName matches the service name in the query filter.

Audit policy

The Kubernetes audit policy defines rules for which events are recorded as log entries and specifies what data the log entries should include. Changing this policy to modify Cloud Audit Logs behavior isn't currently supported.
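For reference, a Kubernetes audit policy has the general shape below. This is an illustrative sketch of the upstream audit.k8s.io/v1 Policy format, not the policy that GKE on Bare Metal ships:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Don't record requests to chatty, read-only endpoints.
  - level: None
    nonResourceURLs: ["/healthz*", "/version"]
  # Record metadata (who, what, when) for all other requests.
  - level: Metadata
```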