This page explains how to use audit logging in your Kubernetes Engine clusters.
All Kubernetes-powered clusters have Kubernetes Audit Logging, which keeps a chronological record of all calls that have been made to the Kubernetes API server in that cluster. These logs are useful for investigating suspicious API requests, for collecting statistics, or for creating monitoring alerts for unwanted API calls.
Kubernetes Engine clusters integrate Kubernetes Audit Logging with Cloud Audit Logging on Google Cloud Platform, which lets you access the audit log information through Stackdriver Logging. The most important audit logs are exported automatically at no charge. If you need to audit all requests to your cluster's Kubernetes API server, you must enable Data Access logs for Kubernetes Engine in Cloud Audit Logging.
Before you begin
- Ensure you have an active cluster.
- Read about Kubernetes Audit Logging.
Accessing Audit Logs
You can locate and view your cluster's audit logs by using the Google Cloud Platform Console or by using the gcloud command-line tool.
To view audit logs for your cluster, perform the following steps:
- In the GCP Console, navigate to Logging.
- Select the Logs view.
- From the drop-down menu, select Kubernetes Cluster and your cluster name.
You can view audit log records about specific resources by using the gcloud logging read command. For example, to view audit log records about Deployment resources created on your cluster, use the following command:
gcloud logging read 'resource.type="k8s_cluster" AND protoPayload.methodName:"deployments.create"'
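When querying from a script, the same gcloud logging read call can be combined with common flags such as --limit and --format. A minimal sketch (the project ID is a placeholder, and the command is printed rather than executed because it requires an authenticated gcloud session):

```shell
# Filter for Deployment-creation audit records, as shown above.
FILTER='resource.type="k8s_cluster" AND protoPayload.methodName:"deployments.create"'

# Print the command instead of running it; substitute a real project ID
# and run it in an authenticated gcloud environment.
echo "gcloud logging read '${FILTER}' --project my-project --limit 10 --format json"
```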
Stackdriver log entries are associated with monitored resource and service types. For Kubernetes Engine audit logs, the resource type is Kubernetes Cluster (resource.type="k8s_cluster"). Audit logs are structured objects consisting of an operation subobject and a protoPayload subobject. The operation specifies the request that the audit log event belongs to, and the protoPayload contains the audit log entry itself, including the request and response objects (if applicable).
Once you've accessed your audit logs, you can use Stackdriver's filtering syntax to find the log entries you need. Some useful filters include:
To display all requests with failed authorization:
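The filter expression itself is not shown on this page. One way to express it, assuming the protoPayload.authorizationInfo.granted field of Cloud Audit Logging entries (this sketch treats the field as false or unset for denied requests, which may not match every entry):

```shell
# Hypothetical filter for denied requests. authorizationInfo.granted is
# part of the Cloud Audit Logging schema, but whether a denial leaves it
# false or unset can vary, so treat this as a starting point.
FILTER='resource.type="k8s_cluster" protoPayload.authorizationInfo.granted!=true'
echo "gcloud logging read '${FILTER}'"
```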
To display all write requests to a Secret resource:
resource.type="k8s_cluster" protoPayload.methodName:"io.k8s.core.v1.secrets" NOT protoPayload.methodName:"get" NOT protoPayload.methodName:"list" NOT protoPayload.methodName:"watch"
To display all requests to a Pod resource from a particular user:
resource.type="k8s_cluster" protoPayload.methodName:"io.k8s.core.v1.pods" protoPayload.authenticationInfo.principalEmail="email@example.com"
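Filters like the ones above can be narrowed further; Stackdriver's filter syntax also supports timestamp comparisons, so a sketch restricting the Pod query to a time window might look like this (the date and email are placeholders, and adjacent filter lines are implicitly ANDed):

```shell
# Pod requests from one user, limited to entries after a given time.
FILTER='resource.type="k8s_cluster"
  protoPayload.methodName:"io.k8s.core.v1.pods"
  protoPayload.authenticationInfo.principalEmail="email@example.com"
  timestamp>="2018-01-01T00:00:00Z"'

# Printed rather than run; requires an authenticated gcloud session.
echo "gcloud logging read '${FILTER}' --limit 20"
```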
Other Stackdriver Features
You can use any of Stackdriver's logging features to perform common and useful tasks with your Kubernetes Engine cluster's audit logs. These might include:
- Setting up a log metric for a given filter
- Creating a Stackdriver alert policy based on a log metric
- Exporting your cluster's audit logs to a service such as Cloud Storage, BigQuery, or Cloud Pub/Sub
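As one example of the first task, a logs-based metric that counts Secret writes could be created with gcloud logging metrics create, reusing the Secret-write filter shown earlier. The metric name and description here are placeholders:

```shell
# Filter for write requests to Secret resources (from the example above).
FILTER='resource.type="k8s_cluster" protoPayload.methodName:"io.k8s.core.v1.secrets" NOT protoPayload.methodName:"get" NOT protoPayload.methodName:"list" NOT protoPayload.methodName:"watch"'

# Printed rather than run; requires an authenticated gcloud session.
echo "gcloud logging metrics create secret-writes --description 'Writes to Secret resources' --log-filter '${FILTER}'"
```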
Kubernetes Audit Policy
The Kubernetes audit policy determines which audit log data is exported from the Kubernetes API server.
For read requests such as get, list, and watch, only the request object is saved in the audit logs; the response object is not. For requests involving sensitive data, such as Secret and ConfigMap resources, only the metadata is exported. For all other requests, both request and response objects are saved in the audit logs.
Kubernetes omits certain high-volume calls made by system services. For complete information on which calls the Kubernetes audit policy includes or omits from the logs, see the latest audit policy configuration in the open-source Kubernetes repository.
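The policy referenced above is a Kubernetes audit.k8s.io/v1 Policy object. The rules below are an illustrative sketch of the behavior described in this section, not the actual policy used by Kubernetes Engine: metadata-only logging for Secrets and ConfigMaps, request-only logging for reads, and full request/response logging for everything else.

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Sensitive resources: export metadata only.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # Read requests: save the request object but not the response.
  - level: Request
    verbs: ["get", "list", "watch"]
  # All other requests: save both request and response objects.
  - level: RequestResponse
```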