This page provides an overview of how to find and use your Google Kubernetes Engine (GKE) logs in Cloud Logging.
Accessing your logs
There are several different ways to access your GKE logs in Logging:
Logs Explorer – You can see your logs directly from the Logs Explorer by using the logging filters to select the Kubernetes resources, such as cluster, node, namespace, pod, or container logs. Here are sample Kubernetes-related queries to help get you started.
GKE console – In the Google Kubernetes Engine section of Google Cloud Console, select the Kubernetes resources listed in Workloads, and then the Container or Audit Logs links.
Cloud Monitoring console – If you have enabled a Cloud Monitoring Workspace, in the Kubernetes Engine section of the Cloud Monitoring console, select your cluster, nodes, pod, or containers to view your logs.
gcloud command-line tool – Using the gcloud logging read command, select the appropriate cluster, node, pod, and container logs.
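As an illustration, a gcloud logging read invocation might look like the following sketch; the project ID and cluster name shown are placeholders, not values from this page.

```shell
# Read recent GKE container log entries from one cluster (sketch).
# "my-project" and "my-cluster" are placeholder names; substitute your own.
gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.cluster_name="my-cluster"' \
  --project=my-project \
  --limit=10
```

Narrow the filter further (for example, by namespace or pod label) to reduce the number of entries returned.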
For custom log aggregation, log analytics, or integration with third-party systems, you can also use the logging sinks feature to export logs to BigQuery, Cloud Storage, and Pub/Sub.
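For example, a sink that routes GKE container logs to BigQuery could be created with a command along these lines; the sink name, project ID, and dataset name are hypothetical placeholders.

```shell
# Create a log sink (sketch) that exports GKE container logs to a
# BigQuery dataset. "my-project" and "gke_logs" are placeholder names.
gcloud logging sinks create gke-container-sink \
  bigquery.googleapis.com/projects/my-project/datasets/gke_logs \
  --log-filter='resource.type="k8s_container"'
```

After creating a sink, you must grant its service account write access to the destination before entries are exported.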
Understanding your logs
These are the resource types that are specific to GKE clusters:
| Resource type | Display name |
| --- | --- |
| | Kubernetes Cluster logs |
| | GKE Node Pool logs |
| | GKE Pod logs |
| | GKE Container logs |
When GKE writes your cluster's logs, each log entry includes the resource type. Understanding where your logs appear makes it easier to find your logs when you need them.
These logs include the cluster's audit logs: the Admin Activity log, the Data Access log, and the Events log.
System logs are captured for clusters, nodes, and system apps, each under its corresponding resource type.
Your system audit logs will appear in Cloud Logging with the following names:
projects/[YOUR_PROJECT_ID]/logs/cloudaudit.googleapis.com%2Fdata_access – Data Access logs
projects/[YOUR_PROJECT_ID]/logs/cloudaudit.googleapis.com%2Factivity – Admin Activity logs
projects/[YOUR_PROJECT_ID]/logs/events – Events log
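For instance, to view only the Admin Activity audit log in the Logs Explorer, you could use a filter along these lines (the project ID is a placeholder):

```
logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity"
```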
For detailed information about log entries that apply to the Kubernetes Cluster and GKE Cluster Operations resource types, refer to the Audit logging documentation.
Additional system logs, such as those for the kube-system namespace, are also written; they are described in Controlling the collection of your application logs.
Kubernetes containers collect logs for your workloads written to STDOUT and STDERR. You can find your workload application logs using the gke_cluster resource type. Your logs appear in Logging with the following names:
projects/[YOUR_PROJECT_ID]/logs/stderr – logs written to standard error
projects/[YOUR_PROJECT_ID]/logs/stdout – logs written to standard out
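As a sketch, a Logs Explorer filter for one workload's standard-error output might look like the following; the project ID and container name are placeholders:

```
logName="projects/my-project/logs/stderr"
resource.labels.container_name="my-container"
```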
Finding your logs in the Logging user interface
You can view your logs using the Logs Explorer in the Logging user interface.
Using the Query Builder, you can build a query either by selecting fields from a drop-down or by adding query parameters manually. For example, if you're using Cloud Operations for GKE in your GKE cluster, you can start with selecting or searching for the Kubernetes Cluster resource type and then select the location and cluster name. You can then refine your search by selecting the Activity logs in the Log Name selector.
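The query that results from those selections would resemble the following sketch; the location, cluster name, and project ID are placeholder values:

```
resource.type="k8s_cluster"
resource.labels.location="us-central1-a"
resource.labels.cluster_name="my-cluster"
logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity"
```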
The Logs Explorer offers an additional way to build your search queries using the Logs field explorer. It shows the count of log entries, sorted by decreasing count, for the given log field. Using the Logs field explorer is particularly useful for GKE logs because the Logs field explorer provides an easy way to select the Kubernetes values for your resources to build a query. For example, using the Logs field explorer, you can select logs for a specific cluster, namespace, pod name, and then container name.
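For example, successively selecting a cluster, namespace, pod, and container in the Logs field explorer yields a query along these lines (all label values below are placeholders):

```
resource.type="k8s_container"
resource.labels.cluster_name="my-cluster"
resource.labels.namespace_name="my-namespace"
resource.labels.pod_name="my-pod"
resource.labels.container_name="my-container"
```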
You can find more details in the Logging documentation about using the Logs Explorer.
If you're looking for specific logs, the following detailed sets of sample queries can help you locate your GKE logs:
Sample Kubernetes-related log queries
Sample Kubernetes Engine control plane log queries
If you are using Cloud Operations for GKE in your GKE cluster and are writing a high volume of logs from your GKE cluster, you might find that many of those logs consistently don't appear in Cloud Logging. A potential cause is that your logging volume exceeds the supported logging throughput of Cloud Operations for GKE.
Cloud Operations for GKE currently supports up to 100 KB/s of logging throughput per node. If any of the nodes in your GKE cluster require greater logging throughput than that, we recommend deploying and customizing your own Fluentd agent to achieve greater throughput.