This page provides an overview of how to find and use your Google Kubernetes Engine (GKE) logs.
Accessing your logs
You can access your GKE logs in several ways:
From the Google Cloud console, you can view logs from the following pages:
- Kubernetes Engine:
  - Select a cluster on the Clusters page, and then select the Logs tab. This tab also offers suggested queries for your cluster logs.
  - Select a workload on the Workloads page. You can then click the Container logs or Audit logs links on the Overview tab to view your logs in Logs Explorer, or select the Logs tab to view your logs in context.
- Logging: Select Logs Explorer, and then use logging filters to select the Kubernetes resources, such as cluster, node, namespace, pod, or container logs. For example queries to help you get started, see Kubernetes-related queries.
- Monitoring: GKE dashboards display metrics and logs for GKE resources like clusters, nodes, and pods. For more information, see View observability metrics.
From the Google Cloud CLI: Query logs from clusters, nodes, pods, and containers by using the `gcloud logging read` command.
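For example, a command like the following sketch reads the ten most recent container log entries from one cluster; CLUSTER_NAME and PROJECT_ID are placeholders for your own values:

```
# Read the 10 most recent container log entries from a specific cluster.
# CLUSTER_NAME and PROJECT_ID are placeholder values.
gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.cluster_name="CLUSTER_NAME"' \
  --project=PROJECT_ID \
  --order=desc \
  --limit=10
```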
For custom log aggregation, log analytics, or integration with third-party systems, you can also use the logging sinks feature to export logs to BigQuery, Cloud Storage, and Pub/Sub.
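As a sketch, a sink that routes GKE container logs to a BigQuery dataset might look like the following; the sink name gke-container-logs and the DATASET_ID value are illustrative placeholders:

```
# Create a sink that exports GKE container logs to a BigQuery dataset.
# The sink name and DATASET_ID are placeholder values.
gcloud logging sinks create gke-container-logs \
  bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID \
  --log-filter='resource.type="k8s_container"'
```

After you create a sink, grant its writer identity access to the destination before logs can be exported.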
Understanding your logs
A log in Cloud Logging is a collection of log entries, and each log entry applies to a certain type of logging resource.
Resource types
These are the resource types that are specific to GKE clusters:
Resource type | Display name
---|---
`k8s_cluster` | Kubernetes Cluster
`k8s_node` | Kubernetes Node
`k8s_pod` | Kubernetes Pod
`k8s_container` | Kubernetes Container
`k8s_control_plane_component` | Kubernetes Control Plane Component
When GKE writes your cluster's logs, each log entry includes the resource type. Understanding which resource type a log entry uses makes it easier to find your logs when you need them.
System logs
System logs include logs from the following sources:
- All Pods running in the `kube-system`, `istio-system`, `knative-serving`, `gke-system`, and `config-management-system` namespaces.
- Key services that are not containerized, including the `docker`/`containerd` runtime, `kubelet`, `kubelet-monitor`, `node-problem-detector`, and `kube-container-runtime-monitor`.
- The node's serial port output, if the VM instance metadata `serial-port-logging-enable` is set to true. As of GKE 1.16.13-gke.400, serial port output for nodes is collected by the Logging agent. To disable serial port output logging, set `--metadata serial-port-logging-enable=false` during cluster creation (see the example after this list). Serial port output is useful for troubleshooting crashes, failed boots, startup issues, or shutdown issues with GKE nodes; disabling these logs might limit troubleshooting.
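For instance, a minimal sketch of creating a cluster with serial port output logging disabled; CLUSTER_NAME is a placeholder:

```
# Create a cluster with serial port output logging disabled.
# CLUSTER_NAME is a placeholder value.
gcloud container clusters create CLUSTER_NAME \
  --metadata serial-port-logging-enable=false
```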
System logs are captured for the following resource types:
- cluster logs with `k8s_cluster`
- node logs with `k8s_node`
- pod logs with `k8s_pod`
- system application logs with `k8s_container`
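For example, a Logs Explorer query like the following sketch returns logs from system Pods in the kube-system namespace of one cluster; CLUSTER_NAME is a placeholder:

```
resource.type="k8s_container"
resource.labels.cluster_name="CLUSTER_NAME"
resource.labels.namespace_name="kube-system"
```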
Your system audit logs appear in Cloud Logging under the following log names:
- `projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Fdata_access` – Data Access logs
- `projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Factivity` – Admin Activity logs
- `projects/PROJECT_ID/logs/events` – Events log
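As an illustration, the following query sketch selects Admin Activity audit log entries for your clusters; PROJECT_ID is a placeholder:

```
resource.type="k8s_cluster"
logName="projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Factivity"
```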
For detailed information about log entries that apply to the Kubernetes Cluster and GKE Cluster Operations resource types, refer to the Audit logging documentation.
Additional system logs, such as those for the `kube-system` namespace, are described in Controlling the collection of your application logs.
Application logs
Kubernetes containers collect the logs that your workloads write to `STDOUT` and `STDERR`. You can find your workload application logs using the `k8s_container` resource type. Your logs appear in Logging under the following log names:
- `projects/PROJECT_ID/logs/stderr` – logs written to standard error
- `projects/PROJECT_ID/logs/stdout` – logs written to standard output
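For example, the following query sketch finds the standard output of one container; the NAMESPACE_NAME, POD_NAME, and PROJECT_ID values are placeholders:

```
resource.type="k8s_container"
resource.labels.namespace_name="NAMESPACE_NAME"
resource.labels.pod_name="POD_NAME"
logName="projects/PROJECT_ID/logs/stdout"
```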
Control Plane logs
If control plane logs are enabled for your GKE cluster, then logs emitted by certain Kubernetes control plane components (for instance, the API server, Scheduler, and Controller Manager) are exported to Cloud Logging.
These logs use the `k8s_control_plane_component` resource type and appear in Cloud Logging with the following log names:
- `projects/PROJECT_ID/logs/container.googleapis.com%2Fapiserver`
- `projects/PROJECT_ID/logs/container.googleapis.com%2Fscheduler`
- `projects/PROJECT_ID/logs/container.googleapis.com%2Fcontroller-manager`
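For example, a query sketch like the following selects API server logs; it assumes the component label is named component_name, which you can verify in the Logs field explorer:

```
resource.type="k8s_control_plane_component"
resource.labels.component_name="apiserver"
```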
Finding your logs in the Logging user interface
You can view your logs using the Logs Explorer in the Logging user interface.
Logs Explorer
Using the Query Builder, you can build a query either by selecting fields from a drop-down menu or by adding query parameters manually. For example, if you're reviewing logs for GKE clusters, you can start by selecting or searching for the Kubernetes Cluster resource type and then selecting the location and cluster name. You can then refine your search by selecting the Activity logs in the Log name selector.
The Logs Explorer also lets you build search queries using the Logs field explorer, which shows the count of log entries, sorted by decreasing count, for a given log field. The Logs field explorer is particularly useful for GKE logs because it lets you select the Kubernetes values for your resources to build a query. For example, you can select logs for a specific cluster, namespace, and pod, and then a container name.
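A fully built query of that shape might look like the following sketch, where every value is a placeholder:

```
resource.type="k8s_container"
resource.labels.cluster_name="CLUSTER_NAME"
resource.labels.namespace_name="NAMESPACE_NAME"
resource.labels.pod_name="POD_NAME"
resource.labels.container_name="CONTAINER_NAME"
```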
You can find more details in the Logging documentation about using the Logs Explorer.
Sample queries
If you're looking for specific logs, use the following sample queries to help you find your GKE logs:
Sample Kubernetes-related log queries
Sample Kubernetes Engine control plane log queries
Troubleshooting logs
If you're writing a high volume of logs from your GKE cluster, you might find that many of those logs consistently don't appear in Cloud Logging. A potential cause is that your logging volume exceeds the supported logging throughput for GKE.
Logging supports up to 100 KB/s of logging throughput per node. If any of the nodes in your GKE cluster require greater logging throughput than this, then we recommend increasing the logging agent throughput.
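To check whether the node logging agent itself is reporting errors, a query sketch like the following can help; it assumes the agent runs as Pods whose names start with fluentbit-gke, which holds on recent GKE versions but may differ on yours:

```
resource.type="k8s_container"
resource.labels.namespace_name="kube-system"
resource.labels.pod_name=~"fluentbit-gke.*"
severity>=ERROR
```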