Migrating to Kubernetes Monitoring

This page explains the differences between the current Stackdriver support for Kubernetes (Monitoring and Logging) and Stackdriver Kubernetes Monitoring.

To install the Beta release of Stackdriver Kubernetes Monitoring when you create a new Kubernetes cluster, see Installing Stackdriver Kubernetes Monitoring instead.

Incompatibilities

The new Stackdriver Kubernetes Monitoring support has the following differences that affect you as you move away from the current support:

  • It uses a different set of metrics.
  • It uses a different set of monitored resource types.
  • It uses a different set of label names for its monitored resource types.

These changes have the following effects on Kubernetes-related activities in Stackdriver Logging and Monitoring:

  • Alerting policies. Your current alerting policies might not trigger or might trigger unexpectedly.

  • Logs-based metrics. Your current logs-based metrics might not recognize any matching log entries.

  • Logs exports. Your current logs exports might stop exporting log entries.

  • Custom charts. Your current custom charts and dashboards might stop displaying Kubernetes metrics. The Kubernetes metrics you see in Metrics Explorer are different.

  • Lost data. Your Kubernetes metrics and logs are written only to the active set of monitored resource types. For example, while you are opted into the Beta release, no new data appears under the old monitored resource types.

For a list of changes you should make, see the Step-by-step section. The following sections provide more detail about the changes.

Metric changes

The names of the Kubernetes metrics have changed. You might be using the metric names in your custom charts and in your alerting policies. The changes are as follows:

  • The current support uses metrics that have the prefix container.googleapis.com/. For the full list of metrics, see container.
  • Stackdriver Kubernetes Monitoring uses metrics that have the prefix kubernetes.io/. For the full list of metrics, see kubernetes.io.
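For example, in a Monitoring filter, the prefix change looks like this. The CPU metric shown is illustrative; consult the metric lists for exact names, which can change beyond the prefix alone:

```
# Current (old) metric:
metric.type = "container.googleapis.com/container/cpu/usage_time"

# Stackdriver Kubernetes Monitoring (Beta) metric:
metric.type = "kubernetes.io/container/cpu/core_usage_time"
```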

Resource type changes

The names of the monitored resource types used by Kubernetes have changed. You might be using the resource type names in your custom charts and in your alerting policies. The following table lists the old and new names:

Current (old) release                   Beta (new) release
GKE Container (gke_container) [1],
GKE Container (container) [2]           Kubernetes Container (k8s_container) [3]
(none)                                  Kubernetes Cluster (k8s_cluster) [3]
GCE VM Instance (gce_instance) [3]      Kubernetes Node (k8s_node) [3]
(none)                                  Kubernetes Pod (k8s_pod) [3]

[1] Used by Monitoring.
[2] Used by Logging.
[3] Used by both Monitoring and Logging.

Resource label changes

The labels in the new monitored resource types are slightly different from the labels used in the current support. You might be using the label names in your custom charts and alerting policies. The changes are detailed in the following sections.

For more information about the old and new resource labels, see Monitored Resource Types (for Monitoring) or Monitored Resource List (for Logging).

From container to k8s_container (Logging)

The following table lists the different labels in the two monitored resource types:

GKE Container           Kubernetes Container
(container) label       (k8s_container) label     Notes
project_id              project_id                No change
cluster_name            cluster_name              No change
namespace_id            namespace_name            No change in most cases
instance_id             (none)                    Use metadata.systemLabels.node_name [1]
pod_id                  pod_name
container_name          container_name            No change
zone                    location                  zone refers to the instance (node) location;
                                                  location refers to the cluster location.

[1] In log entries, the current instance_id field holds the numeric instance identifier. The metadata.systemLabels.node_name field holds the alphanumeric instance identifier. Both refer to the same instance.
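As a sketch, an advanced logs filter that selects a pod's container logs would change along these lines. The namespace and pod names shown are illustrative:

```
# Current (old) support:
resource.type = "container"
resource.labels.namespace_id = "default"
resource.labels.pod_id = "nginx-1"

# Stackdriver Kubernetes Monitoring (Beta):
resource.type = "k8s_container"
resource.labels.namespace_name = "default"
resource.labels.pod_name = "nginx-1"
```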

From gke_container to k8s_container (Monitoring)

The following table lists the different labels in the two monitored resource types:

GKE Container           Kubernetes Container
(gke_container) label   (k8s_container) label     Notes
project_id              project_id                No change
cluster_name            cluster_name              No change
namespace_id            namespace_name            No change in most cases
instance_id             (none)                    Use metadata.systemLabels.node_name [1]
pod_id                  pod_name
container_name          container_name            No change
zone                    location                  zone refers to the instance (node) location;
                                                  location refers to the cluster location. [2]

[1] In metric data, both the instance_id field and the metadata.systemLabels.node_name field hold the same value: the alphanumeric instance identifier.
[2] In a "zonal" Kubernetes Engine cluster, these values are the same. In a "regional" Kubernetes Engine cluster, the values might be different.

From gce_instance to k8s_node

The following table lists the different labels in the two monitored resource types:

GCE VM Instance         Kubernetes Node
(gce_instance) label    (k8s_node) label     Notes
project_id              project_id           No change
(none)                  cluster_name
instance_id             node_name [1]
zone                    location             zone refers to the instance (node) location;
                                             location refers to the cluster location. [2]

[1] The current instance_id field might hold the numeric instance identifier, or it might hold the alphanumeric identifier. The metadata.systemLabels.node_name field always holds the alphanumeric instance identifier. Both refer to the same instance.
[2] In a "zonal" Kubernetes Engine cluster, these values are the same. In a "regional" Kubernetes Engine cluster, the values might be different.

Logging changes

The following changes occur in log entries associated with the Kubernetes monitored resource types:

  • Log names: The current log entries using resource container use a log ID that is the container name. The new log entries using resource k8s_container use a log ID that is either stdout or stderr.
  • Severity: The current log entries using the container and gce_instance resource types use the severity field in log entries. The new log entries using resource types k8s_container and k8s_node do not include a severity field.
  • Root labels: The current log entries using the container and gce_instance resource types populate the top-level labels field in LogEntry objects as well as the resource.labels field. The new log entries, using resource types k8s_container and k8s_node, have no root labels, only resource.labels.
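As a sketch, the log-name change means a filter that matched a container's log by its log ID must instead match stdout or stderr and use the container_name resource label. The project and container names shown are illustrative:

```
# Current (old) support: the log ID is the container name.
logName = "projects/my-project/logs/my-container"
resource.type = "container"

# Stackdriver Kubernetes Monitoring (Beta): the log ID is stdout or stderr;
# the container name is a resource label.
logName = "projects/my-project/logs/stdout"
resource.type = "k8s_container"
resource.labels.container_name = "my-container"
```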

Step-by-step

This section walks you through the steps required to migrate to the new Stackdriver Kubernetes Monitoring.

Update Kubernetes

You must update your existing cluster to Kubernetes v1.10.2-gke.0. This version is compatible with both the current support and Stackdriver Kubernetes Monitoring.
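For example, assuming you use the gcloud command-line tool, a zonal cluster's nodes can be upgraded with something like the following, where CLUSTER_NAME and ZONE are placeholders for your cluster:

```
# Upgrade the cluster to a Kubernetes version compatible with both
# the current support and Stackdriver Kubernetes Monitoring.
gcloud container clusters upgrade CLUSTER_NAME \
    --zone ZONE \
    --cluster-version 1.10.2-gke.0
```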

Change your logs, metrics, and alerts

Following are the Stackdriver Monitoring and Logging changes you should make when you upgrade an existing Kubernetes cluster to the Beta of Stackdriver Kubernetes Monitoring.

If you decide to revert your cluster from Stackdriver Kubernetes Monitoring back to the current support, then reverse the changes described below.

Alerting policies

If you have Kubernetes alerting policies whose conditions refer to Kubernetes metrics or monitored resource types, then do the following:

  • Change metric names from container.googleapis.com/ to kubernetes.io/.
  • Change monitored resource type names used with metrics from gke_container to k8s_container.
  • Change references to resource.labels according to the preceding section, Resource label changes.
  • If you are alerting on logs-based metrics, see the following section on logs-based metrics.
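Taken together, the steps above mean an alerting-policy condition filter would change along these lines. The metric names and zone value shown are illustrative:

```
# Current (old) condition filter:
metric.type = "container.googleapis.com/container/cpu/usage_time"
resource.type = "gke_container"
resource.labels.zone = "us-central1-a"

# Stackdriver Kubernetes Monitoring (Beta) condition filter:
metric.type = "kubernetes.io/container/cpu/core_usage_time"
resource.type = "k8s_container"
resource.labels.location = "us-central1-a"
```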

Charts

If you use Metrics Explorer (Stackdriver Monitoring > Resources > Metrics Explorer), or if you have custom dashboards and charts (Stackdriver Monitoring > Dashboards), then adjust your chart definitions to account for the same changes listed in the preceding section for alerting policies.

Logs-based metrics

If you have logs-based metrics whose log filters involve Kubernetes monitored resource types, then do the following:

  • Change the container monitored resource type to k8s_container.
  • Change the gce_instance monitored resource type to k8s_node.
  • Change references to the labels log-entry field to resource.labels.
  • Change resource.labels references according to the preceding section, Resource label changes.
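As a sketch, a logs-based metric filter might change like this; the namespace value prod is illustrative. If your filter also matches on severity, note that the new log entries do not include a severity field (see Logging changes above):

```
# Current (old) filter:
resource.type = "container"
resource.labels.namespace_id = "prod"

# Stackdriver Kubernetes Monitoring (Beta) filter:
resource.type = "k8s_container"
resource.labels.namespace_name = "prod"
```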

Logs exports

If you have logs export sinks whose log filters involve Kubernetes monitored resource types, then do the following:

  • Change the container monitored resource type to k8s_container.
  • Change the gce_instance monitored resource type to k8s_node.
  • Change references to the labels log-entry field to resource.labels.
  • Change resource.labels references according to the preceding section, Resource label changes.
  • Change any tools or applications that process exported logs to handle the changes in Kubernetes resource type names and resource label names.
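For example, assuming you use the gcloud command-line tool, an existing sink's filter can be updated with something like the following, where MY_SINK and the filter shown are illustrative:

```
# Point the sink's filter at the new resource type and label names.
gcloud logging sinks update MY_SINK \
    --log-filter='resource.type="k8s_container" AND resource.labels.namespace_name="prod"'
```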
