Migrating to Kubernetes Monitoring

This page explains how Stackdriver Kubernetes Monitoring uses metrics, resource types, and logging. It also explains how this release is incompatible with the GA legacy Stackdriver support and how you can migrate from the older support to the newer one.

Incompatibilities

The new Stackdriver Kubernetes Monitoring support has the following differences that affect the way you use Stackdriver.

  • The Stackdriver Kubernetes Monitoring user interface is different. You reach it through the Resources > Kubernetes Beta navigation menu in Stackdriver Monitoring. The only clusters you see are those that use Stackdriver Kubernetes Monitoring. You don't see the menu item if you don't have any such clusters.

  • Stackdriver Kubernetes Monitoring has a different set of metrics.

  • Stackdriver Kubernetes Monitoring has a different set of monitored resource types for Monitoring and Logging.

These changes take effect when you use Stackdriver Kubernetes Monitoring. For example, if you upgrade an existing cluster, then you have two sets of metrics and monitored resource types. The old metrics and types contain data and logs from the time before you upgraded. The new metrics and types receive data and logs from the time after you upgraded.

These changes have the following effects on Kubernetes-related activities in Stackdriver Logging and Monitoring:

  • Alerting policies. Your current alerting policies might not trigger or might trigger unexpectedly, because they are referring to metrics or resource types that are not receiving new data.

  • Logs-based metrics. Your current logs-based metrics might not find any matching GKE log entries, because they are looking for new log entries with the old resource types or resource labels.

  • Logs exports. Your current export sinks might stop exporting GKE logs because the sinks' filters no longer match the new GKE log entries' resource types.

  • Logs exclusions. Your current exclusions might not eliminate the GKE logs you expected, because the exclusion filters no longer match the new GKE log entries' resource types.

  • Custom charts. Your custom charts and dashboards might stop displaying new GKE data because they refer to metrics that are no longer receiving data. If you use the Metrics Explorer, remember to choose the new GKE resource types and metric names.

  • Lost data. There is no way to join the log entries or monitoring data so that you can see an unbroken data stream spanning before and after your upgrade.

The following sections describe the changes in more detail. They include instructions for how to migrate existing Stackdriver monitoring and logging features to Stackdriver Kubernetes Monitoring.

GKE metrics

The names of the GKE metrics have changed. Click the names in the following table to get a list of metrics:

  Legacy metric names           Stackdriver Kubernetes Monitoring metric names
  container.googleapis.com/*    kubernetes.io/*

Migration from the legacy support: Look through all your custom charts and alerting policies. Change the legacy metric names to the corresponding Stackdriver Kubernetes Monitoring metric names.
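As a sketch of that migration step, the prefix swap from the table can be automated. Note that this handles only the prefix; some individual metric names also changed, so each rewritten name should still be checked against the published metric lists. The `migrate_metric_type` helper below is a hypothetical name, not part of any Stackdriver tooling.

```python
LEGACY_PREFIX = "container.googleapis.com/"
NEW_PREFIX = "kubernetes.io/"

def migrate_metric_type(metric_type: str) -> str:
    """Swap the legacy metric prefix for the new one.

    Only the prefix swap is sketched here; some individual metric
    names under the prefix also changed, so verify each result
    against the published metric lists.
    """
    if metric_type.startswith(LEGACY_PREFIX):
        return NEW_PREFIX + metric_type[len(LEGACY_PREFIX):]
    return metric_type

print(migrate_metric_type("container.googleapis.com/container/cpu/utilization"))
# kubernetes.io/container/cpu/utilization
```

You would run every metric name found in your chart and alerting-policy definitions through a check like this, then review the results by hand.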

GKE resource types

The names of the monitored resource types used for Kubernetes are listed below. Click the names to get a detailed description. Changes to the resource type labels are listed after the migration instructions.

  Legacy resource type                    Stackdriver Kubernetes Monitoring resource type
  GKE Container (gke_container) [M]       Kubernetes Container (k8s_container) [L+M]
  GKE Container (container) [L]           Kubernetes Container (k8s_container) [L+M]
  (none)                                  Kubernetes Cluster (k8s_cluster) [L+M]
  GCE VM Instance (gce_instance) [L+M]    Kubernetes Node (k8s_node) [L+M]
  (none)                                  Kubernetes Pod (k8s_pod) [L+M]

Notes:
  [M]    Used by Monitoring.
  [L]    Used by Logging.
  [L+M]  Used by both Monitoring and Logging.

Migration from the legacy support (Logging): Examine all your advanced logs filters and change any old resource types and labels to the new resource types and labels. The filters might be used in the following:

  • Logs-based metrics
  • Logs export sinks
  • Logs exclusions
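A minimal sketch of that filter rewrite for Logging, assuming advanced logs filters of the form `resource.type="..."` and using only the resource type renames from the table above. The `migrate_logs_filter` helper is a hypothetical name for illustration:

```python
import re

# Legacy-to-new resource type renames for Logging,
# taken from the table above.
RESOURCE_TYPE_MAP = {
    "container": "k8s_container",
    "gce_instance": "k8s_node",
}

def migrate_logs_filter(filter_text: str) -> str:
    """Replace legacy resource.type values in an advanced logs filter."""
    def repl(match):
        old = match.group(1)
        new = RESOURCE_TYPE_MAP.get(old, old)
        return f'resource.type="{new}"'
    return re.sub(r'resource\.type\s*=\s*"([^"]+)"', repl, filter_text)

old_filter = 'resource.type="container" AND severity>=ERROR'
print(migrate_logs_filter(old_filter))
# resource.type="k8s_container" AND severity>=ERROR
```

Label renames (covered under Resource labels below) still need to be applied separately; this sketch changes only the resource types.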

Migration from the legacy support (Monitoring): Examine all your monitoring filters and change any old resource types and labels to the new resource types and labels. Your monitoring filters might be used in:

  • Custom chart definitions.
  • Group definitions.
  • Alerting policy conditions.
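The same idea applies to monitoring filters. The sketch below naively rewrites the resource type and the metric prefix in a filter string; real filters also need the label renames covered in the next section, and `migrate_monitoring_filter` is a hypothetical helper, not an API:

```python
def migrate_monitoring_filter(filter_text: str) -> str:
    """Naive rewrite of a Monitoring filter: swap the legacy
    resource type and metric prefix for the new ones.

    Label renames (for example pod_id -> pod_name) are not
    handled here and must be applied separately.
    """
    return (filter_text
            .replace('resource.type = "gke_container"',
                     'resource.type = "k8s_container"')
            .replace("container.googleapis.com/", "kubernetes.io/"))

old = ('metric.type = "container.googleapis.com/container/cpu/utilization" '
       'AND resource.type = "gke_container"')
print(migrate_monitoring_filter(old))
```

Simple string replacement like this is only a starting point for review; because some individual metric names changed beyond the prefix, each rewritten filter should be checked against the new metric lists.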

Resource labels

The labels in the new monitored resource types are slightly different from the labels used in the older types, as discussed in the following sections.

From container to k8s_container (Logging)

The following table lists the different labels in the two monitored resource types:

  GKE Container          Kubernetes Container
  (container) label      (k8s_container) label    Notes
  project_id             project_id               No change
  cluster_name           cluster_name             No change
  namespace_id           namespace_name           No change in most cases
  instance_id            (none)                   Use metadata.systemLabels.node_name [1]
  pod_id                 pod_name
  container_name         container_name           No change
  zone                   location                 zone refers to the instance (node) location;
                                                  location refers to the cluster location.

[1] In log entries, the current instance_id field holds the numeric instance identifier. The metadata.systemLabels.node_name field holds the alphanumeric instance identifier. Both refer to the same instance.
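The label renames in that table can be captured as a simple lookup, for example when rewriting resource.labels references in an advanced logs filter. This is a sketch; `LABEL_MAP` and `migrate_label` are hypothetical names:

```python
# Label renames from the container -> k8s_container table above.
# instance_id is intentionally absent: it has no direct replacement,
# and metadata.systemLabels.node_name should be used instead.
LABEL_MAP = {
    "project_id": "project_id",
    "cluster_name": "cluster_name",
    "namespace_id": "namespace_name",
    "pod_id": "pod_name",
    "container_name": "container_name",
    "zone": "location",
}

def migrate_label(old_label: str) -> str:
    """Return the k8s_container label for a legacy container label."""
    return LABEL_MAP.get(old_label, old_label)

print(migrate_label("namespace_id"))
# namespace_name
```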

From gke_container to k8s_container (Monitoring)

The following table lists the different labels in the two monitored resource types:

  GKE Container            Kubernetes Container
  (gke_container) label    (k8s_container) label    Notes
  project_id               project_id               No change
  cluster_name             cluster_name             No change
  namespace_id             namespace_name           No change in most cases
  instance_id              (none)                   Use metadata.systemLabels.node_name [1]
  pod_id                   pod_name
  container_name           container_name           No change
  zone                     location                 zone refers to the instance (node) location;
                                                    location refers to the cluster location. [2]

[1] In metric data, both the instance_id field and the metadata.systemLabels.node_name field hold the same value: the alphanumeric instance identifier.
[2] In a "zonal" Google Kubernetes Engine cluster, these values are the same. In a "regional" Google Kubernetes Engine cluster, the values might be different.

From gce_instance to k8s_node

The following table lists the different labels in the two monitored resource types:

  GCE VM Instance         Kubernetes Node
  (gce_instance) label    (k8s_node) label    Notes
  project_id              project_id          No change
  (none)                  cluster_name
  instance_id             node_name           [1]
  zone                    location            zone refers to the instance (node) location;
                                              location refers to the cluster location. [2]

[1] The current instance_id field might hold the numeric instance identifier, or it might hold the alphanumeric identifier. The metadata.systemLabels.node_name field always holds the alphanumeric instance identifier. Both refer to the same instance.
[2] In a "zonal" Google Kubernetes Engine cluster, these values are the same. In a "regional" Google Kubernetes Engine cluster, the values might be different.

GKE Logging

When you upgrade to Stackdriver Kubernetes Monitoring, the following logs-related changes take effect:

  • The resource type changes described previously mean that your log entries now appear in the Logs Viewer under the new resource types.

  • Log entries under k8s_container have a log ID of stdout or stderr, rather than the container cluster name. The cluster name is in the resource.labels field.

  • Log entries might have different contents in their root-level (top-level) labels field. Check the resource.labels field, whose information should be consistent with the GA legacy support.

Migration from the legacy support: If you have logs-based metrics, log export sinks, or log exclusions whose log filters use Kubernetes monitored resource types, then do the following:

  • Update to the new resource types.
  • Change references to the labels log-entry field to resource.labels.
  • Change resource.labels values according to the preceding section, Resource labels.
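Putting those steps together, a hypothetical end-to-end rewrite of one legacy logs filter might look like the following. The exact legacy field names in your own filters may differ, so treat this as a sketch rather than a complete migration:

```python
# Hypothetical end-to-end rewrite of a legacy GKE logs filter:
# resource type plus the label renames from the Resource labels section.
def migrate_gke_logs_filter(filter_text: str) -> str:
    replacements = [
        ('resource.type="container"', 'resource.type="k8s_container"'),
        ("resource.labels.namespace_id", "resource.labels.namespace_name"),
        ("resource.labels.pod_id", "resource.labels.pod_name"),
        ("resource.labels.zone", "resource.labels.location"),
    ]
    for old, new in replacements:
        filter_text = filter_text.replace(old, new)
    return filter_text

old = 'resource.type="container" AND resource.labels.pod_id="web-1"'
print(migrate_gke_logs_filter(old))
# resource.type="k8s_container" AND resource.labels.pod_name="web-1"
```

Filters that reference the root-level labels field, or the legacy instance_id label, need manual attention, since those have no one-for-one textual replacement.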
