Auto-repairing nodes

This page shows you how to configure node auto-repair in Google Kubernetes Engine (GKE).

Overview

Node auto-repair helps keep the nodes in your GKE cluster in a healthy, running state. When enabled, GKE periodically checks the health state of each node in your cluster. If a node fails consecutive health checks over an extended time period, GKE initiates a repair process for that node.

Settings for different modes of operation

For Autopilot clusters, nodes are managed by GKE and node auto-repair is always enabled. This setting is part of the node configuration and cannot be disabled.

For Standard clusters, node auto-repair is enabled by default for new node pools on GKE version 1.17.6-gke.4 and later.

For Standard clusters:

  • Subscribed to a release channel: node auto-repair cannot be disabled. If you create additional node pools, node auto-repair is enabled by default and cannot be disabled.
  • Using a static version (no channel): node auto-repair can be disabled. If you create additional node pools, you can enable or disable node auto-repair for those node pools, independent of the auto-repair setting for the default node pool.
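
To check whether an existing cluster is subscribed to a release channel, you can inspect its releaseChannel field. The following is a minimal sketch; CLUSTER_NAME and COMPUTE_REGION are placeholders for your own values, and empty output typically indicates a static version:

# Prints the cluster's release channel (for example, REGULAR).
gcloud container clusters describe CLUSTER_NAME \
    --region=COMPUTE_REGION \
    --format="value(releaseChannel.channel)"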

Repair criteria

GKE uses a node's health status to determine whether the node needs to be repaired. A node reporting a Ready status is considered healthy. GKE triggers a repair action if a node reports an unhealthy status on consecutive checks over a given time threshold. An unhealthy status can mean:

  • A node reports a NotReady status on consecutive checks over the given time threshold (approximately 10 minutes).
  • A node does not report any status at all over the given time threshold (approximately 10 minutes).
  • A node's boot disk is out of disk space for an extended time period (approximately 30 minutes).

You can manually check your nodes' health signals at any time by using the kubectl get nodes command.
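
For example, the following commands list node statuses and show the detailed conditions for a single node. NODE_NAME is a placeholder for one of your node names:

# List all nodes with their current status (Ready, NotReady, and so on).
kubectl get nodes

# Show the detailed conditions (Ready, DiskPressure, and so on) for one node.
kubectl describe node NODE_NAME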

Node repair process

If GKE detects that a node requires repair, the node is drained and re-created. GKE waits one hour for the drain to complete. If the drain doesn't complete, the node is shut down and a new node is created.

If multiple nodes require repair, GKE might repair them in parallel. GKE balances the number of concurrent repairs based on the size of the cluster and the number of unhealthy nodes: it repairs more nodes in parallel in a larger cluster, but fewer as the number of unhealthy nodes grows.

If you disable node auto-repair at any time during the repair process, in-progress repairs are not cancelled and continue for any node currently under repair.

Node repair history

GKE generates a log entry for automated repair events. You can check the logs by running the following command:

gcloud container operations list
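
To narrow the output to auto-repair events, you can filter on the operation type. This is a minimal sketch, assuming AUTO_REPAIR_NODES is the operation type that GKE records for node auto-repairs:

gcloud container operations list \
    --filter="operationType=AUTO_REPAIR_NODES"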

Enable auto-repair for an existing node pool

You enable node auto-repair on a per-node pool basis.

If auto-repair is disabled on an existing node pool in a Standard cluster, use the following instructions to enable it:

gcloud

gcloud container node-pools update POOL_NAME \
    --cluster=CLUSTER_NAME \
    --region=COMPUTE_REGION \
    --enable-autorepair

Replace the following:

  • POOL_NAME: the name of your node pool.
  • CLUSTER_NAME: the name of your Standard cluster.
  • COMPUTE_REGION: the Compute Engine region for the cluster. For zonal clusters, use the --zone COMPUTE_ZONE option.
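
For example, with a hypothetical node pool named default-pool in a regional cluster named example-cluster in us-central1:

gcloud container node-pools update default-pool \
    --cluster=example-cluster \
    --region=us-central1 \
    --enable-autorepair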

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Click the Nodes tab.

  4. Under Node Pools, click the name of the node pool you want to modify.

  5. On the Node pool details page, click Edit.

  6. Under Management, select the Enable auto-repair checkbox.

  7. Click Save.

Verify node auto-repair is enabled for a node pool

Node auto-repair is enabled on a per-node pool basis. You can verify that a node pool in your cluster has node auto-repair enabled with the Google Cloud CLI or the Google Cloud console.

gcloud

Describe the node pool:

gcloud container node-pools describe POOL_NAME \
    --cluster=CLUSTER_NAME

If node auto-repair is enabled, the output of the command includes these lines:

management:
  ...
  autoRepair: true
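
Alternatively, you can print just the relevant field by using the gcloud CLI's --format flag. A minimal sketch:

# Prints True when node auto-repair is enabled for the pool.
gcloud container node-pools describe POOL_NAME \
    --cluster=CLUSTER_NAME \
    --format="value(management.autoRepair)"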

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. On the Google Kubernetes Engine page, click the name of the cluster that contains the node pool you want to inspect.

  3. Click the Nodes tab.

  4. Under Node Pools, click the name of the node pool you want to inspect.

  5. Under Management, in the Auto-repair field, verify that auto-repair is enabled.

Disable node auto-repair

You can disable node auto-repair for an existing node pool in a Standard cluster by using the gcloud CLI or the Google Cloud console.

gcloud

gcloud container node-pools update POOL_NAME \
    --cluster=CLUSTER_NAME \
    --region=COMPUTE_REGION \
    --no-enable-autorepair

Replace the following:

  • POOL_NAME: the name of your node pool.
  • CLUSTER_NAME: the name of your Standard cluster.
  • COMPUTE_REGION: the Compute Engine region for the cluster. For zonal clusters, use the --zone COMPUTE_ZONE option.
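
For example, with a hypothetical node pool named default-pool in a zonal cluster named example-cluster in the us-central1-a zone:

gcloud container node-pools update default-pool \
    --cluster=example-cluster \
    --zone=us-central1-a \
    --no-enable-autorepair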

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Click the Nodes tab.

  4. Under Node Pools, click the name of the node pool you want to modify.

  5. On the Node pool details page, click Edit.

  6. Under Management, clear the Enable auto-repair checkbox.

  7. Click Save.
