Health Check Concepts

Google Cloud Platform (GCP) provides health checking mechanisms that determine if backends – such as instance groups and network endpoint groups (NEGs) – properly respond to traffic. This document discusses health checking concepts specific to GCP and its load balancers.

Overview

GCP provides global and regional health check systems that connect to backends on a configurable, periodic basis. Each connection attempt is called a probe. GCP records the success or failure of each probe.

Health checks and load balancers work together. Based on a configurable number of sequential successful or failed probes, GCP computes an overall health state for each backend in the load balancer. Backends that respond successfully the configured number of times in a row are considered healthy. Backends that fail to respond successfully a separately configured number of times in a row are considered unhealthy.

GCP uses the overall health state of each backend to determine its eligibility for receiving new requests. In addition to being able to configure probe frequency and health state thresholds, you can configure the criteria that define a successful probe. This document describes how health checks work in detail.

GCP uses special routes not defined in your VPC network for health checks. For complete information on this, read Load balancer return paths.

Health check categories, protocols, and ports

GCP organizes health checks by category and protocol.

There are two health check categories: health checks and legacy health checks. Each category supports a different set of protocols and a means for specifying the port used for health checking. The protocol and port determine how GCP health check systems contact your backends. For example, you can create a health check that uses the HTTP protocol on TCP port 80, or you can create a health check that uses the TCP protocol for a named port configured on an instance group.

Most GCP load balancers require non-legacy health checks, but Network Load Balancing requires legacy health checks that use the HTTP protocol. Refer to Selecting a health check for specific guidance on selecting the category and the protocol, and specifying the ports.

You cannot convert a legacy health check to a health check or vice versa.

The term health check does not refer to legacy health checks. Legacy health checks are explicitly called legacy health checks in this document.

Selecting a health check

Health checks must be compatible with the type of load balancer and the types of backends (instance groups or network endpoint groups) it uses. The three factors you must specify when you create a health check are:

  • Category: health check or legacy health check, which must be compatible with the load balancer
  • Protocol: defines what protocol the GCP systems use to periodically probe your backends
  • Port specification: defines which ports are used for the health check's protocol

The guide at the end of this section summarizes valid combinations of health check category, protocol, and port specification based on a given type of load balancer and backend type.

As used in this section, the term instance group refers to unmanaged instance groups, managed zonal instance groups, or managed regional instance groups.

Category and protocol

The type of load balancer and the types of backends that the load balancer uses determine the health check's category. Network Load Balancing requires legacy health checks that use the HTTP protocol. For all other load balancer types, use regular health checks.

You must select a protocol from the list of protocols supported by the health check's category. It's a best practice to use the same protocol as the load balancer itself; however, this is not a requirement, nor is it always possible. For example, network load balancers require legacy health checks, and they require that the legacy health checks use the HTTP protocol, despite the fact that Network Load Balancing supports TCP and UDP in general. For network load balancers, you must run an HTTP server on your VMs so that they can respond to health check probes.
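
For example, a legacy HTTP health check for a network load balancer's target pool might be created as in the following sketch. The health check name, target pool name, region, port, and request path are placeholders, not requirements.

    # Create a legacy HTTP health check; the backend VMs must run an HTTP
    # server that answers on this port and path.
    gcloud compute http-health-checks create hc-legacy-http \
        --port 80 \
        --request-path /healthz

    # Reference the legacy health check when creating the target pool for the
    # network load balancer.
    gcloud compute target-pools create example-target-pool \
        --region us-central1 \
        --http-health-check hc-legacy-http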

The following table lists the health check categories and the protocols each category supports.

Health check category Supported protocols
Health check • HTTP
• HTTPS
• HTTP/2 (with TLS)
• SSL
• TCP
Legacy health check • HTTP
• HTTPS (Legacy HTTPS health checks are not supported for network load balancers and cannot be used with most other types of load balancers.)

Category and port specification

In addition to a protocol, you must select a port specification for your health check. Health checks provide three port specification methods, and legacy health checks provide one method. Not all port specification methods are applicable to each type of load balancer. The type of load balancer and the types of backends it uses determine which port specification method you can use.

Health check category Port specification methods and meanings
Health check --port: specify a TCP port number
--port-name: specify any named port set on an instance group
--use-serving-port: for instance groups, use the same named port used by the backend service; for network endpoint groups, use the port defined on each endpoint
Legacy health check --port: specify a TCP port number

Note: Here and in the load balancer guide below, the --use-serving-port flag can be used with gcloud beta compute health-checks create; it cannot be used with gcloud beta compute health-checks update.
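
To illustrate the three port specification methods, the following sketch creates one non-legacy HTTP health check per method. The health check names, port number, and named port are hypothetical, and, as noted above, --use-serving-port is available with the gcloud beta create command.

    # 1. Numeric port: probe TCP port 8080 on each backend.
    gcloud compute health-checks create http hc-by-port --port 8080

    # 2. Named port: probe whatever port number the instance group maps to the
    #    named port "service-port".
    gcloud compute health-checks create http hc-by-port-name --port-name service-port

    # 3. Serving port: probe the backend service's named port (instance groups)
    #    or the port defined on each endpoint (NEGs).
    gcloud beta compute health-checks create http hc-by-serving-port --use-serving-port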

Load balancer guide

Use the following guide to choose the correct health check category, scope, and protocol, and the port specification, for a given load balancer and backend type.

Load balancer: Internal TCP/UDP
  Backend type: Instance groups on a regional internal backend service
  Health check category and scope: Health check (global)
  Port specification: Port number (--port) or named port (--port-name). You cannot use the --use-serving-port flag because backend services with INTERNAL load balancing schemes do not have an associated named port.

Load balancer: Internal HTTP(S)
  Backend type: Network endpoint groups on a backend service
  Health check category and scope: Health check (regional)
  Port specification: Port number (--port) or --use-serving-port

  Backend type: Instance groups on a backend service
  Health check category and scope: Health check (regional)
  Port specification: Port number (--port), named port (--port-name), or --use-serving-port

Load balancer: Network
  Backend type: Instance groups using target pools
  Health check category and scope: Legacy health check (global) using the HTTP protocol
  Port specification: Legacy health checks only support port specification by port number (--port).

Load balancer: TCP Proxy, SSL Proxy, or HTTP(S) 1
  Backend type: Network endpoint groups on a backend service
  Health check category and scope: Health check (global)
  Port specification: Port number (--port) or --use-serving-port

  Backend type: Instance groups on a backend service
  Health check category and scope: Health check (global)
  Port specification: Port number (--port), named port (--port-name), or --use-serving-port

1 It is possible, but not recommended, to use a legacy health check for backend services associated with HTTP(S) load balancers under the following circumstances:

  • The backends used by the backend service are instance groups, not network endpoint groups.
  • The backend VMs can be probed using either HTTP or HTTPS.
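
As an illustration of how health check scope maps to gcloud commands, the following sketch creates a global health check (the scope used by HTTP(S), SSL Proxy, TCP Proxy, and internal TCP/UDP load balancers) and a regional health check (the scope used by internal HTTP(S) load balancers). The names, region, and port are placeholders, and depending on your gcloud version, the --global and --region flags might be available only in the beta command track.

    # Global health check.
    gcloud compute health-checks create http hc-global-example \
        --global \
        --port 80

    # Regional health check for an internal HTTP(S) load balancer's backends.
    gcloud compute health-checks create http hc-regional-example \
        --region us-central1 \
        --port 80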

How health checks work

Probes

When you create a health check or create a legacy health check, you specify the following flags or accept their default values. These flags control how frequently each GCP health check system probes your instance group or NEG backends. GCP implements probes using multiple systems.

A health check's settings cannot be configured on a per-backend basis. Health checks are associated with a whole backend service, and legacy health checks are associated with a whole target pool or, for certain HTTP(S) load balancers, with a whole backend service. Thus, the probe parameters are the same for all backends referenced by a given backend service or target pool.

Configuration flag Purpose Default value
Check interval
check-interval
The check interval is the amount of time from the start of one probe issued by one probing system to the start of the next probe issued by the same system. Units are seconds. If omitted, GCP uses 5s (5 seconds).
Timeout
timeout
The timeout is the amount of time that GCP will wait for a response to a probe. Its value must be less than or equal to the check interval. Units are seconds. If omitted, GCP uses 5s (5 seconds).
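
For example, a TCP health check with a 10-second check interval and a 5-second timeout (satisfying the rule that the timeout must not exceed the check interval) might be created as in the following sketch; the health check name and port are placeholders.

    gcloud compute health-checks create tcp hc-tcp-example \
        --port 443 \
        --check-interval 10s \
        --timeout 5s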

Probe IP ranges and firewall rules

For health checks to work, you must create ingress allow firewall rules so that traffic from GCP probe systems can reach your backends.

The source IP ranges to allow depend on the type of load balancer:

Load balancer: Internal TCP/UDP, Internal HTTP(S), HTTP(S), SSL Proxy, TCP Proxy
  Probe source IP ranges: 35.191.0.0/16, 130.211.0.0/22
  Firewall rule example: Firewall rules for all load balancers except network load balancers

Load balancer: Network
  Probe source IP ranges: 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22
  Firewall rule example: Firewall rules for network load balancers

Importance of firewall rules

GCP requires that you create the necessary ingress allow firewall rules to permit traffic from probe systems to your backends. As a best practice, limit these rules to just the protocols and ports that match those used by your health checks. For the source IP ranges, make sure to use the documented ranges.
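
A minimal sketch of such a rule, assuming a network named example-network and backends that are health checked over HTTP on TCP port 80 (both assumptions; adjust the network, protocol, and port to match your configuration), using the probe source ranges for load balancers other than network load balancers:

    gcloud compute firewall-rules create allow-gcp-health-checks \
        --network example-network \
        --direction INGRESS \
        --action ALLOW \
        --rules tcp:80 \
        --source-ranges 35.191.0.0/16,130.211.0.0/22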

If you do not have ingress allow firewall rules that permit the protocol, port, and source IP range used by your health check, the implied deny ingress firewall rule blocks inbound traffic from all sources. When probe systems are unable to contact your backends, the GCP load balancer categorizes all of your backends as unhealthy. The behavior when all backends are unhealthy depends on the type of load balancer:

  • HTTP(S) and internal HTTP(S) load balancers return HTTP 502 responses to clients when all backends are unhealthy.

  • Connections to SSL Proxy and TCP Proxy load balancers time out when all backends are unhealthy.

  • Network load balancers attempt to distribute traffic to all backend VMs when they are all unhealthy as a means of last resort.

  • Internal TCP/UDP load balancers that aren't using failover distribute traffic to all backend VMs when the backends are all unhealthy as a means of last resort. You can disable this behavior by enabling failover.

Security considerations for probe IP ranges

Consider the following information when planning health checks and the necessary firewall rules:

  • The probe IP ranges belong to Google. GCP uses special routes, outside of your VPC network but within Google's production network, to facilitate communication from probe systems.

  • Google uses the probe IP ranges exclusively to execute health check probes and to send traffic from Google Front Ends (GFEs) for HTTP(S), SSL Proxy, and TCP Proxy load balancers. If a packet is received from the internet (including the external IP address of a Compute Engine instance or a GKE node) and the packet's source IP address is within a probe IP range, Google drops the packet.

  • The probe IP ranges are the complete set of possible IP addresses used by GCP probe systems. If you use tcpdump or a similar tool, you might not observe traffic from all IP addresses in all of the probe IP ranges. As a best practice, create ingress allow firewall rules for your chosen load balancer using all of the probe IP ranges as sources, because GCP can implement new probe systems automatically and without notification.

Multiple probes and frequency

GCP sends health check probes from multiple redundant systems from the appropriate source IP ranges. No single probe system is responsible for all of the probes. Multiple systems issue probes simultaneously so that failure of one does not cause GCP to lose track of backend health states.

The interval and timeout settings that you configure for a health check are applied to each probe system. For a given backend, software access logs and tcpdump show more frequent health check probes than your configured settings would suggest. Because multiple probe systems contact your backends simultaneously, each backend receives more probes than a single probe system's configuration implies.

This is expected behavior, and you cannot configure the number of probe systems that GCP uses for health checks. However, you can estimate the effect of multiple simultaneous probes by considering the following factors:

  • To estimate the probe frequency per backend service, consider the following (a worked example follows this list):

    • Base frequency per backend service: Each health check has an associated check frequency, inversely proportional to the configured check interval:

      1 / (check interval)

      When you associate a health check with a backend service, you establish a base frequency used by each probe system for backends on that backend service.

    • Probe scale factor: The backend service's base frequency is multiplied by the number of simultaneous probe systems that GCP uses. This number can vary, but is generally between 5 and 10.

  • Multiple forwarding rules for internal TCP/UDP load balancers: If you have configured multiple internal forwarding rules (each having a different IP address) pointing to the same regional internal backend service, GCP uses multiple probe systems to check each IP address. The probe frequency per backend service is multiplied by the number of configured forwarding rules.

  • Multiple forwarding rules for network load balancers: If you have configured multiple forwarding rules that point to the same target pool, GCP uses multiple probe systems to check each IP address. The probe frequency as seen by each backend in the target pool is multiplied by the number of configured forwarding rules.

  • Multiple target proxies for HTTP(S) load balancers: If you have configured multiple target proxies for the same URL map for HTTP(S) Load Balancing, GCP uses multiple probe systems to check the IP address associated with each target proxy. The probe frequency per backend service is multiplied by the number of configured target proxies.

  • Multiple target proxies for SSL Proxy and TCP Proxy load balancers: If you have configured multiple target proxies for the same backend service for SSL Proxy or TCP Proxy Load Balancing, GCP uses multiple probe systems to check the IP address associated with each target proxy. The probe frequency per backend service is multiplied by the number of configured target proxies.

  • Sum over backend services: If a backend (such as an instance group) is used by multiple backend services, the backend instances are contacted as frequently as the sum of frequencies for each backend service's health check.

    With network endpoint group (NEG) backends, it's more difficult to determine the exact number of health check probes. For example, the same endpoint can be in multiple NEGs, those NEGs don't necessarily have the same set of endpoints, and different endpoints can point to the same backend.
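
As a rough worked example of the estimate described above: with the default 5-second check interval, the base frequency per probe system is 1 / 5s = 0.2 probes per second. Assuming GCP uses eight probe systems (a hypothetical value within the 5-to-10 range noted above), a backend that belongs to a single backend service with a single forwarding rule or target proxy can expect roughly 8 × 0.2 = 1.6 probes per second, or about one probe every 0.6 seconds.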

Destination for health check packets

GCP health check probes send packets only to the primary network interface of each backend instance. The destination IP address of these packets depends on the type of load balancer:

  • For internal TCP/UDP load balancers and network load balancers, the destination of health check packets is the IP address of the load balancer's forwarding rule. If multiple forwarding rules point to the same backend service or target pool, GCP sends probes to each forwarding rule's IP address. This can result in an increase in the number of probes, as described in the previous section.
  • For HTTP(S), TCP Proxy, SSL Proxy, and internal HTTP(S) load balancers that use instance groups as backends, the destination of health check packets is the primary internal IP address associated with the primary network interface of each backend instance.
  • For HTTP(S), TCP Proxy, SSL Proxy, and internal HTTP(S) load balancers that use network endpoint groups as backends, the destination of health check packets is the IP address of the endpoint, which can be either a primary or secondary (alias IP) address.

Success criteria for HTTP, HTTPS, and HTTP/2

When a health check uses the HTTP, HTTPS, or HTTP/2 protocol, each probe requires an HTTP 200 (OK) response code to be delivered before the probe timeout. In addition:

  • You can configure GCP probe systems to send HTTP requests to a specific request path. If you don't specify a request path, / is used.
  • If you configure a content-based health check by specifying an expected response string, each GCP health check probe must find that string within the first 1,024 bytes of the HTTP response from your backends.

The following combinations of request path and response string flags are available for health checks using HTTP, HTTPS, and HTTP/2 protocols:

Configuration flag Success Criteria
Request path
request-path
Specify the URL path to which GCP sends health check probe requests.
If omitted, GCP sends probe requests to the root path, /. The request-path option doesn't support query parameters.
Response
response
The optional response flag allows you to configure a content-based health check. The expected response string must be less than or equal to 1,024 ASCII (single byte) characters. When configured, GCP expects this string within the first 1,024 bytes of the response in addition to receiving HTTP 200 (OK) status.
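
For example, a content-based HTTP health check that probes the path /healthz and expects the string OK within the first 1,024 bytes of the response might be created as in the following sketch; the health check name, path, and expected string are illustrative only.

    gcloud compute health-checks create http hc-http-content \
        --port 80 \
        --request-path /healthz \
        --response "OK"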

Success criteria for SSL and TCP

Unless you specify an expected response string, probes for health checks using the SSL and TCP protocols are successful when both of the following base conditions are true:

  • Each GCP probe system is able to successfully complete an SSL or TCP handshake before the configured probe timeout, and
  • For TCP health checks, the TCP session is terminated gracefully by either your backend or the GCP probe system, or your backend sends a TCP RST (reset) packet while the TCP session to the probe system is still established.

Be aware that if your backend sends a TCP RST (reset) packet to close the TCP session after the GCP probe system has initiated a graceful TCP termination, the probe might be considered unsuccessful.

You can create a content-based health check by providing an expected response string and, optionally, a request string, each up to 1,024 ASCII (single byte) characters in length. When an expected response string is configured, GCP considers a probe successful only if the base conditions are satisfied and the response string returned exactly matches the expected response string. The following combinations of request and response flags are available for health checks using the SSL and TCP protocols:

Configuration flags Success Criteria
Neither request nor response specified
Neither flag specified: --request, --response
GCP considers the probe successful when the base conditions are satisfied.
Both request and response specified
Both flags specified: --request, --response
GCP sends your configured request string and waits for the expected response string. GCP considers the probe successful when the base conditions are satisfied and the response string returned exactly matches the expected response string.
Only response specified
Flags specified: only --response
GCP waits for the expected response string, and considers the probe successful when the base conditions are satisfied and the response string returned exactly matches the expected response string.
You should only use --response by itself if your backends would automatically send a response string as part of the TCP or SSL handshake.
Only request specified
Flags specified: only --request
GCP sends your configured request string and considers the probe successful when the base conditions are satisfied. The response, if any, is not checked.
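
For example, a content-based TCP health check that sends a request string and expects an exact response string might be created as in the following sketch; the health check name, port, and both strings are placeholders. Per the table above, you can also omit --request if your backends send the response string on their own when a connection is opened.

    gcloud compute health-checks create tcp hc-tcp-content \
        --port 9000 \
        --request "status" \
        --response "ready"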

Health state

GCP uses the following configuration flags and whether or not probes were successful to determine the overall health state of each backend being load balanced:

Configuration flag Purpose Default value
Healthy threshold
healthy-threshold
The healthy threshold specifies the number of sequential successful probe results for a backend to be considered healthy. If omitted, GCP uses a threshold of 2 probes.
Unhealthy threshold
unhealthy-threshold
The unhealthy threshold specifies the number of sequential failed probe results for a backend to be considered unhealthy. If omitted, GCP uses a threshold of 2 probes.

GCP considers backends to be healthy once this healthy threshold has been met. Healthy backends are eligible to receive new connections.

GCP considers backends to be unhealthy when the unhealthy threshold has been met. Unhealthy backends are not eligible to receive new connections; however, existing connections are not immediately terminated. Instead, the connection remains open until a timeout occurs or until traffic is dropped. The specific behavior differs depending on the type of load balancer that you're using.

Existing connections might fail to return responses, depending on the cause of the probe failure. An unhealthy backend can become healthy again if it is able to meet the healthy threshold.
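
For example, to require three sequential results in either direction instead of the default two, you could update an existing health check as in the following sketch; the health check name is a placeholder.

    gcloud compute health-checks update tcp hc-tcp-example \
        --healthy-threshold 3 \
        --unhealthy-threshold 3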

Additional notes

Content-based health checks

A content-based health check is one that has additional success criteria. With a content-based check, the response string from your backend is compared to an expected string. Use a content-based health check to instruct the health check probe to more completely validate your backend's response.

  • You configure an HTTP, HTTPS, or HTTP/2 content-based health check by specifying an expected response string, and, optionally, defining a request path. For more details, refer to Success criteria for HTTP, HTTPS, and HTTP/2.

  • You configure an SSL or TCP content-based health check by specifying an expected response string, and, optionally, a request string. For more details, refer to Success criteria for SSL and TCP.

Certificates and health checks

GCP health check probe systems do not perform certificate validation, even for protocols that require that your backends use certificates (SSL, HTTPS, and HTTP/2). As examples:

  • You can use self-signed certificates or certificates signed by any certificate authority (CA).
  • Certificates that have expired or that are not yet valid are acceptable.
  • Neither the CN nor the subjectAlternativeName attributes need to match a Host header or DNS PTR record.
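
For example, because probe systems skip certificate validation, a test backend can serve a self-signed certificate generated with a command like the following; the subject name is arbitrary as far as health check probes are concerned.

    openssl req -x509 -newkey rsa:2048 -nodes \
        -keyout key.pem -out cert.pem \
        -days 365 \
        -subj "/CN=health-check-test"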

What's next

For information on configuring health checks, see Creating Health Checks.
