This page provides an overview of vertical Pod autoscaling in Google Kubernetes Engine (GKE) and serves as reference material for the `VerticalPodAutoscaler` custom resource and related types.
Vertical Pod autoscaling provides recommendations for resource usage over time. For sudden increases in resource usage, use the Horizontal Pod Autoscaler.
To learn how to use vertical Pod autoscaling, see Scale container resource requests and limits.
To learn best practices for autoscaling, see Best practices for running cost-optimized Kubernetes applications on GKE.
How vertical Pod autoscaling works
Vertical Pod autoscaling lets you analyze and set the CPU and memory resources that your Pods require. Instead of maintaining up-to-date CPU and memory requests and limits for the containers in your Pods, you can configure vertical Pod autoscaling to provide recommended values for CPU and memory requests and limits that you apply to your Pods manually, or you can configure vertical Pod autoscaling to update the values automatically.
Vertical Pod autoscaling is enabled by default in Autopilot clusters.
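For example, a minimal recommendation-only `VerticalPodAutoscaler` manifest might look like the following sketch. The Deployment name `my-app` is a placeholder; with `updateMode: "Off"`, the autoscaler only surfaces recommendations and never evicts Pods.

```yaml
# Sketch: a recommendation-only VerticalPodAutoscaler.
# "my-app" is a hypothetical Deployment name.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Off"   # surface recommendations only; don't evict Pods
```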
Vertical Pod autoscaling in Auto mode
Due to Kubernetes limitations, the only way to modify the resource requests of a running Pod is to recreate the Pod. If you create a `VerticalPodAutoscaler` object with an `updateMode` of `Auto`, the `VerticalPodAutoscaler` evicts a Pod if it needs to change the Pod's resource requests.
To limit the number of Pod restarts, use a Pod disruption budget. To ensure that your cluster can handle the new sizes of your workloads, use the cluster autoscaler and node auto-provisioning.
Vertical Pod autoscaling notifies the cluster autoscaler ahead of the update and provides the resources needed for the resized workload before recreating it, which minimizes the disruption time.
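A Pod disruption budget caps how many replicas the autoscaler can evict at once. A minimal sketch, assuming a hypothetical Deployment whose Pods carry the label `app: my-app`:

```yaml
# Sketch: keep at least 2 Pods running while the autoscaler evicts others.
# The "app: my-app" label is a placeholder for your workload's labels.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
```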
Benefits
Vertical Pod autoscaling provides the following benefits:
- Setting the right resource requests and limits for your workloads improves stability and cost efficiency. If your Pod resource sizes are smaller than your workloads require, your application can be throttled or can fail due to out-of-memory errors. If your resource sizes are too large, you waste resources and increase your bills.
- Cluster nodes are used efficiently because Pods use exactly what they need.
- Pods are scheduled onto nodes that have the appropriate resources available.
- You don't have to run time-consuming benchmarking tasks to determine the correct values for CPU and memory requests.
- Maintenance time is reduced because the autoscaler can adjust CPU and memory requests over time without any action on your part.
GKE vertical Pod autoscaling provides the following benefits over the Kubernetes open source autoscaler:
- Takes maximum node size and resource quotas into account when determining the recommendation target.
- Notifies the cluster autoscaler to adjust cluster capacity.
- Uses historical data, providing metrics collected before you enable the Vertical Pod Autoscaler.
- Runs the Vertical Pod Autoscaler Pods as control plane processes, instead of as Deployments on your worker nodes.
Limitations
- To use vertical Pod autoscaling together with horizontal Pod autoscaling on CPU or memory, use multidimensional Pod autoscaling. You can, however, combine vertical Pod autoscaling with horizontal Pod autoscaling on custom and external metrics.
- Vertical Pod autoscaling is not ready for use with JVM-based workloads due to limited visibility into actual memory usage of the workload.
- Vertical Pod autoscaling has a default setting of two minimum replicas for Deployments to replace Pods with revised resource values. In GKE version 1.22 and later, you can override this setting by specifying a value for `minReplicas` in the `PodUpdatePolicy` field.
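For example, to let the autoscaler evict Pods even when a Deployment runs a single replica, you can override the default in `updatePolicy`. A sketch, assuming GKE 1.22 or later and a placeholder Deployment named `my-app`:

```yaml
# Sketch: override the GKE default of 2 minimum replicas.
# "my-app" is a hypothetical Deployment name.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"
    minReplicas: 1   # allow eviction of the only replica
```

Note that with `minReplicas: 1`, evicting the sole replica makes the workload briefly unavailable while the resized Pod starts.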
Best practices
- To avoid cluster update disruptions, we recommend that you keep the number of `VerticalPodAutoscaler` objects per cluster under 1,000.
- Vertical Pod autoscaling works best with long-running homogeneous workloads.
API reference
This is the `v1` API reference. We strongly recommend using this version of the API.
VerticalPodAutoscaler v1 autoscaling.k8s.io
Fields | Description
---|---
`TypeMeta` | API group, version, and kind.
`metadata` | Standard object metadata.
`spec` | The desired behavior of the `VerticalPodAutoscaler`.
`status` | The most recently observed status of the `VerticalPodAutoscaler`.
VerticalPodAutoscalerSpec v1 autoscaling.k8s.io
Fields | Description
---|---
`targetRef` | Reference to the controller that manages the set of Pods for the autoscaler to control, for example, a Deployment or a StatefulSet. You can point a `VerticalPodAutoscaler` at any controller that has a Scale subresource.
`updatePolicy` | Specifies whether recommended updates are applied when a Pod is started and whether recommended updates are applied during the life of a Pod.
`resourcePolicy` | Specifies policies for how CPU and memory requests are adjusted for individual containers. The resource policy can be used to set constraints on the recommendations for individual containers. If not specified, the autoscaler computes recommended resources for all containers in the Pod, without additional constraints.
`recommenders` | Recommender responsible for generating recommendations for this VPA object. Leave empty to use the default recommender provided by GKE. Otherwise, the list can contain exactly one entry for a user-provided alternative recommender. Supported since GKE 1.22.
VerticalPodAutoscalerList v1 autoscaling.k8s.io
Fields | Description
---|---
`TypeMeta` | API group, version, and kind.
`metadata` | Standard object metadata.
`items` | A list of `VerticalPodAutoscaler` objects.
PodUpdatePolicy v1 autoscaling.k8s.io
Fields | Description
---|---
`updateMode` | Specifies whether recommended updates are applied when a Pod is started and whether recommended updates are applied during the life of a Pod. Possible values are "Off", "Initial", "Recreate", and "Auto". The default is "Auto" if you don't specify a value.
`minReplicas` | Minimum number of replicas that need to be alive for the Updater to attempt Pod eviction (pending other checks like the Pod Disruption Budget). Only positive values are allowed. Defaults to the global `--min-replicas` flag, which is set to 2 in GKE. Supported since GKE 1.22.
PodResourcePolicy v1 autoscaling.k8s.io
Fields | Description
---|---
`containerPolicies` | An array of resource policies for individual containers. There can be at most one entry for every named container and optionally a single wildcard entry with `containerName = '*'`, which handles all containers that don't have individual policies.
ContainerResourcePolicy v1 autoscaling.k8s.io
Fields | Description
---|---
`containerName` | The name of the container that the policy applies to. If not specified, the policy serves as the default policy.
`mode` | Specifies whether recommended updates are applied to the container when it is started and whether recommended updates are applied during the life of the container. Possible values are "Off" and "Auto". The default is "Auto" if you don't specify a value.
`minAllowed` | Specifies the minimum CPU request and memory request allowed for the container. By default, there is no minimum applied.
`maxAllowed` | Specifies the maximum CPU request and memory request allowed for the container. By default, there is no maximum applied.
`controlledResources` | Specifies the type of recommendations that will be computed (and possibly applied) by the `VerticalPodAutoscaler`.
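The container policy fields above can be combined to pin down recommendations per container. A sketch, with hypothetical container names, that excludes a sidecar from autoscaling and bounds every other container:

```yaml
# Sketch: per-container constraints in a VerticalPodAutoscaler spec.
# "my-sidecar" is a hypothetical container name; '*' is the wildcard entry.
spec:
  resourcePolicy:
    containerPolicies:
    - containerName: my-sidecar
      mode: "Off"                 # exclude this container from autoscaling
    - containerName: "*"          # applies to all other containers
      minAllowed:
        cpu: 250m
        memory: 256Mi
      maxAllowed:
        cpu: "2"
        memory: 2Gi
      controlledResources: ["cpu", "memory"]
```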
VerticalPodAutoscalerRecommenderSelector v1 autoscaling.k8s.io
Fields | Description
---|---
`name` | Name of the recommender responsible for generating recommendations for this object.
VerticalPodAutoscalerStatus v1 autoscaling.k8s.io
Fields | Description
---|---
`recommendation` | The most recently recommended CPU and memory requests.
`conditions` | Describes the current state of the `VerticalPodAutoscaler`.
RecommendedPodResources v1 autoscaling.k8s.io
Fields | Description
---|---
`containerRecommendations` | An array of resource recommendations for individual containers.
RecommendedContainerResources v1 autoscaling.k8s.io
Fields | Description
---|---
`containerName` | The name of the container that the recommendation applies to.
`target` | The recommended CPU request and memory request for the container.
`lowerBound` | The minimum recommended CPU request and memory request for the container. This amount is not guaranteed to be sufficient for the application to be stable. Running with smaller CPU and memory requests is likely to have a significant impact on performance or availability.
`upperBound` | The maximum recommended CPU request and memory request for the container. CPU and memory requests higher than these values are likely to be wasted.
`uncappedTarget` | The most recent resource recommendation computed by the autoscaler, based on actual resource usage, not taking into account the `ContainerResourcePolicy`. If actual resource usage causes the target to violate the `ContainerResourcePolicy`, this might differ from the bounded recommendation. This field doesn't affect actual resource assignment. It is used only as a status indication.
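The recommendation fields above appear in the object's `status`. An abridged excerpt, with illustrative values and a hypothetical container name, might look like this:

```yaml
# Sketch: abridged status of a VerticalPodAutoscaler (values are illustrative).
status:
  recommendation:
    containerRecommendations:
    - containerName: my-app        # hypothetical container name
      lowerBound:
        cpu: 25m
        memory: 256Mi
      target:
        cpu: 250m
        memory: 512Mi
      upperBound:
        cpu: "1"
        memory: 1Gi
      uncappedTarget:
        cpu: 250m
        memory: 512Mi
```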
VerticalPodAutoscalerCondition v1 autoscaling.k8s.io
Fields | Description
---|---
`type` | The type of condition being described. Possible values are "RecommendationProvided", "LowConfidence", "NoPodsMatched", and "FetchingHistory".
`status` | The status of the condition. Possible values are True, False, and Unknown.
`lastTransitionTime` | The last time the condition transitioned from one status to another.
`reason` | The reason for the last transition from one status to another.
`message` | A human-readable string that gives details about the last transition from one status to another.
What's next
- Learn how to Scale container resource requests and limits.
- Learn about Cluster autoscaler.