Vertical Pod Autoscaling

In Kubernetes, a VerticalPodAutoscaler automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of adjusting resource requests and limits to match actual usage.

Vertical scaling means that the response to increased resource demand is to assign more resources (for example: memory or CPU) to the Pods that are already running for the workload. This is also known as "rightsizing" or "autopilot". This is different from horizontal scaling, which for Kubernetes would mean deploying more Pods to distribute the load.

If the resource usage decreases, and the Pod resource requests are above optimal levels, the VerticalPodAutoscaler instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to adjust resource requests back down, preventing resource waste.

The VerticalPodAutoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller. The vertical pod autoscaling controller, running in your cluster, periodically adjusts the resource requests and limits of the Pods managed by its target (for example, a Deployment) based on analysis of historical resource utilization, the amount of resources available in the cluster, and real-time events such as out-of-memory (OOM) conditions.

API object

The VerticalPodAutoscaler is defined as a Custom Resource Definition (CRD) in Kubernetes. Unlike HorizontalPodAutoscaler, which is part of the core Kubernetes API, VPA must be installed separately in your cluster.

The current stable API version is autoscaling.k8s.io/v1. More details about the VPA installation and API can be found in the VPA GitHub repository.
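
If you are not sure whether the VPA is installed, you can check which resources the autoscaling.k8s.io API group serves; the output should include verticalpodautoscalers when the CRD is present:

kubectl api-resources --api-group=autoscaling.k8s.io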

How does a VerticalPodAutoscaler work?

graph BT
  metrics[Metrics Server]
  api[API Server]
  admission[VPA Admission Controller]
  vpa_cr[VerticalPodAutoscaler CRD]
  recommender[VPA Recommender]
  updater[VPA Updater]

  metrics --> recommender
  recommender -->|Stores Recommendations| vpa_cr

  subgraph Application Workload
    controller[Deployment / RC / StatefulSet]
    pod[Pod / Container]
  end

  vpa_cr -->|Checks for changes| updater
  updater -->|Evicts Pod or Updates in place| controller
  controller -->|Requests new Pod| api
  api -->|New Pod Creation| admission
  admission -->|Retrieves latest recommendation| vpa_cr
  admission -->|Injects new resource values| api
  api -->|Creates Pod| controller
  controller -->|New Pod with Optimal Resources| pod

  classDef vpa fill:#9FC5E8,stroke:#1E1E1D,stroke-width:1px,color:#1E1E1D;
  classDef crd fill:#D5A6BD,stroke:#1E1E1D,stroke-width:1px,color:#1E1E1D;
  classDef metrics fill:#FFD966,stroke:#1E1E1D,stroke-width:1px,color:#1E1E1D;
  classDef app fill:#B6D7A8,stroke:#1E1E1D,stroke-width:1px,color:#1E1E1D;

  class recommender,updater,admission vpa;
  class vpa_cr crd;
  class metrics metrics;
  class controller,pod app;

Figure 1. VerticalPodAutoscaler controls the resource requests and limits of Pods in a Deployment

Kubernetes implements vertical pod autoscaling through multiple cooperating components that run intermittently (it is not a continuous process). The VPA consists of three main components:

  • The Recommender, which analyzes resource usage and provides recommendations
  • The Updater, which updates Pod resource requests either by evicting Pods or modifying them in place
  • The Admission Controller, which applies recommendations to new or recreated Pods

Once during each period, the Recommender queries the resource utilization of the Pods targeted by each VerticalPodAutoscaler definition. The Recommender finds the target resource defined by targetRef, selects the Pods using that resource's .spec.selector labels, and obtains metrics from the resource metrics API to analyze their actual CPU and memory consumption.

The Recommender analyzes both current and historical resource usage data (CPU and memory) for each Pod targeted by the VerticalPodAutoscaler. It examines:

  • Historical consumption patterns over time to identify trends
  • Peak usage and variance to ensure sufficient headroom
  • Current resource requests compared to actual usage
  • Out-of-memory (OOM) events and other resource-related incidents

Based on this analysis, the Recommender calculates three types of recommendations:

  • Target recommendation (optimal resources for typical usage)
  • Lower bound (minimum viable resources)
  • Upper bound (maximum reasonable resources)

These recommendations are stored in the VerticalPodAutoscaler resource's .status.recommendation field.
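
For illustration, a stored recommendation in the status might look like the following (the container name and values are hypothetical):

status:
  recommendation:
    containerRecommendations:
    - containerName: application
      lowerBound:
        cpu: 100m
        memory: 256Mi
      target:
        cpu: 250m
        memory: 512Mi
      upperBound:
        cpu: "1"
        memory: 1Gi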

The Updater component monitors the VerticalPodAutoscaler resources and compares current Pod resource requests with the recommendations. When the difference exceeds configured thresholds and the update policy allows it, the Updater can either:

  • Evict Pods, triggering their recreation with new resource requests (traditional approach)
  • Update Pod resources in place without eviction, when the cluster supports in-place Pod resource updates

The chosen method depends on the configured update mode, cluster capabilities, and the type of resource change needed. In-place updates, when available, avoid Pod disruption but may have limitations on which resources can be modified. The Updater respects PodDisruptionBudgets to minimize service impact.
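
Besides respecting PodDisruptionBudgets, the Updater's eviction behavior can be constrained through the updatePolicy. For example, the optional minReplicas field tells the Updater to evict Pods for resizing only while at least that many replicas are running; a minimal sketch:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Recreate"
    # Do not evict Pods for resizing unless at least 2 replicas are alive
    minReplicas: 2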

The Admission Controller operates as a mutating webhook that intercepts Pod creation requests. It checks if the Pod is targeted by a VerticalPodAutoscaler and, if so, applies the recommended resource requests and limits before the Pod is created. This ensures new Pods start with appropriately sized resource allocations, whether they're created during initial deployment, after an eviction by the Updater, or due to scaling operations.
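
If you want to confirm that the Admission Controller's webhook is registered in your cluster, you can list the mutating webhook configurations; the exact name of the VPA entry depends on how the VPA was installed:

kubectl get mutatingwebhookconfigurations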

The VerticalPodAutoscaler requires the Metrics Server to be installed in the cluster. The VPA components fetch metrics from the metrics.k8s.io API. The Metrics Server needs to be launched separately as it is not deployed by default in most clusters. For more information about resource metrics, see Metrics Server.
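
To verify that the resource metrics pipeline is working, you can check the APIService that the Metrics Server registers and query current Pod usage:

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top pods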

Update modes

The VerticalPodAutoscaler supports different update modes that control how and when resource recommendations are applied to your Pods. You configure the update mode using the updateMode field in the VPA spec under updatePolicy:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Recreate"  # Off, Initial, Recreate, InPlaceOrRecreate

Off

In Off mode, the VPA Recommender still analyzes resource usage and generates recommendations, but these recommendations are not automatically applied to Pods. The recommendations are only stored in the VPA object's status field.
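
With updateMode: "Off", you can inspect the stored recommendations yourself, for example with kubectl (using the my-app-vpa name from the earlier example):

kubectl describe vpa my-app-vpa
# or print only the recommendation from the status
kubectl get vpa my-app-vpa -o jsonpath='{.status.recommendation}'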

Initial

In Initial mode, VPA only sets resource requests when Pods are first created. It does not update resources for already running Pods, even if recommendations change over time.

Recreate

In Recreate mode, VPA actively manages Pod resources by evicting Pods when their current resource requests differ significantly from recommendations. When a Pod is evicted, the workload controller (Deployment, StatefulSet, etc.) creates a replacement Pod, and the VPA Admission Controller applies the updated resource requests to the new Pod.

InPlaceOrRecreate

In InPlaceOrRecreate mode, VPA attempts to update Pod resource requests and limits without restarting the Pod when possible. However, if in-place updates cannot be performed for a particular resource change, VPA falls back to evicting the Pod (similar to Recreate mode) and allowing the workload controller to create a replacement Pod with updated resources.

Auto

Auto mode is currently an alias for Recreate mode and behaves identically. It was introduced to allow for future expansion of automatic update strategies.

Resource policies

Resource policies allow you to fine-tune how the VerticalPodAutoscaler generates recommendations and applies updates. You can set boundaries for resource recommendations, specify which resources to manage, and configure different policies for individual containers within a Pod.

You define resource policies in the resourcePolicy field of the VPA spec:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Recreate"
  resourcePolicy:
    containerPolicies:
    - containerName: "application"
      minAllowed:
        cpu: 100m
        memory: 128Mi
      maxAllowed:
        cpu: 2
        memory: 2Gi
      controlledResources:
      - cpu
      - memory
      controlledValues: RequestsAndLimits

minAllowed and maxAllowed

These fields set boundaries for VPA recommendations. The VPA will never recommend resources below minAllowed or above maxAllowed, even if the actual usage data suggests different values.

controlledResources

The controlledResources field specifies which resource types VPA should manage for a container. If not specified, VPA manages both CPU and memory by default. You can limit VPA to manage only specific resources. Valid resource names include cpu and memory.
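
For example, to let VPA manage only memory for a container while leaving its CPU requests untouched, you could use a resourcePolicy fragment like this sketch (as it would appear under spec; the container name is illustrative):

  resourcePolicy:
    containerPolicies:
    - containerName: "application"
      controlledResources:
      - memory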

controlledValues

The controlledValues field determines whether VPA controls resource requests, limits, or both:

  • RequestsAndLimits (default): VPA sets both requests and limits. Limits are scaled to keep the same limit-to-request proportion that the container was originally configured with.
  • RequestsOnly: VPA only sets requests and leaves limits unchanged. Existing limits still apply and can trigger CPU throttling or OOM kills if usage exceeds them (see the example below).
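
If you prefer to manage limits yourself, a container policy using RequestsOnly might look like this sketch (again a fragment under spec, with an illustrative container name):

  resourcePolicy:
    containerPolicies:
    - containerName: "application"
      controlledValues: RequestsOnly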