Quick Facts
- Category: Cloud Computing
- Published: 2026-04-30 20:30:25
Kubernetes v1.36 brings significant enhancements to the Memory QoS feature, originally introduced in v1.22 and refined in v1.27. Developed by SIG Node, this alpha-level feature uses the cgroup v2 memory controller to provide the kernel with precise instructions on how to handle container memory. The latest update introduces opt-in memory reservation, tiered protection based on Quality of Service (QoS) classes, improved observability metrics, and a kernel version warning for the memory.high limit. These changes give cluster administrators finer-grained control over memory allocation while reducing the risk of out-of-memory (OOM) kills.
Enhanced Memory Management in v1.36
The Memory QoS feature in v1.36 decouples throttling from memory reservation. With the feature gate enabled, the kubelet sets memory.high (computed using the memoryThrottlingFactor, which defaults to 0.9) to initiate throttling. Memory reservation, however, is now controlled independently via the new memoryReservationPolicy kubelet configuration field. This separation lets administrators first observe workload behavior under throttle-only conditions before committing to hard or soft protections.
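As a rough sketch of where that throttle threshold lands (a simplification — the kubelet's actual formula also accounts for the container's memory request, so treat the numbers as illustrative):

```shell
# Illustrative only: approximate memory.high for a container with a
# 1 GiB memory limit and the default memoryThrottlingFactor of 0.9.
limit=$((1024 * 1024 * 1024))     # 1 GiB in bytes
memory_high=$((limit * 9 / 10))   # limit * 0.9, integer-truncated
echo "$memory_high"               # prints 966367641
```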
Opt-In Memory Reservation Policy
The memoryReservationPolicy field offers two options:
- None (default): The kubelet writes no memory.min or memory.low values. Only memory.high throttling is active.
- TieredReservation: The kubelet writes tiered memory protection based on the pod’s QoS class, as described below.
This opt-in approach ensures that operators can enable throttling first, assess node headroom, and then gradually apply reservation policies without risking overcommitment.
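A kubelet configuration enabling both pieces might look like the fragment below. The placement of memoryReservationPolicy as a top-level KubeletConfiguration field is an assumption based on where the existing memoryThrottlingFactor field lives; check the v1.36 release notes for the exact schema:

```yaml
# KubeletConfiguration sketch -- field placement assumed, not verified
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  MemoryQoS: true                  # alpha feature gate
memoryThrottlingFactor: 0.9        # controls memory.high throttling
memoryReservationPolicy: TieredReservation   # None (default) or TieredReservation
```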
Tiered Protection by QoS Class
When memoryReservationPolicy is set to TieredReservation, the kubelet applies different memory protections for each QoS tier:
- Guaranteed Pods: Receive hard protection via memory.min. The kernel will not reclaim this memory under any circumstances; if the guarantee cannot be honored, the kernel invokes the OOM killer on other processes. For example, a Guaranteed pod requesting 512 MiB sees: cat /sys/fs/cgroup/.../memory.min → 536870912.
- Burstable Pods: Receive soft protection via memory.low. The kernel avoids reclaiming this memory under normal pressure but may reclaim it to prevent a system-wide OOM. The same 512 MiB request on a Burstable pod results in memory.low set to 536870912.
- BestEffort Pods: Get neither memory.min nor memory.low. Their memory remains fully reclaimable.
This tiered approach ensures that only the most critical workloads (Guaranteed) lock memory with the highest priority, while Burstable workloads are gently protected but remain flexible under extreme pressure.
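On a node with the policy active, the resulting protections can be read straight out of the cgroup filesystem. The path below is a placeholder — the real directory encodes the pod UID and QoS slice:

```shell
# Placeholder cgroup path; substitute the pod's actual directory under
# /sys/fs/cgroup/kubepods.slice/ on the node.
CG=/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice

cat "$CG/memory.min"    # 0 for a Burstable pod under TieredReservation
cat "$CG/memory.low"    # the request in bytes, e.g. 536870912 for 512 MiB
cat "$CG/memory.high"   # throttle threshold derived from memoryThrottlingFactor
```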
Comparison with Previous Behavior
In Kubernetes v1.27 and earlier, enabling the MemoryQoS feature gate immediately set memory.min for every container with a memory request. Because memory.min is a hard reservation that the kernel never reclaims, workloads with large requests could lock a significant portion of node memory, leaving little room for system daemons or BestEffort pods.
Consider a node with 8 GiB RAM where Burstable pod requests total 7 GiB. Under the old behavior, that 7 GiB was locked as memory.min, drastically reducing headroom. With tiered reservation in v1.36, those same requests map to memory.low instead. Under normal pressure the kernel protects it, but under extreme stress it can reclaim part of that memory to avoid a system-wide OOM. Only Guaranteed pods use memory.min, keeping the total hard reservation lower.
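The headroom arithmetic from that example, spelled out in bytes:

```shell
# 8 GiB node, Burstable pod requests totaling 7 GiB.
node=$((8 * 1024 * 1024 * 1024))
requests=$((7 * 1024 * 1024 * 1024))

# v1.27 and earlier: requests became hard memory.min reservations.
echo "old hard headroom: $((node - requests)) bytes"   # 1073741824 (1 GiB)

# v1.36 TieredReservation: Burstable requests map to soft memory.low,
# so none of it counts against the hard reservation.
echo "new hard headroom: $((node - 0)) bytes"          # 8589934592 (8 GiB)
```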
This change makes the system more resilient and gives administrators confidence to enable memory QoS without fear of locking too much memory.
New Observability Metrics
Kubernetes v1.36 exposes two new alpha-level metrics on the kubelet’s /metrics endpoint:
| Metric | Description |
|---|---|
| kubelet_memory_qos_node_memory_min_bytes | Total amount of memory.min across all cgroups on the node |
| kubelet_memory_qos_node_memory_low_bytes | Total amount of memory.low across all cgroups on the node |
These metrics enable operators to monitor how much memory is protected at each tier, making capacity planning and troubleshooting more data-driven. For example, you can verify that memory.min is not too high relative to total node memory.
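One way to pull these gauges off a node (worker-1 is a placeholder name) is through the API server's node proxy:

```shell
# Scrape the kubelet's /metrics endpoint via the API server proxy and
# filter for the two new alpha Memory QoS gauges.
NODE=worker-1   # placeholder node name
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/metrics" \
  | grep -E '^kubelet_memory_qos_node_memory_(min|low)_bytes'
```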
Kernel Version Warning for memory.high
Another addition in v1.36 is a kernel version warning when the memory.high limit is used. This helps administrators ensure that their nodes run a kernel version that correctly handles memory.high behavior, preventing unexpected throttling or OOM events. The warning appears in the kubelet logs if the kernel is older than the recommended version.
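Two quick checks worth running before depending on memory.high semantics: confirm the node is on the unified cgroup v2 hierarchy, and note the running kernel version (the article does not state the minimum version the kubelet warns about, so none is hard-coded below):

```shell
# Should print "cgroup2fs" when the unified cgroup v2 hierarchy is mounted.
stat -fc %T /sys/fs/cgroup

# Running kernel version, to compare against the kubelet's recommended
# minimum (the warning itself lands in the kubelet logs).
uname -r
```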
With these improvements, the Memory QoS feature in Kubernetes v1.36 provides a safer, more tunable approach to memory management. By separating throttling from reservation, introducing tiered protection per QoS class, and adding observability metrics, operators can now fine-tune memory guarantees without sacrificing node efficiency.