Kubernetes 1.36 Alpha: Pod-Level Resource Managers Unlock Better Performance for Critical Workloads


Introduction

Kubernetes v1.36 introduces an alpha feature called Pod-Level Resource Managers, which fundamentally changes how the kubelet allocates CPU, memory, and NUMA-aligned resources to pods. Previously, the Topology, CPU, and Memory Managers operated strictly on a per-container basis. With this enhancement, you can now define resource budgets at the pod level, enabling hybrid allocation models that combine exclusive, dedicated resources for primary containers with shared pools for sidecars. This brings unprecedented flexibility and efficiency to performance-sensitive workloads like machine learning training, high-frequency trading, and low-latency databases.

The Problem: Per-Container Allocation Limitations

Performance-critical applications often require exclusive, NUMA-aligned resources to guarantee predictable behavior. However, modern Kubernetes pods rarely contain just one container—they frequently include sidecar containers for logging, monitoring, service mesh proxies, or data ingestion. Before this feature, getting dedicated, integer-based CPU resources for your main application forced you to allocate the same exclusive resources to every container in the pod. This was wasteful for lightweight sidecars that don’t need dedicated cores. The alternative was to accept Burstable or BestEffort Quality of Service (QoS) classes, sacrificing NUMA alignment and performance guarantees.

In short, users faced a painful trade-off: either waste resources on sidecars to achieve Guaranteed QoS and NUMA alignment, or accept degraded performance for the primary workload.

How Pod-Level Resource Managers Solve It

Enabled via the PodLevelResourceManagers and PodLevelResources feature gates, this alpha feature extends the kubelet’s managers to understand pod-level resource specifications (.spec.resources). The kubelet can now create hybrid resource allocation models: containers that declare their own integer CPU and memory requests receive exclusive, NUMA-aligned slices carved from the pod’s budget, while containers without explicit requests run in a shared pool formed from whatever remains of that budget.

This approach eliminates the need to allocate dedicated cores to every container. Sidecars can run in a shared pool, using CPU and memory efficiently while maintaining strict isolation from the primary workload. The Topology Manager performs a single NUMA alignment based on the entire pod’s budget, ensuring consistent performance.

Real-World Use Cases

Tightly-Coupled Database Pod

Consider a latency-sensitive database pod that includes a main database container, a local metrics exporter, and a backup agent sidecar. With the Topology Manager set to pod scope:

  1. The kubelet performs NUMA alignment using the pod’s total resource budget (e.g., 8 CPUs, 16Gi memory).
  2. The database container gets its exclusive CPU and memory slices from that NUMA node.
  3. The remaining resources form a pod shared pool.
  4. The metrics exporter and backup agent run in this shared pool, sharing resources with each other but strictly isolated from the database’s exclusive slices and other pods on the node.

This lets you safely co-locate auxiliary containers on the same NUMA node as the primary workload without wasting dedicated cores. Below is a simplified example pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: tightly-coupled-database
spec:
  # Pod-level resources establish the overall budget and NUMA alignment size.
  resources:
    requests:
      cpu: "8"
      memory: "16Gi"
    limits:
      cpu: "8"
      memory: "16Gi"
  initContainers:
  # Sidecars are declared as init containers with restartPolicy: Always.
  - name: metrics-exporter
    image: metrics-exporter:v1
    restartPolicy: Always
  - name: backup-agent
    image: backup-agent:v1
    restartPolicy: Always
  containers:
  - name: database
    image: postgres:15
    # Container-level integer requests carve an exclusive slice out of the
    # pod budget; the remainder (here 2 CPUs and 4Gi) becomes the shared
    # pool for the two sidecars.
    resources:
      requests:
        cpu: "6"
        memory: "12Gi"
      limits:
        cpu: "6"
        memory: "12Gi"
  ...

Machine Learning Training with Sidecars

ML training pods often include a GPU-accelerated main container and sidecars for logging, monitoring, or checkpoint uploads. With pod-level resource managers, the GPU container can receive exclusive CPU and memory while sidecars share a smaller pool. This ensures the training workload isn’t starved of resources, while still allowing auxiliary functionality to operate reliably.
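A sketch of such a pod, following the same pattern as the database example. The image names are hypothetical, and the GPU is requested through the standard nvidia.com/gpu extended resource; the CPU and memory figures are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: ml-training
spec:
  # Pod-level budget; also the size used for NUMA alignment.
  resources:
    requests:
      cpu: "16"
      memory: "64Gi"
    limits:
      cpu: "16"
      memory: "64Gi"
  initContainers:
  - name: checkpoint-uploader      # sidecar: runs in the shared pool
    image: checkpoint-uploader:v1  # hypothetical image
    restartPolicy: Always
  containers:
  - name: trainer
    image: ml-trainer:v1           # hypothetical image
    resources:
      requests:
        cpu: "14"                  # integer request => exclusive cores
        memory: "56Gi"
        nvidia.com/gpu: "1"
      limits:
        cpu: "14"
        memory: "56Gi"
        nvidia.com/gpu: "1"

The 2 CPUs and 8Gi left over from the pod budget form the shared pool for the checkpoint uploader.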

Enabling and Configuring the Feature

Pod-Level Resource Managers are available in Kubernetes v1.36 as an alpha feature. To enable it:

  1. Enable the PodLevelResources feature gate on the API server and kubelet, and the PodLevelResourceManagers feature gate on the kubelet.
  2. Set the kubelet’s CPU Manager policy to static so that integer CPU requests receive exclusive cores (and, if memory pinning is required, set the Memory Manager policy to Static).
  3. Set the Topology Manager scope to pod so that NUMA alignment is performed against the pod’s total resource budget.

For detailed configuration, refer to the example above and the Kubernetes documentation.
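A minimal KubeletConfiguration sketch for these settings. The fields shown are standard kubelet options, and the feature-gate names match the gates named earlier in this article; the reserved CPU set is illustrative:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  PodLevelResources: true
  PodLevelResourceManagers: true
cpuManagerPolicy: static          # required for exclusive CPU allocation
topologyManagerPolicy: single-numa-node
topologyManagerScope: pod         # align the whole pod, not each container
reservedSystemCPUs: "0,1"         # the static policy requires reserved CPUs

Note that the kubelet must be restarted (and, for a CPU Manager policy change, its CPU state file reset) for these settings to take effect.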

Conclusion

Pod-Level Resource Managers solve a longstanding pain point for operators of performance-sensitive workloads. By allowing pod-level resource budgets and hybrid allocation models, this Kubernetes v1.36 alpha feature enables NUMA-aligned exclusive resources for primary containers while efficiently sharing resources among sidecars. As the feature matures, it promises to become a standard tool for optimizing resource utilization without sacrificing performance.
