Stop Paying for Resources Your Pods Don't Need

March 23, 2026

If you manage Kubernetes infrastructure at scale, you already know the pattern. Development teams request CPU and memory "just to be safe." Nobody wants their app to OOM. Nobody wants to get paged at 2am because a pod got throttled. So requests get padded and they stay padded.

The result? Clusters are full of pods consuming far less than what they've been allocated. Nodes are running hot on paper but idle in practice. And the platform team responsible for cost governance across dozens of clusters, projects, and namespaces has no easy way to prove it.

Rafay's new App Resizing feature, introduced in the 4.1 release, gives centralized platform teams the data they need to identify over-provisioned workloads and make a compelling, evidence-backed case for rightsizing them.

The workflow is simple. As illustrated below, a Platform Admin can trigger a report on demand through the UI or automate it via the API, with scope defined across projects, clusters, namespaces, and a time period.
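For the API path, a report trigger might be scripted along the lines of the sketch below. The endpoint path, field names, and the `build_report_request` helper are illustrative assumptions for this post, not Rafay's documented API schema:

```python
import json
import urllib.request


def build_report_request(projects, clusters, namespaces, days):
    """Assemble a hypothetical App Resizing report request body.

    Field names here are assumptions -- consult the Rafay API
    reference for the actual schema.
    """
    return {
        "scope": {
            "projects": projects,
            "clusters": clusters,
            "namespaces": namespaces,
        },
        "timePeriodDays": days,  # metrics are retained for up to 30 days
    }


def trigger_report(base_url, token, payload):
    """POST the request body to a hypothetical report endpoint."""
    req = urllib.request.Request(
        f"{base_url}/v1/app-resizing/reports",  # illustrative path
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_report_request(
    projects=["payments"],
    clusters=["prod-east"],
    namespaces=["checkout"],
    days=30,
)
```

Scheduling the same call from a cron job or CI pipeline gives you the "run periodically" half of the workflow without anyone touching the UI.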

Figure 1: App Resizing workflow — from trigger to output

Rafay collects granular CPU and memory utilization metrics with up to 30 days of retention, and generates a per-pod comparison of requested versus actually consumed resources.

Reports can be scoped to specific clusters, namespaces, or projects, giving platform teams the flexibility to focus on the noisiest offenders first — or run a broad sweep across the entire organization.

Each report is exported as a ZIP file containing one CSV per cluster, making it easy to share with application teams or feed into a broader cost management workflow.
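The ZIP-per-report, CSV-per-cluster layout is straightforward to consume programmatically. A minimal sketch using only Python's standard library; the assumption that each CSV is named after its cluster, and the column names in the usage example, are illustrative:

```python
import csv
import io
import zipfile


def iter_report_rows(zip_file):
    """Yield (cluster_name, row_dict) for every pod row in a report ZIP.

    Assumes one CSV per cluster, with the file named after the cluster.
    """
    with zipfile.ZipFile(zip_file) as zf:
        for name in zf.namelist():
            if not name.endswith(".csv"):
                continue
            # Strip any directory prefix and the ".csv" suffix.
            cluster = name.rsplit("/", 1)[-1][:-4]
            with zf.open(name) as f:
                reader = csv.DictReader(io.TextIOWrapper(f, encoding="utf-8"))
                for row in reader:
                    yield cluster, row
```

From here, the rows can be grouped by namespace, joined with ownership metadata, or pushed into whatever cost-management tooling the organization already runs.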

Built for the Platform Team, Shared with Everyone

The real power of App Resizing isn't just in generating the data — it's in what you do with it.

Platform teams can share reports directly with development and application teams to kick off rightsizing conversations with hard numbers rather than guesswork. Instead of asking a team to "review their resource requests," you can hand them a CSV that shows their pod has been consuming 15% of its requested memory for the past 30 days.
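That kind of filtering is easy to automate once the report is in hand. A sketch of flagging over-provisioned pods from report rows; the column names (`mem_request_mib`, `mem_used_mib`) and the 50% threshold are assumptions chosen for illustration:

```python
def flag_overprovisioned(rows, threshold=0.5):
    """Return (pod, utilization) pairs where average usage is below
    `threshold` of the requested amount.

    `rows` are dicts with assumed keys: pod, mem_request_mib, mem_used_mib.
    """
    flagged = []
    for row in rows:
        request = float(row["mem_request_mib"])
        used = float(row["mem_used_mib"])
        if request > 0 and used / request < threshold:
            flagged.append((row["pod"], round(used / request, 2)))
    return flagged


report = [
    {"pod": "checkout-api", "mem_request_mib": "2048", "mem_used_mib": "307"},
    {"pod": "worker", "mem_request_mib": "1024", "mem_used_mib": "900"},
]
print(flag_overprovisioned(report))  # → [('checkout-api', 0.15)]
```

Handing an application team the flagged list, with utilization ratios attached, is exactly the "hard numbers rather than guesswork" conversation described above.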

That's a very different conversation.

Figure 2: Generated App Resizing reports, ready to download and share

How It Works: End to End

Platform Admin (UI) ──┐
                      ├──▶  Configure Scope  ──▶  Metrics DB  ──▶  ZIP Report
Automated via API  ───┘     (Projects /                           (1 CSV per
                             Clusters /                            cluster)
                             Namespaces /                              │
                             Time Period)                              │
                        ┌───────────────────────┬──────────────────────┘
                        ▼                       ▼                      ▼
                    End Users             Cost Savings           Platform Team
                  (resize pods)        (reclaim capacity)          (at scale)

What's Next

This release focuses on insight and reporting. A future release will add the ability to auto-resize workloads by applying rightsizing recommendations automatically, initially targeted at test and non-production clusters where the risk threshold is lower.
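To make the idea of a rightsizing recommendation concrete, one common heuristic is to size the request at a high percentile of observed usage plus some headroom. This is a generic sketch, not Rafay's actual recommendation algorithm, and the percentile and headroom values are arbitrary illustrations:

```python
def recommend_request(samples_mib, percentile=0.95, headroom=1.2):
    """Suggest a memory request from observed usage samples.

    Takes the given usage percentile and multiplies by a headroom
    factor. A generic heuristic for illustration only.
    """
    ordered = sorted(samples_mib)
    idx = min(len(ordered) - 1, int(percentile * len(ordered)))
    return round(ordered[idx] * headroom)


# 30 days of hypothetical hourly peaks: mostly ~300 MiB with one spike
usage = [300] * 95 + [350] * 4 + [900]
print(recommend_request(usage))  # → 420
```

A pod requesting 2048 MiB against that profile could safely drop to a few hundred MiB; the point of gating auto-resize to non-production clusters first is that a heuristic like this occasionally clips a legitimate spike.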

📋 Note: App Resizing requires the Rafay Prometheus stack to be enabled for metrics collection.

Want a deeper dive into the Rafay Platform?

Book time with an expert.

Book a demo