The Kubernetes Current Blog

What GPU Metrics to Monitor and Why?

With the increasing reliance on GPUs for compute-intensive tasks such as machine learning, deep learning, data processing, and rendering, both infrastructure administrators and GPU users (i.e. data scientists, ML engineers, and GenAI app developers) need timely insight into the performance, efficiency, and overall health of their GPU resources.

To make data-driven, logical decisions, these users need access to the metrics that matter for their GPUs. This is the first blog in a series describing the GPU metrics you should track and monitor. In subsequent blogs, we will do a deep dive into each metric, why it matters, and how to use it effectively.



GPU Metrics Analyzed

Although it is possible to collect all kinds of metrics data, not everything matters. Let's look at the GPU metrics that are generally worth tracking and evaluate each one through the lens of why it matters.

Metric and why monitoring it matters (a minimal collection sketch follows this list):

  • GPU Utilization: Ensures efficient resource allocation and identifies under- or over-utilization
  • Memory Utilization: Prevents memory bottlenecks and OOM errors, and enables capacity planning
  • Power Consumption: Helps manage energy costs and prevents overheating or power-related issues
  • Temperature: Prevents thermal throttling and potential GPU hardware failures
  • Error Metrics (ECC, throttling): Detects hardware or software issues, preventing long-term GPU damage
  • Clock Speeds (SM Clock): Identifies performance throttling or power management issues
  • Memory Bandwidth Utilization: Diagnoses memory-bound workloads and optimizes memory access patterns
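
Most of these metrics can be read directly from NVIDIA's management library. Below is a minimal sketch, assuming the nvidia-ml-py package (imported as pynvml) is installed and an NVIDIA driver is present; it is illustrative rather than a production collector.

```python
# Minimal GPU metrics snapshot using NVIDIA's NVML Python bindings (pynvml).
# Assumes the nvidia-ml-py package is installed and an NVIDIA driver is present.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)          # .gpu / .memory in %
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)                 # bytes
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0    # reported in mW
        temp_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        sm_clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)  # MHz
        print(
            f"GPU {i}: util={util.gpu}% mem_util={util.memory}% "
            f"mem={mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB "
            f"power={power_w:.0f} W temp={temp_c} C sm_clock={sm_clock} MHz"
        )
finally:
    pynvml.nvmlShutdown()
```

On Kubernetes clusters, these same counters are more commonly exposed through NVIDIA's DCGM exporter and scraped by Prometheus, rather than polled ad hoc like this.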

Resource Optimization and Utilization

GPUs are extremely expensive and energy-intensive resources. Ensuring that GPUs are utilized appropriately (i.e. neither sitting idle nor oversubscribed) is essential for cost-efficiency. Admins need access to this data so that they can rebalance workloads across multiple GPUs, identify idle GPUs, and right-size resources for different applications, reducing overall infrastructure costs.

Critical Metrics

  • GPU Utilization: Tracks how much of the GPU’s compute capacity is being used. High utilization suggests the resource is being put to work, while low utilization might indicate wasted capacity.
  • Memory Utilization: Monitoring memory usage helps ensure that workloads are using GPU memory effectively, preventing memory bottlenecks or underutilization.
Note
Utilization by itself can be a misleading metric: a GPU can report 100% utilization while spending most of its time waiting. Power consumption is a better (but still imperfect) signal; if the GPU is burning watts, it is probably doing useful work. In other words, high utilization with low power draw is a red flag.
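
As a rough illustration of that point, the sketch below (again assuming pynvml is available) flags GPUs that report high utilization while drawing only a small fraction of their enforced power limit. The 90% and 35% thresholds are assumptions chosen for the example, not recommendations.

```python
# Flag GPUs that look "busy" but draw little power (busy-waiting heuristic).
# Thresholds are illustrative assumptions; tune them for your hardware and workloads.
import pynvml

UTIL_THRESHOLD_PCT = 90      # "high utilization"
POWER_FRACTION_MIN = 0.35    # below 35% of the power limit looks suspicious

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu          # %
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0        # mW -> W
        limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000.0
        if util >= UTIL_THRESHOLD_PCT and power_w < POWER_FRACTION_MIN * limit_w:
            print(f"GPU {i}: util={util}% but only {power_w:.0f}/{limit_w:.0f} W "
                  "-- likely stalled or waiting on data")
finally:
    pynvml.nvmlShutdown()
```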


Performance Monitoring and Troubleshooting

GPU metrics provide vital information for diagnosing performance bottlenecks in applications that rely on GPU acceleration. Admins and users want to quickly identify performance issues such as GPU bottlenecks or inefficiencies.

Critical Metrics

  • Memory Utilization: Some workloads are compute-bound, while others are memory-bound. Monitoring helps allocate the right GPUs for the job.
  • GPU Clock Speeds (SM clock): Helps identify when GPUs are throttling due to power or thermal limitations, which can degrade performance (a detection sketch follows this list).
  • GPU Utilization: Monitoring how effectively GPUs are used in relation to their cost helps ensure a good return on investment.
  • Power Consumption: GPUs consume significant power. Monitoring power draw can help administrators optimize power usage, saving energy costs.
  • Memory Copy Utilization: Highlights whether data transfer between CPU and GPU is becoming a bottleneck, suggesting the need for better data movement strategies.
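
To illustrate the SM clock point above, the following sketch compares the current SM clock against the GPU's maximum SM clock and reads NVML's clock-throttle-reason bitmask. It assumes pynvml is installed; the 90% threshold is an arbitrary example value.

```python
# Rough throttling check: compare the current SM clock to the max SM clock and
# read NVML's throttle-reason bitmask. Assumes pynvml is installed.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        sm_now = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)     # MHz
        sm_max = pynvml.nvmlDeviceGetMaxClockInfo(handle, pynvml.NVML_CLOCK_SM)  # MHz
        reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)       # bitmask
        if sm_now < 0.9 * sm_max:  # illustrative threshold
            # Non-zero bits identify why clocks are capped (idle, power cap, thermal slowdown, ...)
            print(f"GPU {i}: SM clock {sm_now}/{sm_max} MHz, "
                  f"throttle reason bits=0x{reasons:x}")
finally:
    pynvml.nvmlShutdown()
```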


Prevent Hardware Failures

GPUs, especially in data centers, are expensive assets that must be maintained for long-term reliability. Monitoring hardware health metrics helps admins prevent overheating and damage, which reduces hardware failures and prolongs hardware lifespan. Proactive monitoring reduces the risk of costly replacements and downtime.

Critical Metrics

  • Temperature: High temperatures can lead to thermal throttling or long-term damage to the GPU. Regular monitoring helps detect cooling issues (a simple health check is sketched after this list).
  • Power Consumption: High or unstable power consumption can indicate excessive load or inefficient power usage, possibly leading to hardware failures.
  • Error Metrics (ECC, throttling): Error-correcting code (ECC) errors or GPU throttling signal issues that might result in hardware degradation over time.
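
The sketch below shows one way to surface these health signals with pynvml: it reads the GPU temperature and the count of uncorrected volatile ECC errors. The 85 C alert threshold is an assumption for illustration, and the ECC query is wrapped in an exception handler because it only works on GPUs with ECC enabled.

```python
# Basic hardware-health check: GPU temperature plus uncorrected ECC error count.
# The 85 C threshold is an illustrative assumption; the ECC query may raise an
# NVMLError on GPUs without ECC support or with ECC disabled.
import pynvml

TEMP_ALERT_C = 85

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        temp_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        if temp_c >= TEMP_ALERT_C:
            print(f"GPU {i}: temperature {temp_c} C exceeds {TEMP_ALERT_C} C")
        try:
            ecc_uncorrected = pynvml.nvmlDeviceGetTotalEccErrors(
                handle,
                pynvml.NVML_MEMORY_ERROR_TYPE_UNCORRECTED,
                pynvml.NVML_VOLATILE_ECC,
            )
            if ecc_uncorrected > 0:
                print(f"GPU {i}: {ecc_uncorrected} uncorrected ECC errors since reset")
        except pynvml.NVMLError:
            pass  # ECC not supported or not enabled on this GPU
finally:
    pynvml.nvmlShutdown()
```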


How Rafay Helps Customers

The vast majority of Rafay’s customers deploy and operate AI/ML applications powered by GPUs. Over three years ago, we added support for integrated GPU metrics in the Rafay Platform. At a high level, the platform does three things:

  1. It automatically scrapes the GPU metrics from clusters in the customer’s Rafay Org
  2. It aggregates the GPU metrics in a centralized time series DB on the multi-tenant Rafay SaaS Controller
  3. It provides role-based access to GPU metrics for users (admins, app developers, and data scientists) in an intuitive manner via the Rafay Console

Perhaps the biggest benefit for customers is that there is no tooling for them to license, install and manage.

GPU Metrics Aggregation

Important
Metrics aggregation can be performed on clusters that are operating behind firewalls in a data center or cloud provider without requiring the user to make any firewall changes.


Conclusion & Next Steps

GPU metrics are essential for administrators to monitor, optimize, and maintain GPU resources efficiently. Monitoring GPU utilization, memory usage, power consumption, and thermal performance helps admins improve performance, reduce costs, prevent failures, and ensure the smooth running of GPU-accelerated applications. For organizations with substantial investments in GPU infrastructure, GPU metrics provide a data-driven approach to making decisions about resource allocation, workload management, and scaling.

Sign up for a free Org if you want to try this, request a demo, or see us in person at our booth at the NVIDIA AI Summit in Washington, DC, October 7-9, 2024.

In the next blog, we will do a deep dive into the GPU Memory Utilization metric. In subsequent blogs, we will cover other GPU metrics that matter.

