In the first blog in the DRA series, we introduced Dynamic Resource Allocation (DRA), which recently went GA in Kubernetes v1.34, released at the end of August 2025. In the second blog, we installed a Kubernetes v1.34 cluster and deployed an example DRA driver with "simulated GPUs" on it. In this blog, we'll deploy a few workloads on the DRA-enabled Kubernetes cluster to understand how "ResourceClaims" and "ResourceClaimTemplates" work.
We have streamlined the steps so that users can experience this on their laptops in less than 5 minutes. Note that the steps in this blog are written for macOS users.
Deploy Test Workload with ResourceClaim
This section assumes that you have completed the steps in the second blog and have access to a functional Kubernetes cluster with DRA configured and enabled. We will deploy example workloads that demonstrate how ResourceClaims can be used to select and configure resources in various ways.
Let's create a ResourceClaim which we will reference in a Pod. Note that the deviceClassName is a required field because it narrows the scope of the request to a specific device class. In the example below, the ResourceClaim called "some-gpu" will be created in the same namespace (dra-tutorial) we created in the previous blog.
Copy the YAML below and save it to a file called "resourceclaim.yaml". This claim requests any GPU advertising more than 10Gi of memory capacity.
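The manifests below are a sketch of what such a claim and a consuming pod can look like. They assume the example DRA driver from the previous blog advertises its simulated GPUs under a device class named gpu.example.com; the class name, the CEL capacity expression, and the pod/container names are assumptions you should adjust to match your driver.

```yaml
# resourceclaim.yaml (sketch): request one GPU with more than 10Gi of memory
apiVersion: resource.k8s.io/v1
kind: ResourceClaim
metadata:
  name: some-gpu
  namespace: dra-tutorial
spec:
  devices:
    requests:
    - name: some-gpu
      exactly:
        deviceClassName: gpu.example.com   # assumed device class from the example driver
        selectors:
        - cel:
            expression: "device.capacity['gpu.example.com'].memory.compareTo(quantity('10Gi')) > 0"
---
# pod0.yaml (sketch): a pod that references the claim above
apiVersion: v1
kind: Pod
metadata:
  name: pod0
  namespace: dra-tutorial
spec:
  containers:
  - name: ctr0
    image: ubuntu:24.04
    command: ["bash", "-c", "sleep infinity"]
    resources:
      claims:
      - name: gpu          # refers to the pod-level resourceClaims entry below
  resourceClaims:
  - name: gpu
    resourceClaimName: some-gpu   # binds to the ResourceClaim created above
```

Apply both manifests with `kubectl apply -f resourceclaim.yaml` (and the pod manifest) before moving on to the verification steps.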
Let's check the status of the pod by issuing the following command:
kubectl get pod pod0 -n dra-tutorial
You should see output like the following, showing the pod in the Running state.
NAME   READY   STATUS    RESTARTS   AGE
pod0   1/1     Running   0          61s
Check Resource Claim
Now, let's check the status of our ResourceClaim.
kubectl get resourceclaims -n dra-tutorial
As you can see from the illustrative output below, the STATE has transitioned from pending to allocated,reserved.

NAME       STATE                AGE
some-gpu   allocated,reserved   7s
You can also get more detailed status for the ResourceClaim by issuing the following command.
kubectl get resourceclaim some-gpu -n dra-tutorial -o yaml
Shown below is an illustrative example of the output. Once the pod is deployed, Kubernetes attempts to schedule it to a node where the ResourceClaim can be satisfied. In this example, all the GPUs have sufficient capacity to satisfy the pod's claim.
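Since the full output is long, here is an abbreviated sketch of the status stanza you might see. The driver, pool, and device names depend on your cluster and driver; treat every value below as a placeholder, not actual output from a real cluster.

```yaml
# Abbreviated, illustrative status of the allocated ResourceClaim
status:
  allocation:
    devices:
      results:
      - request: some-gpu
        driver: gpu.example.com                    # assumed driver name
        pool: dra-example-driver-cluster-worker    # placeholder pool (node) name
        device: gpu-0                              # placeholder simulated GPU
  reservedFor:
  - resource: pods
    name: pod0
    uid: ...                                       # the consuming pod's UID
```

The allocation section records which device satisfied the claim, and reservedFor lists the pod currently consuming it, which is why the STATE column reads allocated,reserved.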
When the pod with the resource claim is deleted, the DRA driver deallocates the GPU so it becomes available for scheduling again. In this step, we will delete the pod that we created earlier.
kubectl delete pod pod0 -n dra-tutorial
Clean Up
If you wish to clean up everything, you can delete the kind cluster we provisioned earlier by issuing the following command.
kind delete cluster --name dra-test
Conclusion
In this blog, we deployed a test workload that used a ResourceClaim to select and configure GPU resources via DRA. In the next blog, we will deploy a test workload that uses ResourceClaimTemplates.