Allocate Devices to Workloads with DRA
Kubernetes v1.32 [beta] (enabled by default: false)

This page shows you how to allocate devices to your Pods by using dynamic resource allocation (DRA). These instructions are for workload operators. Before reading this page, familiarize yourself with how DRA works and with DRA terminology like ResourceClaims and ResourceClaimTemplates. For more information, see Dynamic Resource Allocation (DRA).
About device allocation with DRA
As a workload operator, you can claim devices for your workloads by creating ResourceClaims or ResourceClaimTemplates. When you deploy your workload, Kubernetes and the device drivers find available devices, allocate them to your Pods, and place the Pods on nodes that can access those devices.
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:
Your Kubernetes server must be at or later than version v1.32. To check the version, enter `kubectl version`.
- Ensure that your cluster admin has set up DRA, attached devices, and installed drivers. For more information, see Set Up DRA in a Cluster.
Identify devices to claim
Your cluster administrator or the device drivers create DeviceClasses that define categories of devices. You can claim devices by using Common Expression Language to filter for specific device properties.
Get a list of DeviceClasses in the cluster:
```shell
kubectl get deviceclasses
```
The output is similar to the following:
```
NAME                 AGE
driver.example.com   16m
```
If you get a permission error, you might not have access to get DeviceClasses. Check with your cluster administrator or with the driver provider for available device properties.
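If you do have access, one way to discover device properties is to inspect the DeviceClass itself and the ResourceSlices that drivers publish for their devices. The following commands are a sketch, assuming the `driver.example.com` DeviceClass from the example output above:

```shell
# Show the full DeviceClass definition, including any CEL selectors it applies
kubectl get deviceclass driver.example.com -o yaml

# ResourceSlices published by drivers list each device's attributes and capacity
kubectl get resourceslices -o yaml
```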
Claim resources
You can request resources from a DeviceClass by using ResourceClaims. To create a ResourceClaim, do one of the following:
- Manually create a ResourceClaim if you want multiple Pods to share access to the same devices, or if you want a claim to exist beyond the lifetime of a Pod.
- Use a ResourceClaimTemplate to let Kubernetes generate and manage per-Pod ResourceClaims. Create a ResourceClaimTemplate if you want every Pod to have access to separate devices that have similar configurations. For example, you might want simultaneous access to devices for Pods in a Job that uses parallel execution.
If you directly reference a specific ResourceClaim in a Pod, that ResourceClaim must already exist in the cluster. If a referenced ResourceClaim doesn't exist, the Pod remains in a pending state until the ResourceClaim is created. You can reference an auto-generated ResourceClaim in a Pod, but this isn't recommended because auto-generated ResourceClaims are bound to the lifetime of the Pod that triggered the generation.
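As a sketch of direct referencing, a Pod that uses a pre-existing ResourceClaim by name might look like the following. The Pod and container names here are illustrative; the `example-resource-claim` ResourceClaim is the one created later on this page:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod              # illustrative name
spec:
  containers:
  - name: ctr
    image: ubuntu:24.04
    command: ["sleep", "9999"]
    resources:
      claims:
      - name: gpu                # must match an entry in spec.resourceClaims
  resourceClaims:
  - name: gpu
    resourceClaimName: example-resource-claim   # this ResourceClaim must already exist
```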
To create a workload that claims resources, select one of the following options:
Review the following example manifest:
```yaml
apiVersion: resource.k8s.io/v1beta2
kind: ResourceClaimTemplate
metadata:
  name: example-resource-claim-template
spec:
  spec:
    devices:
      requests:
      - name: gpu-claim
        exactly:
          deviceClassName: example-device-class
          selectors:
          - cel:
              expression: |-
                device.attributes["driver.example.com"].type == "gpu" &&
                device.capacity["driver.example.com"].memory == quantity("64Gi")
```
This manifest creates a ResourceClaimTemplate that requests devices in the `example-device-class` DeviceClass that match both of the following parameters:

- Devices that have a `driver.example.com/type` attribute with a value of `gpu`.
- Devices that have `64Gi` of capacity.
To create the ResourceClaimTemplate, run the following command:
```shell
kubectl apply -f https://k8s.io/examples/dra/resourceclaimtemplate.yaml
```
Review the following example manifest:
```yaml
apiVersion: resource.k8s.io/v1beta2
kind: ResourceClaim
metadata:
  name: example-resource-claim
spec:
  devices:
    requests:
    - name: single-gpu-claim
      exactly:
        deviceClassName: example-device-class
        allocationMode: All
        selectors:
        - cel:
            expression: |-
              device.attributes["driver.example.com"].type == "gpu" &&
              device.capacity["driver.example.com"].memory == quantity("64Gi")
```
This manifest creates a ResourceClaim that requests all of the devices in the `example-device-class` DeviceClass that match both of the following parameters:

- Devices that have a `driver.example.com/type` attribute with a value of `gpu`.
- Devices that have `64Gi` of capacity.
To create the ResourceClaim, run the following command:
```shell
kubectl apply -f https://k8s.io/examples/dra/resourceclaim.yaml
```
Request devices in workloads using DRA
To request device allocation, specify a ResourceClaim or a ResourceClaimTemplate in the `resourceClaims` field of the Pod specification. Then, request a specific claim by name in the `resources.claims` field of a container in that Pod. You can specify multiple entries in the `resourceClaims` field and use specific claims in different containers.
Review the following example Job:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-dra-job
spec:
  completions: 10
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: container0
        image: ubuntu:24.04
        command: ["sleep", "9999"]
        resources:
          claims:
          - name: separate-gpu-claim
      - name: container1
        image: ubuntu:24.04
        command: ["sleep", "9999"]
        resources:
          claims:
          - name: shared-gpu-claim
      - name: container2
        image: ubuntu:24.04
        command: ["sleep", "9999"]
        resources:
          claims:
          - name: shared-gpu-claim
      resourceClaims:
      - name: separate-gpu-claim
        resourceClaimTemplateName: example-resource-claim-template
      - name: shared-gpu-claim
        resourceClaimName: example-resource-claim
```
Each Pod in this Job has the following properties:

- Makes a ResourceClaimTemplate named `separate-gpu-claim` and a ResourceClaim named `shared-gpu-claim` available to containers.
- Runs the following containers:
  - `container0` requests the devices from the `separate-gpu-claim` ResourceClaimTemplate.
  - `container1` and `container2` share access to the devices from the `shared-gpu-claim` ResourceClaim.
Create the Job:
```shell
kubectl apply -f https://k8s.io/examples/dra/dra-example-job.yaml
```
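To check whether devices were allocated, you can list the ResourceClaims in the cluster. The commands below are a sketch; the claim name placeholder is illustrative, and the STATE column typically reports values such as `pending` or `allocated,reserved`:

```shell
# List claims, including the per-Pod claims generated from the template
kubectl get resourceclaims

# Inspect allocation details for a specific claim
kubectl describe resourceclaim <claim-name>
```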
Clean up
To delete the Kubernetes objects that you created in this task, follow these steps:
Delete the example Job:
```shell
kubectl delete -f https://k8s.io/examples/dra/dra-example-job.yaml
```
To delete your resource claims, run one of the following commands:
Delete the ResourceClaimTemplate:
```shell
kubectl delete -f https://k8s.io/examples/dra/resourceclaimtemplate.yaml
```
Delete the ResourceClaim:
```shell
kubectl delete -f https://k8s.io/examples/dra/resourceclaim.yaml
```