Kubernetes v1.36 [alpha]

This page shows you how to access device metadata from containers that use dynamic resource allocation (DRA). Device metadata lets workloads discover information about allocated devices, such as device attributes or network interface details, by reading JSON files at well-known paths inside the container.
Before reading this page, familiarize yourself with Dynamic Resource Allocation (DRA) and how to allocate devices to workloads.
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:
Your Kubernetes server must be version v1.36. To check the version, enter kubectl version.
The DRA driver for your devices must support device metadata; for example, the driver might need to enable the
EnableDeviceMetadata and MetadataVersions options when starting the plugin. Check the driver's
documentation for details.

When you use a directly referenced ResourceClaim to allocate devices, the device metadata files appear inside the container at:
```
/var/run/kubernetes.io/dra-device-attributes/resourceclaims/<claimName>/<requestName>/<driverName>-metadata.json
```
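As an illustration, the path segments above can be assembled with Go's path package. The `metadataPath` function below is a hypothetical helper written for this page, not part of any Kubernetes library:

```go
package main

import (
	"fmt"
	"path"
)

// metadataPath builds the in-container path to the metadata file for a
// directly referenced ResourceClaim (hypothetical helper for illustration).
func metadataPath(claimName, requestName, driverName string) string {
	return path.Join(
		"/var/run/kubernetes.io/dra-device-attributes/resourceclaims",
		claimName,
		requestName,
		driverName+"-metadata.json",
	)
}

func main() {
	// For the example claim used on this page:
	fmt.Println(metadataPath("gpu-claim", "gpu", "gpu.example.com"))
	// /var/run/kubernetes.io/dra-device-attributes/resourceclaims/gpu-claim/gpu/gpu.example.com-metadata.json
}
```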
Review the following example manifest:
```yaml
apiVersion: resource.k8s.io/v1
kind: ResourceClaim
metadata:
  name: gpu-claim
spec:
  devices:
    requests:
    - name: gpu
      exactly:
        deviceClassName: gpu.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-metadata-reader
spec:
  resourceClaims:
  - name: my-gpu
    resourceClaimName: gpu-claim
  containers:
  - name: workload
    image: ubuntu:24.04
    resources:
      claims:
      - name: my-gpu
        request: gpu
    command:
    - sh
    - -c
    - |
      echo "=== DRA device metadata ==="
      find /var/run/kubernetes.io/dra-device-attributes -name '*-metadata.json' -print -exec cat {} \;
      sleep 3600
  restartPolicy: Never
```
This manifest creates a ResourceClaim named gpu-claim that requests a
device from the gpu.example.com DeviceClass, and a Pod that reads the
device metadata.
Create the ResourceClaim and Pod:
```shell
kubectl apply -f https://k8s.io/examples/dra/dra-device-metadata-pod.yaml
```
After the Pod is running, view the container logs to see the metadata:
```shell
kubectl logs gpu-metadata-reader
```
The output is similar to:
```
=== DRA device metadata ===
/var/run/kubernetes.io/dra-device-attributes/resourceclaims/gpu-claim/gpu/gpu.example.com-metadata.json
{
  "kind": "DeviceMetadata",
  "apiVersion": "metadata.resource.k8s.io/v1alpha1",
  ...
}
```
To inspect the full metadata file, exec into the container:
```shell
kubectl exec gpu-metadata-reader -- \
  cat /var/run/kubernetes.io/dra-device-attributes/resourceclaims/gpu-claim/gpu/gpu.example.com-metadata.json
```
The output is a JSON object containing device attributes like the model, driver version, and device UUID. See metadata schema for details on the JSON structure.
When you use a ResourceClaimTemplate, Kubernetes generates a ResourceClaim for each Pod. Because the generated claim name is not predictable, the metadata files appear at a path that uses the Pod's claim reference name instead:
/var/run/kubernetes.io/dra-device-attributes/resourceclaimtemplates/<podClaimName>/<requestName>/<driverName>-metadata.json
The <podClaimName> corresponds to the name field in the Pod's
spec.resourceClaims[] entry. The JSON metadata also includes a
podClaimName field that records this mapping.
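The template-generated path can be assembled the same way as the direct-claim path, swapping in the resourceclaimtemplates directory and the Pod's claim reference name. The `templateMetadataPath` function below is a hypothetical helper written for this page, not part of any Kubernetes library:

```go
package main

import (
	"fmt"
	"path"
)

// templateMetadataPath builds the in-container metadata path for a claim
// generated from a ResourceClaimTemplate. The segment after the base
// directory is the Pod's claim reference name (spec.resourceClaims[].name),
// not the generated claim name. (Hypothetical helper for illustration.)
func templateMetadataPath(podClaimName, requestName, driverName string) string {
	return path.Join(
		"/var/run/kubernetes.io/dra-device-attributes/resourceclaimtemplates",
		podClaimName,
		requestName,
		driverName+"-metadata.json",
	)
}

func main() {
	// For the example Pod used on this page:
	fmt.Println(templateMetadataPath("my-gpu", "gpu", "gpu.example.com"))
	// /var/run/kubernetes.io/dra-device-attributes/resourceclaimtemplates/my-gpu/gpu/gpu.example.com-metadata.json
}
```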
Review the following example manifest:
```yaml
apiVersion: resource.k8s.io/v1
kind: ResourceClaimTemplate
metadata:
  name: gpu-claim-template
spec:
  spec:
    devices:
      requests:
      - name: gpu
        exactly:
          deviceClassName: gpu.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-metadata-template-reader
spec:
  resourceClaims:
  - name: my-gpu
    resourceClaimTemplateName: gpu-claim-template
  containers:
  - name: workload
    image: ubuntu:24.04
    resources:
      claims:
      - name: my-gpu
        request: gpu
    command:
    - sh
    - -c
    - |
      echo "=== DRA device metadata (from template) ==="
      find /var/run/kubernetes.io/dra-device-attributes -name '*-metadata.json' -print -exec cat {} \;
      sleep 3600
  restartPolicy: Never
```
This manifest creates a ResourceClaimTemplate and a Pod. Each Pod gets its
own generated ResourceClaim. The metadata path uses the Pod's claim
reference name my-gpu.
Create the ResourceClaimTemplate and Pod:
```shell
kubectl apply -f https://k8s.io/examples/dra/dra-device-metadata-template-pod.yaml
```
After the Pod is running, view the metadata:
```shell
kubectl exec gpu-metadata-template-reader -- \
  cat /var/run/kubernetes.io/dra-device-attributes/resourceclaimtemplates/my-gpu/gpu/gpu.example.com-metadata.json
```
The k8s.io/dynamic-resource-allocation/devicemetadata package provides
ready-made functions for reading metadata files. These functions handle
version negotiation automatically, decoding the metadata stream and converting
it to internal types so your code works across schema versions without manual
version checks.
For a directly referenced ResourceClaim:
```go
import "k8s.io/dynamic-resource-allocation/devicemetadata"

dm, err := devicemetadata.ReadResourceClaimMetadata("gpu-claim", "gpu")
```
For a template-generated claim (using the Pod's claim reference name):
```go
dm, err := devicemetadata.ReadResourceClaimTemplateMetadata("my-gpu", "gpu")
```
If you know the specific driver name, you can read a single driver's metadata file:
```go
dm, err := devicemetadata.ReadResourceClaimMetadataWithDriverName("gpu.example.com", "gpu-claim", "gpu")
```
The returned *metadata.DeviceMetadata contains the claim metadata, requests,
and per-device attributes.
Applications in other languages can read the JSON file directly and inspect
the apiVersion field to determine the schema version before parsing.
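This version check can be sketched in Go as well: decode only the apiVersion field first, then choose a full parser for that schema version. The `schemaVersion` function below is an illustrative sketch written for this page, not part of any Kubernetes library:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// schemaVersion decodes only the apiVersion field of a metadata document,
// so a client can pick a parser for that schema version before decoding
// the rest of the file. (Illustrative sketch, not a Kubernetes API.)
func schemaVersion(raw []byte) (string, error) {
	var header struct {
		APIVersion string `json:"apiVersion"`
	}
	if err := json.Unmarshal(raw, &header); err != nil {
		return "", err
	}
	return header.APIVersion, nil
}

func main() {
	// Sample document shaped like the output shown earlier on this page.
	raw := []byte(`{"kind":"DeviceMetadata","apiVersion":"metadata.resource.k8s.io/v1alpha1"}`)

	v, err := schemaVersion(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // metadata.resource.k8s.io/v1alpha1
}
```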
Delete the resources that you created:
```shell
kubectl delete -f https://k8s.io/examples/dra/dra-device-metadata-pod.yaml
kubectl delete -f https://k8s.io/examples/dra/dra-device-metadata-template-pod.yaml
```