- Under the Application values page section, review the default values and add any values that need to be configured explicitly. Finally, click `+ Add Application` to deploy the NVIDIA GPU Operator application to the user cluster.

To further configure the `values.yaml`, see the [NVIDIA GPU Operator Helm chart documentation](https://github.com/NVIDIA/gpu-operator/).
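
As a rough sketch, a minimal set of `values.yaml` overrides could look like the following; the parameter names follow the upstream chart, but defaults and available options vary between chart versions, so verify them against the documentation linked above before deploying:

```yaml
# Illustrative values.yaml overrides for the NVIDIA GPU Operator Helm chart.
# Values shown here are assumptions; check the chart version you deploy.
driver:
  enabled: true     # let the operator manage the GPU driver on the nodes
toolkit:
  enabled: true     # install the NVIDIA container toolkit on the nodes
```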

## AI Conformance

To support AI workloads, Kubermatic Kubernetes Platform uses the NVIDIA GPU Operator to automatically expose GPU information through node labels.

Once the operator is installed, it discovers the GPUs available on your cluster nodes and applies a set of descriptive labels.

These labels provide useful details about the hardware, such as the GPU product name and the installed CUDA driver and runtime versions.
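
For illustration, the labels on a GPU node may look similar to the excerpt below; the label keys are applied by the operator's GPU Feature Discovery component, while the values shown here are placeholders that depend on your hardware and installed driver:

```yaml
# Excerpt of node labels set by GPU Feature Discovery (values are illustrative).
labels:
  nvidia.com/gpu.product: Tesla-T4
  nvidia.com/gpu.count: "1"
  nvidia.com/cuda.driver.major: "535"
  nvidia.com/cuda.runtime.major: "12"
```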

You can view these labels on the Nodes page.

![GPU Labels on Node](03-node-labels.png)