DmlExecutionProvider is missing after olive-ai installation #1619
Comments
Hi, this happened because the olive-ai install pulls in the CPU onnxruntime package, which conflicts with onnxruntime-directml. I opened a PR to fix this: #1620. Meanwhile, you can fix this by uninstalling the conflicting packages and reinstalling the DirectML builds.
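Something along these lines should work (a sketch, assuming you want the DirectML builds of both onnxruntime and onnxruntime-genai):
pip uninstall -y onnxruntime onnxruntime-directml onnxruntime-genai
pip install onnxruntime-directml onnxruntime-genai-directml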
I started from scratch again to check the complete flow:
conda create --name olive-directml python=3.12
conda activate olive-directml
From the documentation https://microsoft.github.io/Olive/getting-started/getting-started.html I followed the steps for the Olive installation for Windows DirectML:
pip install olive-ai[directml,finetune]
pip install transformers==4.44.2 onnxruntime-genai-directml
After following these steps, I could not find 'DmlExecutionProvider'. So, as per your comments above (after correcting the typo in onnxruntime-genai-directml), I executed the commands below:
pip uninstall -y onnxruntime onnxruntime-directml onnxruntime-genai
Now I could see 'DmlExecutionProvider' in the providers list and the device reported as 'CPU-DML'.
Later I ran the command below for Automatic Optimization of the model with Olive:
olive auto-opt --model_name_or_path meta-llama/Llama-3.2-1B-Instruct --trust_remote_code --output_path models/Llama-3.2-1B-Instruct --device gpu --provider DmlExecutionProvider --use_ort_genai --precision int4 --log_level 1
I got the error:
ModuleNotFoundError: No module named 'onnxruntime_genai.models'
So I installed onnxruntime-genai without dependencies:
pip install onnxruntime-genai --no-deps
Even after running this command, I could still see 'DmlExecutionProvider' in the providers list and the device as 'CPU-DML'. I then ran the same olive auto-opt command again, with device gpu and DmlExecutionProvider as the provider. This time the model was successfully downloaded, optimized, and saved to the given output path. When I try to run the model with code along the lines of the snippet below (a minimal onnxruntime-genai load-and-generate sketch; the exact generator calls may differ between onnxruntime-genai versions):
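import onnxruntime_genai as og

# Placeholder path: the output folder from olive auto-opt that contains genai_config.json
model_dir = "models/Llama-3.2-1B-Instruct/model"

# The RuntimeError below is raised at this point, presumably because the exported
# genai_config.json asks for the "dml" provider
model = og.Model(model_dir)
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=128)
generator = og.Generator(model, params)

# Newer onnxruntime-genai releases feed the prompt via append_tokens; older ones
# set params.input_ids and call generator.compute_logits() inside the loop
generator.append_tokens(tokenizer.encode("Hello, how are you?"))
stream = tokenizer.create_stream()
while not generator.is_done():
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)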
I am getting the error:
RuntimeError: Unknown provider type: dml
Is the way I am passing the device as gpu and the provider as DmlExecutionProvider for Automatic Optimization of the model with Olive correct or not? Please help me resolve the issue. I am very interested in utilizing the DirectML capabilities of my device.
I think the package installations might have gotten mixed up again. Could you check pip list to see which onnxruntime and onnxruntime-genai packages are installed?
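For example, on Windows something like this should show every relevant package, and which copy of onnxruntime actually gets imported:
pip list | findstr /i "onnxruntime olive"
python -c "import onnxruntime as ort; print(ort.__file__, ort.get_available_providers())"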
I am using a Windows laptop with an AMD Ryzen 7 PRO 7840U processor with Radeon 780M Graphics, 32 GB RAM, and DirectX support.
I created and activated a new conda environment using the commands below:
conda create -n olive-directml python=3.12
conda activate olive-directml
The installed Python version is 3.12.9.
First, I checked whether DirectML support is available for my device by installing onnxruntime-directml with the command below:
pip install onnxruntime-directml
onnxruntime-directml version installed: 1.20.1
I checked the provider and device information with the code below:
import onnxruntime as ort
providers = ort.get_available_providers()
print("Available providers:", providers)
print("Device:", ort.get_device())
Output:
Available providers: ['DmlExecutionProvider', 'CPUExecutionProvider']
Device: CPU-DML
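For reference, selecting the provider at session creation looks roughly like this (model.onnx is just a placeholder for any local ONNX model file):
import onnxruntime as ort

# Request DirectML explicitly, falling back to CPU if it is unavailable
sess = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())  # DmlExecutionProvider should be listed first if DML is active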
Next, I followed the GitHub documentation for Olive and set it up for Windows DirectML:
https://microsoft.github.io/Olive/getting-started/getting-started.html
I ran the command:
pip install olive-ai[directml,finetune]
onnxruntime-directml version installed: 1.20.1
Then I checked the provider and device information again, using the code below:
import onnxruntime as ort
providers = ort.get_available_providers()
print("Available providers:", providers)
print("Device:", ort.get_device())
Output:
Available providers: ['AzureExecutionProvider', 'CPUExecutionProvider']
Device: CPU
'DmlExecutionProvider' is missing after the olive-ai installation.
Is this missing DmlExecutionProvider a known issue, or am I doing something wrong?
Can I use the integrated Radeon 780M GPU of my machine for inference with DirectML? If so, please let me know the steps to follow.
I was able to follow the Olive documentation for CPU and perform inference using Olive, but I would also like to explore the DirectML capabilities of my device for inference.
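In case it helps, here is a quick check I can run to see which onnxruntime distributions ended up in the environment (just a sketch that matches package names by substring; as far as I understand, onnxruntime and onnxruntime-directml install into the same onnxruntime import package, so whichever was installed last decides whether DmlExecutionProvider is available):
from importlib import metadata

# List every installed distribution whose name mentions onnxruntime or olive
for dist in metadata.distributions():
    name = (dist.metadata["Name"] or "").lower()
    if "onnxruntime" in name or "olive" in name:
        print(name, dist.version)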