Update real_accelerator.py #6845
Conversation
Hi @keiwoo - the goal of this file is to detect what accelerator you have, unless you set it explicitly with `DS_ACCELERATOR`. Could you clarify whether, when you first ran this, you had `intel_extension_for_pytorch` installed? Tagging @Liangliang-Ma from the XPU team as well.
hey @loadams, thanks for your review. I totally understand the goal of this file; we just skip the detection part when `DS_ACCELERATOR` is set.
I suggest that we simply do nothing there and leave `accelerate_name` unset.
In this case we will always have `cpu` as the final fallback anyway (lines 170 to 177).
Hope everything is explained well; I will test which package installed `intel_extension_for_pytorch`.
I found the reason why `intel_extension_for_pytorch` ended up installed.
Hi @keiwoo - I see, I misunderstood and thought you were using an XPU but were having issues detecting it. Instead, you are using another accelerator, and because `intel_extension_for_pytorch` is installed, you're getting into this part of the file when that's undesirable - is that correct?
@keiwoo, thanks for your work here. I agree with avoiding the use of `cpu` as the fallback for a specific accelerator; `cpu` should instead be selected as a catch-all when accelerator detection fails. What do you think?
Avoid using `cpu` as the fallback for a specific accelerator; select `cpu` only as the catch-all when accelerator detection fails.
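A minimal sketch of that ordering, using hypothetical helper names rather than DeepSpeed's actual internals: each specific accelerator is chosen only on a positive detection, and `cpu` is assigned solely as the catch-all once every other probe has come up empty.

```python
import os


def pick_accelerator(probes):
    """probes: mapping of accelerator name -> zero-argument detection callable."""
    # An explicit DS_ACCELERATOR setting always wins over auto-detection.
    forced = os.environ.get("DS_ACCELERATOR")
    if forced:
        return forced

    # Specific accelerators are selected only when positively detected;
    # a failed probe never forces a fallback on its own.
    for name, is_present in probes.items():
        if is_present():
            return name

    # Catch-all: cpu is chosen only after every detection has failed.
    return "cpu"
```

Usage would look like `pick_accelerator({"xpu": probe_xpu, "cuda": probe_cuda})`, where `probe_xpu` and `probe_cuda` are stand-ins for the real per-accelerator checks.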
@microsoft-github-policy-service agree
Agree with you @tjruwase. That would be clearer, and I have made some revisions. How about that?
@keiwoo Thanks for this PR. Yes, I think it makes sense to set the accelerator to `cpu` only when no other GPU can be found; your PR makes this intention clear. Currently there are four different hints for accelerator selection:
I think eventually all accelerators may need device detection to simplify environment management in hybrid cloud; the change to `cpu` detection in this PR conforms with this goal.
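To make the device-detection point concrete, here is an illustrative contrast (not DeepSpeed's code; the function names are assumptions for the example) between choosing an accelerator from package presence alone and from an actual device query.

```python
import importlib.util

import torch


def select_by_package():
    """Package-presence heuristic: misfires when a package is merely installed."""
    if importlib.util.find_spec("intel_extension_for_pytorch") is not None:
        return "xpu"   # wrong on a CUDA node that happens to ship ipex
    return "cuda"


def select_by_device():
    """Device detection: ask the runtime what hardware is actually present."""
    if getattr(torch, "xpu", None) is not None and torch.xpu.is_available():
        return "xpu"
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"
```

With device detection, the same image can land on a CPU-only, CUDA, or XPU node and still resolve to the right accelerator, which is the hybrid-cloud simplification described above.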
Looks good, thanks!
…or whl building) (#6886) This fixes a bug introduced in #6845, which breaks the `no-torch` workflow that we require in order to do releases, where we do not require torch to be in the environment when building an sdist. This adds the same logic to the CPU accelerator that the CUDA accelerator already had, so that torch does not need to be installed to build the whl.
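That pattern can be sketched roughly as follows (illustrative class and method names, not the exact DeepSpeed code): the torch import is made optional at module load time so that building an sdist/whl in a torch-free environment succeeds, and the hard requirement only surfaces when a torch-dependent method is actually called.

```python
try:
    import torch
except ImportError:
    # Tolerated while building the sdist/whl; runtime use still needs torch.
    torch = None


class CpuAcceleratorSketch:
    def default_dtype(self):
        if torch is None:
            raise RuntimeError("torch must be installed to use the accelerator")
        return torch.float32
```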
Comment out or delete `accelerate_name="cpu"` when `xpu` is not detected.

When `xpu` is not detected, the code simply passes at lines 68 to 74 if `DS_ACCELERATOR` is set. However, `cpu` is assigned to `accelerate_name` if it cannot import `intel_extension_for_pytorch` or find `xpu`, namely at lines 125 to 133 when `DS_ACCELERATOR` is not set.

I found this problem yesterday and spent a whole afternoon figuring it out. I had `intel_extension_for_pytorch` installed as a dependency of another package that I do not actually use, and I had no idea it was there. I then found that `cpu` is assigned to `accelerate_name` directly when `xpu` cannot be found, and this pre-empts `cuda` detection. In fact, `cpu` will still be assigned at the end, at lines 170 to 177, if `cuda` is not detected either.
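A condensed before/after of the behavior described above, as a sketch under simplified assumptions rather than the literal file contents: before the fix, merely having `intel_extension_for_pytorch` importable caused `cpu` to be chosen when no XPU device was present, short-circuiting the later `cuda` check; after the fix, the branch leaves the name unset so detection continues, and `cpu` is still picked up by the final catch-all.

```python
import importlib.util

import torch


def detect_before():
    """Old behavior: ipex present but no XPU device => 'cpu' wins immediately."""
    if importlib.util.find_spec("intel_extension_for_pytorch") is not None:
        if getattr(torch, "xpu", None) is not None and torch.xpu.is_available():
            return "xpu"
        return "cpu"                      # BUG: blocks the cuda check below
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"


def detect_after():
    """Patched behavior: leave the name unset and keep detecting."""
    if importlib.util.find_spec("intel_extension_for_pytorch") is not None:
        if getattr(torch, "xpu", None) is not None and torch.xpu.is_available():
            return "xpu"
        # No early "cpu" here: fall through to the remaining checks.
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"                          # final catch-all (the lines 170 to 177 case)
```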