Benchmark intel xpu #1259
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1259
Note: Links to docs will display an error until the docs builds have completed.
❗ 1 Active SEV — there is 1 currently active SEV. If your PR is affected, please view it below.
❌ 8 Cancelled Jobs — as of commit 99313ce with merge base 71a442a, the following jobs were cancelled. Please retry them.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @sunjiweiswift! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations, and afterwards the pull request will be tagged accordingly.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
Force-pushed 86de507 to e07d083
Force-pushed 723c110 to 4aeaebd
Force-pushed f125ac2 to d769d90
Can you add a test command in the summary to show how to run the benchmarks for xpu?
Force-pushed 92dac71 to 5fb782a
Force-pushed 5fb782a to 657ff7e
I modified the default_device variable (`default_device = 'cuda' if torch.cuda.is_available() else 'xpu' if torch.xpu.is_available() else 'cpu'`) so it is set to cuda on NVIDIA GPUs and xpu on Intel GPUs. So I still use the `sh benchmark.sh` command.
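To make the precedence in that one-liner explicit, here is a minimal sketch of the selection logic in isolation. The `pick_device` helper is hypothetical (the PR inlines the expression directly); the two boolean arguments stand in for `torch.cuda.is_available()` and `torch.xpu.is_available()` so the fallback order (cuda, then xpu, then cpu) can be demonstrated without a GPU present.

```python
def pick_device(cuda_available: bool, xpu_available: bool) -> str:
    # Same chained conditional as the PR's default_device line:
    # cuda takes precedence, then xpu, then cpu as the fallback.
    return 'cuda' if cuda_available else 'xpu' if xpu_available else 'cpu'

print(pick_device(True, True))    # NVIDIA GPU present: cuda wins even if xpu exists
print(pick_device(False, True))   # Intel GPU only: xpu
print(pick_device(False, False))  # no accelerator: cpu
```

Because cuda is checked first, a machine with both backends available would still select cuda, which matches the behavior described above for NVIDIA hardware.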
@sunjiweiswift are you able to publish some benchmarking results as well? If so, please add a new entry similar to https://github.com/pytorch/ao/tree/main/torchao/quantization#uintx-quantization with the title "xpu backend".
Force-pushed 260c921 to 9d487ff
Currently these quantizations are not fully supported. I updated the benchmark data for bf16, int8dq, and int8wo. Wikitext perplexity has not been tested yet (I ran into an out-of-memory problem on the Intel GPU) and will be updated in subsequent commits.
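For readers unfamiliar with the int8wo entry in those benchmarks, the following is an illustrative sketch of symmetric int8 weight-only quantization, i.e. the core arithmetic that scheme performs. This is not torchao's implementation (which uses per-channel scales and tensor subclasses on real weight tensors); the `quantize_int8`/`dequantize` helpers here are simplified stand-ins operating on a plain list.

```python
def quantize_int8(weights):
    # Symmetric quantization: one scale maps the max-magnitude
    # weight to the int8 endpoint 127.
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Reconstruction is a single multiply per element, which is why
    # weight-only int8 mainly saves memory/bandwidth, not compute.
    return [x * scale for x in q]

w = [0.5, -1.27, 0.02, 1.0]
q, s = quantize_int8(w)
print(q)                  # int8 codes, e.g. [50, -127, 2, 100]
print(dequantize(q, s))   # approximate reconstruction of w
```

int8dq (dynamic quantization) additionally quantizes activations at runtime, which is why the two rows can differ in the benchmark tables.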
Force-pushed 9d487ff to 99313ce
Please rebase to resolve the conflict before landing.
Added Llama support for Intel GPUs. The command is still `sh benchmark.sh`.