[CI/Build][Intel] Enable benchmarks on Intel Gaudi 3 runner #94

huydhn merged 2 commits into pytorch:main from
Conversation
Hi @jakub-sochacki! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations, and afterwards the pull request will be tagged. If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
louie-tsai left a comment
Looks good to me, thanks.
pip install -r .github/scripts/requirements.txt \
    --extra-index-url https://download.pytorch.org/whl/rocm6.3
elif [[ "${DEVICE_NAME}" == "hpu" ]]; then
  grep -v "^torch==" .github/scripts/requirements.txt > /tmp/requirements_no_torch.txt
Nit: Is there a way to write something like the following in the requirements file?

torch; platform_system == 'Linux' and platform_machine != 'hpu'

Probably not; I haven't seen this syntax before, so I just want to check.
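One reason the proposed marker likely wouldn't help: PEP 508 environment markers are evaluated from the interpreter's own platform information, and `platform_machine` maps to Python's `platform.machine()`, which reports the host CPU architecture rather than any attached accelerator. A quick stdlib check illustrates this (the exact value depends on the machine running it):

```python
import platform

# The PEP 508 marker platform_machine maps to platform.machine(),
# which reports the host CPU architecture (e.g. 'x86_64' or
# 'aarch64'), never an accelerator name like 'hpu'. So the marker
# above could not distinguish a Gaudi host from any other Linux box.
print(platform.machine())
```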
I don't think we need to install PyTorch here now, but let me follow up on this in a separate PR to just remove torch from requirements.txt.
Alright @huydhn, please let me know once you have removed torch from the requirements in the other PR. Then I will remove the line that filters out torch for Gaudi.
@huydhn, this change is to make sure we don't re-install torch, since HPU uses a special torch build. I hope it is OK to leave `grep -v "^torch==" .github/scripts/requirements.txt > /tmp/requirements_no_torch.txt` in place just in case.
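The filtering step can be sketched in isolation as below; the file contents and `/tmp` paths here are illustrative (the PR itself operates on `.github/scripts/requirements.txt`):

```shell
# Illustrative requirements file; in the PR this is
# .github/scripts/requirements.txt.
printf 'torch==2.4.0\nnumpy==1.26.4\n' > /tmp/requirements.txt

# Drop any pinned torch line so pip does not overwrite the
# preinstalled Gaudi-specific torch build; everything else is kept.
grep -v "^torch==" /tmp/requirements.txt > /tmp/requirements_no_torch.txt

cat /tmp/requirements_no_torch.txt   # prints: numpy==1.26.4
```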
huydhn left a comment
LGTM! Please rebase and I can help land this.
This PR includes:

- `linux.hpu.gaudi3.8` runner to benchmark matrix (placeholder)
- `test-[throughput | latency | serving]-hpu.json` files with benchmark configurations
- `gaudi3` to default runners list in workflow dispatch
- `hl-smi` command
- `vllm/last-good-commit-for-vllm-gaudi` branch to get compatibility history (N most recent VLLM_STABLE_COMMIT updates)

The commit selection mechanism solves the race condition where VLLM_STABLE_COMMIT might change between CI image builds and benchmark runs (every 12 hours), ensuring benchmarks always find an existing compatible Docker image.
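The commit selection described above can be sketched as follows. This is a hedged illustration of the idea, not the workflow's actual code; `recent_stable_commits` and `image_exists` are hypothetical names standing in for the compatibility history and the Docker registry check:

```python
def pick_benchmark_commit(recent_stable_commits, image_exists):
    """Return the newest stable commit that already has a built image.

    recent_stable_commits: commit hashes, newest first (hypothetical
    stand-in for the N most recent VLLM_STABLE_COMMIT updates).
    image_exists: callable reporting whether a Docker image was built
    for that commit (hypothetical stand-in for a registry lookup).
    """
    for commit in recent_stable_commits:
        if image_exists(commit):
            return commit
    return None  # no compatible image found in the history window

# Example: the newest stable commit's image isn't built yet, so the
# benchmark falls back to the previous one instead of failing.
built_images = {"abc123"}
print(pick_benchmark_commit(["def456", "abc123"], built_images.__contains__))
# prints: abc123
```

This is what closes the race window: even if VLLM_STABLE_COMMIT advances between the image build and the benchmark run, the benchmark walks back through recent commits until it finds one whose image exists.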