{ai}[foss/2025a] PyTorch v2.9.1, Triton v3.5.1 w/ CUDA 12.8.0 from wheels#25267
lexming wants to merge 1 commit into easybuilders:develop
…ss-2025a-CUDA-12.8.0-whl.eb, Triton-3.5.1-gfbf-2025a-CUDA-12.8.0-whl.eb
Diff of new easyconfig(s) against existing ones is too long for a GitHub comment. Use …
@lexming I've been trying this and I see an error when I use it in a venv:

Loading …
@verdurin loading …
Can confirm that works.
@lexming these PyTorch easyconfigs are missing extra path settings:

These were added by the 'original' PyTorch easyconfigs (which use the PyTorch easyblock), but not by the PythonBundle easyblock.
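For reference, such extra path settings are typically expressed with the modextrapaths easyconfig parameter. A minimal sketch, assuming the standard wheel layout under site-packages; the exact directories are an assumption, not taken from this PR:

```python
# Hypothetical easyconfig fragment (not from this PR): expose the headers and
# libraries bundled inside the torch wheel via module search paths.
# The site-packages layout below is an assumption; verify it against the
# actual contents of the installed wheel before using.
modextrapaths = {
    'CPATH': 'lib/python%(pyshortver)s/site-packages/torch/include',
    'LIBRARY_PATH': 'lib/python%(pyshortver)s/site-packages/torch/lib',
    'LD_LIBRARY_PATH': 'lib/python%(pyshortver)s/site-packages/torch/lib',
}
```

Here %(pyshortver)s is an EasyBuild template value that is resolved at install time to the short Python version (e.g. 3.12).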
@backelj we could indeed make those libs findable through search paths; that's probably a harmless change. However, software that builds on top of PyTorch usually relies on PyTorch's own build tooling, which can automatically provide the paths to those libs. On my side, I have already installed a bunch of easyconfigs on top of this without issue. Can you tell me which package failed to build for you?
I encountered the issue when trying to build OpenMM-Torch-1.5.1-foss-2025a.eb, see commit 5f771fc. |
(created using eb --new-pr)

Depends on: …
This is a binary installation of the official wheels from pytorch.org.
I did some benchmarking comparing this type of installation from wheels against the usual build from source in EasyBuild, and the performance is practically the same for inference/training jobs on GPU. More detailed information is in easybuilders/easybuild#931.