{ai}[foss/2021b] PyTorch v1.12.1 w/ Python 3.9.6 w/ CUDA 11.4.1#17154
Flamefire wants to merge 1 commit into easybuilders:develop
Conversation
Test report by @branfosj
Test report by @Flamefire
Test report by @Flamefire
Test report by @Flamefire
Some multi-GPU tests fail (when multiple GPUs are available). I found that updating to CUDA 11.5.0 fixes this; see #17272. So I'm afraid this will not work properly unless we decide to use CUDA 11.5 here. A PyTorch 1.12.1 easyconfig for 2022a/CUDA 11.7 is available via #16484.
@Flamefire To avoid confusion, should we close this in favour of #17272? If I understand it correctly, this PR (i.e. with CUDA 11.4.1) will never work, correct?
Correct. I didn't want to decide that on my own, as having two CUDA versions in a toolchain generation is at least new.
(created using eb --new-pr)
Note that on x86 AVX machines this requires the compiler fix from #17135, or test_quantization will fail (specifically test_qnnpack_add_broadcast and test_qnnpack_add).
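For reviewers wanting to reproduce the test reports above, a minimal sketch of fetching and dry-running the easyconfig from this PR with EasyBuild's --from-pr option. The guard around the eb invocation is an assumption added so the script is safe to run on a machine without EasyBuild installed; the PR number is this PR's.

```shell
#!/bin/sh
# Sketch: test the easyconfig contributed in this PR.
PR=17154  # this PR's number

if command -v eb >/dev/null 2>&1; then
    # Fetch the easyconfig(s) touched by the PR and show what would be built
    eb --from-pr "$PR" --dry-run
else
    # EasyBuild not available here; nothing to do
    echo "eb not found; install EasyBuild to test PR $PR"
fi
```

A successful build can then be reported back to the PR with eb --from-pr 17154 --upload-test-report, which is how the "Test report by @…" entries above were generated.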