dpotrf + dpotri: Windows vs Linux #4886
Can you set OPENBLAS_VERBOSE=2 in the Windows environment please, just to be sure that it uses SKYLAKEX there too as expected? There may be a few places in the code where OpenMP is handled differently on the two platforms, and I guess the libgomp runtime on Windows may differ from the Linux implementation too. I'm currently at a conference with limited access to decent hardware, so it may take me a few days to investigate.
Thanks for looking into this, Martin. I can confirm that SKYLAKEX is detected on Windows.
Any more thoughts on this?
Thoughts have been few and far between as I caught covid in the meantime. Sorry, nothing obvious in the OpenBLAS codebase comes to mind even now. I guess you could try whether setting OMP_WAIT_POLICY=passive has any influence on this misbehaviour.
Thanks, Martin. OMP_WAIT_POLICY=passive does have an influence: it makes the problem a good deal worse! We have used earlier versions of OpenBLAS; we noticed the problem only recently, by chance, and it may well have been there before, unnoticed. Anyway, I'm attaching a self-contained test case and I'll inline below the results from running it on a few systems. Aside from the relatively extreme problem on Windows, it seems to me that in general the matrix size at which multi-threading kicks in is much too small for optimality. In many cases the default value of [...]
Anyway, here are the results I have from the test case. The times are for 50000 replications of inversion of a p.d. matrix. "default" means letting OpenBLAS decide how many threads to use, and "single" means forcing use of a single thread. All the machines referenced below are quad-core.
Thank you very much. Unfortunately I did not manage to do much so far, but at least this does not appear to be a recent regression.
From individual timing of the two functions, the problem appears to be specifically related to POTRI rather than POTRF. There used to be a reimplementation of POTRI in OpenBLAS, but it was disabled ten years ago in #410 due to problems with the code (and subsequent suggestions that the function itself posed no bottleneck). The POTRI from Reference-LAPACK is basically a frontend for TRTRI, which OpenBLAS again reimplements. Contrary to the one for POTRF, this reimplementation currently uses full-on multithreading even for the smallest workloads, which is addressed by #4994.
Thanks, Martin. I too did some more timing tests, and I agree that it's POTRI rather than POTRF that's the trigger for the slowdown on Windows.
The behaviour when compiled with LLVM19 (and its libomp) appears to be a lot more Linux-like even without my small correction from #4994.
Hi,
Using default threading...
OpenBLAS 0.3.28, Win10, i5-4460 @3.2GHz. Marcin
Hi Marcin, thanks for the data. Is this with USE_OPENMP=1 as well? Very similar timing for the native MSVC build is a bit surprising, as that would be using generic C kernels instead of the optimized GEMM (MSVC still does not support our unix-y style of assembly).
Interesting, thanks. Maybe the Windows 11 scheduler plays a role as well (at least with my Zen 5/5c). At least PR #4994 should not hurt in any case, I think.
Marcin, in your results above it seems that only in the gcc case (top left) is single-threading actually being imposed. In the other cases the supposed "single-threading" makes no difference. In my test code "single" is specified by a call to omp_set_num_threads(1), and this doesn't seem to be doing anything in the clang and msvc cases.
Allin, but shouldn't omp_set_num_threads()/omp_get_num_threads() be used inside a #pragma omp parallel block? (This is how I understand the OpenMP standard.)
My impression is that #pragma is needed only when launching a team of threads. |
I've come across what looks like an anomalous difference in performance inverting a positive definite matrix using dpotrf() and dpotri(), on Windows as compared with Linux. This is on a dual-boot SkylakeX laptop, using OpenBLAS 0.3.28, compiled with gcc 14.2.0 on Arch Linux and cross-compiled with x86_64-w64-mingw32-gcc 14.2.0 for Windows 11, in both cases using OpenMP for threading. The configuration flags are mostly the same for the two OpenBLAS builds, except that the Windows build uses DYNAMIC_ARCH=1 while the Linux one is left to auto-detect SkylakeX.
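For concreteness, the two builds described above would look roughly like this; the flags are a sketch of the stated configuration, not a verbatim copy of the actual build commands:

```shell
# Linux native build: target auto-detected (SKYLAKEX) at compile time
make CC=gcc USE_OPENMP=1

# Windows cross-build from Linux: runtime CPU dispatch via DYNAMIC_ARCH
make CC=x86_64-w64-mingw32-gcc HOSTCC=gcc USE_OPENMP=1 DYNAMIC_ARCH=1
```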
The context is a Gibbs sampling operation with many thousands of iterations, so the performance difference becomes very striking. My test rig iterates inversion of a sequence of p.d. matrices of moderate size, from dimension 4 to 64 by powers of 2. Given the moderate size, multi-threading is not really worthwhile. Best performance is achieved by setting OMP_NUM_THREADS=1; in that case the rig runs very fast on both platforms, with Windows marginally slower than Linux. But if I set the number of OMP threads to equal the number of physical cores (4), which is the default in the program I'm working with, the Windows build slows down drastically relative to Linux.
I'd be very grateful if anyone can offer insight into what might be going on here. I'd be happy to supply more details depending on what might be relevant.