Issue 700 enable multi allocatable arrays #714
Conversation
Passive locking with the global dynamic window did not work with either mpi implementation (mpich, openmpi).
Codecov Report
@@            Coverage Diff             @@
##           master     #714      +/-   ##
==========================================
+ Coverage   54.25%   54.92%   +0.66%
==========================================
  Files           3        3
  Lines        2938     3028      +90
==========================================
+ Hits         1594     1663      +69
- Misses       1344     1365      +21
@vehre thanks for this PR. I think it's safe to merge, so I'll do so shortly. Alessandro took a look at it for me and passed along the following question: "Why did you use MPI_Win_lock instead of a combination of MPI_Win_lock_all and MPI_Win_flush?"
Hi Damian,
I used win_lock and win_unlock instead of lock_all and flush because only
the former worked for me, while the latter led to deadlocks. And even that
works only with MPICH, not with OpenMPI. I am happy to learn of different
working approaches, but that was the only one I found to work.
Regards,
Andre
@vehre I see that there's a conflict showing on this PR. Presumably that's because one or more PRs were merged into master since this PR branched off of master. It appears the conflict is trivial, so I'll see if I can resolve it and then rerun the tests later today or this weekend. If you happen to get to it first, let me know.
I am sorry, I can't take a look before Sunday afternoon. I have no computer
at my disposal currently.
Force-pushed from 159aa9f to 20718e7: "Passive locking with the global dynamic window did not work with either MPI implementation (MPICH, OpenMPI)."
Resolved the merge conflict.
@vehre I see that you resolved the merge conflict. I just tested locally on macOS and all tests pass. I'll wait for the Linux tests to pass on Travis CI and then merge. I think something is wrong with the macOS tests on Travis CI, so I'll ignore any failures there. I think we'll transition soon to using GitHub's CI capabilities, which might resolve the issue.
Summary of changes
Fixes issue #511 correctly, adds the ability to actually use the global dynamic window, and changes to fine-grained mandatory locking.
Rationale for changes
The pull request is split into several commits to simplify review. The most interesting one is the last.
The initial fix for issue #511, while seemingly effective, did not work when more than one scalar array reference was present in the references of a coarray. The last commit fixes this by explicitly analysing the array references instead of relying on the reallocation flag, using its own logic to detect error situations.
Furthermore, the way of locking dedicated windows and the global dynamic one has been changed to fine-grained, near-access locking, i.e. each MPI_Get or MPI_Put is now enclosed in the appropriate lock and unlock calls. Only this made using the global dynamic window possible, which in turn fixed #700, at least with MPICH 3.3.2. OpenMPI 4.0.2 still shows several regressions that are not yet understood.
Code coverage data