added non-blocking root communicator #1478
base: develop
Similar comment here and below. You could define `static constexpr` integer variables whose names contain `true` and `false` to make the code more readable and avoid magic numbers.
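A minimal sketch of what that might look like (the names here are only suggestions, not taken from the PR):

```cpp
// Named constants instead of raw 0/1 magic numbers in the MPI calls.
static constexpr int MESSAGE_PENDING_TRUE = 1;
static constexpr int MESSAGE_PENDING_FALSE = 0;
```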
`MPI_Iprobe` is non-blocking here, so is there a chance that `mpiFlag` is not set to `true` when it is expected to be? Would it be better to have this be a blocking `MPI_Probe`? Basing this comment off this stackoverflow post: https://stackoverflow.com/questions/43823458/mpi-iprobe-vs-mpi-probe

Additionally, if using `MPI_Iprobe`, should the `mpiFlag` default be set to `false`, so it can be set to `true` only by a successful function call?
I think `mpiFlag` will be set in either context to either `true` or `false`, but to your point, it is safer to initialize it as `false`.
The stackoverflow example illustrates an interesting but slightly different approach than what I'm intending to do. They are calling `MPI_Iprobe` in a while loop that does not exit until it returns a non-zero flag. In my case, I check only once whether any messages need to be received, and if there are none, the function exits by returning `nullptr`. The intent in the stackoverflow example is to continuously monitor the status, whereas I'm only intending to periodically monitor the status whenever the code path enters this function. Both could be relevant to the problem I'm trying to solve with this communicator, where the root rank needs to receive information from other ranks that they are aborting. I had a preference toward the latter option (periodically monitoring the status whenever the root rank reaches a point where it enters this code path) because it seemed like the more efficient option, even if it comes at the cost of sometimes not receiving the status before the program aborts. But I'm not really sure which option is best for this scenario. I'd be curious to hear your thoughts.
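For concreteness, a sketch of the two polling strategies being compared, assuming a communicator `comm` and a tag `tag` (both hypothetical names, since the surrounding code isn't shown in this thread):

```cpp
#include <mpi.h>

// Former option (the stackoverflow approach): spin on MPI_Iprobe until a
// message arrives; the loop does not exit until the flag is non-zero.
void spinUntilMessage(MPI_Comm comm, int tag)
{
  int mpiFlag = 0;  // start as false; only a successful probe sets it
  MPI_Status status;
  while(!mpiFlag)
  {
    MPI_Iprobe(MPI_ANY_SOURCE, tag, comm, &mpiFlag, &status);
  }
}

// Latter option (this PR's intent): poll exactly once; if no message is
// pending, return immediately so execution is never blocked.
bool pollOnce(MPI_Comm comm, int tag)
{
  int mpiFlag = 0;
  MPI_Status status;
  MPI_Iprobe(MPI_ANY_SOURCE, tag, comm, &mpiFlag, &status);
  return mpiFlag != 0;  // true only when a message is waiting
}
```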
I agree, I would expect the latter option to have less overhead, doing a single poll with `MPI_Iprobe` instead of spinning on `MPI_Iprobe` until the status is updated in the former case. Nevertheless, I might not be considering something, so am also curious if others have ideas.
As I understand the MPI API, this is actually a blocking `MPI_Recv` call? So this `mpiNonBlockingReceiveMessages` function is currently blocking to receive messages.
Yes, that's correct. The non-blocking part is the call to `MPI_Iprobe`, but then the `Recv` is blocking. My intent here is to be sure that the receive is fully finished before anything else is done, but to not block any further execution if there are no messages to be received (i.e., when `mpiFlag` is false). I can change the function name to clarify the intent here.
To clarify the above point, `MPI_Iprobe` is used instead of `MPI_Probe` because the former returns with an `mpiFlag` value regardless of whether messages need to be received, whereas the latter is a blocking call that only returns when there is a message to be received.
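Putting the two calls together, a sketch of the combined pattern under discussion (the signature and buffer handling are illustrative; the actual `mpiNonBlockingReceiveMessages` interface isn't shown in this thread):

```cpp
#include <mpi.h>
#include <cstdlib>

// Probe once without blocking; if a message is pending, complete it with
// a blocking MPI_Recv so the receive is fully finished before returning.
// Otherwise return nullptr so the caller is never blocked.
char* receiveIfPending(MPI_Comm comm, int tag)
{
  int mpiFlag = 0;
  MPI_Status status;
  MPI_Iprobe(MPI_ANY_SOURCE, tag, comm, &mpiFlag, &status);
  if(!mpiFlag)
  {
    return nullptr;  // no message waiting; don't block
  }

  int count = 0;
  MPI_Get_count(&status, MPI_CHAR, &count);
  char* buffer = static_cast<char*>(std::malloc(count + 1));
  // The blocking receive is safe here: the probe guarantees a matching
  // message exists, so MPI_Recv completes promptly.
  MPI_Recv(buffer, count, MPI_CHAR, status.MPI_SOURCE, status.MPI_TAG,
           comm, MPI_STATUS_IGNORE);
  buffer[count] = '\0';
  return buffer;
}
```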
Gotcha, the combination of `MPI_Iprobe` + `MPI_Recv` makes sense now! I had tunnel vision comparing the non-blocking and blocking MPI interfaces.
This does also make me think about renaming this communicator to something like "NonCollectiveCommunicator" rather than "NonBlockingCommunicator". It's true that it calls these non-blocking functions, but I think the main feature is actually that we don't rely on collective calls to communicate messages to root.
I like that idea.
Why is the `tag` argument here?
The MPI communication calls currently use the value associated with `LJ_TAG` by default (defined in MPIUtility.cpp). The non-blocking receives used by the new communicator in this PR work better when we use a different tag, so as not to conflict with other communicators. I added logic to the MPI utility functions to check whether the tag was overridden (i.e., non-zero). In those cases, the sends/receives use the tag value passed in; otherwise, we revert to the default `LJ_TAG` for MPI communication. Setting this default in the function declarations saves us from having to change all the existing calls to these methods by other communicators.
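A minimal sketch of that fallback check, assuming a default `tag` parameter of 0 (the function name, signature, and `LJ_TAG` value here are illustrative, not the actual MPIUtility.cpp code):

```cpp
#include <mpi.h>

constexpr int LJ_TAG = 32766;  // placeholder value; the real default lives in MPIUtility.cpp

// A tag of 0 means "not overridden": fall back to the default LJ_TAG.
// A non-zero tag was explicitly requested by a communicator (e.g. the
// new non-blocking root communicator) and is used as-is, so messages
// from different communicators don't collide.
void mpiSend(const char* message, int count, int destinationRank,
             MPI_Comm comm, int tag = 0)
{
  const int sendTag = (tag != 0) ? tag : LJ_TAG;
  MPI_Send(message, count, MPI_CHAR, destinationRank, sendTag, comm);
}
```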
Ah. Got it. Thanks for the explanation.
Similar comment about why the `tag` arg is here.
Similar response as above.