coll: use MPI_Aint for v-collective array parameters #5065
Conversation
Just to add some data here: I ran some simple experiments to measure the overhead of promoting these arrays, using the OSU benchmark for MPI_Alltoallv (4 JLSE nodes, fully subscribed, ppn=88) with the actual communication in MPICH commented out (i.e. message size is irrelevant).

Original MPICH:

With large count promotion:

Actual communication cost: for reference, the actual communication cost for
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
This is definitely pushing the boundaries of what a single commit can change, but I suppose it is unavoidable given all the APIs that depend on the array arguments.
Here is communication data from Gomez with InfiniBand (ch4:ucx):
References #4880
Pull Request Description
The last PR (#5044) switched all internal collective routines to use MPI_Aint. This PR does the same for the counts/displs arrays of the v-collectives.

Also added Python code to generate the MPI_Aint impl prototypes and to generate the code that swaps the counts arrays before calling MPIR_Xxx.
Expected Impact
Author Checklist
Commit messages are in the format module: short description and follow good practice