MPI_Pack with device memory #5
Comments
That shouldn't be breaking like that. I'll see if I can reproduce this crash.
I was able to reproduce this issue.
Hi, thank you. In commit 3c4a1be, I no longer see the crash. However, what I would like to do is see how the MPI implementation handles types on the GPU, so I am now running:
It seems that no actual benchmarks are run, as the output is this:
Is this configuration supported?
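To make concrete what "handling types on the GPU" exercises, here is a minimal standalone sketch (an assumed illustration, not COMB code): sending through an MPI derived datatype directly from a cudaMalloc'd buffer. This only works if the MPI library is CUDA-aware (e.g. Open MPI built with CUDA support).

```c++
// Minimal sketch (not COMB code): exchange one column of a row-major
// 10x10 device matrix between two ranks via an MPI derived datatype.
// Passing a cudaMalloc'd pointer to MPI_Send/MPI_Recv requires a
// CUDA-aware MPI build.
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  const int rows = 10, cols = 10;
  double* d_buf = nullptr;
  cudaMalloc(&d_buf, rows * cols * sizeof(double));
  cudaMemset(d_buf, 0, rows * cols * sizeof(double));

  // One column of a row-major rows x cols matrix: 'rows' blocks of one
  // double each, separated by a stride of 'cols' elements.
  MPI_Datatype col;
  MPI_Type_vector(rows, 1, cols, MPI_DOUBLE, &col);
  MPI_Type_commit(&col);

  if (rank == 0) {
    MPI_Send(d_buf, 1, col, 1, 0, MPI_COMM_WORLD);
  } else if (rank == 1) {
    MPI_Recv(d_buf, 1, col, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }

  MPI_Type_free(&col);
  cudaFree(d_buf);
  MPI_Finalize();
  return 0;
}
```

With a non-CUDA-aware MPI, the same exchange would have to stage through host buffers with cudaMemcpy before the send and after the receive.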
It looks like -cuda_aware_mpi got dropped from the command line.
I dropped it because I interpreted it as only enabling some assertions and tests, but now I see that the benchmarks themselves are referred to as "tests" in the output.
In any case, I tried again with it enabled:
Hi,
I have built this on a system with a single GPU, which I would like to share between two MPI ranks (just for the sake of getting things up and running).
The build basically follows the ubuntu_nvcc10_gcc8 script, except adjusted for gcc 10. I built commit e06e54d (the latest at the time of writing).
I tried to run it with the following:
~/software/openmpi-4.0.5/bin/mpirun -n 2 bin/comb 10_10_10 -divide 2_1_1 -cuda_aware_mpi -comm enable mpi -exec enable mpi_type -memory enable cuda_device
but I get the following error:
I also managed to run the focused tests:
which appears to have worked with the following output:
Is device memory + MPI + MPI_Type a supported configuration at this time? If so, any advice?
Thanks!
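For reference, here is a minimal sketch of what the issue title, MPI_Pack with device memory, amounts to; this is an assumed illustration, not COMB's implementation. Whether MPI_Pack/MPI_Unpack accept device pointers varies by MPI implementation, even when its point-to-point path is CUDA-aware.

```c++
// Minimal sketch (assumed illustration, not COMB code): pack a derived
// datatype from a device buffer into a device staging buffer, send the
// packed bytes, and unpack on the receiver. All pointers passed to MPI
// here live in device memory, which is the case this issue asks about.
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  const int rows = 10, cols = 10;
  double* d_buf = nullptr;
  cudaMalloc(&d_buf, rows * cols * sizeof(double));
  cudaMemset(d_buf, 0, rows * cols * sizeof(double));

  // Strided column type, as in the earlier sketch.
  MPI_Datatype col;
  MPI_Type_vector(rows, 1, cols, MPI_DOUBLE, &col);
  MPI_Type_commit(&col);

  // Upper bound on the packed size, then a device staging buffer.
  int pack_bytes = 0;
  MPI_Pack_size(1, col, MPI_COMM_WORLD, &pack_bytes);
  char* d_packed = nullptr;
  cudaMalloc(&d_packed, pack_bytes);

  if (rank == 0) {
    int pos = 0;
    MPI_Pack(d_buf, 1, col, d_packed, pack_bytes, &pos, MPI_COMM_WORLD);
    MPI_Send(d_packed, pos, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
  } else if (rank == 1) {
    MPI_Recv(d_packed, pack_bytes, MPI_PACKED, 0, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
    int pos = 0;
    MPI_Unpack(d_packed, pack_bytes, &pos, d_buf, 1, col, MPI_COMM_WORLD);
  }

  MPI_Type_free(&col);
  cudaFree(d_packed);
  cudaFree(d_buf);
  MPI_Finalize();
  return 0;
}
```

If a given MPI build cannot pack from or unpack into device memory, the usual workaround is to do the packing on the GPU with a kernel into a contiguous device buffer and send that as plain bytes.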