HPX test_gpu executable won't build with cuda or rocm variant #75
Comments
None of the namespaces that trigger the errors listed above have been changed recently. I'll have a closer look. This seems to be a configuration problem...
@wspear where can I see the complete build logs (including building HPX)?
@hkaiser these builds are located in our e4s singularity container images for cuda and rocm respectively. I've put the build artifacts online here: http://nic.uoregon.edu/~wspear/hpx-spack/. Let me know if I can provide anything else or if you want me to try anything on my side.
Regarding that HPX PR, could I make those changes to compute.hpp in an installed hpx and have it work as intended, or does the change need to be made to compute.hpp.in in the source tree before building?
Editing locally should be sufficient. However, the patch will only work if HPX was built with HPX_WITH_DISTRIBUTED_RUNTIME=OFF, which I assume it was.
The spack install in our container is read-only, but I saw the same issue building a local [email protected]+cuda. Manually adding the changes from the PR to include/hpx/compute.hpp didn't seem to make a difference. The spack package doesn't set that variable, so as long as OFF is the default it should be fine.
The default is HPX_WITH_DISTRIBUTED_RUNTIME=ON, which might explain why the change I proposed has no effect. I'll have another look at this.
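For anyone reproducing this, a local build and a check of which runtime mode an install was configured with might look like the following; the exact spec, cuda_arch value, and header path are assumptions, not details from this thread:

```sh
# Hypothetical local reproduction of the +cuda build (spec details are assumptions).
spack install [email protected] +cuda cuda_arch=80

# HPX mirrors its HPX_WITH_* CMake options as HPX_HAVE_* macros in an
# installed config header; the exact path may vary between versions.
grep DISTRIBUTED_RUNTIME "$(spack location -i hpx)/include/hpx/config/defines.hpp"
```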
@wspear I have pushed another commit that adds more #includes that were missing. Could you please try again?
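The commit itself is not shown in this thread; purely as an illustration, the kind of guarded GPU section being discussed for a forwarding header like compute.hpp might look like the sketch below (the feature macro and header names are assumptions, not the actual PR contents):

```cpp
// Illustrative sketch only -- not the actual PR contents.
// GPU-specific forwarding includes are typically guarded by a feature
// macro so that non-GPU builds never see them; names are assumptions.
#if defined(HPX_HAVE_GPU_SUPPORT)
#include <hpx/compute/host.hpp>
#include <hpx/compute/vector.hpp>
#endif
```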
The installed headers didn't look quite the same as the ones in the source tree, so I added the gpu block to every compute.hpp I found under include/hpx (e.g. ./include/compute.hpp). The test code still didn't build, but it now registers 23 errors instead of 27.
Error output
The test_gpu.cpp example included in the testsuite builds with a non-gpu variant of hpx, but it fails with both the rocm and cuda variants. There are CUDA-specific API calls in the code, so I'm not surprised rocm fails (and I'm not sure how the non-cuda build is able to succeed). Could we get an updated test that can either handle both accelerator types, or a working version for cuda and a new version for rocm? I didn't spot any existing test codes for hpx+gpus floating around in the hpx repo, but if they exist I could import them into the testsuite myself.
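One common way to make a single test source cover both backends is a small preprocessor shim that maps neutral names onto the CUDA or HIP runtime API. A minimal sketch (the gpu* names are illustrative, not from the existing test):

```cpp
// Minimal CUDA/HIP portability shim -- illustrative only; the existing
// test_gpu.cpp uses CUDA calls directly.
#if defined(__HIPCC__)
  #include <hip/hip_runtime.h>
  #define gpuError_t hipError_t
  #define gpuSuccess hipSuccess
  #define gpuMalloc  hipMalloc
  #define gpuFree    hipFree
#else
  #include <cuda_runtime.h>
  #define gpuError_t cudaError_t
  #define gpuSuccess cudaSuccess
  #define gpuMalloc  cudaMalloc
  #define gpuFree    cudaFree
#endif

#include <cstdio>

int main()
{
    // Allocate and free a small device buffer through the neutral names;
    // the same source then compiles with either nvcc or hipcc.
    int* d = nullptr;
    gpuError_t err = gpuMalloc(reinterpret_cast<void**>(&d), 64 * sizeof(int));
    if (err != gpuSuccess)
    {
        std::puts("device allocation failed");
        return 1;
    }
    gpuFree(d);
    return 0;
}
```

Since hipcc can also target NVIDIA GPUs, another option would be to port the test to HIP alone; the shim above merely keeps a plain nvcc toolchain working as well.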
This is observed with the current (24.05) e4s release, which uses [email protected]. We should be at 1.10 for the next release, so it's probably fine to target that if the implementations would be mutually exclusive.
@msimberg @hkaiser
ROCm variant error output
CUDA variant error output