Use conda to build python packages during GPU tests #897
Conversation
Force-pushed from ec21c59 to c917ad7
Is there some advantage to doing this, or is it just to test conda packaging as well?

That's one of the advantages. We are also doing this to remove the usage of the `from sources` build.
Force-pushed from b6360f6 to a7e30b3

Force-pushed from a7e30b3 to 09e49fc
Codecov Report

@@           Coverage Diff           @@
##   branch-22.06    #897     +/-   ##
======================================
  Coverage        ?        0.00%
======================================
  Files           ?           22
  Lines           ?         3075
  Branches        ?            0
======================================
  Hits            ?            0
  Misses          ?         3075
  Partials        ?            0

Continue to review the full report at Codecov.
rerun tests

The timeout from …
rerun tests
Force-pushed from 09e49fc to 5d1db0f
This many timeouts don't look normal; even the timeout that was increased in #901 was hit here. @Ethyling, could you add …

@pentschev PR updated with this.
Force-pushed from d65e32f to 0d5b7b6
I have been trying for days to reproduce the timeouts on CUDA 11.0 with driver 450.80.02, just like in CI, but have been unsuccessful. At this point I can only imagine there is something about the CI machines that makes them just a tiny bit slower, in which case we may try increasing timeouts. I will run the tests a few more times in CI here to see how it behaves and which tests may need increased timeouts.
rerun tests

Tests that failed in the latest run are now resolved by #905, rerunning.

rerun tests
Nice, thank you @pentschev!
I would prefer to wait until all the PRs pass for all the RAPIDS projects before merging this.
Approved with a question.
Force-pushed from 772733a to b0024db
@@ -10,7 +10,7 @@ package:
   version: {{ version }}

 source:
-  path: ../../..
+  git_url: ../../..
This change is required to prevent this issue:

    AssertionError: Can't merge/copy source into subdirectory of itself. Please create separate spaces for these things.
      src: /workspace
      dst: /workspace/.conda-bld/work
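For context, here is a minimal sketch of the recipe's `source:` section after this change. The relative path is taken from the diff above, the file location is an assumption, and the comments describe general conda-build behavior rather than anything stated in this PR:

```yaml
# Assumed location: conda/recipes/ucx-py/meta.yaml (../../.. resolves to the repo root)
source:
  # git_url clones the repository into conda-build's work directory instead
  # of copying the source tree. With "path:", conda-build copies the checkout
  # into /workspace/.conda-bld/work, a subdirectory of the source itself,
  # which triggers the AssertionError shown above.
  git_url: ../../..
```

One tradeoff worth noting: `git_url` builds from the committed state of the repository, while `path:` copies the working tree as-is, uncommitted changes included.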
rerun tests
Force-pushed from b0024db to 601f9a7
Signed-off-by: Jordan Jacobelli <[email protected]>
Force-pushed from 601f9a7 to 80fd80a
The failing test is due to dask/distributed#6320, which was caused by the dask/distributed#5910 merge.

Do you know if this will be fixed soon?

Sorry @Ethyling, I don't know if that is going to be fixed soon. I marked the test to xfail in #908 to unblock CI for now.
rerun tests
LGTM, thanks for the work here @Ethyling.
Feel free to merge if this is ready to go.
Thank you for your help with this PR @pentschev!

@gpucibot merge
This PR converts the `from sources` build we are doing in the GPU test job to a `conda build`. This is done for the following reasons:

- Use `conda` compilers and `mamba` to build RAPIDS packages

This may increase the global pipeline time, but the usage of `mamba` should resolve this, as `mamba` is faster than `conda` at building packages.
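For illustration, a hypothetical sketch of what the resulting GPU test step could look like. None of this is taken from the PR itself: the actual gpuCI pipeline is driven by shell scripts, and the job name, recipe path, and test command below are assumptions. `conda mambabuild` is boa's mamba-backed front end to `conda build`:

```yaml
# Hypothetical CI job sketch; every name here is an assumption made for
# illustration, not the actual pipeline configuration from this PR.
gpu-test:
  script:
    # Build the package with conda-build driven by the mamba solver
    # (boa's "conda mambabuild"); plain "conda build" also works, just
    # more slowly.
    - conda mambabuild conda/recipes/ucx-py
    # Install the freshly built package from the local build channel,
    # then run the GPU tests against the installed package.
    - mamba install -y -c local ucx-py
    - pytest tests
```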