Maintenance effort vs. package downloads #171
We're switching build systems; I intend to get that done before […]. Still, point taken - there's only so much templated C++ and Cython we can add before (free-for-open-source) CI systems can't deal with it anymore.
As a reference point, I checked the last successful build […]. The whole thing, including the test suite run, took 20 minutes; the build itself took less than 5 minutes: […]
So the hardware is fine. The actual problem: […]
So it's […]
Cool! Xref (for anyone interested): scipy/scipy#13615. Regarding your second point, yes, the conda solver (and the general CI setup) eats a substantial amount of time. I'm not sure what options there are to change this; it's not like people aren't aware of it. The failing log you picked out is for PPC+PyPy, which I noted fails consistently, leading me to believe it's some bug in the interaction of PyPy and PPC that reproducibly leads to hangs - which is why I stopped retriggering that job. Note that Travis seems to have different underlying hardware in its fleet as well; there were a lot of timeouts (5-6) before the py37/py38 builds finally ran through. Since I kept restarting the failed CPython builds (and once a job runs through, the previous logs are no longer available), there are unfortunately no other failing logs (but they would be easy to produce with a new PR...).
Is there a conda-forge issue about switching to Mamba somewhere? Mamba is reliable enough by now, I'd think, and this would address the actual root cause of these problems.
Hey Ralf, yes, we discussed the usage of mambabuild briefly at the last core meeting, and Marius just opened a PR to make it the default. Basically we have the "go" to allow uploads of packages built with mambabuild (currently it can only be used for debugging, because uploading is prohibited). That's a quick fix in the conda-smithy build script generation. It would be cool if you weighed in on the open PR!
Link for convenience: conda-forge/conda-smithy#1507
That's great news, thanks @wolfv! From reading through the two PRs, it's not 100% clear to me - but I think we can enable it today in this feedstock by adding […]
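For illustration, a sketch of what enabling this in the feedstock's conda-forge.yml could look like. The key name below is an assumption (check the conda-smithy docs and conda-forge/conda-smithy#1507 for the actual option), and a rerender would be needed afterwards:

```yaml
# conda-forge.yml (feedstock root)
# NOTE: key name is an assumption here - verify against the
# conda-smithy documentation before relying on it.
conda_build_tool: mambabuild
```

After editing, the feedstock would typically be rerendered (e.g. via the `@conda-forge-admin, please rerender` bot command or `conda-smithy rerender` locally) so the change propagates into the generated CI scripts.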
Actually, I am not sure if we need a new release of […]
@h-vetinari, worth trying to see if we can avoid turning off aarch64 builds that way?
This situation has improved quite a lot in recent times, so I'm closing this issue... |
In the context of trying to babysit the aarch/ppc CI after #169, I had a look at the available builds and began to wonder how the package downloads stacked up. At the time of writing, 1.6.3 was almost exactly two months old, and had the following download numbers (note that the py37 build on aarch had failed for 3a76a91).
py37 / py38 / py39 / pypy37 (per-build download counts not preserved)
PPC - which is by far the biggest problem child right now - only represents <0.1% of downloads, which raises the question of whether that justifies the (wildly disproportionate, compared to other arches) maintenance effort.
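To make the proportion concrete, here is a minimal sketch of the share computation; the counts below are purely hypothetical placeholders (the real per-platform numbers live on anaconda.org), chosen only to show how small a sub-0.1% slice is:

```python
# Hypothetical per-platform download counts for one release.
# These numbers are illustrative only, NOT the real scipy stats.
downloads = {
    "linux-64": 500_000,
    "osx-64": 120_000,
    "win-64": 150_000,
    "linux-aarch64": 4_000,
    "linux-ppc64le": 600,
}

total = sum(downloads.values())
ppc_share = downloads["linux-ppc64le"] / total

print(f"PPC share of downloads: {ppc_share:.3%}")
```

With these placeholder numbers the PPC slice comes out well under 0.1% of the total, which is the order of magnitude the comment above is weighing against the CI babysitting cost.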
It would be easily solvable with better CI (or even just a higher timeout), but unless someone with a big interest in PPC (IBM?) sponsors a separate CI queue for it, I'm doubtful that the CI woes are fixable, especially since the scipy build is becoming heavier and heavier.
Note that conda-forge does support building PPC packages through QEMU on azure, but for scipy specifically, this produces ~2000 test failures.
CC @conda-forge/core @jayfurmanek