New resolver takes a very long time to complete #9187
I'm going to use this issue to centralize incoming reports of situations that seemingly run for a long time, instead of having each one end up in its own issue or scattered around.
@jcrist said in #8664 (comment)
These might end up being resolved by #9185
Thanks, @dstufft. I'll mention here some useful workaround tips from the documentation -- in particular, the first and third points may be helpful to folks who end up here:
For my case, the problematic behavior can be reproduced much faster. The legacy resolver bails out on this quickly with:
One other idea toward this is stopping after 100 backtracks (or something) with a message saying "hey, pip is backtracking due to conflicts on $package a lot".
I wonder how much time is taken up by downloading and unzipping versus time actually spent in the resolver itself?
Most of it, last I checked. Unless we're hitting some very bad graph situation, in which case... 🤷 the users are better off giving pip the pins.
I'm having our staff fill out that google form wherever they can, but I just want to mention that pretty much all of our builds are experiencing issues with this. Things that worked fine and had a build time of about 90 seconds are now timing out our CI builds. In theory we could increase the timeout, but we're paying for these machines by the minute, so having all of our builds take a huge amount of time longer is a painful choice. We've switched over to enforcing the legacy resolver on all of our builds for now.
As a general note to users reaching this page, please read https://pip.pypa.io/en/stable/user_guide/#dependency-resolution-backtracking.
I was asked to add some more details from twitter, so here are some additional thoughts. Right now the four solutions to this problem are:
Waiting it out is literally too expensive to consider

This solution seems to rely on downloading an epic ton of packages. In the era of cloud this means:

These all cost money, although the exact balance will depend on the packages (people struggling with a beast like tensorflow might choke on the hard drive and bandwidth, while people with smaller packages just get billed for the build time). What's even more expensive is the developer time wasted during an operation that used to take (literally) 90s and now takes over 20 minutes (it might take longer, but it times out on our CI systems). We literally can't afford to use this dependency resolution system.

Trial and error constraints are extremely burdensome

This adds a whole new set of processes to everyone's dev cycle: not only do they have to do the normal dev work, but now they need to optimize the black box of this resolver. Even the advice on the page is extremely trial and error, basically saying to start with the first package giving you trouble and continue iterating until your build times are reasonable.

Adding more config files complicates an already overcomplicated ecosystem

Right now we already have to navigate the differences between

Reducing versions checked during development doesn't scale

Restricting versions during development but releasing the package without those constraints means that the users of that package are going to have to reinvent those constraints themselves during development. If I install a popular package, my build times could explode until I duplicate their efforts. There's no way to share those constraints other than copy/paste methods, which adds to the maintenance burden. What this is ultimately going to result in is people not using constraints at all, instead limiting the dependency versions directly, based not on actual compatibility but on a mix of compatibility and build times. This will make it harder to support smaller packages in the long term.
Might be a good reason to prioritize pypi/warehouse#8254
Definitely. And an sdist equivalent when PEP 643 is approved and implemented.
It doesn't directly rely on downloading, but it does rely on knowing the metadata for packages, and for various historical reasons, the only way to get that data is currently by downloading (and in the case of source distributions, building) the package. That is a huge overhead, although pip's download cache helps a lot here (maybe you could persist pip's cache in your CI setup?).

On the plus side, it only hits hard in cases where there are a lot of dependency restrictions (where the "obvious" choice of the latest version of a package is blocked by a dependency from another package), and it has only tended to be really significant in cases where there is no valid solution anyway (although this is not always immediately obvious: the old resolver would happily install invalid sets of packages, so the issue looks like "old resolver worked, new one fails" when it's actually "old one silently broke stuff, new one fails to install instead").

This doesn't help you address the issue, I know, but hopefully it gives some background as to why the new resolver is behaving as it is.
@tedivm please look into using pip-tools to perform dependency resolution as a separate step from deployment. It's essentially point 4 -- "local" dependency resolution with the deployment only seeing pinned versions.
Actually, it would be an interesting experiment to see. These pathological cases that people are experimenting with: if they let the resolver complete once, persist the cache, and then try again, is it faster? If it's still hours long even with a cache, then that suggests pypi/warehouse#8254 isn't going to help much.

I don't know what we're doing now exactly, but I also wonder if it would make sense to stop exhaustively searching the versions after a certain point. This would basically be a trade-off of saying that we're going to start making assumptions about how dependencies evolve over time. I assume we're currently starting with the latest version and iterating backwards one version at a time, is that correct? If so, what if we did something like:
This isn't exactly the correct use of a binary search, because the list of versions isn't really "sorted" in that way, but it would kind of function similarly to git bisect? The biggest problem with it is that it will skip over good versions if the latest N versions all fail and the older half of versions all fail, but the middle half are "OK".

Another possible idea is, instead of a binary search, to do something similar but, rather than bucketing the version set in halves, try to bucket versions into buckets that match their version "cardinality". IOW, if the project has a lot of major versions, bucket them by major version; if it has few major versions but a lot of minor versions, bucket by that, etc. So you divide up the problem space, then start iterating backwards, trying the first (or the last?) version in each bucket until you find one that works, then constrain the solver to just that bucket (and maybe one bucket newer, if you're testing the last version instead of the first?).

I dunno, it seems like exhaustively searching the space is the "correct" thing to do if you want to always come up with the answer if one exists anywhere, but if we can't make that fast enough, even with changes to warehouse etc., we could probably try to be smart about using heuristics to narrow the search space, under the assumption that version ranges typically don't change that often, and when they do, they don't change every single release. Maybe if we go into heuristics mode, we emit a warning that we're doing it, suggest people provide more information to the solver, etc. Maybe provide a flag like

Maybe we're already doing this and I'm just dumb :)
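A rough sketch of that bucketing idea (purely hypothetical, not pip's actual code; the helper names are invented for illustration): group candidate versions by their most significant varying component, then probe only the newest release of each bucket before searching exhaustively inside a promising bucket.

```python
from itertools import groupby

def bucket_versions(versions):
    """Group (major, minor, patch) tuples by major version; if every
    candidate shares a single major version, fall back to minor."""
    versions = sorted(versions, reverse=True)
    majors = {v[0] for v in versions}
    key = (lambda v: v[0]) if len(majors) > 1 else (lambda v: v[1])
    return [list(group) for _, group in groupby(versions, key=key)]

def probe_order(versions):
    """Try only the newest release of each bucket, newest bucket first,
    instead of walking every version one at a time."""
    return [bucket[0] for bucket in bucket_versions(versions)]

candidates = [(2, 1, 0), (2, 0, 3), (1, 9, 2), (1, 8, 0), (0, 9, 1)]
print(probe_order(candidates))  # [(2, 1, 0), (1, 9, 2), (0, 9, 1)]
```

As the comment itself notes, this trades completeness for speed: a satisfying version hidden mid-bucket behind a failing bucket head would be skipped unless the resolver eventually falls back to an exhaustive pass.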
We're not doing it, and you're not dumb :-) But it's pretty hard to do stuff like that - most resolution algorithms I've seen are based on the assumption that getting dependency data is cheap (many aren't even usable by pip because they assume all dependency info is available from the start). So we're getting into "designing new algorithms for well-known hard CS problems" territory :-(
Some resolvers I surveyed indeed do this, especially those from ecosystems that promote semver heavily (IIRC Cargo?), since major version bumps there imply more semantics, so this is at least somewhat charted territory. The Python community does not generally adhere to semver that strictly, but we may still be able to do it, since the resolver never promised to return the best solution, only a good enough one (i.e. if both 2.0.1 and 1.9.3 satisfy, the resolver does not have to choose 2.0.1).
The other part is how we handle failure-to-build. With our current processes, we could have to get build deps and do the build just to learn a candidate's dependencies. With binary search-like semantics, we'd have to be lenient about build failures and allow pip to attempt to use a different version of the package on failure (compared to outright failing as we do today).
I think stopping after we've backtracked 100+ times and saying "hey, this is taking too long. Help me by reducing versions of $packages, or tell me to try harder with --option." is something we can do relatively easily now. If folks are on board with this, let's pick a number (I've said 100, but I pulled that out of the air) and add this in?
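A minimal sketch of such a guard (my own illustration, not pip internals; the class and exception names are invented): count discarded candidates during resolution and bail out with an actionable message once a cap is hit.

```python
class BacktrackLimitReached(Exception):
    pass

class BacktrackGuard:
    """Abort resolution after `limit` backtracks, naming the packages
    that caused the churn so the user can constrain them."""

    def __init__(self, limit=100):
        self.limit = limit
        self.count = 0
        self.culprits = set()

    def record(self, package):
        # Called each time the resolver discards a pinned candidate.
        self.count += 1
        self.culprits.add(package)
        if self.count > self.limit:
            raise BacktrackLimitReached(
                "pip is backtracking a lot due to conflicts on "
                + ", ".join(sorted(self.culprits))
                + "; consider constraining these packages."
            )

# Simulated resolution loop that trips the guard quickly:
guard = BacktrackGuard(limit=3)
for pkg in ["astroid", "pylint", "astroid", "flake8"]:
    try:
        guard.record(pkg)
    except BacktrackLimitReached as exc:
        print(exc)
```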
Do we have a good sense of whether these cases where it takes a really long time to solve are typically cases where there is no answer, and it's taking a long time to exhaustively search the space because our slow time per candidate means it takes hours... or are these cases where there is a successful answer, but it just takes us a while to get there?
@dstufft in my case, there was no suitable solution (see #9187 (comment)). I guessed which might be the problematic dependencies, and with a reduced set of packages it doesn't take that long and produces the expected error. With the full requirements-min.txt it didn't complete in hours. With nearly 100 pinned dependencies, the space to search is enormous, and pip ends up (maybe) infinitely printing "Requirement already satisfied:" while trying to search for some solution (see https://github.com/WeblateOrg/weblate/runs/1474960864?check_suite_focus=true for the long log; it was killed after some hours). I just realized that the CI process is slightly more complex than what I've described - it first installs packages based on the ranges, then generates a list of minimal versions and tries to adjust the existing virtualenv. That's probably where the "Requirement already satisfied" logs come from. The problematic dependency chain in my case was:
In the end, I think the problem is that it tries to find a solution in areas where there can't be any. With a full pip cache:
In this case, adding
Hi all, I was able to reproduce the OP's issue using their linked requirements.txt: https://github.com/pypa/pip/files/5618233/requirements-min.txt I have been experimenting with an optimization for cases where pip has a large solution space and has to backtrack, in my patched version of pip: #10201 (comment). I would appreciate it if anyone with test cases they want to try would provide them, or try the patched version of pip themselves. While it still took a few minutes, it was able to home in on a resolution-impossible error, which is that the user requested
I install any packages I might use in my small scripts, one-liners, and interactive usage into the global Python env. These amount to a lot of packages, but I do not particularly care about conflicts between them; if some package doesn't play nice with other packages' latest versions, I just move the scripts using that particular package into a virtual env. (Which had never happened until now using the legacy resolver.) This new resolver takes so long on this use case that I have never seen it actually complete. Here is my
Yea, this looks like a case where tree trimming is what we need.
A requirements file is supposed to specify the versions of packages which work with each other. E.g. if I use

If you don't care whether packages work or not, you can keep using pip 20.2.
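To illustrate what "work with each other" means here, a toy model of conflicting requirements (my own sketch, not pip's code - real specifiers are far richer): two requirements conflict exactly when no candidate version satisfies both at once.

```python
def satisfies(version, spec):
    """Evaluate one simple specifier; versions and pins are comparable tuples."""
    op, pin = spec
    if op == "==":
        return version == pin
    if op == ">=":
        return version >= pin
    if op == "<":
        return version < pin
    raise ValueError(f"unsupported operator: {op}")

def compatible(candidates, specs):
    """Return the candidate versions acceptable to every requirement at once."""
    return [v for v in candidates if all(satisfies(v, s) for s in specs)]

versions = [(1, 0), (1, 5), (2, 0)]
# package A needs dep >= 2.0 while package B needs dep < 2.0:
print(compatible(versions, [(">=", (2, 0)), ("<", (2, 0))]))  # [] -> conflict
# if B instead accepted dep >= 1.5, shared versions exist:
print(compatible(versions, [(">=", (1, 5))]))  # [(1, 5), (2, 0)]
```

The old resolver would install something even when the intersection was empty; the new one refuses, which is the behaviour change this thread is about.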
Your use case as you've specified it is not supported by the new resolver; it will not install packages that say they are incompatible with each other. That said, I agree the new resolver should be optimized to be faster, and in fact with my experimental version of pip I can install your requirements.txt in a few minutes without any issues: #10201 (comment)
I'd actually state that more strongly: your use case isn't supported by pip. "Pip takes too long to tell me I have conflicting dependencies" is a valid bug report. But "pip won't install a set of conflicting requirements" isn't.
I would say if pip used to work with a certain requirements file, and then after an upgrade it either refuses to or takes hours, then this is a showstopper bug and shouldn't be treated as business as usual. People rely on such infrastructure tools to be backwards compatible.
People also require such infrastructure to be correct; prior to 20.3, pip would install packages that were explicitly incompatible. If you want to be fast but wrong, you can pin to pip 20.2. Do you have a reproducible example? I am actually trying to do something about the performance issue by experimenting with optimizations: #10201 (comment). A lot of people complain about this issue but don't actually provide an example with enough information to reproduce it, so people who are trying to do something about it (like myself) can't actually help you.
I think my use case will be solved just by having the ability to pin packages to the latest version:
Is that a feature that you or someone has requested, or that someone is implementing? Or is it just a musing on a nice-to-have? Personally I'm not sold on

FYI I think a feature with a similar effect is one I've written up here: #10417, where you set

But I am a while away from being able to submit a PR for this, so no explicit work is being done on it right now.
I have a requirements file that I cannot install - too slow - into an existing env. It installs into a freshly created one though.
A way to hang up pip. Download the requirements file, make a fresh virtualenv (I used Python 3.8) and:
pip install pip==20.2
pip install -r requirements-broken.txt
pip install -U pip
pip install -r requirements-broken.txt
So the first installation of the requirements using pip 20.2 creates a broken environment, so from that point onwards I don't think this scenario is supported (up to the pip maintainers). Pip explicitly tells you this when you install:
At this point I don't think installing the same requirements with pip 20.3+ has any hope, because it needs to explicitly install older dependencies than are already installed. I could be wrong; maybe there is some incantation of option flags that lets pip install older dependencies than already installed when using a requirements file? Unfortunately, pip 20.3+ doesn't tell you why it can't install, because it gets stuck backtracking within the possible set of dependencies to see if there is some solution (and goes down the wrong path). However, when I use the version of pip I have created here, which attempts to optimize this large backtracking situation, it very quickly gives the error on why this current environment won't work:
Correct. If you have broken dependencies in your existing installation (you can check this with |
It would be fine if that simply didn't work; hanging up is worse.
You can run
@njiles Apologies, this is going back a while - I've been scouring these posts for all reproducible examples of pip taking a very long time backtracking, and this is one such example. Using the changes I propose here, it is able to backtrack much more efficiently, and after a little while it gives the following error; hope that helps:
…issues/9187#issuecomment-853091201 (flake8, pylint, astroid, pycodestyle never resolve in endless retry loop)
Great unexpected news for me:
That is a good idea, but an awful solution. Downloading all packages just to resolve dependencies? What were you thinking?
That's not what pip is doing. Pip downloads the latest of each requirement and, in general, only downloads an older package if there are conflicting requirements.
I think this issue can now be closed with #10481. New reports will require new analysis, and the reasoning will be different.
Since this issue was filed, we've made significant improvements to the dependency resolution logic and to our documentation around dependency resolution, and improved behaviours for many of the reported instances of poor behaviour. I'd like to thank everyone who's engaged constructively in this discussion.

If the behaviour of pip's dependency resolver is still an issue for your usecase with pip 21.3 or newer, please file a new issue for your usecase. Notably, please do NOT file a blanket issue for this problem like "pip's resolver is slow" or "pip's resolver backtracks a lot" -- such issues increase the amount of effort pip's maintainers have to put in to triage the reports, to consolidate similar cases, and to figure out what's actionable about each of them. We'd appreciate bug reports containing clear information about how to reproduce the behaviour that you're seeing, and what you'd want it to do differently. I'm sure there are still a lot of ways we can improve the behaviour of pip's dependency resolver, and reports that contain enough information to reproduce the issue will help us identify and improve them.

As a reminder, currently all of pip's maintainers contribute to pip in a largely/completely volunteer capacity. Further, dependency resolution is a complicated problem, both computationally (NP-complete, in case you're algorithmically minded) and in terms of what the "right answer" is for various usecases (the strategies for "getting the right answer" are often contradictory).

I'm gonna go ahead and lock this now, since this thread has already started going off-topic.
What did you want to do?
One of the CI jobs for Weblate is to install minimal versions of dependencies. We use requirements-builder to generate the minimal version requirements from the ranges we use normally.
The
pip install -r requirements-min.txt
command seems to loop infinitely after some time. This started to happen with 20.3; before, it worked just fine.

Output
This seems to repeat forever (well, for 3 hours so far; see https://github.com/WeblateOrg/weblate/runs/1474960864?check_suite_focus=true)
Additional information
Requirements file triggering this: requirements-min.txt
It takes quite some time until it gets to the above loop. There is most likely something problematic in the dependency set...