`pixi run pip install` is overridden by subsequent call to `pixi` #1233
Comments
I think I understand the use-case; it begs the question though: is the package that
I should have clarified:
So the use-case is:
Couldn't you add
We'd rather keep a "lax" dependency on

Edit: also it wouldn't cover the CI use case, where we want to run against a very specific, pre-built wheel.
Couldn't you just make a separate environment for running with the latest version?

EDIT: Also, is my read of your use-case correct :)?
For our use-case (use-cases, really), our ideal requirements are:
A) As a user:
B) As a dev:
C) As CI:
I believe (A) forces examples to have a lax dependency on
Thanks, let me see… So for A), I think this is already possible, right:
B) I believe we should support
C) You are right, there is no real good support for it as long as the rerun-sdk is managed in the lock as well.
I think your summary matches our experience. The workaround I mentioned in the OP provides us with an escape hatch for (C) and, as a dev, for the odd time you might still need to install an alternative. (Thinking of it, I should have listed (B-extra), which is the ability as a dev to occasionally install any rerun-sdk from anywhere because… life happens 😄)
Yeah, for B-extra you would need the ability to override the locked dependency in some way. Maybe checking the
I think this is kinda totally unrelated to your issue, but somehow it does come back to it. The reason we need to install the environments when using

Two ways of going about this:
If we have that, we could reinstate the full functionality of
@abey79 we did some improvements to this workflow in the meantime, which were discussed with you guys as well outside of this issue. Is this enough to close this one for now :)?
The exact description of the issue is somewhat incorrect, but we are still frequently struggling with behaviors related to this workflow. Things have evolved such that our
This was working for a while and behaving in a way we were mostly happy with, where Pixi WOULDN'T re-build our rust lib every time we created a pixi environment. However, that seems to have changed after a recent update. Now, any time we create the pixi environment, pixi dispatches to maturin and re-builds our rust library. This is sometimes nice, in that it picks up the latest rust changes, but also super annoying if you wanted to build those bindings manually with user-specified flags, etc., and that library version then gets replaced as soon as you run pixi again.
@tdejager Do you know if this makes sense?
Yeah, I read this but did not reply, sorry. I would have to see/play with it to see what happens. I do know that

It seems there is a workflow issue we need to figure out here though.
FYI, I've not reproduced the issue myself, but I've also started hearing reports of pixi now caching (and worse, using the cache to restore) the contents of librerun_bindings.so, which is a totally workflow-breaking thing. One user (on OS X, if it's relevant; need to find what version they're on) basically now always gets the bindings from the first version of the editable build that pixi invokes.
Yeah, so that last thing sounds like a bug indeed; if we can find a way to reproduce it, that would be great. Are you guys using the
What do you mean with
Hi @jleibs, ignore my previous comments :) so I've been testing against the rerun repository a bit more this morning. Maybe you could give the exact reproducer of the situation you find annoying. What I tried, and my observations:
However, changing the rust code or python code afterwards and running

Is step 4) the one you deem to be an annoyance to the workflow? If so, the follow-up question would be: because you are using the

Would you be able to re-frame the issue in terms of the behavior you would like to see for which commands?
Ok, using:
Here is the major issue: First verify things work. This possibly forces an initial build of the wheel.
(The dev environment stuff is expected; I don't think this is breaking anything but worth keeping in mind). Now, edit
Now re-build the package. This should modify the .so, so we'll confirm that fact as well.
And finally run our command again:
Expected behavior: should print out "Edited bridge"
Actual behavior: no change.

Now when we check the bindings file again, we note it has been reverted to the copy of the file from BEFORE we ran
This is the major bug. We are now restoring a cached version of the
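The revert described above can be confirmed mechanically by hashing the bindings file at each step. A minimal sketch of that check (file names are illustrative, not pixi's internals):

```python
import hashlib
from pathlib import Path


def file_digest(path: Path) -> str:
    """SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def was_reverted(before: str, after_build: str, after_run: str) -> bool:
    """True if the build changed the file but a later step restored the
    pre-build copy: the cache-restore symptom described in this thread."""
    return after_build != before and after_run == before
```

Taking a digest before the rebuild, after the rebuild, and after the next `pixi run` makes the silent restore visible even when the file's size or timestamp looks plausible.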
Sorry for the mixup on terminology, I just mean an incremental
(I mentally think of that as "Pixi creates some kind of environment context and then runs the command inside of that context". Will try to be more specific though, since there's a lot of overloaded terminology here).
This is the bit that definitely falls into a grey area. Strictly speaking, I agree the "correct / expected" thing to do is to always rebuild the wheel if relevant rust code has changed. Pragmatically speaking, this can make the edit+test loop significantly slower. Often you are editing code that impacts the viewer but not the bindings, yet it can still trigger false-positive re-compiles, which adds time to running the python test code.

I think my preferred behavior would be to default to correctness (rebuild the wheel if the rust code is edited), but to have some flag or environment variable that tells pixi to skip those checks and just use the current pixi environment unmodified, even if it's in an inconsistent state. This would let power users do the fast thing when they know it's safe enough, without creating unexpected behavior for non-power-users.
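The "rebuild only when rust code changed" check discussed here can be sketched as a simple mtime comparison. This is a rough heuristic under stated assumptions, not pixi's actual logic; a skip flag like the one proposed would simply bypass a check of this kind:

```python
from pathlib import Path


def needs_rebuild(crate_dir: Path, artifact: Path) -> bool:
    """Rebuild if any Rust source under crate_dir is newer than the built
    artifact. A deliberately rough sketch; real build systems also track
    Cargo.toml, features, and compiler flags."""
    if not artifact.exists():
        return True
    built_at = artifact.stat().st_mtime
    return any(src.stat().st_mtime > built_at for src in crate_dir.rglob("*.rs"))
```

The false positives mentioned above come from exactly this coarseness: touching viewer-only `.rs` files still bumps an mtime inside the crate tree, so the bindings get rebuilt even though their contents would not change.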
Ah thanks @jleibs, as always(!), for your excellent reproducer. Okay, so I debugged what's happening. Let's start off with the fact that:
What's happening here?
How can we fix the re-installation?

So @ruben-arts and @jleibs can start to chime in here on what we want the correct behavior to be. I see some paths forward:
How can we fix 3), the re-instating of the old wheel?

With the current editables, I'm not sure we can, because we have no real way of detecting whether a build is needed, AFAICT. But I might be missing something here. Note that https://github.com/PyO3/maturin-import-hook could help by rebuilding on import, but I'm not sure this is something you would want. Note that with our work on
I'm not sure, but maybe a 'pixi run --no-install --locked' might also be an option on the table, @ruben-arts? We would need some extra logic to verify that '--no-install' would be allowed, i.e. a valid conda prefix must exist for that environment.
Agreed this is generally a hard problem, and in our case users running the build step manually is fine (and in fact preferred). I think the real problem is we have 2 ways of building the package -- one manually via maturin, and one implicitly via pixi -- and only the pixi one ends up cached. Rather than opt out of pixi management, I suspect we'll have more success leaning in and simply giving ourselves a manual mechanism for forcing pixi to rebuild the package (and update its cache). Then we could get rid of our

For example:
(or maybe this is a combination of arguments to

This also alleviates the first annoyance of the slow dev cycle, since it means that if we don't run the manual task, we continue to just use the cached version and everything is speedy. The only problem I see that this doesn't resolve is allowing power-users to manually specify custom maturin flags via a manual invocation, but I think that's a fair compromise: it's fairly niche (quite possibly only affects me), and I think I can work around it by introducing custom pixi environments that use maturin environment variables or something. Either way, I'm willing to live with the pain there for the time being.
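One possible shape for such an explicit rebuild task, sketched as a `pixi.toml` fragment. The task name and manifest path here are hypothetical, not rerun's actual configuration; `maturin develop` with `--manifest-path` is a real maturin invocation for editable installs:

```toml
[tasks]
# Hypothetical task: force a rebuild of the editable wheel on demand,
# rather than relying on pixi's implicit rebuild during environment setup.
py-rebuild = "maturin develop --manifest-path rerun_py/Cargo.toml"
```

The point of routing the manual build through a pixi task is that the rebuilt artifact lands in the environment pixi manages, instead of racing against a separately maturin-built copy that pixi later overwrites from its cache.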
Hey @jleibs! I'm trying to reproduce your example using this commit and running
If I switch to latest main, I can no longer reproduce this situation. It always picks up the change in the rust bindings correctly. The only difference is that the .so files are not present in site-packages but in the rerun_sdk folder, and a .pth pointer is created in site-packages (which I think is expected, because we are in editable mode?).

Please let me know if you can reproduce it using latest pixi (0.32.1) and what the steps to do it on my side would be. Maybe I'm missing something between the lines. Thanks!
As an update on this: as of pixi 0.34 (likely before but this just happens to be what I tested), things are now working quite well for us. If anybody else is running into a similar issue with maturin-based bindings, I believe we inadvertently worked around the issue when we added type stubs and a types.py to our project (as in: https://pyo3.rs/v0.22.5/python-typing-hints.html?highlight=__init__.py#if-you-need-other-python-files) The net result is that in
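The pattern from that pyo3 typing-hints guide is roughly the layout below. The names are illustrative (rerun's actual layout may differ): the importable package becomes a directory of plain Python files that sit next to the compiled module and its stubs, rather than the bare .so itself:

```
rerun_sdk/
└── rerun/
    ├── __init__.py          # plain Python, re-exports the native symbols
    ├── rerun_bindings.pyi   # type stubs for the compiled module
    └── rerun_bindings.so    # maturin-built native extension
```

With this arrangement the editable install resolves through ordinary Python files, which plausibly explains why the workaround sidestepped the .so caching behavior described earlier in the thread.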
I believe we can close this issue now and I'll open a new one in the future if we find a regression or problem. |
Checks
- I have checked that this issue has not already been reported.
- I have confirmed this bug exists on the latest version of pixi, using `pixi --version`.

Reproducible example
We encountered this issue while working on rerun-io/rerun#5966 (can be reproduced with commit 44412db2519cf2a37b0bd9d951cc4d8630eac037).
-> leads to the same initial error, as if `py-build` had no effect.

Issue description
It actually turns out that `py-build` does install a build-from-source rerun-sdk correctly, but this is reverted by the last call to pixi, possibly due to #998?

Interestingly, activating a shell is a possible work-around:
Another case where it appears to work as expected is in CI, with an environment set for the `setup-pixi` action: https://github.com/rerun-io/rerun/blob/36f00a5b0bd3edf06bc0f4ec65ce58f86a32e458/.github/workflows/reusable_build_examples.yml#L66-L86

Expected behavior
Ideally, `pixi` would make it easy and robust to manually "override" packages in the environments, including via pip, specifically for when there is no other way (e.g. maturin). This would cover at least two workflows:
Workaround
For our particular case, @jleibs just had a workaround idea that appears to work:
- a stub `rerun-sdk` local package somewhere (that would install a differently named empty python file)
- referenced in `pixi.toml` for the `examples` environment

That way, examples defaults to failing (`no module named 'rerun'`), and somehow a subsequent `pixi run -e examples py-build` doesn't end up being overridden.

pyproject.toml:
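The actual pyproject.toml is elided from this thread. A generic stub package along the described lines might look like the following sketch (entirely hypothetical names, with hatchling chosen only as an example build backend):

```toml
# Hypothetical stub: installs under the name rerun-sdk but provides no
# usable 'rerun' module, so imports fail until py-build replaces it.
[project]
name = "rerun-sdk"
version = "0.0.0"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```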