[WIP][SPARK-46549][INFRA] Cache the Python dependencies for SQL tests #44546
Conversation
Force-pushed from b7ffaf8 to 73dc435.
@@ -0,0 +1,11 @@
# PySpark dependencies for SQL tests
numpy==1.26.2
The requirements file format allows for specifying dependency versions using logical operators (for example chardet>=3.0.4) or specifying dependencies without any versions. In this case the pip install -r requirements.txt command will always try to install the latest available package version. To be sure that the cache will be used, please stick to a specific dependency version and update it manually if necessary.
specify versions according to the suggestion in https://github.com/actions/setup-python?tab=readme-ov-file#caching-packages-dependencies
actually, I think maybe we should always specify the versions
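For reference, a minimal sketch of what the suggested setup-python caching configuration could look like. This is an illustrative workflow snippet, not the exact change in this PR; the requirements-file path (`dev/requirements-sql-tests.txt`) and the Python version are assumptions for the example:

```yaml
# Illustrative step: let actions/setup-python cache pip downloads.
# The cache key is derived from hashing the file(s) listed in
# cache-dependency-path, so pinning exact versions keeps the key stable
# and lets the cache be reused across runs.
- name: Install Python
  uses: actions/setup-python@v5
  with:
    python-version: '3.9'
    cache: 'pip'
    cache-dependency-path: dev/requirements-sql-tests.txt  # assumed path
```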
Related prior discussion on pinning development dependencies: #27928 (review)
> actually, I think maybe we should always specify the versions
I agree with this, and this is something I tried to do in the PR I linked to just above, but several committers were against it.
When I look at the number of PRs related to pinning dev dependencies over the past three years, I wonder if committers still feel the same way today.
Not pinning development dependencies creates constant breakages that can pop up whenever an upstream library releases a new version. When we pin dependencies, by contrast, we choose when to upgrade and deal with the potential breakage.
Yeah, I don't have a great solution for this. Maybe we could have CI-dedicated dependency files, since we now have too many dependencies in CI across too many matrix combinations ... but I'm not sure. At least I'm no longer strongly against this idea.
Would you be open to my making another attempt at the approach in #27928? (@zhengruifeng can also take this on if they prefer, of course.)
Basically, we have two sets of development dependencies:
- `requirements.txt`: direct dependencies only, kept as flexible as possible; this is what devs install on their laptops.
- `requirements-pinned.txt`: pinned dependencies derived automatically from `requirements.txt` using pip-tools; this is used for CI.
I know this adds a new tool that non-Python developers may not be familiar with (pip-tools), but it's extremely easy to use, has been around a long time, and is in use by many large projects, the most notable of which is Warehouse, the project that backs PyPI.
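As a rough sketch of how the two-file workflow could look (the file names follow the proposal above and the commands are generic pip-tools usage, not an existing Spark script):

```sh
# Install pip-tools, which provides the pip-compile command.
pip install pip-tools

# Resolve the flexible direct dependencies in requirements.txt into a
# fully pinned file, including transitive dependencies with exact versions.
pip-compile --output-file requirements-pinned.txt requirements.txt

# CI installs from the pinned file, so the resolved versions (and any
# cache keyed on this file) only change when the file is regenerated.
pip install -r requirements-pinned.txt
```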
@nchammas I just noticed the previous discussion #27928.
I personally prefer using requirements.txt files with pinned versions. One reason is that the dependencies are actually cached in the Docker image, and I was confused about the versions used in CI from time to time, e.g.
we used a cached RUN python3.9 -m pip install numpy pyarrow ... layer before, and when pyarrow 13 was released on 2023-08-23, I didn't know this release broke PySpark until the cached image was refreshed (on 2023-09-13).
But I don't feel very strongly about it and defer to @HyukjinKwon and @dongjoon-hyun on this.
We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
What changes were proposed in this pull request?
Enable the caching provided by `setup-python`.
Why are the changes needed?
avoid downloading the Python dependencies if possible
Does this PR introduce any user-facing change?
no, infra-only
How was this patch tested?
CI; manually checked:
1. First run to cache the dependencies:
https://github.com/zhengruifeng/spark/actions/runs/7363727839/job/20043467880
2. Second run to reuse the cache:
https://github.com/zhengruifeng/spark/actions/runs/7367425047/job/20050701350
Was this patch authored or co-authored using generative AI tooling?
no