How do I properly use the caching feature? #42
I think it's expected to have a cache miss on the first run, and then once a day after that (by default the cache is only valid for one day). Did you see cache misses after the first run? Maybe @jonashaag can give some more details!
Re: what to choose, see #38 (comment); curious to hear the thoughts of both of you on that as well.
This feature is actually amazing!! In hindsight the docs do make sense, I guess. Maybe I was just put off by the error that appeared when I did not expect it. Or maybe the error could be formulated more gently, like 'did not find a cache yet, but it may be available on your next run'. I am still unsure about one thing though, and that is when I have:

```yaml
jobs:
  standard:
    strategy:
      fail-fast: false
      matrix:
        runs-on: [ubuntu-latest, macos-latest, windows-latest]
    defaults:
      run:
        shell: bash -l {0}
    name: ${{ matrix.runs-on }} • x64 ${{ matrix.args }}
    runs-on: ${{ matrix.runs-on }}
    steps:
    - name: Basic GitHub action setup
      uses: actions/checkout@v2
    - name: Set conda environment
      uses: mamba-org/provision-with-micromamba@main
      with:
        environment-file: environment.yaml
        environment-name: myenv
        cache-env: true
```

Is it actually clever enough to cache per platform? I ask because, for example, after the last run in the PR, I still get a cache miss on the main branch.
Ref: https://github.com/tdegeus/GooseBib
Yes, it caches per platform by default. We should add that to the docs, and also make that message a bit less scary, as you suggested.
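For illustration only (this is not necessarily how provision-with-micromamba builds its key internally): with the generic actions/cache action, a per-platform cache is normally obtained by putting the runner OS into the cache key, so each entry of the runs-on matrix gets its own cache entry. The cached path below is an assumption for this sketch.

```yaml
# Illustration only: a manual per-platform cache with the generic actions/cache
# action. provision-with-micromamba does the equivalent for you when
# cache-env: true is set. The cached path is an assumption for this sketch.
- name: Cache conda environment (manual illustration)
  uses: actions/cache@v3
  with:
    path: ~/micromamba/envs/myenv
    # runner.os expands to "Linux", "macOS", or "Windows", so each platform in
    # the matrix resolves to a different cache key (and thus a separate entry).
    key: micromamba-env-${{ runner.os }}-${{ hashFiles('environment.yaml') }}
```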
One more clarification on this issue. The caching works on:
It does, however, not work on the main branch after merging a PR (whereas it could have used the last known environment).
Can you show an example?
https://github.com/tdegeus/GooseMPL (last run on main) and tdegeus/GooseMPL#39
The latter one has a failed CI run. Can you send links to a pair of successful Actions runs that show this behavior?
Could it be that a previous CI run wasn't finished while you made a new commit?
So, I would expect that you now have a non-empty
Testing that claim here: tdegeus/FrictionQPotFEM#142. If it's indeed the case, do you have any suggestions on how to improve the messaging so that it's less confusing? The
Hmm, the timestamps are far apart, maybe what I hypothesised above isn't true:
I was just about to comment that ;)
Indeed. I'm really only seeing a failure to hit the cache when (on a separate day) the CI first runs (and succeeds) in a PR, and then reruns on main after the PR is merged.
The cache is invalidated daily by default, so this is no surprise.
Yes. I meant: starting without a cache.
Aha, I see what you mean. Actually that's unexpected, but maybe not so unexpected, because I just found this:
I'm interpreting this as: cache entries created in a PR will not be available after merging. Which is a bummer when you only have one PR a day or so; you will never actually hit the cache. Maybe we should increase the default cache TTL.
I see. So what I see is indeed expected, which is indeed sometimes a pity. Thanks for clarifying this. Increasing the time could help, but I find it difficult to say what would be reasonable. One cannot have a bunch of data hanging around for too long either.
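One way to experiment with a longer effective lifetime, assuming the action exposes a custom cache key input (the name cache-env-key below is an assumption; verify against the current README), is to key the cache on the ISO week instead of the day:

```yaml
# Hedged sketch: assumes a cache-env-key input exists for overriding the
# default (daily) cache key; verify against the action's README.
- name: Compute a weekly cache stamp
  shell: bash
  run: echo "DATE_STAMP=$(date +%G-week%V)" >> "$GITHUB_ENV"

- name: Set conda environment
  uses: mamba-org/provision-with-micromamba@main
  with:
    environment-file: environment.yaml
    environment-name: myenv
    cache-env: true
    # Keying on the ISO week keeps one cache entry valid for roughly a week
    # instead of rolling over every day.
    cache-env-key: env-${{ env.DATE_STAMP }}-${{ hashFiles('environment.yaml') }}
```

Note that this only changes how often the key rolls over; it does not change GitHub's branch scoping, so a cache created in a PR would still not be visible on main after merging.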
Thanks for the caching feature! It seems to work well, but for environment sizes of several hundred MB we experience quite long times for packing and unpacking the cache (e.g., 2 minutes for 450 MB). Could this be due to the compression library that is being used internally?
Are you experiencing those long durations on Windows? I think a lot of time on Windows is spent in
Assuming this is what you're working on atm, I will have a look: scipp/scipp#2512
Yes, it is Windows. It is slightly faster than without the cache, maybe 30-50%? (There is a lot of noise in the timings, so I cannot tell for sure.)
In particular scipp/scipp#2512 (comment), which lists some "benchmark" results.
I've seen much larger speedups on non-Windows.
I'll give those a try then (Windows has always been slowing us down, so that is what I tested first). Cheers for the useful input!
Good luck! On Windows I'm pretty sure the bottleneck is just that the filesystem is SO slow. GH should maybe switch to zstd there as well though, for faster decompression.
From the readme it is not very clear to me how to enable the download cache.
I tried:
but this gives
Also it is unclear to me whether I should rather use `cache-env` or `cache-downloads`.
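For what it's worth, based only on the option names that come up in this thread (so treat the exact behavior as an assumption and check the README), a minimal configuration enabling both caches might look like this:

```yaml
# Minimal sketch based on the inputs mentioned in this thread; exact defaults
# and semantics may differ, see the action's README.
- name: Set conda environment
  uses: mamba-org/provision-with-micromamba@main
  with:
    environment-file: environment.yaml
    environment-name: myenv
    # cache-downloads presumably caches only the downloaded package tarballs,
    # so the environment still has to be re-created on every run.
    cache-downloads: true
    # cache-env presumably caches the fully created environment, which tends
    # to be the bigger time saver; the two options can also be combined.
    cache-env: true
```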