Layer hashes always changing when building on Gitlab CI even when no reason to #3973
Probably solved already, but this one is quite simple: you first need to pull the image from the registry into your dind sidecar; only then will you be able to cache from it.
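The pull-before-build suggestion could look something like this in a `.gitlab-ci.yml` (a minimal sketch assuming a docker-in-docker service; the image versions and the `latest` tag are illustrative, not taken from the reproducer):

```yaml
build:
  image: docker:24
  services:
    - docker:24-dind
  script:
    # --cache-from only considers images already present in the daemon,
    # so pull the previously pushed image into the dind sidecar first.
    # "|| true" lets the very first pipeline (no image yet) proceed.
    - docker pull "$CI_REGISTRY_IMAGE:latest" || true
    - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
```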
Will it not try to auto-pull, like `FROM` does?
I've tested your hypothesis and it's incorrect: even if I pull the image explicitly, the hashes still change. I've created a repo reproducing this issue; it pulls the image correctly and yet it always repushes the last three layers:
This makes every CI pipeline so slow on GitLab. +1
I had an interesting observation yesterday. We also face this issue, where builds cache inconsistently on GitLab. I dug a bit and found #37304, but alas the fix did not do anything. So I ran some tests to see when the caching works. To my surprise, the only consistent result I got was when the same shared runner was used 4 times in a row: each build used the cache as it should. Keep in mind build 1 created the cache image from my build stage; builds 2, 3 and 4 properly used the cache. On my 5th build another runner was selected and the cache stopped working, with the same symptom of changed layer hashes. So could this be runner/host related?
If it is, it shouldn't be: the same image built for the same arch on two different hosts shouldn't produce different hashes; if it does, it's not a reproducible build by definition. Note in my example
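The reproducibility point can be demonstrated without Docker at all: an archive of byte-identical file contents gets a different digest as soon as file metadata such as the mtime changes, which is one way rebuilt image layers end up with new hashes (a sketch using plain `tar`; the paths and timestamps are made up for illustration):

```shell
# Two tar archives of a byte-identical file, differing only in mtime.
mkdir -p /tmp/repro/app
printf 'identical contents\n' > /tmp/repro/app/hello.txt
touch -t 202001010000 /tmp/repro/app/hello.txt
tar -C /tmp/repro -cf /tmp/repro/layer1.tar app
touch -t 202001020000 /tmp/repro/app/hello.txt   # content unchanged, metadata changed
tar -C /tmp/repro -cf /tmp/repro/layer2.tar app
sha256sum /tmp/repro/layer1.tar /tmp/repro/layer2.tar   # the two digests differ
```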
It seems to be due to a private runner using the Kubernetes executor in my case. Related to the ticket @TheoParkos shared.
@vasilvestre you're saying the umask fixed it for you?
I can't benefit from it: I use the Kubernetes executor, and our GitLab version is old and does not have the fix yet.
I've added `FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR: true` under `variables:` in my GitLab CI file and it didn't work; the newly built layers still always get pushed (meaning their hashes changed).
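For reference, the feature flag from the comment above is set as a top-level CI variable (a placement sketch only; as the commenter notes, it did not resolve the hashing issue in their case):

```yaml
variables:
  FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR: "true"
```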
Edit: reproducer https://gitlab.com/dkarlovi/docker-build-hash
If I build this image, it gets pushed into GitLab's registry, and I then rerun the workflow, all the layers in my part of the image get completely new hashes, making them get repushed every time, as if I'm building a completely new image:
Sample Dockerfile
Each time this pipeline runs (successfully), the output is
Note that this happens even for the `WORKDIR` layer, so it's each and every step in my image, but it recognizes the `alpine` layer from before; that part works. It seems to be something specific to the layers I build.
Sample `.gitlab-ci.yml`
Build metadata logs
First run metadata log
Second run metadata log
Full run outputs
First run log
Second run log