
Update go-containerregistry #599

Merged: 2 commits into GoogleContainerTools:master from the vendor branch, Mar 6, 2019

Conversation

priyawadhwa
Collaborator

Update go-containerregistry since it can now handle image names of the
format repo:tag@digest.

Should fix #535.

Thanks @viceice for the fix!

@viceice

viceice commented Mar 5, 2019

@priyawadhwa Is the failing check a problem?

@priyawadhwa
Collaborator Author

priyawadhwa commented Mar 6, 2019

Hmm, it seems like go-containerregistry is complaining about pushing an image with a bunch of identical layers (Dockerfile__test_copy_same_file_many_times):

failed to push to destination gcr.io/kaniko-test/kaniko-dockerfile_test_copy_same_file_many_times:latest:
UNKNOWN: Unable to write blob sha256:2a72f5f901f1612717dc31200476234795bdef01972d9de90edc0e7eab561402

If I cut the number of COPY context/foo /foo instructions in that Dockerfile down to ~5 it works, but at ~10 it fails to push again.

cc @dlorenc, @jonjohnsonjr have you seen this before?

@jonjohnsonjr
Contributor

That's a real error from GCR, probably this: https://cloud.google.com/storage/docs/key-terms#immutability

cc @imjasonh, maybe the streaming layer stuff introduced this? We could handle errors better with retries, but ideally we wouldn't be trying to PUT the same blobs. I guess in real-life images this would never happen.

@priyawadhwa what is this testing? Looking at some of the output artifacts, it's really bizarre to me that not all the layers end up being the same.
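For reference, one way to check how many of the image's layers really share a blob is to count layer digests with go-containerregistry. A minimal sketch; the image reference is just the test tag from the error above, and it assumes the repository is readable without extra auth options:

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
	// The test image from the failing push; swap in whatever image you want to inspect.
	ref, err := name.ParseReference("gcr.io/kaniko-test/kaniko-dockerfile_test_copy_same_file_many_times:latest")
	if err != nil {
		log.Fatal(err)
	}
	img, err := remote.Image(ref)
	if err != nil {
		log.Fatal(err)
	}
	layers, err := img.Layers()
	if err != nil {
		log.Fatal(err)
	}
	// Count how many layers map to each blob digest; duplicate blobs show up
	// as a count greater than one.
	counts := map[v1.Hash]int{}
	for _, l := range layers {
		d, err := l.Digest()
		if err != nil {
			log.Fatal(err)
		}
		counts[d]++
	}
	for d, n := range counts {
		fmt.Printf("%s appears %d time(s)\n", d, n)
	}
}
```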

@imjasonh
Collaborator

imjasonh commented Mar 6, 2019

AFAIK Kaniko doesn't use stream.Layer at all (yet?).

Maybe there's some problem with uploading all the layers in concurrent goroutines, and when there are >10 identical blobs uploading at once they're more likely to conflict during a write? Just a guess, I don't actually know how GCR works.

@jonjohnsonjr
Contributor

remote.Write used to use BlobSet to dedupe the layers; that deduping was removed in this PR: https://github.com/google/go-containerregistry/pull/301/files

We could do something similar to this but with BlobSet:
https://github.com/google/go-containerregistry/blob/4aac97bd085d96caedbc8213ddb6f489b7eb6706/pkg/v1/remote/write.go#L68
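For illustration, here is a minimal sketch of the digest-based dedupe idea on the upload side. The helper name is made up (it is not go-containerregistry API); it just keeps one layer per distinct digest so each blob only gets PUT once:

```go
package layerdedupe

import (
	v1 "github.com/google/go-containerregistry/pkg/v1"
)

// dedupeLayers returns the input layers with duplicates (by blob digest)
// removed, so an uploader only pushes each distinct blob once.
func dedupeLayers(layers []v1.Layer) ([]v1.Layer, error) {
	seen := map[v1.Hash]struct{}{}
	var unique []v1.Layer
	for _, l := range layers {
		d, err := l.Digest()
		if err != nil {
			return nil, err
		}
		if _, ok := seen[d]; ok {
			continue // this blob is already queued for upload
		}
		seen[d] = struct{}{}
		unique = append(unique, l)
	}
	return unique, nil
}
```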

@jonjohnsonjr
Contributor

From GCS docs:

a single particular object can only be updated or overwritten up to once per second. For example, if you have an object bar in bucket foo, then you should only upload a new copy of foo/bar about once per second. Updating the same object faster than once per second may result in 429 Too Many Requests errors.
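Since retries came up above, here is a minimal backoff sketch for the 429 case. uploadBlob is a hypothetical stand-in for whatever performs the actual blob PUT; the status handling and attempt count are just for illustration:

```go
package blobretry

import (
	"fmt"
	"net/http"
	"time"
)

// uploadWithRetry retries a blob upload with exponential backoff when the
// registry (or the GCS bucket behind it) answers 429 Too Many Requests.
func uploadWithRetry(uploadBlob func() (*http.Response, error)) error {
	backoff := time.Second // GCS allows roughly one overwrite of the same object per second
	for attempt := 0; attempt < 5; attempt++ {
		resp, err := uploadBlob()
		if err != nil {
			return err
		}
		resp.Body.Close()
		switch {
		case resp.StatusCode == http.StatusTooManyRequests:
			time.Sleep(backoff)
			backoff *= 2
		case resp.StatusCode >= 400:
			return fmt.Errorf("blob upload failed: %s", resp.Status)
		default:
			return nil
		}
	}
	return fmt.Errorf("blob upload still rate limited after retries")
}
```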

@jonjohnsonjr
Contributor

Try again!

@dlorenc
Collaborator

dlorenc commented Mar 6, 2019

Whoohoo!

@dlorenc dlorenc merged commit 9693215 into GoogleContainerTools:master Mar 6, 2019
@priyawadhwa priyawadhwa deleted the vendor branch March 6, 2019 18:39
@priyawadhwa
Collaborator Author

Thanks @jonjohnsonjr & @imjasonh!

This Dockerfile is meant to be testing snapshotting (ref #289)

Successfully merging this pull request may close these issues.

Building a stage using image with both tag and SHA digest fails