Look for usable on-cluster credentials using k8schain #243
Conversation
Just tried this on my cluster, and got this:
Reproducing with this pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: build-pod
spec:
  restartPolicy: Never
  initContainers:
  - image: gcr.io/cloud-builders/git
    name: 'git-clone'
    args: ['clone', 'https://gist.github.com/f1fb27779bc17009198e2cef946bf749.git', '/workspace']
    volumeMounts:
    - mountPath: /workspace
      name: workspace
  - image: gcr.io/jasonhall-kube/executor
    name: kaniko
    workingDir: /workspace
    args:
    - --destination=gcr.io/jasonhall-kube/built-with-kaniko-fixed
    volumeMounts:
    - mountPath: /workspace
      name: workspace
  containers:
  - image: ubuntu
    name: nop
    args: ['echo', 'done']
  volumes:
  - name: workspace
    emptyDir: {}

@mattmoor any ideas about what's going wrong here? I'm hoping that
It looks like you changed the auth for getting base images, but not for pushing at the end of the build. That might be the issue?
Well that would certainly help, wouldn't it 😄 Unfortunately, I don't think that's the problem, since the failure seems to happen long before the image is built or pushed. The full logs from the failing container are:
pkg/executor/executor.go (Outdated)

@@ -91,11 +90,12 @@ func DoBuild(k KanikoBuildArgs) (name.Reference, v1.Image, error) {
 	if err != nil {
 		return nil, nil, err
 	}
-	auth, err := authn.DefaultKeychain.Resolve(ref.Context().Registry)
+	k8sc, err := k8schain.NewInCluster(k8schain.Options{})
NewInCluster will use the pod's service account identity to try to talk to the API server and read secrets; you want this instead, which I now realize isn't documented. Oops.
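A minimal sketch (not the PR's code) of what the NewInCluster path amounts to, using the same go-containerregistry calls that appear in this diff; the Options fields shown are the library's, but exact signatures may vary across versions. The key point is that the keychain is built by asking the API server for the service account's image pull secrets, so the pod's service account needs permission to read those secrets:

package build

import (
	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/authn/k8schain"
	"github.com/google/go-containerregistry/pkg/name"
)

func resolveInCluster(image string) (authn.Authenticator, error) {
	// NewInCluster uses the pod's service account to talk to the API server
	// and read the referenced image pull secrets, so that service account
	// needs RBAC access to those secrets. Empty Options fall back to the
	// "default" namespace and service account.
	kc, err := k8schain.NewInCluster(k8schain.Options{
		Namespace:          "default",
		ServiceAccountName: "default",
	})
	if err != nil {
		return nil, err
	}

	ref, err := name.ParseReference(image, name.WeakValidation)
	if err != nil {
		return nil, err
	}

	// Mirrors the authn.DefaultKeychain.Resolve call this diff replaces.
	return kc.Resolve(ref.Context().Registry)
}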
Oh okay. I'll give that a try. Sounds like we need better docs about which is useful in which scenarios, and maybe a rename, since I figured NewInCluster would be the one I wanted while trying to auth from in the cluster.
That worked! 🎉
This LGTM, thanks for jumping on this @imjasonh
pkg/executor/executor.go (Outdated)

 	if err != nil {
 		return nil, nil, err
 	}
-	sourceImage, err = remote.Image(ref, remote.WithAuth(auth), remote.WithTransport(http.DefaultTransport))
+	kc := authn.NewMultiKeychain(authn.DefaultKeychain, k8sc)
+	sourceImage, err = remote.Image(ref, remote.WithAuthFromKeychain(kc), remote.WithTransport(http.DefaultTransport))
FYI, remote.WithTransport(http.DefaultTransport) is a no-op.
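In other words (sketch only, using the option names from this diff), these two helpers should behave identically, since remote already falls back to http.DefaultTransport when no transport option is given:

package build

import (
	"net/http"

	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// pullWithExplicitTransport passes http.DefaultTransport explicitly.
func pullWithExplicitTransport(ref name.Reference, kc authn.Keychain) (v1.Image, error) {
	return remote.Image(ref, remote.WithAuthFromKeychain(kc), remote.WithTransport(http.DefaultTransport))
}

// pullWithDefaultTransport omits the option; remote uses http.DefaultTransport
// by default, so this does the same thing as the helper above.
func pullWithDefaultTransport(ref name.Reference, kc authn.Keychain) (v1.Image, error) {
	return remote.Image(ref, remote.WithAuthFromKeychain(kc))
}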
@priyawadhwa Rebased and updated.
Very eager to see this land :) thanks @imjasonh and @priyawadhwa
LGTM, thanks for making this change!
This updates deps and uses k8schain and NewMultiKeychain to attempt to authorize pushes using available image pull secrets.

This change isn't ready to merge; I just wanted to get some initial feedback on the plan, and maybe ideas about how I should go about testing this.
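For anyone following along, here is a rough end-to-end sketch of the shape described above: compose the default keychain with k8schain and use the result for both the base-image pull and the final push. It is illustrative only; the destination reference is made up, the constructor choice follows the outdated diff above (see the review discussion about which k8schain constructor is appropriate in-cluster), and the options-style remote.Write call assumes a newer go-containerregistry API than may be vendored here.

package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/authn/k8schain"
	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
	// Keychain backed by the cluster's image pull secrets.
	k8sc, err := k8schain.NewInCluster(k8schain.Options{})
	if err != nil {
		log.Fatal(err)
	}
	// Try the local docker config first, then fall back to cluster secrets.
	kc := authn.NewMultiKeychain(authn.DefaultKeychain, k8sc)

	base, err := name.ParseReference("gcr.io/cloud-builders/git", name.WeakValidation)
	if err != nil {
		log.Fatal(err)
	}
	// Pull the base image, resolving credentials through the keychain.
	img, err := remote.Image(base, remote.WithAuthFromKeychain(kc))
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical destination; in kaniko this would come from --destination.
	dst, err := name.ParseReference("gcr.io/example-project/built-with-kaniko", name.WeakValidation)
	if err != nil {
		log.Fatal(err)
	}
	// Push the pulled image with the same keychain.
	if err := remote.Write(dst, img, remote.WithAuthFromKeychain(kc)); err != nil {
		log.Fatal(err)
	}
}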