Explore solutions for running skaffold with minikube+containerd #10330
In office hours @afbjorklund mentioned this comment: #9640 (comment), hope that is helpful. |
@priyawadhwa: So in the minikube documentation we don't even include that as an alternative, but only build locally.
Both BuildKit upstream and Skaffold use Docker (and its API) as their main interface to BuildKit... 😕 Even the "custom" builder is quite targeted at Docker, just using the client CLI instead of the daemon API?
For Podman they did go the custom UNIX socket route, but eventually "gave up" and implemented Docker. It also has all the issues of requiring that the user has a custom client binary (docker/podman-remote/buildctl).
There is an implementation (PR) for the in-cluster builder, though: GoogleContainerTools/skaffold#2642
But if we implement https://skaffold.dev/docs/pipeline-stages/builders/custom/, the only missing feature is $PUSH_IMAGE; that shouldn't be too hard to add and is a good addition anyway.
The way it works is that it duplicates some of the work of the various clients:
The benefit of this (duplication) is that the user now doesn't need any client binary... Most of this was detailed in the issue: #4868
The original implementation did a tar stream*, but sadly that doesn't work with BuildKit...
* Compare: |
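For context, wiring such a custom build script into Skaffold would look roughly like the fragment below. This is a sketch, not from the thread: the script and image names are placeholders and the rest of the skaffold.yaml is omitted.

```yaml
# Sketch of Skaffold's custom builder configuration (per artifact).
# "containerd-build.sh" stands in for the proof-of-concept script shown below.
build:
  artifacts:
  - image: myimage            # must match the image name used in the manifests
    custom:
      buildCommand: ./containerd-build.sh
```

Skaffold sets $IMAGE, $BUILD_CONTEXT and $PUSH_IMAGE in the environment of buildCommand, which is what the script relies on.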
While I am adding support for the real command (in Go), here's a quick shell proof-of-concept:
#!/bin/sh
# Custom build script for Skaffold, for minikube containerd runtime.
# See https://skaffold.dev/docs/pipeline-stages/builders/custom/
test -n "$IMAGE" || exit 1
test -d "$BUILD_CONTEXT" || exit 1
options="--oci-worker=false --containerd-worker=true --containerd-worker-namespace=k8s.io"
#minikube ssh -- sudo systemctl start buildkit
minikube ssh -- "sh -c 'pgrep buildkitd || (sudo -b buildkitd $options && sleep 3)'"
TMPDIR=/tmp
archive=$(mktemp)
dir=$(mktemp -d)
tar -cf $archive "$BUILD_CONTEXT"
#minikube scp $archive
minikube ssh --native-ssh=false -- "cat > $archive" < $archive
minikube ssh -- mkdir -p $dir
minikube ssh -- tar -C $dir -xf $archive
output="--output type=image,name=$IMAGE"
if ${PUSH_IMAGE:-false}; then
output="$output,push=true"
fi
#minikube ssh -- docker build -t "$IMAGE" - < $archive
minikube ssh -- sudo buildctl build --frontend=dockerfile.v0 \
--local context=$dir --local dockerfile=$dir $output
if ${PUSH_IMAGE:-false}; then
#minikube ssh -- docker push "$IMAGE"
true
fi
minikube ssh -- rm -rf $dir
minikube ssh -- rm $archive
rmdir $dir
rm $archive
Some nice-to-have features (buildkit.service, minikube scp) are missing, but it seems to work:
$ IMAGE=myimage:latest BUILD_CONTEXT=. sh -x ./containerd-build.sh
+ test -n myimage:latest
+ test -d .
+ options=--oci-worker=false --containerd-worker=true --containerd-worker-namespace=k8s.io
+ minikube ssh -- sh -c 'pgrep buildkitd || (sudo -b buildkitd --oci-worker=false --containerd-worker=true --containerd-worker-namespace=k8s.io && sleep 3)'
WARN[0000] using host network as the default
INFO[0000] found worker "roxndec9uibvcs0hydvko31rr", labels=map[org.mobyproject.buildkit.worker.executor:containerd org.mobyproject.buildkit.worker.hostname:minikube org.mobyproject.buildkit.worker.snapshotter:overlayfs], platforms=[linux/amd64 linux/386]
INFO[0000] found 1 workers, default="roxndec9uibvcs0hydvko31rr"
WARN[0000] currently, only the default worker can be used.
INFO[0000] running server on /run/buildkit/buildkitd.sock
+ TMPDIR=/tmp
+ mktemp
+ archive=/tmp/tmp.qpIzR8R8nD
+ mktemp -d
+ dir=/tmp/tmp.RQcolRQEoS
+ tar -cf /tmp/tmp.qpIzR8R8nD .
+ minikube ssh --native-ssh=false -- cat > /tmp/tmp.qpIzR8R8nD
+ minikube ssh -- mkdir -p /tmp/tmp.RQcolRQEoS
+ minikube ssh -- tar -C /tmp/tmp.RQcolRQEoS -xf /tmp/tmp.qpIzR8R8nD
+ output=--output type=image,name=myimage:latest
+ false
+ minikube ssh -- sudo buildctl build --frontend=dockerfile.v0 --local context=/tmp/tmp.RQcolRQEoS --local dockerfile=/tmp/tmp.RQcolRQEoS --output type=image,name=myimage:latest
[+] Building 1.9s (6/6) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 31B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/busybox:latest 1.8s
=> [1/2] FROM docker.io/library/busybox@sha256:e1488cb900233d035575f0a7787448cb1fa93bed0ccc0d4efc1963d7d72a8f17 0.0s
=> => resolve docker.io/library/busybox@sha256:e1488cb900233d035575f0a7787448cb1fa93bed0ccc0d4efc1963d7d72a8f17 0.0s
=> CACHED [2/2] RUN true 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => exporting manifest sha256:31f282d47b4b7df3045e8c2f6c1dbe927cf05e2de922c35f6397fc09ae6f1f2c 0.0s
=> => exporting config sha256:7ea22498947bc710f715dc79c3526c9fbd12692ce95a6a8cc61d32cb9c66ac90 0.0s
=> => naming to myimage:latest 0.0s
+ false
+ minikube ssh -- rm -rf /tmp/tmp.RQcolRQEoS
+ minikube ssh -- rm /tmp/tmp.qpIzR8R8nD
+ rmdir /tmp/tmp.RQcolRQEoS
+ rm /tmp/tmp.qpIzR8R8nD
Dockerfile
FROM busybox
RUN true
Results:
$ minikube ssh -- sudo ctr -n k8s.io images ls | grep myimage:latest
myimage:latest application/vnd.docker.distribution.manifest.v2+json sha256:31f282d47b4b7df3045e8c2f6c1dbe927cf05e2de922c35f6397fc09ae6f1f2c 748.3 KiB linux/amd64 io.cri-containerd.image=managed |
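As an aside (not part of the thread), a quick way to check that the imported image is actually usable by the kubelet is to run it with image pulling disabled, so containerd must already have it in the k8s.io namespace; the pod name is arbitrary:

```sh
# Smoke test: schedule a pod from the locally built image without pulling it.
kubectl run myimage-test --image=myimage:latest --image-pull-policy=Never --restart=Never -- /bin/true
kubectl get pod myimage-test
kubectl delete pod myimage-test
```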
Hey @afbjorklund, thanks for your guidance! This is all really helpful. Right now, skaffold errors out if it detects minikube+containerd instead of minikube+docker. I want to initially focus on just making skaffold work with minikube+containerd. The easiest way to do this is to add a
Once that's complete, we should consider ways of optimizing the process, which is where a lot of these ideas come in! I have a few thoughts:
I agree that this probably isn't the way we want to go.
Based on this comment and the shell script you provided, it looks like Skaffold users would have to switch to using this custom build script themselves for each image in their application, knowing that they wanted to build against minikube and containerd. Is that correct? Some ideas I had for improving the speed here:
I also think it's important that if we do implement |
Right, I don't think there is a way to use the containerd runtime and still provide a docker daemon that it can talk to?
CRI-O users will have the option of using Podman, since version 3.0 does provide a docker socket for use with SSH. That is, Docker hardcodes the socket path as /var/run/docker.sock, so a symlink needs to be provided for it to work 😞
/run/docker.sock -> /run/podman/podman.sock
They also use different variables,
Building in pods should work, but is also "different". |
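A minimal sketch of that symlink workaround, assuming it is created manually inside the node (the paths are the ones given above):

```sh
# Let a Docker client find the Podman API socket at the path it hardcodes.
sudo ln -s /run/podman/podman.sock /run/docker.sock
```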
We should already have this, as part of our
We might miss the first part, where it would save an image from a local container engine onto disk (e.g. |
Yah I'd considered using
The reason I didn't want to reuse
Adding a new
I think #2 is important in case people are using local builders that aren't docker. |
I actually meant "code from" cache add, rather than trying to use the existing commands in it. Having a
The main problem with 1) is the same as with "env": you have "docker-image" and "podman-image" and whatever-image.
For the 2) local tarball we only have two options: docker and oci. And nobody uses the second option, so that's rather easy...
Having to save | load everything isn't awesome, but that's the way things are at the moment. Have high hopes for "stargazing". |
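To make the save | load cost concrete, here is a rough sketch of that round trip, assuming docker is the local builder and reusing the ssh stdin forwarding from the script above; the tarball name is arbitrary:

```sh
# Export the locally built image to a tarball, then import it into the
# node's containerd under the k8s.io namespace. The image is written out
# and read back in full, which is the overhead being discussed.
docker save myimage:latest -o myimage.tar
minikube ssh --native-ssh=false -- "sudo ctr -n k8s.io images import -" < myimage.tar
rm myimage.tar
```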
Might as well stick with docker as the default runtime in that case, assuming that a proper CRI appears to take over after the dockershim (which is still with us until Kubernetes 1.23 or something) |
You could also leave option 3 open, for runtimes that support it:
That makes it easy to do something like:
The benefit of this is that it doesn't have to touch the disk, not even tmpfs. Especially nice when sending it remotely, like from the host to the build node. |
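A hedged sketch of what that could look like for option 3, streaming the image straight over ssh without writing a tarball on either side (assumes docker on the host and the same --native-ssh=false stdin forwarding used earlier):

```sh
# Pipe "docker save" directly into containerd on the minikube node.
docker save myimage:latest | minikube ssh --native-ssh=false -- "sudo ctr -n k8s.io images import -"
```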
Ah gotcha, sounds good to me :)
That's a really good idea, I'll look into how complex it would be to implement. I'm hoping to at least get |
To any casual Skaffold users reading this: the script above (#10330 (comment)) should already work with minikube 1.17.1. BuildKit support for containerd was added to minikube in release 1.16, and BuildKit support for docker was in minikube 1.4. |
CC: @spowelljr |
With the latest changes added to "minikube build", this can now be done with:
Currently implemented for:
|
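The exact command isn't quoted above, but it is presumably along these lines; treat the name and flags as an assumption that may vary between minikube versions:

```sh
# Assumed usage: build directly against the cluster's container runtime.
minikube image build -t myimage:latest .
```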
in progress |
This is done on minikube's side, just waiting on skaffold to implement. |
Right now skaffold errors out if it detects that minikube is running with containerd
To solve this issue, we need to: