
Explore solutions for running skaffold with minikube+containerd #10330

Closed
priyawadhwa opened this issue Feb 1, 2021 · 17 comments
@priyawadhwa

Right now, skaffold errors out if it detects that minikube is running with containerd.

To solve this issue, we need to:

  • Find a suitable docker-env alternative for the containerd runtime
@priyawadhwa priyawadhwa added this to the v1.18.0 milestone Feb 1, 2021
@priyawadhwa priyawadhwa self-assigned this Feb 1, 2021
@medyagh medyagh added kind/feature Categorizes issue or PR as related to a new feature. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Feb 1, 2021
@medyagh
Member

medyagh commented Feb 1, 2021

In office hours, @afbjorklund mentioned this comment: #9640 (comment). Hope that is helpful.

@afbjorklund
Collaborator

@priyawadhwa :
As you can see from the sample session above, it is quite clunky to use buildctl remotely at the moment.

So in the minikube documentation, we don't even include that as an alternative but only build locally:
https://minikube.sigs.k8s.io/docs/handbook/pushing/#5-building-images-inside-of-minikube-using-ssh

Both BuildKit upstream and Skaffold are using Docker (and its API) as their main interface to BuildKit... 😕

Even the "custom" builder is quite targeted to Docker, just using the client CLI instead of the daemon API?

For Podman they did go the custom UNIX socket route, but eventually "gave up" and implemented the Docker API.
Having to add a new "buildctl-env" command for Minikube feels like adding insult to injury, even if "possible".

And it also has all the issues of requiring that the user has a custom client binary, docker/podman-remote/buildctl.


There is an implementation (PR) for the in-cluster builder, though: GoogleContainerTools/skaffold#2642

But if we implement minikube build, we could use that as a custom build script for Skaffold (for all three CRI):

https://skaffold.dev/docs/pipeline-stages/builders/custom/

The only missing feature is the $PUSH_IMAGE, but that shouldn't be too hard and is a good addition anyway.
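
For concreteness, wiring such a script into Skaffold would look roughly like this (a sketch based on the custom builder docs linked above; the apiVersion, the image name, and the ./containerd-build.sh path are placeholders):

apiVersion: skaffold/v2beta11   # adjust to your Skaffold version
kind: Config
build:
  artifacts:
  - image: myimage              # placeholder image name
    custom:
      buildCommand: ./containerd-build.sh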


The way it works is that it duplicates some of the work of the various clients:

  1. Gather the Dockerfile and the rest of the "build context" (directory), applying .dockerignore exclusions, and create a tarball
  2. scp: Transfer this tarball archive (of the build context) to a temporary directory on the minikube VM / build node
  3. ssh: Build from this archive (now local), using the build command for each CRI: docker build, podman build, buildctl build
  4. Push the results to an external registry (if desired), clean up any temporary directories on client/VM, and so on.

The benefit of this (duplication) is that the user now doesn't need any client binary...

Most of this was detailed in the issue: #4868


The original implementation did a tar stream*, but sadly that doesn't work with BuildKit...
The workaround is to create a temporary tarfile, scp it over, extract it, and pretend it is a directory.

* Compare: tar cf - Dockerfile | minikube ssh --native-ssh=false docker build -
(another sad fact is that the go-native ssh client doesn't handle binary input very well at all...)

@afbjorklund
Collaborator

afbjorklund commented Feb 2, 2021

While I am adding support for the real command (in Go), here's a quick shell proof-of-concept:

#!/bin/sh

# Custom build script for Skaffold, for minikube containerd runtime.
# See https://skaffold.dev/docs/pipeline-stages/builders/custom/

test -n "$IMAGE" || exit 1
test -d "$BUILD_CONTEXT" || exit 1

options="--oci-worker=false --containerd-worker=true --containerd-worker-namespace=k8s.io"

#minikube ssh -- sudo systemctl start buildkit
minikube ssh -- "sh -c 'pgrep buildkitd || (sudo -b buildkitd $options && sleep 3)'"

TMPDIR=/tmp
archive=$(mktemp)
dir=$(mktemp -d)

tar -cf $archive "$BUILD_CONTEXT"

#minikube scp $archive
minikube ssh --native-ssh=false -- "cat > $archive" < $archive

minikube ssh -- mkdir -p $dir
minikube ssh -- tar -C $dir -xf $archive

output="--output type=image,name=$IMAGE"
if ${PUSH_IMAGE:-false}; then
    output="$output,push=true"
fi

#minikube ssh -- docker build -t "$IMAGE" - < $archive
minikube ssh -- sudo buildctl build --frontend=dockerfile.v0 \
--local context=$dir --local dockerfile=$dir $output

if ${PUSH_IMAGE:-false}; then
    #minikube ssh -- docker push "$IMAGE" 
    true
fi

minikube ssh -- rm -rf $dir
minikube ssh -- rm $archive

rmdir $dir
rm $archive

Some nice-to-have features (buildkit.service, minikube scp) are missing, but it seems to work:

$ IMAGE=myimage:latest BUILD_CONTEXT=. sh -x ./containerd-build.sh 
+ test -n myimage:latest
+ test -d .
+ options=--oci-worker=false --containerd-worker=true --containerd-worker-namespace=k8s.io
+ minikube ssh -- sh -c 'pgrep buildkitd || (sudo -b buildkitd --oci-worker=false --containerd-worker=true --containerd-worker-namespace=k8s.io && sleep 3)'
WARN[0000] using host network as the default            
INFO[0000] found worker "roxndec9uibvcs0hydvko31rr", labels=map[org.mobyproject.buildkit.worker.executor:containerd org.mobyproject.buildkit.worker.hostname:minikube org.mobyproject.buildkit.worker.snapshotter:overlayfs], platforms=[linux/amd64 linux/386] 
INFO[0000] found 1 workers, default="roxndec9uibvcs0hydvko31rr" 
WARN[0000] currently, only the default worker can be used. 
INFO[0000] running server on /run/buildkit/buildkitd.sock 
+ TMPDIR=/tmp
+ mktemp
+ archive=/tmp/tmp.qpIzR8R8nD
+ mktemp -d
+ dir=/tmp/tmp.RQcolRQEoS
+ tar -cf /tmp/tmp.qpIzR8R8nD .
+ minikube ssh --native-ssh=false -- cat > /tmp/tmp.qpIzR8R8nD
+ minikube ssh -- mkdir -p /tmp/tmp.RQcolRQEoS
+ minikube ssh -- tar -C /tmp/tmp.RQcolRQEoS -xf /tmp/tmp.qpIzR8R8nD
+ output=--output type=image,name=myimage:latest
+ false
+ minikube ssh -- sudo buildctl build --frontend=dockerfile.v0 --local context=/tmp/tmp.RQcolRQEoS --local dockerfile=/tmp/tmp.RQcolRQEoS --output type=image,name=myimage:latest
[+] Building 1.9s (6/6) FINISHED                                                                                                                                                                         
 => [internal] load build definition from Dockerfile                                                                                                                                                0.0s
 => => transferring dockerfile: 31B                                                                                                                                                                 0.0s
 => [internal] load .dockerignore                                                                                                                                                                   0.0s
 => => transferring context: 2B                                                                                                                                                                     0.0s
 => [internal] load metadata for docker.io/library/busybox:latest                                                                                                                                   1.8s
 => [1/2] FROM docker.io/library/busybox@sha256:e1488cb900233d035575f0a7787448cb1fa93bed0ccc0d4efc1963d7d72a8f17                                                                                    0.0s
 => => resolve docker.io/library/busybox@sha256:e1488cb900233d035575f0a7787448cb1fa93bed0ccc0d4efc1963d7d72a8f17                                                                                    0.0s
 => CACHED [2/2] RUN true                                                                                                                                                                           0.0s
 => exporting to image                                                                                                                                                                              0.0s
 => => exporting layers                                                                                                                                                                             0.0s
 => => exporting manifest sha256:31f282d47b4b7df3045e8c2f6c1dbe927cf05e2de922c35f6397fc09ae6f1f2c                                                                                                   0.0s
 => => exporting config sha256:7ea22498947bc710f715dc79c3526c9fbd12692ce95a6a8cc61d32cb9c66ac90                                                                                                     0.0s
 => => naming to myimage:latest                                                                                                                                                                     0.0s
+ false
+ minikube ssh -- rm -rf /tmp/tmp.RQcolRQEoS
+ minikube ssh -- rm /tmp/tmp.qpIzR8R8nD
+ rmdir /tmp/tmp.RQcolRQEoS
+ rm /tmp/tmp.qpIzR8R8nD

Dockerfile

FROM busybox
RUN true

Results:

$ minikube ssh -- sudo ctr -n k8s.io images ls | grep myimage:latest
myimage:latest                                                                                                  application/vnd.docker.distribution.manifest.v2+json      sha256:31f282d47b4b7df3045e8c2f6c1dbe927cf05e2de922c35f6397fc09ae6f1f2c 748.3 KiB linux/amd64                                                    io.cri-containerd.image=managed 

@priyawadhwa
Author

Hey @afbjorklund thanks for your guidance! This is all really helpful.

Right now, skaffold errors out if it detects minikube+containerd instead of minikube+docker. I want to initially focus on just making skaffold work with minikube+containerd. The easiest way to do this is to add a minikube load command, similar to what kind does, which would load an image on the user's machine into minikube so that it can be used. Do you have thoughts on this?
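
For comparison, here is the kind workflow being referenced, with the proposed minikube command shown as a hypothetical analogue (it does not exist yet at this point):

docker build -t myimage:latest .        # build on the host as usual
kind load docker-image myimage:latest   # kind: copy the image into the cluster nodes
minikube load myimage:latest            # proposed minikube analogue (hypothetical)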

Once that's complete, we should consider ways of optimizing the process, which is where a lot of these ideas come in! I have a few thoughts:

Having to add a new "buildctl-env" command for Minikube feels like adding insult to injury, even if "possible".

I agree that this probably isn't the way we want to go.

But if we implement minikube build, we could use that as a custom build script for Skaffold (for all three CRI):

Based on this comment and the shell script you provided, it looks like Skaffold users would have to switch to using this custom build script themselves for each image in their application, knowing that they wanted to build against minikube and containerd. Is that correct?

Some ideas I had for improving the speed here:

  • Using a local registry that skaffold could push to and that minikube could easily access (see the sketch below)
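
A rough sketch of the registry idea, assuming the existing registry addon and the port-forward approach from the minikube "pushing images" docs (the image name is a placeholder):

minikube addons enable registry
# expose the in-cluster registry on localhost:5000 while building/pushing
kubectl port-forward --namespace kube-system service/registry 5000:80 &
# push the locally built image; manifests then reference localhost:5000/myimage
docker tag myimage:latest localhost:5000/myimage:latest
docker push localhost:5000/myimage:latest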

I also think it's important that if we do implement minikube load first, we benchmark the amount of time it takes. Having some way of measuring these things would help us figure out what the best solution is.

@afbjorklund
Collaborator

afbjorklund commented Feb 2, 2021

Based on this comment and the shell script you provided, it looks like Skaffold users would have to switch to using this custom build script themselves for each image in their application, knowing that they wanted to build against minikube and containerd. Is that correct?

Right, I don't think there is a way to use the containerd runtime and still provide a docker daemon that it can talk to?
If one wants to go that route, then one might as well use the docker runtime (which will in turn start a containerd...)

❌ Exiting due to MK_USAGE: The docker-env command is only compatible with the "docker" runtime, but this cluster was configured to use the "containerd" runtime.

CRI-O users will have the option of using Podman, since version 3.0 does provide a docker socket for use with SSH.
But it still won't work to run minikube podman-env, because the syntax and URL are slightly different between the two.

i.e. Docker will hardcode the socket path as /var/run/docker.sock, so a symlink needs to be provided for it to work 😞

/run/docker.sock -> /run/podman/podman.sock

They also use different variables, DOCKER_HOST vs CONTAINER_HOST. But that was crio/podman, not containerd/buildkit...
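
To illustrate the mismatch (a sketch; the VM IP and ssh user are placeholders for whatever minikube actually uses):

# inside the VM: podman 3.0 serves a Docker-compatible API, but the docker CLI
# hardcodes the socket path, hence the symlink mentioned above
sudo ln -s /run/podman/podman.sock /run/docker.sock

# on the host: same idea, different variable names and URL shapes
export DOCKER_HOST=ssh://docker@192.168.49.2                               # docker: socket path implied
export CONTAINER_HOST=ssh://docker@192.168.49.2/run/podman/podman.sock    # podman-remote: path in the URL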

Building in pods should work, but that is also "different".

@afbjorklund
Collaborator

afbjorklund commented Feb 2, 2021

The easiest way to do this is to add a minikube load command, similar to what kind does, which will load an image on the users machine into minikube so that it can be used. Do you have thoughts on this?

We should already have this, as part of our minikube cache add. So it's more a question of stripping out the config update and such, and just doing the transfer and the actual load (using our cruntime abstraction).

Usage:
  kind load [command]

Available Commands:
  docker-image  Loads docker image from host into nodes
  image-archive Loads docker image from archive into nodes

We might miss the first part, where it would save an image from a local container engine onto disk (e.g. docker save). But it should be rather straightforward to implement; normally we would pull from a registry...
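
Manually, the save-and-load path for the containerd runtime amounts to something like this (a sketch; the real command would go through the cruntime abstraction, and the cat-over-ssh trick mirrors the script earlier in this thread):

docker save myimage:latest -o myimage.tar
# no minikube scp yet, so reuse the ssh trick to transfer the tarball
minikube ssh --native-ssh=false -- "cat > /tmp/myimage.tar" < myimage.tar
# import into containerd's k8s.io namespace so the kubelet can see it
minikube ssh -- sudo ctr -n k8s.io images import /tmp/myimage.tar
minikube ssh -- rm /tmp/myimage.tar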

@priyawadhwa
Author

Yah I'd considered using podman-env as well and even using crio as our default runtime, but I think we should stick with containerd since the community seems to be moving more towards it.

We should already have this, as part of our minikube cache add. So more of a question of stripping out the config update and such, and just do the transfer and the actual load (using our cruntime abstraction).

The reason I didn't want to reuse cache add is that the way the cache currently works, all added images are saved to the minikube config and automatically loaded into new minikube clusters. It doesn't really make sense to add application-specific images into the cache for all clusters. We would also then need to manually do a cache remove when the cluster is shut down.

Adding a new minikube load command would get around these issues and is also more intuitive for users. minikube load should be able to take in the following:

  1. Image in a local docker daemon (we would save it as a tarball, copy it in and then load it)
  2. Path to local tarball (we would copy it in and then load it)

I think #2 is important in case people are using local builders that aren't docker.

@afbjorklund
Collaborator

afbjorklund commented Feb 2, 2021

I actually meant "code from" cache add, rather than trying to use the existing commands in it. Having a load sounds good.

The main problem with 1) is the same as with "env": you have "docker-image" and "podman-image" and whatever-image.

For the 2) local tarball we only have two options: docker and oci. And nobody uses the second option, so that's rather easy...

Having to save | load everything isn't awesome, but that's the way things are at the moment. I have high hopes for "stargazing".

@afbjorklund
Collaborator

afbjorklund commented Feb 2, 2021

Yah I'd considered using podman-env as well and even using crio as our default runtime, but I think we should stick with containerd since the community seems to be moving more towards it.

Might as well stick with docker as the default runtime in that case, assuming that a proper CRI appears to take over after the dockershim (which is still with us until Kubernetes 1.23 or something)

@afbjorklund
Collaborator

afbjorklund commented Feb 2, 2021

minikube load should be able to take in the following:

  1. Image in a local docker daemon (we would save it as a tarball, copy it in and then load it)
  2. Path to local tarball (we would copy it in and then load it)

You could also leave option 3 open, for runtimes that support it:

  3. Image on standard input stream (usually option 2 with "-" as the path)

That makes it easy to do something like:

docker save IMAGE | minikube load -

The benefit of this is that it doesn't have to touch the disk, not even tmpfs.

Especially nice when sending it remotely, like from the host to the build node.

@priyawadhwa
Author

I actually meant "code from" cache add, rather than trying to use the existing commands in it. Having a load sounds good.

Ah gotcha, sounds good to me :)

  3. Image on standard input stream (usually option 2 with "-" as the path)

That's a really good idea, I'll look into how complex it would be to implement. I'm hoping to at least get minikube load in for the next release, and maybe this could get in as well.

@afbjorklund
Collaborator

To any casual Skaffold users reading this, the script above (#10330 (comment)) should already work with minikube 1.17.1

BuildKit support for containerd was added to minikube in release 1.16, and BuildKit support for docker was in minikube 1.4.

@medyagh
Member

medyagh commented Feb 26, 2021

CC: @spowelljr

@afbjorklund
Collaborator

afbjorklund commented Mar 11, 2021

With the latest changes added to "minikube build", this can now be done with:

minikube image build --tag=$IMAGE --push=$PUSH_IMAGE $BUILD_CONTEXT

Currently implemented for:

  • docker (docker)
  • crio (podman)
  • containerd (buildkitd)
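
So the proof-of-concept script from earlier in this thread effectively collapses into a one-line custom build script for Skaffold (a sketch using only the flags shown above):

#!/bin/sh
# custom build script for Skaffold, now that minikube image build exists
test -n "$IMAGE" || exit 1
test -d "$BUILD_CONTEXT" || exit 1
minikube image build --tag="$IMAGE" --push="${PUSH_IMAGE:-false}" "$BUILD_CONTEXT"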

@medyagh medyagh modified the milestones: v.1.19.0, v1.20.0-candidate Mar 30, 2021
@medyagh
Member

medyagh commented Mar 30, 2021

in progress

@medyagh
Member

medyagh commented May 3, 2021

@spowelljr spowelljr added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Jun 14, 2021
@sharifelgamal
Collaborator

This is done on minikube's side, just waiting on skaffold to implement.
