Provide a different way of building images with minikube #4868
Comments
+1
Nice talk about the general problem space (i.e. beyond just what minikube can provide and uses):
It could be implemented in the same way as […] This suggestion would mostly just save having to do […]

A problem point with this would be the context for any command other than build, since the user would expect their CWD on the host to be the context, but in reality it would be the CWD inside the VM. As long as a user is in their homedir it could be mapped of course (given that the homedir is mounted under most vm-drivers, though not all?).
It was implemented the same way, the only difference being the use of .tgz and .zip instead of .exe. See 8983b8d
The use case was only build, but it still remains to handle the "build context" for podman. As long as we are using the docker client on the host, it will transparently handle directories. https://docs.docker.com/engine/reference/commandline/build/#build-with-path
I implemented the podman version now, although it does not transport any build context. So it can only build files that are already on the VM, or build tarballs provided by a URL.

Docker

The Docker version is sending the command over 2376 to the docker daemon:

docker $(minikube docker-config) build ...

Where "docker-config" is an imaginary command that does the same as […]
Podman

The Podman version is instead running the build over ssh:

minikube ssh -- sudo podman build ...

This means that we don't need to install a client, and don't need to have a daemon running.
They both take more or less the same flags, such as […]

Handling a directory ("build context") is done by creating a tar stream, including […]
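A minimal sketch of what such a tar-stream helper could look like (illustrative only, not minikube's actual code), assuming the build context is a plain directory with the Dockerfile at its root:

package main

import (
    "archive/tar"
    "io"
    "os"
    "path/filepath"
)

// tarContext streams the contents of dir (the "build context",
// including any Dockerfile at its root) as a tar archive into w.
// Symlinks and ignore files are not handled; this is only a sketch.
func tarContext(dir string, w io.Writer) error {
    tw := tar.NewWriter(w)
    defer tw.Close()

    return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        // Store paths relative to the context root, as builders expect.
        rel, err := filepath.Rel(dir, path)
        if err != nil || rel == "." {
            return err
        }
        hdr, err := tar.FileInfoHeader(info, "")
        if err != nil {
            return err
        }
        hdr.Name = filepath.ToSlash(rel)
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if info.IsDir() {
            return nil
        }
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = io.Copy(tw, f)
        return err
    })
}

func main() {
    // Write the tarball to stdout so it can be piped to a remote builder.
    if err := tarContext(".", os.Stdout); err != nil {
        os.Exit(1)
    }
}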
Thanks @afbjorklund. I was able to achieve exactly what I wanted based on your notes, moving the build step inside minikube instead of relying on my co-developers' local docker environments being correctly set up.
@afbjorklund are you still working on this one? This would be a cool feature.
@medyagh: I was trying to build critical mass for including it as a feature, over initial concerns. Seems like we have it, so I can do a rebase and finish that "build context" implementation for podman...
Something like this: https://github.com/fsouza/go-dockerclient/blob/master/tar.go#L20

So that you can build a directory, and it will automatically create a tarball and scp it...
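For the docker path, that same library can also drive the whole build over the API, creating the tarball from a directory internally. A rough sketch, assuming the environment from minikube docker-env is already exported (the image name here is just an example):

package main

import (
    "log"
    "os"

    docker "github.com/fsouza/go-dockerclient"
)

func main() {
    // Reads DOCKER_HOST, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH,
    // i.e. exactly what `minikube docker-env` exports.
    client, err := docker.NewClientFromEnv()
    if err != nil {
        log.Fatal(err)
    }

    // ContextDir makes the library tar up the directory (including
    // the Dockerfile) and stream it to the daemon inside the VM.
    err = client.BuildImage(docker.BuildImageOptions{
        Name:         "example/hello:latest", // illustrative tag
        ContextDir:   ".",
        OutputStream: os.Stdout,
    })
    if err != nil {
        log.Fatal(err)
    }
}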
Adding […] We should probably still support the use case, like when not able to run any kind of local […]
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
This is still a good thing to have. To clarify: minikube already has multiple (5) ways to build images, https://minikube.sigs.k8s.io/docs/handbook/pushing/, but this issue is about adding an option that removes the dependency on docker on the user's host. So this issue proposes that we hide that from the user, for a lower-friction build.
There are actually only two ways of building (docker and podman), maybe four if you count […] Please note that the user will still run a local […] The implementation handles both scenarios though, since it was written before varlink worked. The main idea was the same as with […]

Having to log in to the master node using ssh is considered a workaround here. Instead the user is supposed to be able to edit the Dockerfile and the yaml file locally.
The same method that is used for running docker or podman remotely also works for kaniko. https://github.com/GoogleContainerTools/kaniko#using-kaniko

This build wrapper could handle all those ugly bits (creating and piping the tarball) for you...
@afbjorklund the idea of […]
No, forgot about it. There was the workaround with the subshell.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
/remove-lifecycle rotten

Also needs an implementation for buildkit, for use with the containerd container runtime.
The initial re-implementation, adding a "BuildImage" to the cruntime: #10742

@@ -94,6 +94,8 @@ type Manager interface {
// Load an image idempotently into the runtime on a host
LoadImage(string) error
+ // Build an image idempotently into the runtime on a host
+ BuildImage(string, string) error
// ImageExists takes image name and image sha checks if an it exists
ImageExists(string, string) bool
It seems like it will only handle directories to start with, so unpack any tarballs. Goes something like:
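The original snippet is not preserved above; purely as an illustration of the idea (not minikube's internal cruntime code), building a host directory inside the VM with the docker runtime could look roughly like the sketch below. It assumes the docker container runtime is in use and that minikube ssh forwards stdin to the VM:

package main

import (
    "fmt"
    "os"
    "os/exec"
)

// buildInVM tars up a local build context and builds it inside the
// minikube VM, so no local docker daemon is needed. It shells out to
// the tar and minikube CLIs; error handling is kept minimal.
func buildInVM(contextDir, tag string) error {
    // Stream the context as a tarball on stdout...
    tarCmd := exec.Command("tar", "-C", contextDir, "-cf", "-", ".")
    // ...into `docker build -`, which reads a tarred context from
    // stdin, running inside the VM (assumes stdin is forwarded).
    buildCmd := exec.Command("minikube", "ssh", "--",
        "docker", "build", "-t", tag, "-")

    pipe, err := tarCmd.StdoutPipe()
    if err != nil {
        return err
    }
    buildCmd.Stdin = pipe
    buildCmd.Stdout = os.Stdout
    buildCmd.Stderr = os.Stderr

    if err := tarCmd.Start(); err != nil {
        return err
    }
    if err := buildCmd.Run(); err != nil {
        return fmt.Errorf("build failed: %w", err)
    }
    return tarCmd.Wait()
}

func main() {
    if err := buildInVM(".", "example/hello:latest"); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}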
Note that any images (like FROM) are kept in the build cache on the machine.
The final version ended up slightly more complicated:

@@ -94,6 +96,8 @@ type Manager interface {
// Load an image idempotently into the runtime on a host
LoadImage(string) error
+ // Build an image idempotently into the runtime on a host
+ BuildImage(string, string, string, bool, []string, []string) error
// ImageExists takes image name and image sha checks if an it exists
ImageExists(string, string) bool

After adding support also for tag/push and env/opt.
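A plausible reading of those six parameters, with illustrative names taken from the comment above rather than copied from minikube's source:

package sketch

// Illustrative signature only; the parameter names are guesses based
// on the "tag/push and env/opt" remark, not the actual identifiers.
type Manager interface {
    // BuildImage builds an image from src (a directory or tarball on
    // the host) using the given Dockerfile, optionally tags and pushes
    // it, and passes build-time env vars and extra builder options.
    BuildImage(src, file, tag string, push bool, env, opts []string) error
}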
Currently we delegate all building of images to docker, using minikube docker-env. This requires the user to install Docker on their machine, and then learn how to set it up...

https://kubernetes.io/docs/tutorials/hello-minikube/

If the user doesn't already have a local installation of docker, they can't build the image!
We could do better, by providing an abstraction that will simply do the build for them:
Then the image is built right on the VM, and ready to be used from the minikube pods:
As usual you have to edit the pull policy when not using a registry but the local images. Change it from Always, as per https://kubernetes.io/docs/concepts/containers/images/

Eventually we could improve this by not using the Docker daemon but e.g. buildah:

https://github.com/containers/libpod/blob/master/docs/podman-build.1.md
https://github.com/containers/buildah/blob/master/docs/buildah-bud.md

That way the user doesn't have to have dockerd running, but can use containerd or cri-o.
This project could also be interesting, eventually:
https://github.com/GoogleContainerTools/kaniko
That is: building the images in Kubernetes instead?
With enough kernel support, also doable with buildah.
https://opensource.com/article/19/3/tips-tricks-rootless-buildah