Provide a different way of building images with minikube #4868

Closed
afbjorklund opened this issue Jul 25, 2019 · 22 comments · Fixed by #11164

@afbjorklund (Collaborator) commented Jul 25, 2019

Currently we delegate all building of images to docker, using minikube docker-env.

This requires the user to install Docker on their machine, and then learn how to set it up...

https://kubernetes.io/docs/tutorials/hello-minikube/

minikube/Dockerfile

FROM node:6.14.2
EXPOSE 8080
COPY server.js .
CMD node server.js

minikube/server.js

var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};
var www = http.createServer(handleRequest);
www.listen(8080);

For more information on the docker build command, read the Docker documentation.

If the user doesn't already have a local installation of docker, they can't build the image!


We could do better by providing an abstraction that simply does the build for them:

$ minikube build -- -t hello-node minikube
💾  Downloading docker 18.09.8
Sending build context to Docker daemon  3.072kB
Step 1/4 : FROM node:6.14.2
6.14.2: Pulling from library/node
3d77ce4481b1: Pull complete 
7d2f32934963: Pull complete 
0c5cf711b890: Pull complete 
9593dc852d6b: Pull complete 
4e3b8a1eb914: Pull complete 
ddcf13cc1951: Pull complete 
2e460d114172: Pull complete 
d94b1226fbf2: Pull complete 
Digest: sha256:62b9d88be259a344eb0b4e0dd1b12347acfe41c1bb0f84c3980262f8032acc5a
Status: Downloaded newer image for node:6.14.2
 ---> 00165cd5d0c0
Step 2/4 : EXPOSE 8080
 ---> Running in 2a302085e433
Removing intermediate container 2a302085e433
 ---> 9172f65af846
Step 3/4 : COPY server.js .
 ---> 035625e5e23f
Step 4/4 : CMD node server.js
 ---> Running in 1771091ed23a
Removing intermediate container 1771091ed23a
 ---> 250208286ec5
Successfully built 250208286ec5
Successfully tagged hello-node:latest

Then the image is built right on the VM, and ready to be used from the minikube pods:

$ minikube ssh sudo crictl images hello-node
IMAGE               TAG                 IMAGE ID            SIZE
hello-node          latest              250208286ec58       660MB
$ minikube kubectl -- create deployment hello-node --image=hello-node
💾  Downloading kubectl v1.15.0
deployment.apps/hello-node created

As usual, you have to edit the pull policy when using local images instead of a registry.

      containers:
      - image: hello-node:latest
        imagePullPolicy: IfNotPresent
        name: hello-node
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File

Change it from Always, as per https://kubernetes.io/docs/concepts/containers/images/
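
Just as a sketch (not part of the tutorial), the same edit can be done with a strategic merge patch, reusing the hello-node names from the example above:

# Switch the example deployment to the locally built image instead of pulling from a registry
minikube kubectl -- patch deployment hello-node -p '
spec:
  template:
    spec:
      containers:
      - name: hello-node
        imagePullPolicy: IfNotPresent'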


Eventually we could improve this by not using the Docker daemon but e.g. buildah:

That way the user doesn't have to have dockerd running, but can use containerd or cri-o.


This project could also be interesting, eventually:
https://github.com/GoogleContainerTools/kaniko

That is: building the images in Kubernetes instead?
With enough kernel support, that is also doable with buildah.

https://opensource.com/article/19/3/tips-tricks-rootless-buildah

@medyagh (Member) commented Jul 25, 2019

+1

@afbjorklund afbjorklund added the kind/feature Categorizes issue or PR as related to a new feature. label Jul 25, 2019
@afbjorklund (Collaborator, Author) commented:

Nice talk about the general problem space (i.e. beyond just what minikube provides and uses):
https://kccnceu18.sched.com/event/Dqu1/building-docker-images-without-docker-matt-rickard-google-intermediate-skill-level-slides-attached

@tstromberg tstromberg added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Aug 1, 2019
@Doqnach commented Aug 19, 2019

It could be implemented in the same way as minikube kubectl. So simply running minikube docker ... would relay everything to the docker client / daemon inside the VM.
It does require the docker argument, which is slightly longer to type than just minikube build ..., but it does provide a clear purpose (docker-based commands). It would also do everything the docker client can, instead of just build.

This suggestion would mostly just save having to do minikube ssh first...

A problem with this would be the context for any command other than build, since the user would expect their CWD on the host to be the context, but in reality it would be the CWD inside the VM. As long as the user is in their home directory it could be mapped, of course (given that the home directory is mounted under most VM drivers, though not all?).

@afbjorklund (Collaborator, Author) commented Aug 19, 2019

It could be implemented in the same way as minikube kubectl

It was implemented the same way; the only difference is using .tgz and .zip instead of .exe.

See 8983b8d

A problem with this would be the context for any command other than build,

The use case was only build, but it still remains to handle the "build context" for podman.

As long as we are using the docker client on the host, it will transparently handle directories.
But when running docker build or podman build on the VM, we need to transport the files.
This is done by creating a tar archive on the client, and then copying that to the machine.
There are some other minor details as well, like handling .dockerignore files, but nothing much.

https://docs.docker.com/engine/reference/commandline/build/#build-with-path
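
Roughly, the client side of that amounts to something like the following for a VM driver (the docker@ login and the id_rsa key path are assumptions about how the machine is provisioned, and a real wrapper would also apply .dockerignore when packing):

# Pack the build context into a tarball on the host
tar -czf /tmp/context.tgz -C ./minikube .
# Copy the archive onto the machine, where docker build / podman build can reach it
scp -i ~/.minikube/machines/minikube/id_rsa /tmp/context.tgz docker@$(minikube ip):/tmp/context.tgz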

@afbjorklund afbjorklund self-assigned this Sep 14, 2019
@afbjorklund (Collaborator, Author) commented Sep 15, 2019

I implemented the podman version now, although it does not transport any build context.

So it can only build files that are already on the VM, or build tarballs provided by a URL.

Docker

The Docker version sends the command over port 2376 to dockerd, using the docker client.

docker $(minikube docker-config) build ...

Where "docker-config" is an imaginary command that does the same as docker-machine config

Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM busybox
 ---> 19485c79a9bb
Step 2/2 : RUN true
 ---> Using cache
 ---> 5b5b3c378749
Successfully built 5b5b3c378749
Successfully tagged testbuild:latest
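
For reference, docker-machine config only prints connection flags, so such a "docker-config" would expand to roughly the following (the IP address and certificate locations are illustrative, they depend on the cluster):

docker --tlsverify \
       --tlscacert="$HOME/.minikube/certs/ca.pem" \
       --tlscert="$HOME/.minikube/certs/cert.pem" \
       --tlskey="$HOME/.minikube/certs/key.pem" \
       -H tcp://192.168.99.100:2376 \
       build -t testbuild .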

Podman

The Podman version is instead running the podman command remotely, using sudo over ssh.

minikube ssh -- sudo podman build ...

This means that we don't need to install a client, and don't need to have a daemon running.

STEP 1: FROM busybox
STEP 2: RUN true
--> Using cache 4fdf7ea3e9292032ccf15cd1fed43cf483724640923b48511c906b9ce632fcd0
STEP 3: COMMIT testbuild
4fdf7ea3e9292032ccf15cd1fed43cf483724640923b48511c906b9ce632fcd0

They both take more or less the same flags, such as -t for tagging the container image.

Handling a directory ("build context") is done by creating a tar stream, including the Dockerfile.
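
As a side note, the docker client will already accept such a tar stream on stdin, so with docker-env pointed at the cluster the same effect can be had by hand (a sketch, using the directory from the earlier example):

# Point the local docker client at the daemon inside minikube
eval $(minikube docker-env)
# Stream the directory as the build context; the Dockerfile has to be at the root of the archive
tar -czf - -C ./minikube . | docker build -t hello-node -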

@fiskhest commented Oct 4, 2019

Thanks @afbjorklund. I was able to achieve exactly what I wanted based on your notes, moving the build step inside minikube instead of relying on my co-developers' local docker environments being correctly set up.

@medyagh (Member) commented Dec 16, 2019

@afbjorklund are you still working on this one?
@josedonizetti also showed interest in this.

this would be a cool feature.

@afbjorklund (Collaborator, Author) commented:

@medyagh : I was trying to build critical mass for including it as a feature, over initial concerns.

Seems like we have it, so I can do a rebase and finish that "build context" implementation for podman...

@afbjorklund (Collaborator, Author) commented:

Something like this: https://github.com/fsouza/go-dockerclient/blob/master/tar.go#L20

So that you can build a directory, and it will automatically create a tarball and scp it...

@afbjorklund (Collaborator, Author) commented:

Adding varlink support to the minikube ISO and using podman-remote/podman-env (#6350) means that one does not have to use podman over ssh (directly) anymore. It will handle the build context.

We should probably still support the use case, like when not being able to run any kind of local docker client or podman-remote whatsoever. But it is less urgent now that there is an alternative for both.

@fejta-bot commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 24, 2020
@medyagh (Member) commented May 20, 2020

This is still a good thing to have. To clarify:

minikube already has 5 ways to build images: https://minikube.sigs.k8s.io/docs/handbook/pushing/

but this issue is about adding a way that removes the dependency on docker on the user's host.
We still have a way to build images using minikube ssh, which means the user won't have to install docker, but they would still need to transfer their Dockerfile and files into minikube to use minikube ssh.

So this issue proposes that we hide that from the user, for a lower-friction build.

@afbjorklund (Collaborator, Author) commented:

There are actually only two ways of building (docker and podman), maybe four if you count ssh.

Please note that the user will still run a local docker (cli) or podman-remote "under the hood".
While we could have re-implemented the client inside of minikube, it wasn't really worth doing...

The implementation handles both scenarios though, since it was written before varlink worked.
For some scenarios, it could be good to avoid running a local client (beyond just tar and ssh).

The main idea was the same as with minikube kubectl, to shield the user from all the details.


Having to log in to the master node using ssh is considered a workaround here.
Similar to having to log in to the master just to run kubectl, it should not be needed!

Instead the user is supposed to be able to edit the Dockerfile and the yaml file locally.
And then use the provided "build" and "kubectl" commands, to talk to their cluster.

@afbjorklund (Collaborator, Author) commented May 20, 2020

The same method that is used for running docker or podman remotely also works for kaniko.

https://github.com/GoogleContainerTools/kaniko#using-kaniko

echo -e 'FROM alpine \nRUN echo "created from standard input"' > Dockerfile | tar -cf - Dockerfile | gzip -9 | docker run \
  --interactive -v $(pwd):/workspace gcr.io/kaniko-project/executor:latest \
  --context tar://stdin \
  --destination=<gcr.io/$project/$image:$tag>

This build wrapper could handle all those ugly bits (creating and piping the tarball) for you...
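
For a local directory rather than a generated Dockerfile, the piping would look roughly like this (the destination is a placeholder registry, and pushing there requires credentials):

# Stream a local directory as the kaniko build context over stdin
tar -C ./minikube -czf - . | docker run --interactive \
    gcr.io/kaniko-project/executor:latest \
    --context tar://stdin \
    --destination=registry.example.com/hello-node:latest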

@stavalfi commented:

@afbjorklund the idea of docker $(minikube docker-config) build ... sounds great. is there any progress on that?

@afbjorklund (Collaborator, Author) commented:

@afbjorklund the idea of docker $(minikube docker-config) build ... sounds great. is there any progress on that?

No, I forgot about it. There was the workaround with the subshell: (eval $(minikube docker-env); docker ...)

@fejta-bot commented:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 19, 2020
@medyagh medyagh changed the title Provide a way of building images with minikube Provide a differentway of building images with minikube Jul 29, 2020
@medyagh medyagh changed the title Provide a differentway of building images with minikube Provide a different way of building images with minikube Jul 29, 2020
@fejta-bot commented:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor) commented:

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@afbjorklund (Collaborator, Author) commented:

/remove-lifecycle rotten

Also needs an implementation for buildkit, for use with the containerd container runtime.
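
A rough sketch of what that could look like with buildctl, assuming buildkitd is running inside the node and the build context has already been copied to /tmp/build there:

minikube ssh -- sudo buildctl build \
    --frontend dockerfile.v0 \
    --local context=/tmp/build --local dockerfile=/tmp/build \
    --output type=image,name=docker.io/library/hello-node:latest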

@afbjorklund afbjorklund reopened this Dec 7, 2020
@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Dec 7, 2020
@afbjorklund afbjorklund added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Dec 7, 2020
@afbjorklund afbjorklund added this to the v1.18.0-candidate milestone Jan 23, 2021
@medyagh medyagh modified the milestones: v.1.19.0, v1.20.0-candidate Mar 1, 2021
@afbjorklund (Collaborator, Author) commented Mar 7, 2021

The initial re-implementation, adding a "BuildImage" to the cruntime: #10742

@@ -94,6 +94,8 @@ type Manager interface {
 
        // Load an image idempotently into the runtime on a host
        LoadImage(string) error
+       // Build an image idempotently into the runtime on a host
+       BuildImage(string, string) error
 
        // ImageExists takes image name and image sha checks if an it exists
        ImageExists(string, string) bool

It seems like it will only handle directories to start with, so unpack any tarballs first.
Eventually it should also handle http:// etc., so you can build from remote URLs.

Goes something like:

  1. Create tarball from context
  2. Transfer tarball to machine
  3. Unpack remote tarball
  4. Build image, and tag it
  5. Remove remote context
  6. Remove remote tarball

Note that any images (like FROM) are kept in the build cache on the machine.
So it is only the build context (Dockerfile and local files) that needs transferring.
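
Spelled out with stock commands (using minikube cp for the transfer, assuming that subcommand is available, and podman as the runtime), the loop is roughly:

# 1-2. Create the tarball from the context and transfer it to the machine
tar -czf /tmp/context.tgz -C ./minikube .
minikube cp /tmp/context.tgz /tmp/context.tgz
# 3-4. Unpack the remote tarball, then build and tag the image
minikube ssh -- 'sudo mkdir -p /tmp/build && sudo tar -xzf /tmp/context.tgz -C /tmp/build'
minikube ssh -- sudo podman build -t hello-node /tmp/build
# 5-6. Remove the remote context and the tarball again
minikube ssh -- 'sudo rm -rf /tmp/build /tmp/context.tgz'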

@afbjorklund (Collaborator, Author) commented:

The final version ended up slightly more complicated:

@@ -94,6 +96,8 @@ type Manager interface {
 
 	// Load an image idempotently into the runtime on a host
 	LoadImage(string) error
+	// Build an image idempotently into the runtime on a host
+	BuildImage(string, string, string, bool, []string, []string) error
 
 	// ImageExists takes image name and image sha checks if an it exists
 	ImageExists(string, string) bool

After adding support also for tag/push and env/opt.
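
Assuming this ends up exposed as a user-facing subcommand (the exact CLI surface is settled in the linked PR, not in this thread), usage would presumably look something like:

minikube image build -t hello-node ./minikube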
