Support additional containerization drivers ? #7957

Closed
afbjorklund opened this issue May 1, 2020 · 12 comments
@afbjorklund (Collaborator)

Currently minikube has support for Docker as a KIC node driver, and "almost" support for Podman.

One question is whether we should make this more general, to allow for adding more alternative drivers?
This would probably involve further abstractions for KIC, beyond the current Docker-based CLI.

To allow running the minikube "node" container with, for instance, OpenVZ or LXC:

Each container performs and executes exactly like a stand-alone server; a container can be rebooted independently and have root access, users, IP addresses, memory, processes, files, applications, system libraries and configuration files.

Our main focus is system containers. That is, containers which offer an environment as close as possible as the one you'd get from a VM but without the overhead that comes with running a separate kernel and simulating all the hardware.

The situation is confused a bit by the two meanings of "docker": Docker Engine vs Docker Desktop.

Currently none of the other solutions offer such an integrated VM for other platforms; they require Linux.

So mostly talking about the Linux version here.

Leaving "other people's VMs" for another story.


@afbjorklund:
I'm a bit skeptical of this feature; it could divert attention from completing the existing drivers...
And there are already alternative solutions for running Kubernetes-on-LXD, like microk8s.

So like our "localkube" bootstrapper (that was left for k3s), it could mean stretching too far?
Better to use standard solutions like Docker and kubeadm, and let others handle alternatives.

But I'm curious what others think.

@medyagh @tstromberg

@afbjorklund added the kind/feature and triage/discuss labels May 1, 2020
@afbjorklund (Collaborator, Author)

Note for newcomers: "KIC" refers to Kubernetes-in-Container, an implementation where minikube creates a "system container" rather than a virtual machine as with the regular libmachine drivers.
It is based in large part on KIND, another Kubernetes SIGs project for testing Kubernetes clusters. In this system container, another container runtime is started (aka docker-in-docker) to run the pods.

The biggest difference between a KIC driver and a VM driver is that it uses the host kernel (by design). So if you want to load kernel modules or do similar things, you still need to use a virtual machine. But KIC has a smaller footprint on Linux, by avoiding having to do those kernel things, like handling hardware. And if you already have a VM (such as Docker) running on Mac / Win, it's smaller by not needing two.
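To make that concrete: a KIC driver basically shells out to the container engine's CLI and launches a privileged "node" container, which is then treated like a machine. A minimal sketch of that idea in Go, assuming a hypothetical image name and flag set (this is not minikube's actual code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// createNode launches a privileged "system container" that acts as a
// Kubernetes node, roughly what a KIC driver does. The image name and
// flags here are illustrative assumptions, not minikube's real defaults.
func createNode(engine, name, image string) error {
	args := []string{
		"run", "-d",
		"--name", name,
		"--privileged", // needed to run another container runtime inside
		"--hostname", name,
		"--volume", "/lib/modules:/lib/modules:ro", // host kernel modules, read-only
		image,
	}
	out, err := exec.Command(engine, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s run failed: %v: %s", engine, err, out)
	}
	return nil
}

func main() {
	// "docker" could be swapped for "podman" (or, with a different
	// argument syntax, an LXC/LXD CLI) if the abstraction allowed it.
	if err := createNode("docker", "minikube", "example/kicbase:latest"); err != nil {
		fmt.Println(err)
	}
}
```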

@medyagh (Member) commented May 1, 2020

I am open to starting the discussion on LXC and OpenVZ, as long as we can treat them as one of our drivers and not have to re-invent minikube.
For our KIC drivers (docker and podman) we hardly made any modifications to our process compared to what we do for VMs.

  • Create VM --> Create Container
  • Run Kubeadm init ---> Run KubeAdm init

The amount of change required to make KIC work was mostly refactoring existing code to be more reusable, without having to create a new route for KIC drivers.

If adding support for LXC or OpenVZ can be done while respecting our Bootstrapper and Driver interfaces, and it will not make us maintain two different bootstrappers or two different types of routes for Start, Stop, Pause, Delete, then I think that is something we should think about.
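To illustrate what "respecting the Driver interface" could mean for LXC, here is a rough Go sketch. The interface below is a simplified stand-in, not minikube's real Driver or Bootstrapper definitions, and the `lxc` commands are just the obvious mapping:

```go
package main

import "os/exec"

// nodeDriver is an illustrative subset of the lifecycle operations a
// minikube driver has to provide; the real interface (from libmachine)
// has more methods, e.g. for SSH access and state reporting.
type nodeDriver interface {
	Create() error
	Start() error
	Stop() error
	Remove() error
}

// lxcDriver is a hypothetical driver that maps those operations onto
// the LXD command line instead of "docker run" / "docker stop".
type lxcDriver struct {
	name  string // container name, e.g. "minikube"
	image string // image alias, e.g. "ubuntu:20.04"
}

func (d *lxcDriver) Create() error { return run("lxc", "launch", d.image, d.name) }
func (d *lxcDriver) Start() error  { return run("lxc", "start", d.name) }
func (d *lxcDriver) Stop() error   { return run("lxc", "stop", d.name) }
func (d *lxcDriver) Remove() error { return run("lxc", "delete", "--force", d.name) }

func run(cmd string, args ...string) error {
	return exec.Command(cmd, args...).Run()
}

func main() {
	var d nodeDriver = &lxcDriver{name: "minikube", image: "ubuntu:20.04"}
	_ = d // the rest of minikube would only see the nodeDriver interface
}
```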

The question is: can we treat an LXC or OpenVZ container just like a machine in our drivers?
I am okay if we have partial support for LXC or OpenVZ (e.g. no support for multi-node or multi-cluster).

For example, if we can start a single cluster for a developer on a restricted Linux machine with a low amount of resources (such as a Chromebook) using LXC, that would be useful to users, and I doubt one would want multi-node or multi-cluster on a Chromebook.

Another question is: how would LXC's overhead be better compared to docker/podman?
And can we run Docker inside LXC? Currently we do docker-in-docker or docker-in-podman.

@afbjorklund (Collaborator, Author)

It would probably be enough to start with LXC; it looks like there are not too many OpenVZ OS templates.
And OpenVZ still has its custom kernel (based on RHEL), so it is maybe harder to get started with...

https://wiki.openvz.org/Releases
https://openvz.livejournal.com/49158.html (2005-2015)

Since LXC (and LXD) is supported by Ubuntu (Canonical), it is probably slightly easier to test with.
However, unlike Podman, its CLI is rather different from Docker's. So it will need an adaptation of KIC.

https://linuxcontainers.org/lxc/getting-started/
https://linuxcontainers.org/distrobuilder/ (image server)

The question is: can we treat an LXC or OpenVZ container just like a machine in our drivers?
I am okay if we have partial support for LXC or OpenVZ

There should not be much difference between these machines and the docker/podman machines.
The actual bootstrapping (kubeadm) should be the same, even though we now forked the drivers.

Another question is: how would LXC's overhead be better compared to docker/podman?
And can we run Docker inside LXC? Currently we do docker-in-docker or docker-in-podman.

I don't expect this to be much different; it is the same cgroups and the same namespaces, etc.
There is this whole discussion of rootless containers, but that is also yet another (different) story.


Anyway, this was mostly to start the discussion on whether we should "open up" our KIC more (or not).
Currently it is rather focused on Docker, and many aspects are still Docker-only (not even Podman).

I will probably look into completing the Podman driver, so that it is comparable with the Docker driver.
And then look at building a docker image from our regular (ISO) buildroot, rather than using Ubuntu.

Most likely we will still have 80% of the user base using Docker (both for driver and for runtime).
And then offer Podman / CRI-O as an alternative to that... (containerd is not really an alternative*).

* basically containerd is what you end up with if you start with docker and stop the dockerd.
You can still run your containers, but not build them anymore. So it's just a slimmed-down variant.

@gattytto commented May 6, 2020

Note for newcomers: "KIC" refers to Kubernetes-in-Container, an implementation where minikube creates a "system container" rather than a virtual machine like with the regular libmachine drivers.

I want to add some detail about this scenario. If KIC means that minikube creates the "VM" using a KIC driver, so that an LXC container gets provisioned, then I think I've been getting it wrong the whole time. What I mean for the LXD part is minikube actually running with the "none" driver already inside an LXD container, not running on the actual host and using a KIC driver to get an LXC container provisioned.

I think these are different things and could be addressed in different ways, or maybe in a nested situation where the KIC driver just creates the LXD container, adds profiles to it (for kernel modules and such) and then runs minikube with driver=none inside the container to continue the setup.
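For what it's worth, that nested flow could look roughly like the sketch below, expressed as plain `os/exec` calls to the LXD CLI. The container name, profile settings and minikube flags are assumptions for illustration, not a tested recipe:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, printing it first, and returns any error.
func run(name string, args ...string) error {
	fmt.Println(name, args)
	return exec.Command(name, args...).Run()
}

func main() {
	const node = "minikube-node" // assumed container name

	// 1. The "KIC driver" part: create the LXD container and relax its
	//    confinement so a container runtime can run inside it.
	_ = run("lxc", "launch", "ubuntu:20.04", node)
	_ = run("lxc", "config", "set", node, "security.nesting", "true")
	_ = run("lxc", "config", "set", node, "security.privileged", "true")

	// 2. The "none driver" part: inside the container, install a runtime
	//    and let minikube bootstrap directly on that "host".
	_ = run("lxc", "exec", node, "--", "apt-get", "install", "-y", "docker.io")
	_ = run("lxc", "exec", node, "--", "minikube", "start", "--driver=none")
}
```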

@gattytto commented May 6, 2020

For our KIC drivers (docker and podman) we hardly made any modifications to our process compared to what we do for VMs.

Please consider adding cri-o to the mix, since it doesn't rely on podman or docker at all. And since recently we have Ubuntu 20.04 repos with .deb packages for cri-o 1.17.

@afbjorklund (Collaborator, Author)

I think these are different things and could be addressed in different ways, or maybe in a nested situation where the KIC driver just creates the LXD container, adds profiles to it (for kernel modules and such) and then runs minikube with driver=none inside the container to continue the setup.

I have no idea what the difference between LXC and LXD is, so you might have to explain that. Basically the biggest blocker right now is that KIC uses the Docker API, and even Podman (which tries to copy that API) is slightly incompatible and needs special code. There would be many more such cases here.

For docker-machine there was a "driver" abstraction over the hypervisors and the cloud providers, which we are now abandoning (moving into our code) - but that's another story entirely. If the support for these system containers should grow beyond docker/podman, it would also need such an API facade?
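As a thought experiment, such a facade could hide the different CLIs behind one small interface, so the rest of KIC never issues "docker ..." commands directly. A hypothetical sketch (not an existing minikube API; the method set and command mappings are assumptions):

```go
package main

import "os/exec"

// engine is a hypothetical facade over the container CLIs that KIC
// would talk to, instead of hard-coding Docker's command line.
type engine interface {
	CreateNode(name, image string) error
	Exec(name string, cmd ...string) error
}

type dockerEngine struct{}

func (dockerEngine) CreateNode(name, image string) error {
	return exec.Command("docker", "run", "-d", "--privileged", "--name", name, image).Run()
}
func (dockerEngine) Exec(name string, cmd ...string) error {
	return exec.Command("docker", append([]string{"exec", name}, cmd...)...).Run()
}

type lxdEngine struct{}

func (lxdEngine) CreateNode(name, image string) error {
	return exec.Command("lxc", "launch", image, name).Run()
}
func (lxdEngine) Exec(name string, cmd ...string) error {
	return exec.Command("lxc", append([]string{"exec", name, "--"}, cmd...)...).Run()
}

func main() {
	// The bootstrapper would only ever see the engine interface;
	// selecting docker vs lxd would become a driver choice.
	var e engine = lxdEngine{}
	_ = e.CreateNode("minikube", "ubuntu:20.04")
	_ = e.Exec("minikube", "uname", "-r")
}
```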

@afbjorklund (Collaborator, Author)

Please consider adding cri-o to the mix, since it doesn't rely on podman or docker at all. And since recently we have Ubuntu 20.04 repos with .deb packages for cri-o 1.17.

I'm not sure it makes sense to launch these privileged containers on cri-o, since that is mostly intended for Kubernetes use. So I think we will leave it as one of the container runtimes inside...

It has the same Docker API "problem" as above, so I guess we could add CRI next to LXC eventually. But since cri-o and podman share so much of their code anyway, it is not really a priority.

https://www.openshift.com/blog/crictl-vs-podman

And 80% of the users would still use Docker anyway.

@afbjorklund added the priority/awaiting-more-evidence label May 11, 2020
@afbjorklund (Collaborator, Author)

Apparently some cloud providers are still using OpenVZ, even if Travis doesn't anymore... But I'm not sure that is a valid reason to add more KIC drivers. So I think docker and podman will be "enough".

Claire: [When asked what music is played at Bob's Country Bunker] Oh we got both kinds. We got Country and Western.
From https://en.wikiquote.org/wiki/The_Blues_Brothers

It's a pity that "kubeadm" does not support these, but maybe it's a hint that minikube shouldn't either? But if it is just a matter of adding a link to the documentation and some solution message, I think we can do that.

I.e. running OpenVZ and LXC through the "none" driver is probably plenty. And I still think running these machines remotely (over SSH) is the way to go here, leaving running locally to, for instance, CI.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Sep 22, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Oct 22, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
