
Minikube "start" does not support unconnected cluster startup #2825

Closed
mlgibbons opened this issue May 19, 2018 · 4 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@mlgibbons
Contributor

mlgibbons commented May 19, 2018

Environment: Windows 10 & Windows 7
Minikube version: 0.26.1
OS: Windows
VM driver: virtualbox
ISO version: 0.26.0

Trying to spin up a k8s cluster using minikube when not connected to the Internet (either because one does not have an internet connection or one is behind a proxy) fails.

It's important for me to be able to do this as I need to run minikube in a corporate environment in which the dynamic downloading of software is not permitted.

Please note that setting the proxy env variables is not a viable solution to this problem as it does not cover the disconnected case or meet the corporate security requirements.

There would seem to be a number of possible ways to address this problem. My thoughts and current findings are noted below. I'd be interested in hearing what other people think or have found.

It appears that minikube (mk) on running "start" attempts to download a number of different items and cache them before calling kubeadm to start the cluster itself:

  1. A set of images which are loaded into the Docker daemon before the call to kubeadm
  2. The ISO for the VM which will host the cluster
  3. kubeadm and kubelet

They are all required for mk "start" to succeed.

Precaching

The first thought is that one can simply run "start" while connected to the net, let it cache the items, disconnect, delete the cluster and then run "start" again, and all should work. The next step would be to take the cached items and have them scanned, approved and distributed as part of the internal minikube installation. However, this turns out not to work, for two reasons.

  1. mk fails to load the images into the docker daemon before "kubeadm init" due to it using an incorrect path in the "docker load" command e.g. "docker load -i \tmp\pause-amd64_3.0". The "\"s should be "/"s. This means that the docker daemon tries to load the images when kubeadm starts pods and these loads fail.

  2. mk only dynamically generates the image names for the four static pods (scheduler, api-server, controller-manager and proxy), correctly appending the k8s version to them. Unfortunately, the versions for the other pods are hard-coded and are incorrect for v1.8.x, v1.9.x and v1.10.x, where the required versions differ and are not keyed on the k8s version. The result is that "kubeadm init" would trigger image downloads even if the pre-caching had been successful, and these downloads would fail. See these links for background info - https://github.com/kubernetes/minikube/blob/master/pkg/minikube/constants/constants.go and https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file
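The path bug in item 1 comes from building a path for the Linux VM using the host's path semantics. A minimal Go sketch of the distinction, using the standard library's "path" package (the helper name below is mine for illustration, not minikube's actual code):

```go
package main

import (
	"fmt"
	"path" // always uses "/" separators, matching paths inside the Linux VM
)

// loadCommand builds the "docker load" command that runs *inside* the Linux
// VM, so it must use forward slashes regardless of the host OS. Using "path"
// (rather than "path/filepath", which emits "\" on Windows hosts) keeps the
// separators valid for the VM. Hypothetical helper, for illustration only.
func loadCommand(image string) string {
	return "docker load -i " + path.Join("/tmp", image)
}

func main() {
	fmt.Println(loadCommand("pause-amd64_3.0"))
	// -> docker load -i /tmp/pause-amd64_3.0
}
```

On a Windows host, the filepath-based equivalent would produce "\tmp\pause-amd64_3.0", which is exactly the broken invocation quoted in item 1.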

I have raised two issues against these items. #2826 #2827
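One way to avoid the hard-coding described in item 2 would be to key the auxiliary image tags on the k8s minor version, the same way the four static pod images already are. A sketch of the idea in Go; the version numbers and component names below are placeholders for illustration, not the real mapping (the real values live in the constants.go file linked above and in kubeadm's own defaults):

```go
package main

import "fmt"

// auxTags maps a Kubernetes minor version to the image tags its auxiliary
// pods need. These entries are illustrative placeholders only; the real
// per-version values would have to be sourced from kubeadm's defaults.
var auxTags = map[string]map[string]string{
	"v1.10": {"kube-dns": "1.14.8", "etcd": "3.1.12", "pause": "3.1"},
	"v1.9":  {"kube-dns": "1.14.7", "etcd": "3.1.10", "pause": "3.0"},
}

// imageName renders a fully qualified image for a component at a given k8s
// minor version (hypothetical helper, not minikube's API).
func imageName(k8sVersion, component string) string {
	return fmt.Sprintf("k8s.gcr.io/%s-amd64:%s", component, auxTags[k8sVersion][component])
}

func main() {
	fmt.Println(imageName("v1.10", "etcd"))
}
```

With a table like this, pre-caching and "kubeadm init" would agree on the same image names for any supported k8s version, instead of diverging on the hard-coded ones.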

Registry Mirror
In the corporate env an alternative, at least for the images, would be to use a registry mirror. This would not handle the ISO or the executables but, combined with the caching for the ISO and exes, it would provide a simpler mechanism for pulling the images in dynamically through a secure image chain. I have not yet tried this but would like to once I have exhausted the "Precaching" option mentioned above. It would appear that the mk registry-mirror flag, combined with a private secure registry with either preloading of the required images or dynamic loading and scanning, would provide a good solution.
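For the registry-mirror route, the two pieces would be minikube's --registry-mirror flag at start time, plus (for the host side) a Docker daemon.json pointing at the internal registry. A sketch, assuming an internal mirror reachable at registry.corp.example (that hostname is a placeholder):

```json
{
  "registry-mirrors": ["https://registry.corp.example"]
}
```

The corresponding start command would then be along the lines of minikube start --registry-mirror=https://registry.corp.example, so pulls for the k8s images resolve through the scanned, approved internal chain rather than going out to the public registries.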

Regards
Mark

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 17, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 16, 2018
@ivan-section-io
Contributor

I think this may "work" in some way now. Addressed by #2827, #2844, #2847, #2845, #2849.
I think it's safe to:
/close

@k8s-ci-robot
Contributor

@ivan-section-io: Closing this issue.

In response to this:

I think this may "work" in some way now. Addressed by #2827, #2844, #2847, #2845, #2849.
I think it's safe to:
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
