minikube start does not check if it is already running #2646

Closed

nebrass opened this issue Mar 24, 2018 · 11 comments

Labels

- co/virtualbox
- good first issue: Denotes an issue ready for a new contributor, according to the "help wanted" guidelines.
- kind/bug: Categorizes issue or PR as related to a bug.
- triage/obsolete: Bugs that no longer occur in the latest stable release.

Comments

@nebrass

nebrass commented Mar 24, 2018

Environment:

  • Minikube version: v0.25.2
  • OS: Mac OS X 10.13.3
  • VM Driver: virtualbox
  • ISO version: v0.25.1

What happened:
When I start minikube with minikube start, I get:

Starting local Kubernetes v1.9.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

And minikube is started and I can use it perfectly.
But even when I repeat minikube start, I get the same message:

Starting local Kubernetes v1.9.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

What you expected to happen:
I expected to get:

The 'minikube' VM is already running.

Or something else?!

How to reproduce it (as minimally and precisely as possible):
Normal installation, no special configuration or tweaks applied

Output of minikube logs (if applicable):
N/A

Anything else we need to know:
N/A

@afbjorklund
Collaborator

What does minikube status say?

@nebrass
Author

nebrass commented Mar 24, 2018

Running minikube status gives:

minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100

@afbjorklund
Collaborator

My bad, I thought there was something wrong with the detection.

But I see that there is nothing in start that checks the status...

@nebrass
Author

nebrass commented Mar 25, 2018

Yep 😭 The function already exists in minishift:

func ensureNotRunning(client *libmachine.Client, machineName string) {
	// Nothing to check if the VM doesn't exist yet.
	if !cmdUtil.VMExists(client, machineName) {
		return
	}

	hostVm, err := client.Load(machineName)
	if err != nil {
		atexit.ExitWithMessage(1, err.Error())
	}

	// Exit early with a friendly message instead of "starting" again.
	if cmdUtil.IsHostRunning(hostVm.Driver) {
		atexit.ExitWithMessage(0, fmt.Sprintf("The '%s' VM is already running.", machineName))
	}
}

I don't have any Golang skills 😢 otherwise I would do the refactoring to solve the issue.

@afbjorklund
Collaborator

afbjorklund commented Mar 25, 2018

I don't think any of those utils (cmdUtil) even exist in minikube, so they are probably minishift-only.

The problem with the detection is that it doesn't remember which bootstrapper was used...
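
For reference, a minimal sketch of what an equivalent check might look like using only the docker/machine libmachine API that minikube already depends on; the helper name, the api handle, and the exit-on-running behavior are assumptions for illustration, not minikube's actual implementation:

import (
	"fmt"
	"os"

	"github.com/docker/machine/libmachine"
	"github.com/docker/machine/libmachine/state"
)

// ensureNotRunning is a hypothetical helper: it returns quietly when the
// machine doesn't exist yet, and exits early when the driver reports that
// the named machine is already running.
func ensureNotRunning(api libmachine.API, machineName string) error {
	exists, err := api.Exists(machineName)
	if err != nil || !exists {
		return err
	}

	h, err := api.Load(machineName)
	if err != nil {
		return err
	}

	st, err := h.Driver.GetState()
	if err != nil {
		return err
	}

	if st == state.Running {
		fmt.Printf("The '%s' VM is already running.\n", machineName)
		os.Exit(0)
	}
	return nil
}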

ivanbrennan added a commit to ivanbrennan/nixbox that referenced this issue May 27, 2018
Set the default kubernetes version in minikube:

  minikube config set kubernetes-version v1.10.3

Configure clusters and credentials:

  PKI=$HOME/Development/code/SumAll/pki
  CLUSTERS=( tng-stage tng-prod )
  USERNAME=ibrennan

  mkdir -p $PKI

  for CLUSTER in ${CLUSTERS[@]}; do
    CRT=${PKI}/${CLUSTER}-ca.crt
    lpass show --notes "SumAll kubernetes ${CLUSTER} client ca.crt" > $CRT

    LPASS=$(lpass show --notes "SumAll kubernetes ${CLUSTER} client config ${USERNAME}")
    SERVER=$(grep -oP '^server: \K\S+' <<< $LPASS)
    TOKEN=$(grep -oP '^token: \K\S+' <<< $LPASS)

    kubectl config set-cluster ${CLUSTER} --server=${SERVER} --certificate-authority=${CRT} --embed-certs=true
    kubectl config set-credentials ${USERNAME}-${CLUSTER} --token=${TOKEN}
    kubectl config set-context ${CLUSTER} --cluster=${CLUSTER} --user=${USERNAME}-${CLUSTER}
  done

  unset PKI CLUSTERS CLUSTER USERNAME CRT LPASS SERVER TOKEN

Initialize cluster resources:

  RESOURCES=$HOME/Development/code/SumAll/k8s-cluster-resources
  if [ ! -e $RESOURCES ]; then
    git clone [email protected]:SumAll/k8s-cluster-resources.git $RESOURCES
  fi
  kubectl --context=minikube create -f $RESOURCES/ms-config-dev.yml
  kubectl --context=minikube create -f $RESOURCES/k8s-generic-pod-user-dev.yml
  kubectl --context=minikube create -f $RESOURCES/mongo/mongo-dev.yml
  kubectl --context=minikube create -f $RESOURCES/redis/redis-dev.yml
  kubectl --context=minikube create -f $RESOURCES/site-proxy/ingress-dev.yaml
  unset RESOURCES

  minikube service mongo --url
  minikube service redis --url

Set up tng-workspace:

  if ! systemctl --quiet is-active openvpn-sumall.service; then
    systemctl start openvpn-sumall.service
  fi

  # `minikube status` is broken: kubernetes/minikube#2743
  # `minikube start` is not idempotent: kubernetes/minikube#2646
  ps x | grep -q [m]inikube || minikube start

  WORKSPACE=$HOME/Development/code/SumAll/k8s-workspace
  if [ ! -e $WORKSPACE ]; then
    git clone [email protected]:SumAll/k8s-workspace.git $WORKSPACE
  fi
  pushd $WORKSPACE >/dev/null

  export TNG_WORKSPACE=$HOME/Development/code/SumAll/tng-workspace
  mkdir -p $TNG_WORKSPACE

  for f in config.sh manage-services.sh setup-serviceyml-configmap.sh; do
    sed -i '1 s,#!/bin/bash,#!/usr/bin/env bash,' $f
  done

  ./manage-services.sh -c setup

  for f in config.sh manage-services.sh setup-serviceyml-configmap.sh; do
    sed -i '1 s,#!/usr/bin/env bash,#!/bin/bash,' $f
  done

  popd
  unset WORKSPACE f
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jun 23, 2018
@nebrass
Author

nebrass commented Jun 25, 2018

The issue cannot be closed, as it is still present as of now 🔢
minikube version: v0.28.0

@aelbarkani

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Jul 14, 2018
@tstromberg tstromberg added the kind/bug and area/product-excellence labels Sep 19, 2018
@tstromberg tstromberg changed the title from "Bug: minikube start is not checking if it is already running" to "minikube start does not check if it is already running" Sep 19, 2018
@tstromberg tstromberg added the good first issue label Oct 30, 2018
@ravsa

ravsa commented Nov 1, 2018

/assign @ravsa

@k8s-ci-robot
Contributor

@ravsa: GitHub didn't allow me to assign the following users: ravsa.

Note that only kubernetes members and repo collaborators can be assigned.
For more information please see the contributor guide

In response to this:

/assign @ravsa

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@tstromberg
Contributor

tstromberg commented Jan 24, 2019

There seems to be an implicit behavior expectation that a second 'minikube start' certifies that all the components are up and running, making any changes necessary to do so. I think that behavior is OK.

However, we don't hint on the console that this is the case, except by saying that we're 'restarting components'. We can do better than that, I think.

That said, this bug is obsolete - minikube start does in fact check nowadays.
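
For illustration, a hedged sketch of the kind of console hint described above, re-using the libmachine names from the earlier sketch (the message text and call site are assumptions, not what minikube actually prints):

// Hypothetical: make the "re-use an existing VM" case explicit on the
// console instead of printing "Starting VM..." unconditionally.
if st, err := h.Driver.GetState(); err == nil && st == state.Running {
	fmt.Println("Re-using the running 'minikube' VM; verifying cluster components...")
} else {
	fmt.Println("Starting local Kubernetes cluster...")
}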

@tstromberg tstromberg added the triage/obsolete label Jan 24, 2019