minikube start does not check if it is already running #2646
My bad, I thought there was something wrong with the detection. But I see that there is nothing in start that is checking the status...
Yep 😭 The function already exists in minishift:

    func ensureNotRunning(client *libmachine.Client, machineName string) {
        if !cmdUtil.VMExists(client, machineName) {
            return
        }

        hostVm, err := client.Load(constants.MachineName)
        if err != nil {
            atexit.ExitWithMessage(1, err.Error())
        }

        if cmdUtil.IsHostRunning(hostVm.Driver) {
            atexit.ExitWithMessage(0, fmt.Sprintf("The '%s' VM is already running.", machineName))
        }
    }

I don't have any Go skills 😢 otherwise, I would do the refactor to solve the issue.
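For illustration, a minimal sketch of what an equivalent guard could look like in minikube against libmachine's API interface; the function name, store paths, and error handling here are assumptions, not minikube's actual code:

    // Hypothetical sketch, not minikube's actual implementation: exit early
    // from `minikube start` when the target VM already exists and is running.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"

        "github.com/docker/machine/libmachine"
        "github.com/docker/machine/libmachine/state"
    )

    // ensureNotRunning exits early when the named VM already exists and is running.
    func ensureNotRunning(api libmachine.API, machineName string) {
        // If the machine has never been created, there is nothing to guard against.
        exists, err := api.Exists(machineName)
        if err != nil || !exists {
            return
        }

        h, err := api.Load(machineName)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }

        // Ask the driver for the current VM state and bail out if it is running.
        if s, err := h.Driver.GetState(); err == nil && s == state.Running {
            fmt.Printf("The '%s' VM is already running.\n", machineName)
            os.Exit(0)
        }
    }

    func main() {
        // Assumed store layout: minikube keeps its machines under ~/.minikube.
        base := filepath.Join(os.Getenv("HOME"), ".minikube")
        api := libmachine.NewClient(base, filepath.Join(base, "certs"))
        defer api.Close()
        ensureNotRunning(api, "minikube")
    }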
I don't think any of those utils (the minishift cmdUtil helpers) exist in minikube, though. The problem with the detection is that it doesn't remember which bootstrapper was used...
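As a sketch of the "remember the bootstrapper" idea: persist the chosen bootstrapper next to the machine's files so a later status check can read it back. The file layout and helper names below are assumptions for illustration, not minikube's actual implementation:

    // Hypothetical sketch: record which bootstrapper started the cluster so a
    // later `minikube status` can use the right detection logic.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // saveBootstrapper records which bootstrapper started the given machine.
    func saveBootstrapper(baseDir, machineName, bootstrapper string) error {
        dir := filepath.Join(baseDir, "machines", machineName)
        if err := os.MkdirAll(dir, 0o755); err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(dir, "bootstrapper"), []byte(bootstrapper+"\n"), 0o644)
    }

    // loadBootstrapper reads it back; an empty string means unknown, e.g. a
    // cluster created before the marker file existed.
    func loadBootstrapper(baseDir, machineName string) string {
        b, err := os.ReadFile(filepath.Join(baseDir, "machines", machineName, "bootstrapper"))
        if err != nil {
            return ""
        }
        return strings.TrimSpace(string(b))
    }

    func main() {
        base := filepath.Join(os.Getenv("HOME"), ".minikube") // assumed location
        if err := saveBootstrapper(base, "minikube", "kubeadm"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("bootstrapper:", loadBootstrapper(base, "minikube"))
    }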
Set the default kubernetes version in minikube:

    minikube config set kubernetes-version v1.10.3

Configure clusters and credentials:

    PKI=$HOME/Development/code/SumAll/pki
    CLUSTERS=( tng-stage tng-prod )
    USERNAME=ibrennan
    mkdir -p $PKI
    for CLUSTER in ${CLUSTERS[@]}; do
        CRT=${PKI}/${CLUSTER}-ca.crt
        lpass show --notes "SumAll kubernetes ${CLUSTER} client ca.crt" > $CRT
        LPASS=$(lpass show --notes "SumAll kubernetes ${CLUSTER} client config ${USERNAME}")
        SERVER=$(grep -oP '^server: \K\S+' <<< $LPASS)
        TOKEN=$(grep -oP '^token: \K\S+' <<< $LPASS)
        kubectl config set-cluster ${CLUSTER} --server=${SERVER} --certificate-authority=${CRT} --embed-certs=true
        kubectl config set-credentials ${USERNAME}-${CLUSTER} --token=${TOKEN}
        kubectl config set-context ${CLUSTER} --cluster=${CLUSTER} --user=${USERNAME}-${CLUSTER}
    done
    unset PKI CLUSTERS CLUSTER USERNAME CRT LPASS SERVER TOKEN

Initialize cluster resources:

    RESOURCES=$HOME/Development/code/SumAll/k8s-cluster-resources
    if [ ! -e $RESOURCES ]; then
        git clone git@github.com:SumAll/k8s-cluster-resources.git $RESOURCES
    fi
    kubectl --context=minikube create -f $RESOURCES/ms-config-dev.yml
    kubectl --context=minikube create -f $RESOURCES/k8s-generic-pod-user-dev.yml
    kubectl --context=minikube create -f $RESOURCES/mongo/mongo-dev.yml
    kubectl --context=minikube create -f $RESOURCES/redis/redis-dev.yml
    kubectl --context=minikube create -f $RESOURCES/site-proxy/ingress-dev.yaml
    unset RESOURCES

    minikube service mongo --url
    minikube service redis --url

Set up tng-workspace:

    if ! systemctl --quiet is-active openvpn-sumall.service; then
        systemctl start openvpn-sumall.service
    fi

    # `minikube status` is broken: kubernetes/minikube#2743
    # `minikube start` is not idempotent: kubernetes/minikube#2646
    ps x | grep -q [m]inikube || minikube start

    WORKSPACE=$HOME/Development/code/SumAll/k8s-workspace
    if [ ! -e $WORKSPACE ]; then
        git clone git@github.com:SumAll/k8s-workspace.git $WORKSPACE
    fi
    pushd $WORKSPACE >/dev/null
    export TNG_WORKSPACE=$HOME/Development/code/SumAll/tng-workspace
    mkdir -p $TNG_WORKSPACE
    for f in config.sh manage-services.sh setup-serviceyml-configmap.sh; do
        sed -i '1 s,#!/bin/bash,#!/usr/bin/env bash,' $f
    done
    ./manage-services.sh -c setup
    for f in config.sh manage-services.sh setup-serviceyml-configmap.sh; do
        sed -i '1 s,#!/usr/bin/env bash,#!/bin/bash,' $f
    done
    popd
    unset WORKSPACE f
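One note on the workaround above: the `[m]inikube` pattern keeps grep from matching its own entry in the `ps` output, so the guard doesn't falsely conclude that minikube is already running.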
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
The issue cannot be closed, as it is still active 🔢
/remove-lifecycle stale
/assign @ravsa
@ravsa: GitHub didn't allow me to assign the following users: ravsa. Note that only kubernetes members and repo collaborators can be assigned. In response to this:

/assign @ravsa

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
There seems to be an implicit expectation that a second 'minikube start' certifies that all the components are up and running, making any changes necessary to do so. I think that behavior is OK. However, we don't hint on the console that this is the case, except by saying that we're 'restarting components'. We can do better than that, I think.

That said, this bug is obsolete: minikube start does in fact check nowadays.
Environment:

What happened:
When I start minikube with minikube start, I got the usual startup output, and minikube is started and I can use it perfectly. But even when I repeat minikube start, I get the same startup message.

What you expected to happen:
It's expected to get a message that minikube is already running. Or something else?!

How to reproduce it (as minimally and precisely as possible):
Normal installation, no special configuration or tweaks applied

Output of minikube logs (if applicable):
N/A

Anything else we need to know:
N/A