Dashboard does not launch with different service cluster IP range #1536

Closed
basilfx opened this issue May 29, 2017 · 5 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


basilfx commented May 29, 2017

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Minikube version (use minikube version): v0.19.0

Environment:

  • OS (e.g. from /etc/os-release): Ubuntu 17.04
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): VirtualBox
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): v0.18.0
  • Install tools: None
  • Others: None

What happened: I tried to change the service cluster IP range (which works) using the --extra-config command-line option suggested here. However, with a clean setup, the dashboard does not launch:

$ minikube dashboard
Waiting, endpoint for service is not ready yet...
...
...
Waiting, endpoint for service is not ready yet...
Could not find finalized endpoint being pointed to by kubernetes-dashboard: Temporary Error: No endpoints for service are ready yet
Temporary Error: No endpoints for service are ready yet
...
...
Temporary Error: No endpoints for service are ready yet
Error validating service: Error getting service kubernetes-dashboard: Get https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kubernetes-dashboard: dial tcp 192.168.99.100:8443: getsockopt: connection refused

Also, running kubectl proxy and navigating to http://localhost:8001/ui shows the following:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
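For context, this Status object is the apiserver's standard "no ready endpoints" response: the API server itself is reachable through the proxy, but no running pod backs the kubernetes-dashboard service. A small sketch (Python 3, stdlib only) of how one might detect this case when probing the proxy programmatically; the JSON literal is copied from the response above:

```python
import json

# Status object returned by the apiserver proxy, copied from above
response_body = """
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \\"kubernetes-dashboard\\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
"""

status = json.loads(response_body)

# A 503 / ServiceUnavailable Status means the apiserver is up but the
# service has no ready endpoints, i.e. the dashboard pod never became ready.
no_ready_endpoints = (
    status.get("kind") == "Status"
    and status.get("code") == 503
    and status.get("reason") == "ServiceUnavailable"
)
print(no_ready_endpoints)
```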

What you expected to happen:

The dashboard should open.

How to reproduce it (as minimally and precisely as possible):

minikube stop && minikube delete && minikube start --extra-config=apiserver.ServiceClusterIPRange=172.66.0.0/24

When ready, run minikube dashboard.
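One observation worth sanity-checking before passing a range to --extra-config: 172.66.0.0/24 is not RFC 1918 private address space (the private 172.x block is 172.16.0.0/12, i.e. 172.16–172.31), though whether that is related to this bug is unverified. A sketch using Python 3's stdlib ipaddress module; the host-only network below is an assumption inferred from the 192.168.99.100 address in the error output:

```python
import ipaddress

# Sketch: sanity-check a candidate ServiceClusterIPRange before using it.
# The VirtualBox host-only network (192.168.99.0/24) is an assumption,
# inferred from the 192.168.99.100 address in the error output above.
def check_service_cidr(cidr, host_networks=("192.168.99.0/24",)):
    """Return a list of warnings for a candidate service CIDR."""
    net = ipaddress.ip_network(cidr)
    warnings = []
    if not net.is_private:
        warnings.append(f"{cidr} is outside the RFC 1918 private ranges")
    for host in host_networks:
        if net.overlaps(ipaddress.ip_network(host)):
            warnings.append(f"{cidr} overlaps host network {host}")
    return warnings

print(check_service_cidr("172.66.0.0/24"))
```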

Anything else we need to know:

It does work when I create a clean cluster using only minikube start.

@aaron-prindle aaron-prindle added the kind/bug Categorizes issue or PR as related to a bug. label May 30, 2017

r2d4 commented Jun 8, 2017

Can you run minikube logs and ensure that ServiceClusterIPRange is actually getting set?


basilfx commented Jun 9, 2017

Got the following (using v0.19.1):

basilfx:~/ $ minikube logs | grep ServiceClusterIPRange
Jun 09 11:41:23 minikube localkube[3358]: I0609 11:41:23.345293    3358 localkube.go:119] Setting ServiceClusterIPRange to 172.66.0.0/24 on apiserver.
Jun 09 11:41:23 minikube localkube[3358]: I0609 11:41:23.345465    3358 localkube.go:119] Setting ServiceClusterIPRange to 172.66.0.0/24 on apiserver.

basilfx:~/ $ minikube dashboard                   
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
^C

basilfx:~/ $ kubectl get services --all-namespaces
NAMESPACE     NAME                   CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
default       kubernetes             172.66.0.1     <none>        443/TCP        11m
kube-system   kubernetes-dashboard   172.66.0.113   <nodes>       80:30000/TCP   11m

basilfx:~/ $ minikube version
minikube version: v0.19.1
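The `kubectl get services` output above shows the CLUSTER-IPs were in fact allocated from the configured range, so the extra-config did take effect; only the dashboard endpoints are missing. A quick check of that (Python 3 stdlib, CLUSTER-IP values copied from the table above):

```python
import ipaddress

# Range passed via --extra-config, and CLUSTER-IPs from `kubectl get services`
service_range = ipaddress.ip_network("172.66.0.0/24")
cluster_ips = ["172.66.0.1", "172.66.0.113"]

# Confirms both services were allocated from the configured range
in_range = all(ipaddress.ip_address(ip) in service_range for ip in cluster_ips)
print(in_range)
```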

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 26, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 25, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close
