
Defer dashboard deployment until "minikube dashboard" is executed #3485

Merged: 6 commits, Jan 11, 2019

Conversation

@tstromberg (Contributor) commented Dec 21, 2018

This simplifies cluster startup, and saves about 20MB of resident memory (1% of our default VM size). More importantly, it avoids starting unused services, downloading unnecessary files, and cluttering the logs with possibly unimportant error messages relating to them.

This PR doesn't substantially change behavior: minikube currently starts the dashboard pod at cluster startup, but only blocks until it's healthy when the "dashboard" command is executed. This PR simply defers the pod startup as well as the health check.

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Dec 21, 2018
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: tstromberg

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Dec 21, 2018
@balopat (Contributor) left a comment


LGTM with UX nits.

@@ -65,19 +65,30 @@ var dashboardCmd = &cobra.Command{
}
cluster.EnsureMinikubeRunningOrExit(api, 1)

fmt.Fprintln(os.Stderr, "Enabling dashboard ...")
// Enable the dashboard add-on
err = configcmd.Set("dashboard", "true")
Contributor:

Is this idempotent? If yes, then we don't need "Enabling dashboard ..." the second time (I think addons enable is idempotent); it would be nice to check for enablement.
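For illustration, one way to make the enable step visibly idempotent, as the reviewer suggests: report whether the write actually changed anything, and only announce the action when it did. This uses a hypothetical in-memory addon store in place of minikube's real configcmd.Set:

```go
package main

import "fmt"

// Hypothetical in-memory addon config standing in for minikube's
// config store; the real code calls configcmd.Set("dashboard", "true").
var addons = map[string]string{}

// setAddon is idempotent: writing the same value twice is a no-op,
// and the return value says whether the state actually changed.
func setAddon(name, value string) (changed bool) {
	if addons[name] == value {
		return false
	}
	addons[name] = value
	return true
}

func main() {
	// Running the command twice only prints the message once.
	for i := 0; i < 2; i++ {
		if setAddon("dashboard", "true") {
			fmt.Println("Enabling dashboard ...")
		}
	}
}
```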

ns := "kube-system"
svc := "kubernetes-dashboard"
if err = util.RetryAfter(30, func() error { return service.CheckService(ns, svc) }, 1*time.Second); err != nil {
fmt.Fprintln(os.Stderr, "Verifying dashboard health ...")
if err = util.RetryAfter(180, func() error { return service.CheckService(ns, svc) }, 1*time.Second); err != nil {
Contributor:

It would be interesting to keep a "watch" on the dashboard pod, plus the result of the service status on a separate line? But this idea might be better off in a separate PR. It's just that 3 minutes of potential waiting is definitely long. Out of those, 1 minute is almost guaranteed to be just the addon-manager kicking in...

tstromberg (author):

A good idea, but due to the complexity I will leave this for another PR. This step generally takes only a second or two, but to avoid flakes in places with poor connectivity, I wanted to make sure we wait an extended period for the pod to come up.

fmt.Fprintf(os.Stderr, "%s:%s is not running: %v\n", ns, svc, err)
os.Exit(1)
}

fmt.Fprintln(os.Stderr, "Launching proxy ...")
Contributor:

Suggested change
fmt.Fprintln(os.Stderr, "Launching proxy ...")
fmt.Fprintln(os.Stderr, "Launching dashboard proxy ...")

p, hostPort, err := kubectlProxy()
if err != nil {
glog.Fatalf("kubectl proxy: %v", err)
}
url := dashboardURL(hostPort, ns, svc)

fmt.Fprintln(os.Stderr, "Verifying proxy health ...")
Contributor:

Suggested change
fmt.Fprintln(os.Stderr, "Verifying proxy health ...")
fmt.Fprintln(os.Stderr, "Verifying dashboard proxy health ...")
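For context on the proxy step in the diff above: kubectl proxy exposes cluster services under a well-known apiserver path, which is presumably what dashboardURL assembles from the host port, namespace, and service name. A sketch assuming the generic Kubernetes service-proxy scheme (the real minikube helper may differ, e.g. by including a scheme/port segment such as http:kubernetes-dashboard:):

```go
package main

import "fmt"

// dashboardURL builds the generic kubectl-proxy URL for a service.
// Illustrative sketch only; not the actual minikube implementation.
func dashboardURL(port int, ns, svc string) string {
	return fmt.Sprintf("http://127.0.0.1:%d/api/v1/namespaces/%s/services/%s/proxy/",
		port, ns, svc)
}

func main() {
	// With the values used in this PR's dashboard command:
	fmt.Println(dashboardURL(8001, "kube-system", "kubernetes-dashboard"))
	// → http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
}
```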

@afbjorklund (Collaborator) commented:

A lot of people loved having features like the cache and the dashboard available by default, but I guess if it is clearly documented how to turn them on again, they can still do it on the command line (or via auto-enable). I guess it was features like heapster that made it too heavy; I don't recall the dashboard itself being too bad? Especially not compared to the boot time and resource consumption of the VM...

@tstromberg tstromberg changed the title Disable dashboard by default Defer dashboard deployment until "minikube dashboard" is executed Jan 4, 2019
@tstromberg (author):

@afbjorklund - Thanks for raising the point that the PR title/description was not well crafted. This doesn't effectively change the behavior for users: the dashboard command will still work just as it has. It simply delays the dashboard pod startup until the user first wants to use it.

@tstromberg (author):

@minikube-bot OK to test

@tstromberg tstromberg merged commit d69fb28 into kubernetes:master Jan 11, 2019
@tstromberg tstromberg deleted the no-dashboard branch January 11, 2019 18:33
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/M Denotes a PR that changes 30-99 lines, ignoring generated files.
4 participants