Allow to use existing k8s cluster instead of minikube #442
Conversation
Thanks for doing this! One of the reasons I wrote the logic with the ClusterProvisioner interface was so we could easily sub out minikube. I am glad it is being used. I'm not going to jump on this review (unless someone wants me to). The only high-level point you may want to think through is cleanup. As it is, after bootstrap is done, the external cluster will have a bunch of leftover controllers/objects sitting in it. We will want some cleanup logic if you plan on leaving the external cluster running. For minikube, the cleanup was handled in the deletion of the minikube cluster.
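The idea above — an interface that lets an existing cluster stand in for minikube — can be sketched roughly as follows. The method and type names here are illustrative assumptions, not clusterctl's actual API; the point is that an existing-cluster implementation makes Create/Delete no-ops, which is exactly why leftover bootstrap objects need separate cleanup.

```go
package main

import "fmt"

// ClusterProvisioner is a hypothetical sketch of the interface described
// above; method names are assumptions, not the real clusterctl signatures.
type ClusterProvisioner interface {
	Create() error
	Delete() error
	GetKubeconfig() (string, error)
}

// existingCluster satisfies the interface by pointing at a kubeconfig
// instead of provisioning anything.
type existingCluster struct {
	kubeconfigPath string
}

// Create is a no-op: the cluster already exists.
func (e *existingCluster) Create() error { return nil }

// Delete is a no-op: we must never tear down a user's own cluster,
// so leftover bootstrap objects have to be cleaned up separately.
func (e *existingCluster) Delete() error { return nil }

func (e *existingCluster) GetKubeconfig() (string, error) {
	return e.kubeconfigPath, nil
}

func main() {
	var p ClusterProvisioner = &existingCluster{kubeconfigPath: "/tmp/kubeconfig"}
	kc, _ := p.GetKubeconfig()
	fmt.Println(kc)
}
```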
@alejandroEsc can you please add some notes on what, if anything, we need to run on the "external cluster"?
This is great! Just added a few notes.
clusterctl/cmd/create_cluster.go
Outdated
return err
}
} else {
exernalProvider = minikube.New(co.VmDriver)
Typo, should be externalProvider
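With the spelling fixed, the selection branch would read along these lines. This is a minimal, self-contained sketch: the provider types and constructors here are stand-ins for clusterctl's real minikube and existing-cluster providers, not the actual code.

```go
package main

import "fmt"

// provider is a stand-in for clusterctl's external cluster provisioner type.
type provider interface{ Name() string }

type minikubeProvider struct{ vmDriver string }

func (m *minikubeProvider) Name() string { return "minikube(" + m.vmDriver + ")" }

type existingProvider struct{ kubeconfig string }

func (e *existingProvider) Name() string { return "existing(" + e.kubeconfig + ")" }

// selectProvider mirrors the reviewed branch: if an existing cluster's
// kubeconfig is supplied, use it; otherwise fall back to minikube.
func selectProvider(existingKubeconfig, vmDriver string) provider {
	if existingKubeconfig != "" {
		return &existingProvider{kubeconfig: existingKubeconfig}
	}
	// correctly spelled: externalProvider, not exernalProvider
	var externalProvider provider = &minikubeProvider{vmDriver: vmDriver}
	return externalProvider
}

func main() {
	fmt.Println(selectProvider("", "kvm2").Name())
	fmt.Println(selectProvider("/path/kubeconfig", "").Name())
}
```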
clusterctl/cmd/create_cluster.go
Outdated
@@ -112,6 +124,7 @@ func init() {
createClusterCmd.Flags().BoolVarP(&co.CleanupExternalCluster, "cleanup-external-cluster", "", true, "Whether to cleanup the external cluster after bootstrap")
createClusterCmd.Flags().StringVarP(&co.VmDriver, "vm-driver", "", "", "Which vm driver to use for minikube")
createClusterCmd.Flags().StringVarP(&co.KubeconfigOutput, "kubeconfig-out", "", "kubeconfig", "Where to output the kubeconfig for the provisioned cluster")
createClusterCmd.Flags().StringVarP(&co.ExistingClusterKubeconfigPath, "existing-cluster-kubeconfig", "k", "", "path to an existing cluster's kubeconfig (intead of using minikube)")
nit: Can you use uppercase for "path" to match the other flags' help strings?
-a, --addon-components string                 A yaml file containing cluster addons to apply to the internal cluster
    --cleanup-external-cluster                Whether to cleanup the external cluster after bootstrap (default true)
-c, --cluster string                          A yaml file containing cluster object definition
-k, --existing-cluster-kubeconfig string      path to an existing cluster's kubeconfig (intead of using minikube)
Does this need to use the -k shorthand? Since it's not a required flag, it seems like we could make people use the full "existing-cluster-kubeconfig" string.
clusterctl/cmd/create_cluster.go
Outdated
@@ -112,6 +124,7 @@ func init() {
createClusterCmd.Flags().BoolVarP(&co.CleanupExternalCluster, "cleanup-external-cluster", "", true, "Whether to cleanup the external cluster after bootstrap")
createClusterCmd.Flags().StringVarP(&co.VmDriver, "vm-driver", "", "", "Which vm driver to use for minikube")
createClusterCmd.Flags().StringVarP(&co.KubeconfigOutput, "kubeconfig-out", "", "kubeconfig", "Where to output the kubeconfig for the provisioned cluster")
createClusterCmd.Flags().StringVarP(&co.ExistingClusterKubeconfigPath, "existing-cluster-kubeconfig", "k", "", "path to an existing cluster's kubeconfig (intead of using minikube)")
This is a bit of a confusing concept for the external user: it's unclear from the flag name what the existing cluster will be used for. Maybe it could be called "existing-bootstrap-cluster-kubeconfig", or the help text could be changed to "Path to an existing cluster's kubeconfig for bootstrapping (instead of using minikube)".
@jessicaochen would it be ok to review the cleanup process in another PR? It's definitely something I'm considering, including maybe adding an additional function to the interface that would do this. But I want to be careful first and observe which items are safe to clean up and which are not.
@dims Do you mean what the minimum requirements are for the external cluster? Where would you like to see the notes? Also, would it be ok to include that in another PR?
@jessicaochen @dims @mkjelland thank you so much for taking your time to review! I have made the changes requested for this PR at the moment. I renamed things a bit for clarity; let me know if that's ok or not.
@alejandroEsc - It is fine to do cleanup in another PR. If you have not already, create an issue to track this cleanup work and add a comment in the code referencing the issue. That way folks looking at the code will understand the situation till the cleanup is done. If you want my 2 cents on how the cleanup should work: it should probably live in the ClusterDeployer. Since that class creates all the objects in the external cluster, it should have enough context to (optionally) clean them as well.
@jessicaochen agreed on all accounts. Issue: #448
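The cleanup approach suggested above — having the ClusterDeployer track what it applies to the external cluster so it can later delete those objects — could be sketched as below. All names here are illustrative assumptions; this is not clusterctl's actual ClusterDeployer, just the shape of the idea.

```go
package main

import "fmt"

// object is a minimal stand-in for a Kubernetes resource reference.
type object struct{ kind, name string }

// ClusterDeployer (hypothetical sketch) records every object it applies
// to the external bootstrap cluster so it can clean them up afterwards.
type ClusterDeployer struct {
	created []object
}

// apply records an object as it is created in the external cluster.
func (d *ClusterDeployer) apply(o object) {
	d.created = append(d.created, o)
}

// CleanupExternalCluster deletes the recorded objects in reverse creation
// order, so dependents go before the things they depend on.
func (d *ClusterDeployer) CleanupExternalCluster() []string {
	var deleted []string
	for i := len(d.created) - 1; i >= 0; i-- {
		o := d.created[i]
		deleted = append(deleted, o.kind+"/"+o.name)
	}
	d.created = nil
	return deleted
}

func main() {
	d := &ClusterDeployer{}
	d.apply(object{"Deployment", "clusterapi-controllers"})
	d.apply(object{"Cluster", "test-cluster"})
	for _, name := range d.CleanupExternalCluster() {
		fmt.Println("deleting", name)
	}
}
```

Keeping the bookkeeping inside the deployer matches the reviewer's point: the component that created the objects has enough context to remove them, and the existing `--cleanup-external-cluster` flag could gate whether this runs.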
…to be addressed soon.
lgtm, can someone else read through this and add the lgtm label?
/lgtm
@jessicaochen @dims can I get this merged? Or is there something missing?
The box says you still need an approval from at least one of your reviewers before it will automerge.
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alejandroEsc, mkjelland

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing
Right, I am not an approver… ah, looks like @mkjelland took care of it.
What this PR does / why we need it:
Allows us to use an existing cluster with a valid kubeconfig to launch resources, instead of using minikube.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
Special notes for your reviewer:
Release note: