
Support multiple kubeconfigs for a provider #3661

Open
richardcase opened this issue Sep 18, 2020 · 30 comments
Labels
  • area/clusterctl: Issues or PRs related to clusterctl
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
  • triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@richardcase
Member

richardcase commented Sep 18, 2020

User Story

As a Cluster API Provider developer, I would like Cluster API to support multiple kubeconfigs for situations where the kubeconfig used by CAPI needs to be different from the kubeconfig that a user would use to connect to the created Kubernetes cluster.

Detailed Description

The implementation of EKS support in CAPA generates 2 kubeconfig files. We did this because the kubeconfig for EKS usually uses the aws-iam-authenticator or aws binary to generate the authentication token. This caused issues with CAPI, as those binaries don't exist in the CAPI images (and shouldn't exist in the images). So we decided to generate 2 different kubeconfigs: one for use by CAPI, which has a short-lived token (max 15 minutes validity) and adheres to the current kubeconfig secret naming convention, and a second kubeconfig that is generated and stored in the management cluster for use by a user; this one uses aws-iam-authenticator or the aws CLI, and we add a -user to the name of the secret.
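As an illustration only, a minimal sketch of the naming convention described above. The exact placement of the -user suffix follows the CAPA EKS docs and is an assumption here, not a CAPI-wide contract:

```go
// Sketch only: illustrates the two-secret naming convention described above.
// The "-user" suffix placement is an assumption based on the CAPA EKS docs.
package kubeconfig

import "fmt"

// SecretName returns the name of the kubeconfig secret consumed by CAPI
// controllers (the current provider contract).
func SecretName(clusterName string) string {
	return fmt.Sprintf("%s-kubeconfig", clusterName)
}

// UserSecretName returns the name of the user-facing kubeconfig secret
// (hypothetical convention: the one CAPA adds "-user" to).
func UserSecretName(clusterName string) string {
	return fmt.Sprintf("%s-user-kubeconfig", clusterName)
}
```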

Currently this difference is documented in the EKS specific documentation as the feature is experimental.

It would be good to allow this distinction in CAPI and define a naming convention etc. And clusterctl get kubeconfig should probably return the user-focused kubeconfig by default, with an option to return the one used by CAPI. Both configs could also be the same.

Anything else you would like to add:

A feature request has been raised about this in CAPA.

/kind feature

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Sep 18, 2020
@fabriziopandini
Member

I see some potential problems with having a short-lived kubeconfig for CAPI, because -kubeconfig is part of the contract and it is used in many places to access the workload cluster, e.g. the cluster cache tracker used by many controllers -> the Machine controller when triggering drain, MachineHealthChecks...

WRT an additional kubeconfig for clusterctl get kubeconfig only, +1 (default to -user, fall back on the CAPI default, and possibly a new flag to allow users to select other names)
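A minimal sketch of that lookup order, assuming the user-facing secret is named with a -user suffix as in the CAPA example above (the secret name is an assumption, not an agreed contract):

```go
// Sketch under assumptions: try the user-facing secret first, fall back to
// the CAPI contract secret, as suggested above.
package kubeconfig

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getKubeconfigSecret returns the user-facing kubeconfig secret if it exists,
// otherwise the CAPI-internal one.
func getKubeconfigSecret(ctx context.Context, c client.Client, namespace, clusterName string) (*corev1.Secret, error) {
	secret := &corev1.Secret{}

	// Hypothetical user-facing secret name.
	userKey := client.ObjectKey{Namespace: namespace, Name: clusterName + "-user-kubeconfig"}
	if err := c.Get(ctx, userKey, secret); err == nil {
		return secret, nil
	} else if !apierrors.IsNotFound(err) {
		return nil, err
	}

	// Fall back to the contract secret used by CAPI controllers today.
	defaultKey := client.ObjectKey{Namespace: namespace, Name: clusterName + "-kubeconfig"}
	if err := c.Get(ctx, defaultKey, secret); err != nil {
		return nil, fmt.Errorf("no kubeconfig secret found for cluster %s: %w", clusterName, err)
	}
	return secret, nil
}
```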

@fabriziopandini
Member

/milestone Next
as per kubernetes-sigs/cluster-api-provider-aws#1951 (comment)

@k8s-ci-robot k8s-ci-robot added this to the Next milestone Sep 18, 2020
@richardcase
Member Author

For clarification, the "non-user" kubeconfig that is generated for use by CAPI for an EKS cluster contains a token that is only valid for 15 minutes. However, the kubeconfig is regenerated every sync-period, so there is always a valid kubeconfig available for CAPI.
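For illustration, a rough sketch of that "regenerate before the token expires" pattern. The reconciler and helper names are hypothetical and this uses an explicit requeue rather than the controller sync-period, so it is not CAPA's actual code:

```go
// Sketch under assumptions: keep the CAPI-facing kubeconfig fresh by
// rewriting it well before the short-lived token expires.
package controllers

import (
	"context"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
)

// tokenTTL mirrors the ~15 minute validity of the EKS token mentioned above.
const tokenTTL = 15 * time.Minute

// ClusterReconciler is a placeholder for a provider's cluster reconciler.
type ClusterReconciler struct{}

// regenerateKubeconfigSecret is a hypothetical helper that would rewrite the
// <cluster-name>-kubeconfig secret with a freshly generated token.
func (r *ClusterReconciler) regenerateKubeconfigSecret(ctx context.Context) error {
	return nil
}

// reconcileKubeconfig regenerates the CAPI-facing kubeconfig and asks to be
// requeued before the embedded token expires, so a valid kubeconfig is
// always available to CAPI controllers.
func (r *ClusterReconciler) reconcileKubeconfig(ctx context.Context) (ctrl.Result, error) {
	if err := r.regenerateKubeconfigSecret(ctx); err != nil {
		return ctrl.Result{}, err
	}
	// Requeue at two-thirds of the token lifetime to leave a safety margin.
	return ctrl.Result{RequeueAfter: tokenTTL * 2 / 3}, nil
}
```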

@vincepri
Member

vincepri commented Oct 8, 2020

/milestone v0.4.0

@k8s-ci-robot k8s-ci-robot modified the milestones: Next, v0.4.0 Oct 8, 2020
@richardcase
Member Author

I can look at this.

/assign

@richardcase
Member Author

@fabriziopandini @vincepri - would this require a CAEP, or is it ok to just go ahead and make the clusterctl change?

@fabriziopandini
Member

fabriziopandini commented Dec 1, 2020

The most important thing to me is to document this new feature

If we address the above points, IMO a CAEP is not necessary.

@richardcase
Member Author

Thanks @fabriziopandini - I will make sure the documentation is updated as part of the change.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 1, 2021
@richardcase
Member Author

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 1, 2021
@richardcase
Member Author

I haven't progressed this, so unassigning and opening it up for others to help.

/unassign
/area clusterctl
/help
/good-first-issue

@k8s-ci-robot
Contributor

@richardcase:
This request has been marked as suitable for new contributors.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.

In response to this:

I haven't progressed this, so unassigning and opening it up for others to help.

/unassign
/area clusterctl
/help
/good-first-issue

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added area/clusterctl Issues or PRs related to clusterctl good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Oct 8, 2021
@vincepri
Member

/milestone v1.0

@k8s-ci-robot k8s-ci-robot modified the milestones: v0.4, v1.0 Oct 19, 2021
@vincepri vincepri removed this from the v1.0 milestone Oct 22, 2021
@richardcase
Member Author

@valaparthvi - this is an updated link: https://cluster-api-aws.sigs.k8s.io/topics/eks/creating-a-cluster.html#kubeconfig.

If it were me working on it, I would do something like this:

  1. Create an EKS based cluster using CAPI/CAPA
  2. Debug through the clusterctl get kubeconfig command, starting here to understand the current flow.

The change will probably require:

  • A new flag on the command so the user can indicate which "type" of kubeconfig to get. Perhaps it has 2 options.
  • Update the logic in GetKubeconfig so that it fetches the secret whose name depends on the type of kubeconfig requested.
  • Update the docs for users and providers.

I would say we need to make sure that by default the command behaves as it does currently (i.e. gets the <cluster-name>-kubeconfig secret); a rough sketch of this is below.
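To make the suggestion concrete, here is a sketch of what the flag and secret selection could look like. The --user flag name, the options struct, and the command wiring are all hypothetical, not clusterctl's actual code; the default keeps today's behaviour:

```go
// Sketch only: hypothetical clusterctl-style command showing how a flag
// could select between the CAPI and user-facing kubeconfig secrets.
package cmd

import "github.com/spf13/cobra"

type getKubeconfigOptions struct {
	namespace string
	user      bool // hypothetical: return the user-facing kubeconfig instead
}

var gkOpts getKubeconfigOptions

var getKubeconfigCmd = &cobra.Command{
	Use:   "kubeconfig CLUSTER_NAME",
	Short: "Gets the kubeconfig file for accessing a workload cluster",
	Args:  cobra.ExactArgs(1),
	RunE: func(cmd *cobra.Command, args []string) error {
		clusterName := args[0]
		secretName := clusterName + "-kubeconfig" // current default behaviour
		if gkOpts.user {
			secretName = clusterName + "-user-kubeconfig" // hypothetical convention
		}
		_ = secretName // the real implementation would fetch and print the secret data
		return nil
	},
}

func init() {
	getKubeconfigCmd.Flags().StringVarP(&gkOpts.namespace, "namespace", "n", "", "Namespace of the workload cluster")
	getKubeconfigCmd.Flags().BoolVar(&gkOpts.user, "user", false, "Return the user-facing kubeconfig (hypothetical flag)")
}
```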

@valaparthvi
Contributor

Awesome! This is exactly what I was looking for. Thanks 🚀

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor. labels Jun 22, 2022
@richardcase
Member Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 23, 2022
@richardcase
Member Author

/lifecycle active

@k8s-ci-robot k8s-ci-robot added the lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor. label Jun 23, 2022
@fabriziopandini fabriziopandini added the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Jul 29, 2022
@fabriziopandini fabriziopandini removed this from the v1.2 milestone Jul 29, 2022
@fabriziopandini fabriziopandini removed the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Jul 29, 2022
@fabriziopandini
Member

/triage accepted

Copying here a comment from the PR for better visibility

Discussed in the Aug 3rd office hours:
We are open to ideas on how to make the UX nice, but we cannot compromise on security concerns.
Also, it would be great if we can discuss this in terms of a contract that all the control plane providers should abide by.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor. labels Nov 3, 2022
@fabriziopandini
Member

/lifecycle frozen
@richardcase @jackfrancis this is something to be considered for the managed Kubernetes work

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 3, 2022
@sbueringer sbueringer removed the good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. label Jan 10, 2024
@sbueringer
Member

Removed good first issue, because I don't think this issue is actually a good first issue

@fabriziopandini
Member

/priority important-longterm

@k8s-ci-robot k8s-ci-robot added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Apr 11, 2024
@valaparthvi valaparthvi removed their assignment Apr 12, 2024
@fabriziopandini fabriziopandini removed the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Apr 16, 2024