
Stop registering Kube cluster named after Teleport cluster #6786

Merged: 1 commit into master from nklaassen/teleport-kube-cluster on May 26, 2021

Conversation

nklaassen (Contributor):

Issue #6226

There are essentially 3 ways a Teleport Kubernetes service can be configured inside a Kubernetes cluster, listed below.

This PR only changes the first case, where the new-style Proxy Service is used (kube_listen_addr set, no kubernetes: subsection) inside a Kubernetes cluster. Currently, in this case, the Proxy Service publishes a Kubernetes cluster with the same name as the Teleport cluster. This is not desired: the Kubernetes cluster name should only be published by the kubernetes_service.

1. New Proxy Service style with Kubernetes Service

auth_service:
  cluster_name: peach

proxy_service:
  kube_listen_addr: 0.0.0.0:3026

kubernetes_service:
  enabled: yes
  listen_addr: 0.0.0.0:3027
  kube_cluster_name: banana

Current behaviour:

$ tsh kube ls
peach
banana

New behaviour:

$ tsh kube ls
banana

2. Legacy Proxy Service style without Kubernetes Service

auth_service:
  cluster_name: peach

proxy_service:
  kubernetes:
    enabled: yes
    listen_addr: 0.0.0.0:3026

Current behaviour (unchanged):

$ tsh kube ls
peach

3. Legacy Proxy Service style with Kubernetes Service

auth_service:
  cluster_name: peach

proxy_service:
  kubernetes:
    enabled: yes
    listen_addr: 0.0.0.0:3026

kubernetes_service:
  enabled: yes
  listen_addr: 0.0.0.0:3027
  kube_cluster_name: banana

Current behaviour (unchanged):

$ tsh kube ls
peach
banana

Backwards Compatibility

Anyone currently using the new-style Proxy Service with kube_listen_addr set inside a Kubernetes cluster will see the listed Kubernetes cluster named after the Teleport cluster "disappear". This may break automation that relies on that "cluster"; a sketch of the affected configuration follows.
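For concreteness, here is a minimal sketch of the affected configuration, reusing cluster_name: peach from the examples above (the outputs are illustrative, not captured from a test run):

auth_service:
  cluster_name: peach

proxy_service:
  kube_listen_addr: 0.0.0.0:3026

Before this change:

$ tsh kube ls
peach

After this change, tsh kube ls returns an empty list until a kubernetes_service is registered. One likely mitigation, assuming kube_cluster_name behaves as in case 1 above, is to run a kubernetes_service with kube_cluster_name: peach so that automation keyed to the old name keeps working.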

Testing

  • configuration_test.go and auth_test.go updated
  • manually tested each configuration on a local Kubernetes cluster

@nklaassen requested review from @r0mant, @awly, @tcsc, and @Joerger on May 8, 2021
Review threads on lib/kube/proxy/forwarder.go (outdated, resolved)
@tcsc (Contributor) left a comment:


Looks OK to me.

Review threads on lib/kube/proxy/auth.go (outdated, resolved) and lib/kube/proxy/forwarder.go (resolved)
@russjones added the ux label on May 12, 2021
@nklaassen (Contributor, Author):

Tagging @klizhentas for UX approval

@klizhentas (Contributor):

@nklaassen LGTM after you update CHANGELOG.md with a breaking change notification similar to others (see examples in changelog) (so we don't forget)

@klizhentas (Contributor) left a comment:

LGTM assuming changelog update

@nklaassen (Contributor, Author):

> @nklaassen LGTM after you update CHANGELOG.md with a breaking change notification similar to others (see examples in changelog) (so we don't forget)

Thanks, I started a placeholder 7.0 section in CHANGELOG.md with a Breaking Changes section including this change.
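For illustration, the entry might read along these lines (hypothetical wording based on this PR's description, not the actual CHANGELOG.md text):

## 7.0

### Breaking Changes

* Teleport proxies configured with kube_listen_addr no longer register a Kubernetes cluster named after the Teleport cluster. To keep publishing a Kubernetes cluster, run a kubernetes_service with an explicit kube_cluster_name.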

@nklaassen force-pushed the nklaassen/teleport-kube-cluster branch from 0644fbd to 5171690 on May 25, 2021
@nklaassen merged commit f268ba1 into master on May 26, 2021
@nklaassen deleted the nklaassen/teleport-kube-cluster branch on May 26, 2021