
kubectl config Error from server (BadRequest) on leaf K8s #3716

Closed
benarent opened this issue May 14, 2020 · 5 comments
Labels
bug, c-sn Internal Customer Reference, kubernetes-access

Comments

@benarent
Contributor

Description

Created on behalf of a customer.

What happened:
I'm seeing some weird behavior in 4.2.9 with kubectl config, but I'm testing with 4.2.3 servers, so I wonder if the versions are not 100% compatible.

I do the following:

  • log out with tsh logout
  • log in to a "leaf" cluster with tsh login sfffff-dev
  • ~/.kube/config looks fine after that, but anything I do with kubectl fails with Error from server (BadRequest): Unable to list "/v1, Resource=pods": the server rejected our request for an unknown reason (get pods) or Error from server (BadRequest): the server rejected our request for an unknown reason
  • the root cluster doesn't show any requests to K8s in its logs

If I log in to another cluster after that with tsh login sfffff-azdev, everything begins to work as expected: I can see requests in the logs and I can switch between contexts.
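
For reference, a minimal sketch of the failing sequence using the cluster names above (the proxy address and any extra login flags are omitted and will differ per environment):

tsh logout
tsh login sfffff-dev     # log in directly to the leaf cluster
kubectl get pods         # fails with Error from server (BadRequest)

tsh login sfffff-azdev   # log in to a different leaf cluster
kubectl get pods         # now works; switching contexts also works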

When I'm observing the problem, I can confirm that the server address in the kubeconfig is correct: server: https://tproxy.sffff-teleport.aws.reg.RETRACTED.com:3026
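
A quick way to double-check which API server address the active kubeconfig context points at (plain kubectl, nothing Teleport-specific assumed):

kubectl config current-context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'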

➜  ~ kubectl version
Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.10-dispatcher", GitCommit:"f5757a1dee5a89cc5e29cd7159076648bf21a02b", GitTreeState:"clean", BuildDate:"2020-02-06T03:31:35Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.9-eks-502bfb", GitCommit:"502bfb383169b124d87848f89e17a04b9fc1f6f0", GitTreeState:"clean", BuildDate:"2020-02-07T01:31:02Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

The behavior is the same with 4.2.9 servers

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Environment

  • Teleport version (use teleport version):

  • Tsh version (use tsh version):

  • OS (e.g. from /etc/os-release):

  • Where are you running Teleport? (e.g. AWS, GCP, Dedicated Hardware):

Browser environment

  • Browser Version (for UI-related issues):
  • Install tools:
  • Others:

Relevant Debug Logs If Applicable

  • tsh --debug
  • teleport --debug
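
As a sketch, the debug output above can be captured along these lines (this assumes teleport is started from the command line with a config at /etc/teleport.yaml, which is an assumption about the deployment; the cluster name is taken from the report):

tsh --debug login sfffff-dev 2> tsh-debug.log        # client-side debug log (tsh logs to stderr)
teleport start --debug --config=/etc/teleport.yaml   # proxy/auth process with debug logging
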
@benarent
Contributor Author

@awly @russjones Do you happen to know if we made any relevant changes in the versions after 4.2.3?

@benarent benarent added the c-sn Internal Customer Reference label May 15, 2020
@benarent benarent removed this from the Reliability Improvements for S milestone May 15, 2020
@awly
Contributor

awly commented May 18, 2020

These are the only remotely-relevant changes since 4.2.3:

It would be very useful to get root/leaf proxy debug logs here, along with more specifics on the versions of tsh, the root proxy, the leaf proxy, and the auth servers.
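
For completeness, a rough way to collect those version numbers (run tsh on the client machine and teleport on each of the root proxy, leaf proxy, and auth servers; exact hosts depend on the deployment):

tsh version        # on the client
teleport version   # on each root proxy, leaf proxy and auth server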

@paul91

paul91 commented May 19, 2020

Can confirm I am experiencing the same behavior with tsh 4.2.9:

  1. tsh logged out/cache deleted
  2. Login specifying desired leaf cluster
  3. Kubectl returns BadRequest
  4. Switch to another leaf cluster, kubectl works as expected
  5. Switch back to original desired leaf cluster, kubectl works as expected

Previously, when I was using what I think was 4.2.8, the initial login always pointed me at my main cluster instead of the desired leaf; a second login would point me where I expected. That was supposedly fixed in 4.2.9, so it's possible that #3639 didn't completely resolve the regression.
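
When comparing against that earlier behavior, a quick way to check which cluster tsh and kubectl currently point at after each login (context names will vary by setup):

tsh status                       # shows which cluster the current certificate was issued for
kubectl config current-context   # shows which kubeconfig context kubectl will use
kubectl config get-contexts      # lists all contexts tsh has written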

@webvictim
Contributor

webvictim commented May 19, 2020

I saw this same issue in testing myself (once). Trying to track it down and reproduce it is what led me to find another bug and open #3693.

@webvictim webvictim changed the title kubctl config Error from server (BadRequest) on leaf K8s kubectl config Error from server (BadRequest) on leaf K8s May 19, 2020
@awly
Contributor

awly commented May 19, 2020

I believe #3735 should resolve this.

@awly awly closed this as completed Jun 26, 2020