
kubectl periodically fails to connect to the API server with no such host #882

Closed
alika opened this issue Mar 27, 2019 · 11 comments

alika commented Mar 27, 2019

What happened:
I periodically cannot connect to our AKS cluster with kubectl:

$ kubectl cluster-info

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: dial tcp: lookup global-260e2f2d.hcp.westus2.azmk8s.io: no such host

$ kubectl get no
Unable to connect to the server: dial tcp: lookup global-260e2f2d.hcp.westus2.azmk8s.io: no such host

After about 5-10 minutes it successfully connects. This is happening several times a day and is being experienced by multiple engineers.
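
The failure looks like a DNS resolution problem ("no such host") rather than a timeout. One quick way to check (a sketch only; 8.8.8.8 is used here just as an example public resolver) is to resolve the API server FQDN against both the local resolver and a public one:

$ nslookup global-260e2f2d.hcp.westus2.azmk8s.io
$ nslookup global-260e2f2d.hcp.westus2.azmk8s.io 8.8.8.8

If the first lookup fails while the second succeeds, the local resolver or DNS cache is the likely culprit; if both fail, the record for the AKS API server is not resolving at that moment.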

What you expected to happen:
I expect to be able to connect to the AKS cluster consistently with kubectl.

How to reproduce it (as minimally and precisely as possible):
kubectl cluster-info

Anything else we need to know?:

Environment:

  • Kubernetes: 1.11.8
  • Size: 5 x Standard_DS3_v2
  • Workloads: jenkins, cert-manager, harbor, cockroachdb

jnoller commented Mar 27, 2019

@alika please open an Azure support ticket in the portal so the team can RCA the issue. I'd also recommend deleting the subscription ID/region from your issue ASAP (this is why we ask for support requests, so this information doesn't get out).

alika commented Mar 28, 2019

@jnoller, unfortunately this particular subscription does not have a support plan under which I can create technical support requests.

robinkb commented Mar 28, 2019

FYI, GitHub saves a history of all edits made. That means your information is still easily retrieved.

alika commented Mar 28, 2019

@robinkb, do you have a suggestion on how to purge this somehow? Or do I need to scrap this cluster?

robinkb commented Mar 28, 2019

@alika Please refer to the GitHub documentation.

The problem is not really the ID of your cluster, but your subscription ID. You cannot change that by scrapping the cluster. See this StackOverflow question on why it might be considered sensitive.

I don't think there is any way to change the ID of a subscription, so it is up to you to judge whether the ID of this particular subscription must be kept secret. If it is, I think you will have to delete the subscription and create a new one.

alika commented Mar 28, 2019

@robinkb, Thanks for the delete history reference.

jnoller commented Apr 5, 2019

Duplicate of #232

jnoller marked this as a duplicate of #232 Apr 5, 2019
jnoller closed this as completed Apr 5, 2019
@speciesunknown

@jnoller

This is not a duplicate of #232.

The error here is "no such host", whereas #232 is about timeouts.

@sappojisetty

I'm trying to create a Kubernetes cluster and a namespace using Terraform and I'm seeing this issue. Can somebody help me find a solution for this?
I'm using Terraform v0.12.21.

m-osh commented Apr 23, 2020

I just encountered the same issue in v0.12.21 as well
@sappojisetty did you figure it out?

@sappojisetty

> I just encountered the same issue in v0.12.21 as well
> @sappojisetty did you figure it out?

I fixed this issue by running kubectl config use-context <cluster-name> before applying the Terraform configuration.
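
For anyone driving this from Terraform, a rough sketch of that ordering (assuming the kubeconfig entry was created with az aks get-credentials; <resource-group> and <cluster-name> are placeholders):

$ az aks get-credentials --resource-group <resource-group> --name <cluster-name>
$ kubectl config use-context <cluster-name>
$ terraform apply

Selecting the context first ensures the Kubernetes provider picks up credentials for the intended cluster rather than whatever context happens to be current.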

ghost locked as resolved and limited conversation to collaborators Jul 24, 2020