Timeout: request did not complete within requested timeout 30s #2545

Closed
guitcastro opened this issue Feb 11, 2020 · 3 comments

Comments

@guitcastro

Bug Report

What did you do?

Installed eck and tried to create a cluster
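
(The exact manifest isn't included in the report, but a minimal Elasticsearch resource matching the namespace and name seen in the operator logs below would look roughly like this; the Elasticsearch version is an assumption.)

$ kubectl apply -f - <<EOF
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: elasticsearch
spec:
  version: 7.5.2          # assumed; any version supported by ECK 1.0.1
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
EOF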

What did you expect to see?

The cluster working as expected

What did you see instead? Under which circumstances?

No pods are created.

Environment

  • ECK version: 1.0.1

  • Kubernetes information: running on GKE 1.15

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:16:51Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.7-gke.23", GitCommit:"06e05fd0390a51ea009245a90363f9161b6f2389", GitTreeState:"clean", BuildDate:"2020-01-17T23:10:45Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}

  • Logs:
{"level":"info","@timestamp":"2020-02-11T18:45:20.024Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.1-bcb74688","iteration":3,"namespace":"elasticsearch","name":"elasticsearch"}
{"level":"info","@timestamp":"2020-02-11T18:45:46.210Z","logger":"license-controller","message":"Starting reconciliation run","ver":"1.0.1-bcb74688","iteration":2,"namespace":"elasticsearch","name":"elasticsearch"}
{"level":"info","@timestamp":"2020-02-11T18:45:46.210Z","logger":"license-controller","message":"Ending reconciliation run","ver":"1.0.1-bcb74688","iteration":2,"namespace":"elasticsearch","name":"elasticsearch","took":0.000077254}
{"level":"info","@timestamp":"2020-02-11T18:45:50.028Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.1-bcb74688","iteration":3,"namespace":"elasticsearch","name":"elasticsearch","took":30.004570223}
{"level":"error","@timestamp":"2020-02-11T18:45:50.028Z","logger":"controller-runtime.controller","message":"Reconciler error","ver":"1.0.1-bcb74688","controller":"elasticsearch-controller","request":"elasticsearch/elasticsearch","error":"Timeout: request did not complete within requested timeout 30s","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88"}
{"level":"info","@timestamp":"2020-02-11T18:45:51.029Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.1-bcb74688","iteration":4,"namespace":"elasticsearch","name":"elasticsearch"}
{"level":"info","@timestamp":"2020-02-11T18:45:51.029Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.1-bcb74688","iteration":4,"namespace":"elasticsearch","name":"elasticsearch","took":0.000116193}
{"level":"info","@timestamp":"2020-02-11T19:44:18.039Z","logger":"license-controller","message":"Starting reconciliation run","ver":"1.0.1-bcb74688","iteration":3,"namespace":"elasticsearch","name":"elasticsearch"}
{"level":"info","@timestamp":"2020-02-11T19:44:18.040Z","logger":"license-controller","message":"Ending reconciliation run","ver":"1.0.1-bcb74688","iteration":3,"namespace":"elasticsearch","name":"elasticsearch","took":0.000372628}
@sebgl
Contributor

sebgl commented Feb 13, 2020

I think it could be one of the following:

  • The operator does not have RBAC permission to reach the apiserver.
  • Something else (e.g. a service mesh or CNI) prevents the operator from connecting to the apiserver.
  • The webhook is causing trouble.

Besides Pods, did ECK create any resource at all (secrets, configmaps, services, etc.)?

Could you try disabling the webhook and/or enabling ECK debug logs?
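
For reference, assuming the default all-in-one install, that would look roughly like the commands below; the ValidatingWebhookConfiguration name and the --log-verbosity flag should be verified against the manifests actually deployed.

# Check whether ECK created anything besides Pods
$ kubectl get secrets,configmaps,services -n elasticsearch

# Disable the validation webhook (name per the default all-in-one.yaml; confirm with the list command first)
$ kubectl get validatingwebhookconfigurations
$ kubectl delete validatingwebhookconfiguration elastic-webhook.k8s.elastic.co

# Enable debug logging by adding --log-verbosity=1 to the operator container args
$ kubectl edit statefulset elastic-operator -n elastic-system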

@guitcastro
Author

Disabling the webhook solves the problem. Thanks!

@sebgl
Contributor

sebgl commented Feb 13, 2020

@guitcastro chances are something on your end prevents the webhook from being called by the apiserver. It could be network restrictions, RBAC restrictions, an apiserver webhook misconfiguration, etc.
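
(A rough sketch of how to check this, assuming a GKE private cluster, where the control plane can by default only reach nodes on ports 443 and 10250: inspect the webhook configuration the apiserver is calling, and open the webhook port from the master range. The webhook name and port 9443 follow the default ECK install; MASTER_CIDR and NODE_TAG are placeholders.)

# Inspect the webhook configuration the apiserver is trying to call
$ kubectl get validatingwebhookconfiguration elastic-webhook.k8s.elastic.co -o yaml

# On a GKE private cluster, allow the control plane to reach the webhook port
$ gcloud compute firewall-rules create allow-apiserver-to-eck-webhook \
    --source-ranges=MASTER_CIDR \
    --target-tags=NODE_TAG \
    --allow=tcp:9443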
