
csi.trident pluginregistration.Registration errors in kubelet logs #333

Closed
ravilr opened this issue Jan 21, 2020 · 3 comments

ravilr commented Jan 21, 2020

Describe the bug

On Kubernetes versions 1.15 and greater, the kubelet has a dynamic plugin registration mechanism that discovers every socket under /var/lib/kubelet/plugins/ and probes it with a gRPC GetInfo call. But the trident-csi node-driver-registrar container already registers the plugin, so the kubelet's default discovery leads to a lot of noise in the kubelet logs, like the following:

Can you please confirm this is harmless for Trident CSI as well?

kubelet: I0121 13:40:31.450593   36037 reconciler.go:156] operationExecutor.RegisterPlugin started for plugin at "/var/lib/kubelet/plugins/csi.trident.netapp.io/csi.sock" (plugin details: &{/var/lib/kubelet/plugins/csi.trident.netapp.io/csi.sock true 2020-01-21 13:05:48.192444146 -0800 PST m=+3.508661885})
kubelet: I0121 13:40:31.450638   36037 operation_generator.go:193] parsed scheme: ""
kubelet: I0121 13:40:31.450667   36037 operation_generator.go:193] scheme "" not registered, fallback to default scheme
kubelet: I0121 13:40:31.450706   36037 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/lib/kubelet/plugins/csi.trident.netapp.io/csi.sock 0  <nil>}]
kubelet: I0121 13:40:31.450720   36037 clientconn.go:796] ClientConn switching balancer to "pick_first"
kubelet: I0121 13:40:31.450766   36037 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000d73140, CONNECTING
kubelet: I0121 13:40:31.450904   36037 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000d73140, READY
kubelet: E0121 13:40:31.451325   36037 goroutinemap.go:150] Operation for "/var/lib/kubelet/plugins/csi.trident.netapp.io/csi.sock" failed. No retries permitted until 2020-01-21 13:42:33.451301322 -0800 PST m=+2208.767519081 (durationBeforeRetry 2m2s). Error: "RegisterPlugin error -- failed to get plugin info using RPC GetInfo at socket /var/lib/kubelet/plugins/csi.trident.netapp.io/csi.sock, err: rpc error: code = Unimplemented desc = unknown service pluginregistration.Registration"
kubelet: I0121 13:40:31.451328   36037 controlbuf.go:382] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
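The discovery behavior described above amounts to the kubelet walking the plugins directory and probing every *.sock file it finds with GetInfo on the pluginregistration.Registration service; a CSI driver's socket does not implement that service, which is exactly the Unimplemented error in the logs. A minimal Python sketch of just the discovery step (a hypothetical illustration; kubelet's actual plugin watcher is written in Go):

```python
import os


def discover_plugin_sockets(plugins_dir):
    """Collect every file ending in .sock under plugins_dir, the way
    kubelet's plugin watcher scans /var/lib/kubelet/plugins/.

    Each discovered socket would then be probed with a gRPC GetInfo
    call on pluginregistration.Registration; a plain CSI driver socket
    answers Unimplemented, producing the log noise shown above.
    """
    sockets = []
    for root, _dirs, files in os.walk(plugins_dir):
        for name in files:
            if name.endswith(".sock"):
                sockets.append(os.path.join(root, name))
    return sorted(sockets)
```

Because discovery matches on the .sock suffix alone, either moving the socket out of the watched directory or renaming it avoids the spurious probes.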

Environment

  • Trident version: 19.10
  • Container runtime: Docker 18.09.9-CE
  • Kubernetes version: 1.15.9

To Reproduce
Install trident-csi on Kubernetes v1.15.x or later.

Expected behavior
See kubernetes/kubernetes#70485 (comment). Trident should either move the registration path outside of /var/lib/kubelet/plugins/ or use a socket filename that does not end in .sock.
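As a sketch of that workaround, the node-driver-registrar sidecar could point the kubelet at a socket outside the watched directory. The container, image tag, and volume names below are hypothetical; only the --csi-address and --kubelet-registration-path flags are the registrar's documented ones:

```yaml
# Hypothetical fragment of a CSI node DaemonSet: the driver socket
# lives under /var/lib/kubelet/csi-plugins/, outside the directory
# the kubelet's plugin watcher scans, while registration happens via
# the registrar's socket in /var/lib/kubelet/plugins_registry/.
- name: driver-registrar
  image: quay.io/k8scsi/csi-node-driver-registrar:v1.2.0
  args:
    - --csi-address=/plugin/csi.sock
    - --kubelet-registration-path=/var/lib/kubelet/csi-plugins/csi.trident.netapp.io/csi.sock
  volumeMounts:
    - name: plugin-dir          # hostPath: /var/lib/kubelet/csi-plugins/csi.trident.netapp.io/
      mountPath: /plugin
    - name: registration-dir    # hostPath: /var/lib/kubelet/plugins_registry/
      mountPath: /registration
```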

Additional context
See kubernetes/kubernetes#70485 (comment)

Can you please confirm this is harmless for Trident CSI as well? If so, can we move the registration path outside of /var/lib/kubelet/plugins/?

@ravilr ravilr added the bug label Jan 21, 2020

ravilr commented Jan 21, 2020

It turns out this is also documented in the node-driver-registrar usage notes: https://github.com/kubernetes-csi/node-driver-registrar#usage

Note that before Kubernetes v1.17, if the csi socket is in the /var/lib/kubelet/plugins/ path, kubelet may log a lot of harmless errors regarding grpc GetInfo call not implemented (fix in kubernetes/kubernetes#84533). The /var/lib/kubelet/csi-plugins/ path is preferred in Kubernetes versions prior to v1.17.

@titansmc

I am getting those errors, and the affected nodes are not coming back to "Ready" after the upgrade; they stay in Ready,SchedulingDisabled.


gnarl commented Jan 19, 2022

Closing this issue as the reported log message was harmless and has been fixed in Kubernetes.

@gnarl gnarl closed this as completed Jan 19, 2022