Fix indefinite stuck Pending pod on a deleted node #139
Conversation
Force-pushed from 1d00341 to 2b698d9
/assign @msau42
controller/controller.go
Outdated
@@ -1404,10 +1404,14 @@ func (ctrl *ProvisionController) provisionClaimOperation(ctx context.Context, cl
 	if nodeName, ok := getString(claim.Annotations, annSelectedNode, annAlphaSelectedNode); ok {
 		if ctrl.nodeLister != nil {
 			selectedNode, err = ctrl.nodeLister.Get(nodeName)
+			// if node does not exist, remove volume.kubernetes.io/selected-node annotation
+			if apierrs.IsNotFound(err) {
+				delete(claim.Annotations, annSelectedNode)
Can you validate the error and event that are generated at L1438?
With logs and this PR change: master...sunnylovestiramisu:sig-storage-lib-external-provisioner:fixProvisionerWithLogs
k describe pvc task-pv-claim
Normal ExternalProvisioning 30s (x3 over 30s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "pd.csi.storage.gke.io" or manually created by system administrator
Warning ProvisioningFailed 30s pd.csi.storage.gke.io_e2e-test-songsunny-minion-group-6dnk_a248f4cc-9002-4021-a4b2-36b16d19c0b2 node "non-exist-node" not found
Normal Provisioning 30s pd.csi.storage.gke.io_e2e-test-songsunny-minion-group-6dnk_a248f4cc-9002-4021-a4b2-36b16d19c0b2 External provisioner is provisioning volume for claim "default/task-pv-claim"
Normal ProvisioningSucceeded 26s pd.csi.storage.gke.io_e2e-test-songsunny-minion-group-6dnk_a248f4cc-9002-4021-a4b2-36b16d19c0b2 Successfully provisioned volume pvc-b677fa97-f016-4b0f-870b-054f1fb1a6ef
k logs -n gce-pd-csi-driver csi-gce-pd-controller-77df58ff84-j8zhr csi-provisioner
I0320 18:46:18.972132 1 controller.go:1334] =================== Sunny Test ===================
I0320 18:46:18.972161 1 controller.go:1338] provision "default/task-pv-claim" class "in-tree-volume": started
I0320 18:46:18.972210 1 controller.go:1388] ================================= nodeLister error is node "non-exist-node" not found =================================
I0320 18:46:18.972223 1 controller.go:1392] ================================= is not found error =================================
I0320 18:46:18.972230 1 controller.go:1451] =================== err is node "non-exist-node" not found ===================
I0320 18:46:18.972865 1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"task-pv-claim", UID:"b677fa97-f016-4b0f-870b-054f1fb1a6ef", APIVersion:"v1", ResourceVersion:"1828", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' node "non-exist-node" not found
I0320 18:46:18.981944 1 controller.go:1470] provision "default/task-pv-claim" class "in-tree-volume": volume rescheduled because: node "non-exist-node" not found
I0320 18:46:18.981965 1 controller.go:1471] =================== volume rescheduled because: node "non-exist-node" not found ===================
I0320 18:46:18.981983 1 controller.go:1071] Stop provisioning, removing PVC b677fa97-f016-4b0f-870b-054f1fb1a6ef from claims in progress
I0320 18:46:18.982019 1 controller.go:1334] =================== Sunny Test ===================
I0320 18:46:18.982034 1 controller.go:1338] provision "default/task-pv-claim" class "in-tree-volume": started
I0320 18:46:18.982060 1 controller.go:560] translating storage class for in-tree plugin kubernetes.io/gce-pd to CSI
I0320 18:46:18.982193 1 connection.go:193] GRPC call: /csi.v1.Controller/CreateVolume
I0320 18:46:18.982260 1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"task-pv-claim", UID:"b677fa97-f016-4b0f-870b-054f1fb1a6ef", APIVersion:"v1", ResourceVersion:"1828", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/task-pv-claim"
I0320 18:46:18.982413 1 connection.go:194] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.gke.io/zone":"us-central1-b"}}],"requisite":[{"segments":{"topology.gke.io/zone":"us-central1-b"}}]},"capacity_range":{"required_bytes":5368709120},"name":"pvc-b677fa97-f016-4b0f-870b-054f1fb1a6ef","parameters":{"csi.storage.k8s.io/pv/name":"pvc-b677fa97-f016-4b0f-870b-054f1fb1a6ef","csi.storage.k8s.io/pvc/name":"task-pv-claim","csi.storage.k8s.io/pvc/namespace":"default","replication-type":"none","type":"pd-standard"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]}
I0320 18:46:20.533232 1 leaderelection.go:278] successfully renewed lease gce-pd-csi-driver/pd-csi-storage-gke-io
I0320 18:46:22.753194 1 connection.go:200] GRPC response: {"volume":{"accessible_topology":[{"segments":{"topology.gke.io/zone":"us-central1-b"}}],"capacity_bytes":5368709120,"volume_id":"projects/songsunny-joonix/zones/us-central1-b/disks/pvc-b677fa97-f016-4b0f-870b-054f1fb1a6ef"}}
I0320 18:46:22.753218 1 connection.go:201] GRPC error: <nil>
I0320 18:46:22.753229 1 controller.go:826] create volume rep: {CapacityBytes:5368709120 VolumeId:projects/songsunny-joonix/zones/us-central1-b/disks/pvc-b677fa97-f016-4b0f-870b-054f1fb1a6ef VolumeContext:map[] ContentSource:<nil> AccessibleTopology:[segments:<key:"topology.gke.io/zone" value:"us-central1-b" > ] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0320 18:46:22.753379 1 controller.go:898] createVolumeOperation: set annotation [volume.kubernetes.io/provisioner-deletion-secret-namespace/volume.kubernetes.io/provisioner-deletion-secret-name] on pv [pvc-b677fa97-f016-4b0f-870b-054f1fb1a6ef].
I0320 18:46:22.753436 1 controller.go:923] successfully created PV pvc-b677fa97-f016-4b0f-870b-054f1fb1a6ef for PVC task-pv-claim and csi volume name projects/songsunny-joonix/zones/us-central1-b/disks/pvc-b677fa97-f016-4b0f-870b-054f1fb1a6ef
I0320 18:46:22.753507 1 controller.go:939] successfully created PV {GCEPersistentDisk:&GCEPersistentDiskVolumeSource{PDName:pvc-b677fa97-f016-4b0f-870b-054f1fb1a6ef,FSType:ext4,Partition:0,ReadOnly:false,} AWSElasticBlockStore:nil HostPath:nil Glusterfs:nil NFS:nil RBD:nil ISCSI:nil Cinder:nil CephFS:nil FC:nil Flocker:nil FlexVolume:nil AzureFile:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil PortworxVolume:nil ScaleIO:nil Local:nil StorageOS:nil CSI:nil}
I0320 18:46:22.753648 1 controller.go:1426] provision "default/task-pv-claim" class "in-tree-volume": volume "pvc-b677fa97-f016-4b0f-870b-054f1fb1a6ef" provisioned
I0320 18:46:22.753668 1 controller.go:1439] provision "default/task-pv-claim" class "in-tree-volume": succeeded
I0320 18:46:22.753680 1 volume_store.go:154] Saving volume pvc-b677fa97-f016-4b0f-870b-054f1fb1a6ef
I0320 18:46:22.761435 1 volume_store.go:157] Volume pvc-b677fa97-f016-4b0f-870b-054f1fb1a6ef saved
I0320 18:46:22.761498 1 controller.go:1069] Claim processing succeeded, removing PVC b677fa97-f016-4b0f-870b-054f1fb1a6ef from claims in progress
I0320 18:46:22.761716 1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"task-pv-claim", UID:"b677fa97-f016-4b0f-870b-054f1fb1a6ef", APIVersion:"v1", ResourceVersion:"1828", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b677fa97-f016-4b0f-870b-054f1fb1a6ef
controller/controller.go
Outdated
 		} else {
 			selectedNode, err = ctrl.client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{}) // TODO (verult) cache Nodes
 		}
-		if err != nil {
+		if err != nil && !apierrs.IsNotFound(err) {
I think in the node not found scenario, we should jump straight to the handling at L1439.
+1, when the node does not exist, the controller should go down the ctrl.rescheduleProvisioning path and send events and whatnot. It may need some bigger refactoring to get to that code path easily.
So if there is a NotFound error, we set the result to ProvisioningReschedule and then call ctrl.rescheduleProvisioning?
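For illustration, a minimal sketch of that mapping, reusing the identifiers quoted elsewhere in this thread (this is a sketch of the idea, not the exact merged diff):

	if err != nil {
		if apierrs.IsNotFound(err) {
			// Selected node is gone: report the failure and hand off to the
			// reschedule path so the selected-node annotation gets cleared
			// and the scheduler can pick a new node.
			err = fmt.Errorf("failed to get target node: %v", err)
			ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, "ProvisioningFailed", err.Error())
			return ctrl.provisionVolumeErrorHandling(ctx, ProvisioningReschedule, err, claim, operation)
		}
		// Any other lookup error keeps the current behavior.
		err = fmt.Errorf("failed to get target node: %v", err)
		ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, "ProvisioningFailed", err.Error())
		return ProvisioningNoChange, err
	}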
controller/controller.go
Outdated
@@ -1404,10 +1404,14 @@ func (ctrl *ProvisionController) provisionClaimOperation(ctx context.Context, cl
 	if nodeName, ok := getString(claim.Annotations, annSelectedNode, annAlphaSelectedNode); ok {
 		if ctrl.nodeLister != nil {
 			selectedNode, err = ctrl.nodeLister.Get(nodeName)
+			// if node does not exist, remove volume.kubernetes.io/selected-node annotation
+			if apierrs.IsNotFound(err) {
There is an interesting corner case. What if the provisioning did start already on the backend and the Node got deleted afterwards? Should we do non-final error handling instead and wait for the driver to return a final error?
cc @jsafrane
I can imagine provisionClaimOperation() checking whether the PVC is in claimsInProgress and keeping to call the CSI driver with an unavailable node until it gets a final answer. But it would need to store the Node somewhere. It can be solved, I am just not sure this is the right PR for it.
So you mean we can ignore final/non-final handling for now, and look into it later?
If the provisioning has started, shouldn't the state have changed to ProvisioningInBackground?
If it is ProvisioningInBackground and the node is not found, we do not go down the code path return ctrl.provisionVolumeErrorHandling(ctx, ProvisioningReschedule, err, claim, operation).
Instead we continue to the code:
err = fmt.Errorf("failed to get target node: %v", err)
ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, "ProvisioningFailed", err.Error())
return ProvisioningNoChange, err
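Roughly, that distinction could look like the following; the claimsInProgress lookup is a hypothetical stand-in for whatever in-flight check ends up being used, so treat this as a sketch rather than the merged code:

	if apierrs.IsNotFound(err) {
		// Hypothetical guard: if provisioning may already be running on the
		// backend, do not reschedule; keep waiting for the driver to return
		// a final answer.
		if _, inProgress := ctrl.claimsInProgress.Load(string(claim.UID)); inProgress {
			err = fmt.Errorf("failed to get target node: %v", err)
			ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, "ProvisioningFailed", err.Error())
			return ProvisioningNoChange, err
		}
		// Nothing in flight: safe to clear the selected-node annotation via
		// the reschedule path.
		return ctrl.provisionVolumeErrorHandling(ctx, ProvisioningReschedule, err, claim, operation)
	}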
/assign @jsafrane
Force-pushed from ef56854 to 40f0c77
controller/controller.go
Outdated
@@ -1404,6 +1404,10 @@ func (ctrl *ProvisionController) provisionClaimOperation(ctx context.Context, cl
 	if nodeName, ok := getString(claim.Annotations, annSelectedNode, annAlphaSelectedNode); ok {
 		if ctrl.nodeLister != nil {
 			selectedNode, err = ctrl.nodeLister.Get(nodeName)
+			// if node does not exist, remove volume.kubernetes.io/selected-node annotation
+			if apierrs.IsNotFound(err) {
I think you can check for IsNotFound in both cases, with and without nodeLister, and have the same error handling.
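Sketched out (illustrative only, with the surrounding function abbreviated), the shared handling could look like:

	if ctrl.nodeLister != nil {
		selectedNode, err = ctrl.nodeLister.Get(nodeName)
	} else {
		selectedNode, err = ctrl.client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{}) // TODO (verult) cache Nodes
	}
	if apierrs.IsNotFound(err) {
		// Same handling regardless of which lookup was used: the selected
		// node no longer exists, so drop the annotation (later revisions of
		// this PR route this through the reschedule path instead).
		delete(claim.Annotations, annSelectedNode)
	} else if err != nil {
		err = fmt.Errorf("failed to get target node: %v", err)
		ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, "ProvisioningFailed", err.Error())
		return ProvisioningNoChange, err
	}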
Force-pushed from 3d57ace to 084e6cc
@@ -549,7 +549,7 @@ func TestController(t *testing.T) {
 			provisionerName: "foo.bar/baz",
 			provisioner:     newBadTestProvisioner(),
 			expectedClaims: []v1.PersistentVolumeClaim{
-				*newClaim("claim-1", "uid-1-1", "class-1", "foo.bar/baz", "", map[string]string{annBetaStorageProvisioner: "foo.bar/baz", annSelectedNode: "node-wrong"}),
+				*newClaim("claim-1", "uid-1-1", "class-1", "foo.bar/baz", "", map[string]string{annBetaStorageProvisioner: "foo.bar/baz"}),
I think you need to add a test Node object to the original test cases, and then add new test cases where the Node object is missing.
There is a case below: "provision with selected node, but node does not exist"
Do you mean adding a check there to make sure the annSelectedNode is removed from the claim?
These were the test cases you had to modify, and their names no longer make sense:
"do not remove selectedNode after final error, only the claim"
"do not remove selectedNode if nothing changes"
I think we need to add the Node object to these original test cases so that they're actually testing the path they want to test. Right now, with this change, we're hitting the "Node doesn't exist" error first.
I was thinking we want to test that the annotation is removed when the node doesn't exist. If the node does not exist, why would we add it to the original test setup?
What I'm suggesting is to keep the original test cases (where the Node should exist, but it seems like there's a bug in the test cases), and add a new one for the Node not existing.
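For example, the pair of table entries could look roughly like this; the objs field, the newNode helper, and the exact struct layout are assumptions based on the diff above rather than the actual controller_test.go:

	{
		name: "provision with selected node, node exists",
		// Hypothetical: register the Node with the fake clientset so the
		// lookup succeeds and the annotation is kept.
		objs:            []runtime.Object{newNode("node-1")},
		provisionerName: "foo.bar/baz",
		provisioner:     newBadTestProvisioner(),
		expectedClaims: []v1.PersistentVolumeClaim{
			*newClaim("claim-1", "uid-1-1", "class-1", "foo.bar/baz", "", map[string]string{annBetaStorageProvisioner: "foo.bar/baz", annSelectedNode: "node-1"}),
		},
	},
	{
		name:            "provision with selected node, but node does not exist",
		provisionerName: "foo.bar/baz",
		provisioner:     newBadTestProvisioner(),
		expectedClaims: []v1.PersistentVolumeClaim{
			// annSelectedNode should be removed so the pod can be rescheduled.
			*newClaim("claim-1", "uid-1-1", "class-1", "foo.bar/baz", "", map[string]string{annBetaStorageProvisioner: "foo.bar/baz"}),
		},
	},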
Adjusted
Force-pushed from 084e6cc to 438f665
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: msau42, sunnylovestiramisu
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Specifically, the pod can now be rescheduled if the node is deleted. kubernetes-sigs/sig-storage-lib-external-provisioner#139
Remove the annotation when the provisioner tries to access a Node that doesn't exist, detected via apierrs.IsNotFound.
Fixes issue #121. We need to get this in for the 1.27 release.