CSI driver fails to clean up deleted PVs after intree migration #4242
Comments
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
No, thank you!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
Jeez....
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
😞
This mostly happens due to a permissions issue. Can you please check and update the Ceph user caps as per https://github.com/ceph/ceph-csi/blob/devel/docs/capabilities.md? @phoerious, we really don't have solid E2E coverage for the migration; if you have logs, we can try to debug and see what is happening.
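As a hedged sketch of what updating the caps might look like: the user name `client.csi-rbd-provisioner` is a placeholder, and the pool name `rbd.k8s-pvs` is inferred from this thread; the linked capabilities document is the authoritative reference.

```shell
# Illustrative only: grant the CSI provisioner user RBD profile caps.
# Substitute your actual CephX user and pool names.
ceph auth caps client.csi-rbd-provisioner \
  mon 'profile rbd' \
  mgr 'allow rw' \
  osd 'profile rbd pool=rbd.k8s-pvs'

# Verify the resulting caps:
ceph auth get client.csi-rbd-provisioner
```

`ceph auth caps` replaces all caps for the user at once, so include every daemon's caps in a single invocation rather than running it per daemon.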
These are the permissions of both the new CSI user and the old legacy user:
I create a PVC with the old storage class name, which gets rerouted to the new CSI driver. When I try to delete that PVC, the associated PV gets stuck "Terminating" with this:
The provisioner log is littered with this:
The associated RBD in the pool has long been deleted.
That's all I have.
Can you please remove the extra profile from the OSD caps and see if that is the one causing the issue?
Same thing.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
Nope, still there.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
🎺
Describe the bug
I recently migrated from the in-tree Ceph storage driver to the CSI driver and wanted to enable the migration plugin for existing kubernetes.io/rbd volumes.
I used these two documents for reference:
I noticed that both are relatively incomplete and grammatically quite confusing. I think I did everything required for the migration, but I don't really know whether the legacy plugin is actually redirected to the CSI driver. I believe it is, since I tried what was written in the first document above and got errors in the provisioner log about it not finding the correct cluster ID. I do not get an error when I generate the hash without a trailing `\n`, i.e., using `echo -n "<monaddress[es]:port>" | md5sum` instead (I think this is a bug in the docs!).

My main issue, however, is that when I create a new RBD using the legacy storage class, an RBD gets provisioned and cleaned up, but the PV spec gets stuck in a `Terminating` state with the following error. The provisioner logs this:
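Incidentally, the trailing-newline difference described above is easy to demonstrate locally; the monitor address here is a made-up placeholder:

```shell
MON="192.168.0.1:6789"   # placeholder; use your cluster's real mon address(es)

# Plain echo appends a newline, so the newline gets hashed too:
echo "$MON" | md5sum

# echo -n suppresses the newline and hashes only the address:
echo -n "$MON" | md5sum
```

The two digests differ, which would explain the provisioner not finding a cluster ID generated with the newline included.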
The existence of this error seems to indicate that the CSI plugin does indeed handle the kubernetes.io/rbd requests, although with an error.
I did verify with `rbd ls rbd.k8s-pvs | grep VOLUME_NAME` that the RBD volume gets created and deleted correctly, so this is a bogus "Permission denied" error. It is annoying nonetheless, since the only way to get rid of the PV is to edit the spec and remove the finalizer.
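For reference, a sketch of that manual workaround; the PV name below is a placeholder for the stuck volume:

```shell
# Clear the finalizers from a PV stuck in Terminating.
# "pvc-0123abcd-..." is a placeholder; substitute the actual PV name.
kubectl patch pv pvc-0123abcd-0000-0000-0000-000000000000 \
  --type=merge -p '{"metadata":{"finalizers":null}}'
```

This forcibly detaches the PV object from the driver's cleanup logic, so it should only be used once the backing RBD image is confirmed gone.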
Environment details
Steps to reproduce
Steps to reproduce the behavior:
Actual results
The RBD volume gets created and deleted, and the PVC is deleted as well, but the PV gets stuck in a `Terminating` state with a bogus "Permission denied" error.