DataVolume occasionally fails when cloning a DataVolume from a PVC concurrently #3259
Comments
@mhenriks Any thoughts on this? Looks like something is going wrong with the namespace transfer since we are seeing the lost claim? |
@mengyu1987 what version of CDI is this? EDIT: I see 1.58 |
@awels lost claims are expected but should be deleted |
The description says it is CDI 1.58.1 |
I have the same issue using version 1.59.0 of CDI. Please note that when the source PVC and cloned DV are in the same namespace, the issue does not occur. |
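The cross-namespace case described above can be sketched as a DataVolume manifest built programmatically. This is a minimal sketch: all names and namespaces (`cloned-dv-1`, `target-ns`, `source-pvc`, `source-ns`) are hypothetical placeholders, not values from the report.

```python
# Sketch of a cross-namespace clone DataVolume, the case where the issue
# reportedly occurs. All names/namespaces are hypothetical examples.

def make_clone_dv(name, dv_namespace, source_pvc, source_namespace, size="10Gi"):
    """Build a CDI DataVolume manifest that clones a PVC from another namespace."""
    return {
        "apiVersion": "cdi.kubevirt.io/v1beta1",
        "kind": "DataVolume",
        "metadata": {"name": name, "namespace": dv_namespace},
        "spec": {
            # spec.source.pvc.namespace differing from metadata.namespace
            # makes this a cross-namespace clone.
            "source": {"pvc": {"name": source_pvc, "namespace": source_namespace}},
            "pvc": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": size}},
            },
        },
    }

dv = make_clone_dv("cloned-dv-1", "target-ns", "source-pvc", "source-ns")
```

When the two namespaces match, the clone is same-namespace and, per the comment above, the issue does not occur.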
@mengyu1987 I was unable to reproduce with kubevirtci with: Can you post the following: any relevant errors in the cdi-deployment log? |
@mhenriks Thanks for your reply. I can only reproduce it when more than 20 DVs are cloned concurrently. My k8s cluster setup and the YAML follow:
|
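The reproduction condition above (more than 20 concurrent clones) can be sketched by generating a batch of clone manifests with unique names. This is a self-contained illustration; the source PVC, namespaces, and size are hypothetical stand-ins for the reporter's elided YAML.

```python
# Sketch of the reproducer setup: generate N clone DataVolume manifests
# so they can be applied concurrently. All names/namespaces are hypothetical.

def clone_dv_manifests(count, source_pvc="source-pvc", source_ns="source-ns",
                       target_ns="target-ns", size="10Gi"):
    """Return `count` DataVolume manifests, each cloning the same source PVC."""
    manifests = []
    for i in range(count):
        manifests.append({
            "apiVersion": "cdi.kubevirt.io/v1beta1",
            "kind": "DataVolume",
            "metadata": {"name": f"cloned-dv-{i}", "namespace": target_ns},
            "spec": {
                "source": {"pvc": {"name": source_pvc, "namespace": source_ns}},
                "pvc": {
                    "accessModes": ["ReadWriteOnce"],
                    "resources": {"requests": {"storage": size}},
                },
            },
        })
    return manifests

# The comment reports the failure appearing above 20 concurrent clones.
dvs = clone_dv_manifests(21)
```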
@mengyu1987 looks like you are not using the ceph provisioner included in kubevirtci. I could not reproduce with the following params:
KUBEVIRT_STORAGE=rook-ceph-default will install the rook/ceph provisioner. I noticed your StorageClass has I suggest you run with |
Thanks @mhenriks. I ran with |
@mhenriks It reproduced once more with rook-ceph-default.
And the ceph-block-default YAML:
I don't know why the DV is reconciled to Failed after it has succeeded. |
@mengyu1987 this setting is not compatible with
there is no blockpool named |
This seems related to your custom implementation of rook-ceph and not an issue with CDI. Therefore I am going to close the issue. |
What happened:
A clear and concise description of what the bug is.
2. Clone 20 DVs concurrently
What you expected to happen:
A clear and concise description of what you expected to happen.
All DVs succeed.
How to reproduce it (as minimally and precisely as possible):
Steps to reproduce the behavior.
Additional context:
Add any other context about the problem here.
Environment:
- CDI version (use `kubectl get deployments cdi-deployment -o yaml`): 1.58.1
- Kubernetes version (use `kubectl version`): N/A
- OS (use `uname -a`): N/A