PVC Image Upload Volume Node Affinity Conflict #3631
BTW, the point of WaitForFirstConsumer is that scheduling is delayed until a workload tries to use the volume.
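For illustration, delayed binding is just the `volumeBindingMode` field on the StorageClass; the name and provisioner below are placeholders, not the class from this report:

```shell
# Sketch of a StorageClass with delayed binding; the name and provisioner
# are placeholders, not the reporter's actual class.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-wffc
provisioner: example.vendor.com/csi-driver
volumeBindingMode: WaitForFirstConsumer  # PVC binds only once a pod that uses it is scheduled
EOF
```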
Yes, I've updated the post to include the PV details. I initially had the storageclass set to …
Looks like they end up using the wrong storage class.
Nope, that is not a requirement. You can force-bind with --force-bind if there's no workload going to consume the volume.
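As a sketch of what that looks like (the PVC name, size, and image path are made up; kasm-storage is the class mentioned in this thread):

```shell
# Upload a local disk image to a new PVC and force it to bind immediately,
# even though the storage class uses WaitForFirstConsumer.
# PVC name, size, and image path are placeholders.
virtctl image-upload pvc golden-image \
  --storage-class=kasm-storage \
  --size=10Gi \
  --image-path=./golden.qcow2 \
  --force-bind
```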
That was actually an artifact from an older run; they're all kasm-storage now. My full workflow is: use image-upload to create a 'golden image' on the cluster, which I then use when creating multiple copies of a VM from the same image via DataVolume cloning. This all works on GKE with no problems, but the last time I tested it was KubeVirt v1.2.2. I changed the storageclass back to Immediate without the annotation and get the same issue.
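For reference, the cloning step in a workflow like this is typically a DataVolume whose source is the golden-image PVC; a rough sketch, with the names, namespace, and size as placeholders:

```shell
# Clone the uploaded golden image into a fresh PVC for one VM.
# Names, namespace, and size are placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: vm-1-rootdisk
spec:
  source:
    pvc:
      namespace: default
      name: golden-image
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
    storageClassName: kasm-storage
EOF
```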
Right, in that case, you create the golden image using virtctl image-upload with the …
Yes, using …
I have KubeVirt and CDI running on OKE, but when attempting to use virtctl to do an image-upload to a PVC, the upload pod reports a volume node affinity conflict.
I followed the advice in this ticket (#3287) and changed my storageclass to WaitForFirstConsumer with the annotation mentioned in the comment, but my PVs are still assigned to different nodes.
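One way to see which node each PV has been pinned to (a generic diagnostic, not taken from the report) is to print the node affinity recorded on the PVs:

```shell
# List each PV with its claim and the node affinity the provisioner recorded;
# with WaitForFirstConsumer this should match the node of the consuming pod.
kubectl get pv -o custom-columns='NAME:.metadata.name,CLAIM:.spec.claimRef.name,NODE_AFFINITY:.spec.nodeAffinity.required.nodeSelectorTerms'
```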
storageclass:
The PVs:
Thanks for the help!
Kubernetes version:
CDI version:
KubeVirt version:
virtctl version: