Support restoring a PV into the same cluster #192
@rosskukulinski would you find this useful? |
Our planned usage is to always delete a namespace after backup (we're using it to archive the namespace), so I don't think we'd hit a scenario where we backed up but still had a live namespace using the PV. However, restoring with a remapped namespace and preserving the PV would definitely be useful; we don't plan to do this for a while, but it is on the radar. (I forget whether Ark can handle a remapped namespace with PV snapshots if nothing is still using the PV.) |
@ncdc reading through #189, yes I think this would be a useful (and expected, IMO) feature for CI/CD and testing use cases. As a more specific example, this feature request enables what you describe in the ephemeral CI testing env w/ Ark blog post, but within a single cluster. |
Wow, awesome! So as @rosskukulinski mentioned, it looks like your blog post, @ncdc (ephemeral CI testing env w/ Ark), says this type of restore from one cluster to another is supported (without deleting the original PV/PVC)? I am going to test this! Regarding the new feature, I think there should be an optional flag so we can choose whether to re-create the PVC/PV, something like --restore-volumes-recreate. |
Current thinking is |
cc @jbeda on UX |
It might be worthwhile to think about this in terms of scenarios and the flags that will usually exist together. 90% of the time that users remap namespaces they'll want to rename/clone PVs too. We don't want to have people flip a zillion flags to do the common case. I'm reminded of the flags for
Another analogy/pattern: |
Is this on a near-term roadmap, or should I be finding a workaround? |
(Also, are you interested in outside contribution, or are you tackling this internally?) |
We’d love to get to this soon if possible. We definitely could use some help with UX ideas. And we always welcome contributions! |
Following up on this: I think this is a critical feature. The key use case is enabling duplication of namespaces within the same cluster. This should not be done automatically (for cross-cluster restores or for DR, the PV names should stay the same). Suggestion: |
Are there any plans to implement this? I'm happy to submit a PR, given a few guidelines from the authors. |
@agolomoodysaada we do hope to be able to include this feature. We'll be doing some more project planning in the next week or two, and hopefully we'll have time to discuss this one. We'd also be thrilled if you wanted to contribute a PR for this. But before starting, I do think this in particular is a feature that needs a solid UI/UX design. Please read the comments in this issue, and let us know if you have any ideas. Thanks! |
Any updates on this? |
@skriss any update on this? Have you found an approach? Is it a good idea for us to start looking at implementing this? |
@arianitu I've been looking into it. Definitely interested in your view on what the right UX is. Should this be automatic behavior when using the |
The 3 use cases where you’d use the
Only the fork scenario requires new PV names I think - so an additional |
I like your first suggestion @skriss .

velero backup create --include-namespaces ns1 ns1-backup

# in the same cluster
velero restore create --namespace-mappings ns1:ns2 --from-backup ns1-backup

The problem with same-namespace restores and the
Unless... unless we can do something like

velero restore create --prefix-names "restored-" --from-backup ns1-backup
# mydeployment -> restored-mydeployment
# mypvc -> restored-mypvc
# mystatefulset -> restored-mystatefulset |
@skriss I think the behaviour should be automatic if you're going from one namespace to another. In most cases, if you are using a PVC in one namespace, the next namespace is not going to work correctly, since you'll almost always run into an issue where containers complain about the PVC already being mounted. Would there be a specific reason to have a --rename-pvs flag? Also, --rename-pvs makes it sound like the old PVs are no longer going to be used, when I think we want new PVs from a snapshot to be used and we'd keep the old PVs around. (Is this possible with Velero?) |
@robertgates55 re: #192 (comment) -- you make a good point that it's only necessary to rename the PV in one of those scenarios; however, do you see a problem if we also rename the PV for the other scenarios? Given that PV names are usually system-generated random IDs, I wonder how much it matters if the name gets changed on restore (as long as its related PVC gets updated as well). One option is to have this be automatic behavior when using the |
I'm not sure if it's applicable here, but cloning a volume could be done using the new PVC DataSource (aka Volume Cloning), introduced in Kubernetes 1.15 as an alpha feature. More info: https://github.com/kubernetes/enhancements/pull/1147/files |
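For reference, a minimal sketch of what that alpha cloning feature looks like (Kubernetes 1.15+ with the VolumePVCDataSource feature gate enabled); the PVC names and storage class below are illustrative, not from this thread:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-clone
  namespace: ns1
spec:
  storageClassName: standard
  dataSource:
    # clone an existing PVC in the same namespace
    kind: PersistentVolumeClaim
    name: mypvc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF

Note that this kind of clone is limited to the source PVC's namespace (and storage class), so by itself it doesn't cover the namespace-remapping restores discussed in this issue.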
@skriss I think that renaming the PV only when the existing one is already in use in the cluster would be ideal. |
Thank you for your work @skriss; looking forward to testing this. Is there a version I can test on our systems? |
Yes! You could try using the
You shouldn't need an updated client, as the changes are all server-side. |
@skriss Sorry swinging back on this after a long time, but I am on:
But I still get
Do you know which Velero version supports this? |
Also to describe what we are doing:
Is this use case supported? We are basically looking for a namespace clone for testing purposes. |
@skriss Hey, is it still not possible to restore a backup with PVCs to a new namespace on the same cluster? |
I found this issue because I want to verify that my restic restores actually contain the data I expect. But there seems to be no way for me to inspect at the file level, or to restore without first deleting the PV that I want to verify. There's an old saying: "backups that haven't been tested don't exist". I'm just trying to figure out how to safely test the backups. |
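One way to do that without touching the original namespace is the namespace-mapping restore discussed later in this thread, assuming it works for your Velero version and volume backend; a rough sketch, where prod-backup, prod, prod-verify, and data-pod are all hypothetical names:

# Restore the backup into a scratch namespace in the same cluster
velero restore create verify-restore \
  --from-backup prod-backup \
  --namespace-mappings prod:prod-verify

# Inspect the restored data at the file level
kubectl -n prod-verify exec -it data-pod -- ls -la /data

# Discard the scratch namespace once verified
kubectl delete namespace prod-verify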
Hello @jpuskar! I have the same case; I don't know how to verify the backups without deleting/modifying the original deployment... |
Still facing the same problem here |
Hello 👋, It appears that Velero with Restic doesn't support the restoration of PV/PVC in the same cluster but in a different namespace. I'm currently using Velero version
Is there any update regarding support for this feature? 🙏 If there's nothing in progress on your end, please let me know and I'll try to add support for this feature. Thanks! |
@liogate We already support namespace mappings on restore -- so this should do what you want. Have you tested this? Maybe you've found a bug with this functionality? |
Hi @sseago, thank you for your feedback. We're testing a PV with a

velero restore create \
  --from-backup $VELERO_SELECTED_BACKUP \
  --include-namespaces $VELERO_SELECTED_NAMESPACE \
  --namespace-mappings $VELERO_SELECTED_NAMESPACE:$VELERO_RESTORE_NAMESPACE \
  --include-resources pods,configmaps,secrets,serviceaccount,persistentvolumeclaims,persistentvolumes \
  --restore-volumes=true

The result -- PVC Restore is stuck with

Thanks. |
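As a general aside (not specific to this report), a few standard Velero diagnostics help narrow down whether the PVC object or its volume data is what's stuck; $VELERO_RESTORE_NAME below stands for whatever name the restore was created with:

# Inspect the restore's status, warnings, and per-resource details
velero restore describe $VELERO_RESTORE_NAME --details
velero restore logs $VELERO_RESTORE_NAME

# Check whether the restored PVC is Pending and which PV it references
kubectl -n $VELERO_RESTORE_NAMESPACE get pvc
kubectl -n $VELERO_RESTORE_NAMESPACE describe pvc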
@liogate Hmm. That seems like a bug, possibly a recent regression. See the PR linked above (#1779 ) -- that was the fix for the problem you are describing. It might be worth opening a new issue reporting this as a regression. It's possible that the kopia refactor broke this, but I'm not really sure. |
@sseago I also used version v1.9.1 and tried to restore using namespace mapping.
I restored it on cluster 2 after backing it up from cluster 1. The restore worked, but I can't use the PVC because the PV pvc-786d37a0-796d-4fd3-8236-5b9c107fe1f5 already exists in cluster 1, and now cluster 2 tries to use the same-named PV, which I think causes problems for the CSI driver since it looks in the datastore of cluster 1. Is there a Velero feature or workaround to handle PV naming conflicts across clusters? Any quick help would be greatly appreciated. Thank you! Log while trying to mount: |
@mikephung Hmm. It's possible that this use case doesn't work properly with CSI: while the fix associated with this (now closed) issue worked fine with native snapshots and fs backup, there may be a separate problem with CSI snapshots and namespace mappings. This issue was fixed and closed four years ago, before the CSI plugin even existed. I'd suggest that if you're hitting a CSI-specific problem, you open a new bug specific to that problem. |
Forked from #189
Scenario:
- A namespace's PVC is bound to PV P1 and has been backed up.
- That backup is restored into the same cluster (e.g., into a new namespace) while the original namespace and PV P1 still exist.
Currently, this is not possible, because Ark does not change the name of the PV when restoring it. The result is 2 PVCs both trying to claim the same (original) PV P1.
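For concreteness, a sketch of that failure mode using the modern velero CLI (this issue predates the Ark-to-Velero rename); the namespace and PV names are illustrative:

# ns1 contains a PVC bound to PV P1
velero backup create ns1-backup --include-namespaces ns1

# Restore into ns2 in the same cluster, without deleting ns1
velero restore create --from-backup ns1-backup --namespace-mappings ns1:ns2

# The restored PVC in ns2 still references PV P1, which remains bound to
# the original PVC in ns1, so the new claim can never bind:
kubectl -n ns2 get pvc   # restored PVC stays Pending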
Questions:
cc @evaldasou