[Epic] Backup Replication #103
Comments
I was just about to post this as a feature request. :) I just tried to do this from eastus to westus in Azure and started to think about how we could copy the snapshot and create the disk in the correct region. We could possibly have a restore target config? I also like the idea of creating multiple backups in other regions in case a region goes down, or a cluster and its resources get deleted.
@jimzim this is definitely something we need to spec out and do! We've been kicking around the idea of a "backup target", which would replace the current backup storage configuration.
@ncdc Maybe we can discuss this briefly at KubeCon? I have begun to make this work on Azure, but before I go too much further it would be good to talk about what your planned architecture is.
Sounds great!
This is very much what I'm thinking. We need to think about backup targets, restore sources, and ways to munge stuff with a pipeline. Sounds like we are all thinking similar things.
On Azure, you can create a snapshot into a different resource group than the one the persistent disk is in, which means the snapshots could be created directly into a dedicated backup resource group. Cross-RG restores should then be quite simple, as the source of the data will always be consistent and there should be no references back to the source resource group. I'm not sure if same-Location is a limitation of this -- I've only tried it on two resource groups that are in the same Azure Location.
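A minimal sketch of that cross-resource-group snapshot creation with the Azure CLI; the resource group and disk names here are placeholders, not taken from the original test:
# Look up the managed disk in its own resource group
DISK_ID=$(az disk show --resource-group app-rg --name app-disk --query id --output tsv)
# Create the snapshot directly in a separate backup resource group
az snapshot create \
  --resource-group backup-rg \
  --name app-disk-snap \
  --source "$DISK_ID"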
For reference, this is the current Ark Backup Replication design.
We've created a document of scenarios that we'll use to inform the design decisions for this project. We also have a document where we're discussing more detailed changes to the Ark codebase, from which we'll generate a list of specific work items. Members of the [email protected] google group have comment access to both of these documents, for anyone who would like to share their thoughts.
Similar scenario for us, I think, and we are using the following manual workaround:
# Make a backup on the first cluster
kubectx my-first-cluster
velero backup create my-backup
# Switch to new cluster and restore the backup
kubectx my-second-cluster
velero restore create --from-backup my-backup
# Find the restored disk name
gcloud config configurations activate my-second-project
gcloud compute disks list
# Move the disk to the necessary region
gcloud compute disks move restore-xyz --destination-zone "${MY_SECOND_CLUSTER_ZONE}"
# Ensure the PV is set to use the retain reclaim policy then delete the old resources
kubectl patch pv mongo-volume-mongodb-0 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl delete statefulset mongodb
kubectl delete pvc mongo-volume-mongodb-0
# Recreate the restored stateful set with references for the new volume
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mongodb
spec:
selector:
matchLabels:
app: mongodb
serviceName: "mongodb"
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongo
image: mongo
command:
- mongod
- "--bind_ip"
- 0.0.0.0
- "--smallfiles"
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-volume
mountPath: /data/db
volumeClaimTemplates:
- metadata:
name: mongo-volume
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
storageClassName: ""
volumeName: "mongo-volume-mongodb-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: mongo-volume-mongodb-0
spec:
storageClassName: ""
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: "restore-xyz"
fsType: ext4
EOF
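A quick way to check that the recreated claim binds to the restored disk (a minimal follow-up sketch; the resource names match the manifest above):
# Confirm the PV/PVC bind and the pod comes back up on the restored data
kubectl get pv mongo-volume-mongodb-0
kubectl get pvc mongo-volume-mongodb-0
kubectl get pods -l app=mongodb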
Hi, is there any ETA for this? It sounds like a basic feature to be able to use a backup to recover from an AZ failure. https://docs.google.com/document/d/1vGz53OVAPynrgi5sF0xSfKKr32NogQP-xgXA1PB6xMc/edit#heading=h.yuq6zfblfpvs sounded promising.
@jujugrrr we have cross-AZ/region backup & restore on our roadmap. If you're interested in contributing in any way (requirements, design work, etc.), please let us know! cc @stephbman
You don't need backup replication to support multi-zone and multi-region for GCP/GKE with the K8s VolumeSnapshot beta support of Velero v1.4. See #1624 (comment)
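For context, a hedged sketch of the kind of VolumeSnapshotClass the Velero CSI support looks for on GKE; the class name is a placeholder and the exact API version may differ by cluster and Velero version:
cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: gce-pd-snapshot-class
  labels:
    # Label that the Velero CSI plugin uses to select a snapshot class
    velero.io/csi-volumesnapshot-class: "true"
driver: pd.csi.storage.gke.io
deletionPolicy: Retain
EOF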
Hey, I was wondering if there was any update on this? Or a breakdown of the tasks required to complete this epic? My team is running an AKS cluster with the CSI plugin. We've tried restic, as well as restoring the VHD from blob storage to move the snapshots into another region, which resulted in:
StatusCode: 409, RawError: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 409, RawError: {
  "error": {
    "code": "OperationNotAllowed",
    "message": "Addition of a blob based disk to VM with managed disks is not supported.",
    "target": "dataDisk"
  }
}
Is there any update to this? I feel like this could be easily solved by not storing a specific volume ID (snapshot ID in the case of AWS) that you want to restore from, but instead adding a custom tag with a randomly generated ID that Velero uses as a reference when restoring. That way, no matter which region or AZ you copy the storage backup to, Velero would still be able to restore from it as long as it finds the correct ID tag. Just a thought.
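To illustrate that idea (a hedged sketch, not anything Velero does today; the snapshot IDs, regions, and tag key are placeholders):
# Copy an EBS snapshot from us-east-1 to us-west-2
aws ec2 copy-snapshot \
  --source-region us-east-1 \
  --source-snapshot-id snap-0123456789abcdef0 \
  --region us-west-2 \
  --description "replicated backup snapshot"
# Tag the copy with the same logical backup ID (hypothetical tag key),
# so a restore could look it up by tag rather than by snapshot ID
aws ec2 create-tags \
  --region us-west-2 \
  --resources snap-0fedcba9876543210 \
  --tags Key=backup-ref-id,Value=6f1c9d2e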
Any update to this? We are looking into helping customers replicate volume backups across cloud regions (e.g., between AWS regions).
Hi, are there any updates regarding this? Is there any way someone can help with this?
My very limited understanding, from comments by @dsu-igeek at the community meeting of 2021-11-02, is that this sort of feature is on hold pending #4077 and a rewrite of the volume snapshotters to a new architecture based on Astrolabe. While it is not particularly hard to implement replication in a particular plugin without a general framework, subtle timing issues (#2888) could lead to anomalous behavior in applications that do not tolerate a simple copy of their volumes.
Hello, also wondering if there are any updates on this, and how I can help.
I too am wondering about an update. Was this accepted into 1.10?
Any updates on the topic?
Hello, any updates on this? We are hoping to get this feature soon. Right now we are trying to implement it by copying the Azure disk snapshots to another region with shell/Python scripts and updating the Velero output files (to make the restore smooth, just in case). I was also wondering: has anyone tried using CSI Snapshot Data Movement to make backups available cross-region? UPDATE 16.05.2024
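For reference, a hedged sketch of a cross-region Azure snapshot copy with the az CLI, along the lines of the scripted approach described above; the resource group, snapshot names, and target location are placeholders, and the --copy-start flow applies to incremental snapshots:
# Look up the source (incremental) snapshot in the source region
SOURCE_SNAPSHOT_ID=$(az snapshot show \
  --resource-group my-rg \
  --name my-disk-snap \
  --query id --output tsv)
# Start a cross-region copy of the snapshot into the target region
az snapshot create \
  --resource-group my-rg \
  --name my-disk-snap-westeurope \
  --location westeurope \
  --source "$SOURCE_SNAPSHOT_ID" \
  --incremental true \
  --copy-start true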
User Stories
As a cluster administrator, I would like to define a replication policy for my backups which will ensure that copies exist in other availability zones or regions. This will allow me to restore a cluster in case of an AZ or region failure.
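A purely hypothetical sketch of what such a replication policy could look like as a Kubernetes resource; this is not an existing Velero API, and every field name below is invented for illustration:
apiVersion: velero.example/v1alpha1   # hypothetical API group, not a real Velero CRD
kind: BackupReplicationPolicy
metadata:
  name: replicate-to-west
spec:
  # Backups matching this selector would be replicated
  backupSelector:
    matchLabels:
      replicate: "true"
  # Hypothetical destinations in another region
  destinations:
    - backupStorageLocation: bsl-us-west-2
      volumeSnapshotLocation: vsl-us-west-2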
Non-Goals
Features
Original Issue Description
There are a few different dimensions of a DR strategy that may be worth considering. For AWS deployments, the trade-offs and complexity of running multi-AZ are fairly negligible if you stay in the same region, so the single-region/multi-AZ deployment is extremely common.
An additional requirement is often the ability to restore in another region, with more relaxed RTO/RPO, in case an entire region goes down.
Looking over #101 brought a few things to mind, and a large wish list might include:
us-east-1a -> us-west-2b
Some of these are certainly available to users today (copying snapshots and S3 data), but they require additional external integrations to function properly. As a user, it would be more convenient if this could be done in a consolidated way.
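As an illustration of the kind of external integration mentioned above (a hedged sketch; bucket names, regions, and the snapshot ID are placeholders):
# Mirror the object-store side of the backups to a bucket in another region
aws s3 sync s3://backups-us-east-1 s3://backups-us-west-2 \
  --source-region us-east-1 --region us-west-2
# Copy a volume snapshot to the other region
aws ec2 copy-snapshot \
  --source-region us-east-1 \
  --source-snapshot-id snap-0123456789abcdef0 \
  --region us-west-2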