Restore repo-path1 description is confusing #615
Comments
I have also found the documentation about restoring from backup a bit confusing. After some experimentation, this is what I found:
Hi @bastienmenis. Thanks for sharing your insights! 👍 I have also tested further and can confirm your observations. My use case was migrating the TSDB into a new K8s cluster, so I chose to keep the namespaces and deployment names the same. Because I wasn't sure whether it's safe to have restore and backup point to the same location, I'm using different S3 buckets. With that, it helped a lot to use …
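For illustration, a minimal sketch of such a two-bucket setup, keeping namespace and release name the same (the bucket names and the secret name here are hypothetical; the keys follow the chart's `backup` and `bootstrapFromBackup` sections):

```yaml
# values.yaml for the NEW deployment (sketch with example names)
backup:
  enabled: true
  pgBackRest:
    repo1-type: s3
    repo1-s3-region: eu-west-1
    repo1-s3-endpoint: s3.amazonaws.com
    # backups of the new deployment go to a bucket of their own
    repo1-s3-bucket: tsdb-backups-new-cluster

bootstrapFromBackup:
  enabled: true
  # path the OLD deployment backed up to; with unchanged namespace and
  # release name, this is the same path the new deployment would use
  repo1-path: /my-namespace/my-release
  # secret holding pgBackRest settings/credentials for the OLD bucket
  secretName: pgbackrest-bootstrap
```

Because the two repositories live in different buckets, the new deployment's backups can't clobber the one being restored from.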
I have the same goal: to bootstrap a second, separate deployment in a new namespace from the first one's S3 backup. My attempt failed with the following logs; it looks like something (not sure what yet) is not actually downloading the archive. The referenced path does exist in S3, and I can restore a new pod from it in the first cluster, so the credentials and files are correct.
It looks like there are a number of open bugs with the same issue. I'm already using a forked chart because the current release has a broken pgBackRest initialization procedure, so I could try to debug it from there.
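In case it helps with debugging: for the restore side to download anything, the secret referenced by `bootstrapFromBackup.secretName` has to carry pgBackRest options for the source repository as environment variables (pgBackRest maps an option such as `repo1-s3-key` to `PGBACKREST_REPO1_S3_KEY`). A sketch with placeholder values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pgbackrest-bootstrap   # matches bootstrapFromBackup.secretName
type: Opaque
stringData:
  # pgBackRest reads PGBACKREST_* environment variables as options;
  # these point at the bucket holding the backup to restore from.
  PGBACKREST_REPO1_S3_BUCKET: tsdb-backups-old-cluster
  PGBACKREST_REPO1_S3_KEY: "<access key id>"
  PGBACKREST_REPO1_S3_KEY_SECRET: "<secret access key>"
```

If the restore job ends up without these settings, it would be reading from the wrong (or an empty) repository, which could explain the archive never being downloaded.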
Hi TSDB team,
we are running timescaledb-single in our K8s cluster and have backups set up with S3.
As we move to a new cluster, we want to test the restore-from-backup use case, but we couldn't really make sense of the comment in the Helm values:
https://github.com/timescale/helm-charts/blob/7ded6b654c956a3f6dc119d90b47a0262eba600e/charts/timescaledb-single/values.yaml#L159C1-L159C1
Are we supposed to set it to the path where the current backup is, so that it can be found? If so, how is the backup protected from being overwritten? And if it needs to be something different, how can the backup location be found?
It would be great if the comment were more explicit about how this setting is meant to be used.
Thanks!
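For what it's worth, a hedged reading of how the two settings interact, assuming the chart's default backup path of `/<namespace>/<deployment-name>`: `bootstrapFromBackup.repo1-path` should point at the path the old deployment backed up to, while the new deployment's own `backup` section keeps writing under its own path (or bucket), and that separation is what protects the existing backup from being overwritten. Names and paths below are examples:

```yaml
# New deployment "tsdb" in namespace "prod-v2", bootstrapped from the old
# deployment "tsdb" in namespace "prod" (default path layout assumed).
backup:
  enabled: true
  pgBackRest:
    repo1-type: s3
    repo1-s3-bucket: prod-v2-tsdb-backups  # new backups land under /prod-v2/tsdb

bootstrapFromBackup:
  enabled: true
  repo1-path: /prod/tsdb                   # where the OLD backup lives
  secretName: pgbackrest-bootstrap         # credentials/settings for the old bucket
```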