Restoring from S3 on a different machine #1066
This is covered here: https://pgbackrest.org/user-guide-centos7.html#replication/hot-standby. You just need to modify the recovery settings to whatever you need to recover your primary. PITR instructions are here: https://pgbackrest.org/user-guide-centos7.html#pitr
There's no need to create the stanza again -- it's already created. All you need is an empty PGDATA dir, or to specify an option that allows restoring into a non-empty one.
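Following the linked user guide, the restore itself can be sketched roughly as below. This is a command sketch only: the stanza name `demo` and the recovery target are placeholders taken from the guide's examples, not from this user's actual setup.

```shell
# Sketch per the user guide (stanza name and target are placeholders).
# Plain restore of the latest backup into an empty data directory:
sudo -u postgres pgbackrest --stanza=demo restore

# For PITR, add a recovery target (see the PITR section of the guide):
sudo -u postgres pgbackrest --stanza=demo --delta \
    --type=time "--target=2021-05-01 00:00:00+00" \
    --target-action=promote restore
```

After the restore completes, start PostgreSQL and verify it reaches the intended recovery point before promoting.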
Thanks, I managed to do the restore.
For others finding this: if you're getting the system-identifier mismatch error, try the alternate restore approach described in the disaster-recovery tutorial.

API reference: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorial/disaster-recovery/

It appears the second type can work even if the database system-id differs between the backup and the target cluster, whereas the first cannot. (However, don't be like me and assume the restore is failing just because it sits there for a while; in my case, I had to wait 2.5 minutes before any of the backup's files actually started being restored. So be patient before changing further settings or the like.)

For reference, here is the code that causes the error: pgbackrest/src/command/check/common.c, lines 152 to 157, at commit bd0081f.
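The "second type" in that tutorial bootstraps a new cluster from an existing repo via the `spec.dataSource.pgbackrest` field. A minimal sketch is below; the cluster name, secret name, bucket, endpoint, and repo path are all placeholders, so treat this as an illustrative fragment rather than a working manifest.

```yaml
# Illustrative fragment only: names, bucket, and repo path are placeholders.
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  dataSource:
    pgbackrest:
      stanza: db
      configuration:
      - secret:
          name: pgo-s3-creds
      global:
        repo1-path: /pgbackrest/repo1
      repo:
        name: repo1
        s3:
          bucket: my-backup-bucket
          endpoint: s3.amazonaws.com
          region: us-east-1
```

Because this path restores the cluster's initial state directly from the repo's backups, the restored instance inherits the repo's system identifier, which is why it avoids the mismatch error.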
After more experimenting, I found that the error can occur in the following case as well:
When a new postgres cluster is launched (with a new system identifier), I'd tend to expect the cluster to look into the repo and reconcile with its existing contents in some way. Instead, the postgres-operator notices the backup repo, and complains about it, but doesn't offer an easy way to resolve the conflict:
A third option, which does work, is to delete the backup-repo folder in the cloud manually. Then PGO sees there is no mismatch, creates a new cluster, and populates the backup repo with its own configuration. This works, but it is not terribly obvious to new users; perhaps a special error message could be displayed for this case.

EDIT: I put some further (arguably more helpful) notes on stanza-related issues here: https://github.com/debate-map/app/blob/56180dca95148d3af65aa14626093d62dca432fc/README.md?plain=1#L618
…_ovh, read/write to different buckets (for the db backups). * Finally figured out why the system-id mismatch error is necessary, and how to avoid or deal with it. (Basically, if you're going to use a backup repo's contents, you need to initialize your database instance from one of its backups; this is necessary because of the way postgres physical backups work. See here for more info: pgbackrest/pgbackrest#1066 (comment)) Because of the limitations of physical backups, I plan to set up weekly (or so) logical backups as well. That's for another time, though, as physical backups should be fine for now (i.e. while I'm on the same postgres version).
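The logical-backup plan mentioned above can be as simple as a scheduled `pg_dump`. A minimal sketch, where the database name and output path are placeholders:

```shell
# Placeholder database name and path; schedule via cron or a Kubernetes CronJob.
pg_dump --format=custom --file=/backups/app-$(date +%F).dump app

# Restore later, possibly into a newer Postgres major version
# (logical dumps are not tied to the system identifier):
# pg_restore --dbname=app /backups/app-2021-05-01.dump
```

Unlike physical backups, a logical dump can be restored into any cluster regardless of system identifier or (within support limits) major version, which is the gap it fills here.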
The system identifier cannot be updated in Postgres.
The thing to do here is issue a stanza-upgrade or maybe better, a stanza-delete/stanza-create since the repo is pretty useless without backups.
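The stanza-delete/stanza-create sequence can be sketched as below; `demo` is a placeholder stanza name. Note that `stanza-delete` requires pgBackRest to be stopped for the stanza first.

```shell
# Placeholder stanza name "demo"; run as the postgres user.
sudo -u postgres pgbackrest --stanza=demo stop
sudo -u postgres pgbackrest --stanza=demo stanza-delete
sudo -u postgres pgbackrest --stanza=demo stanza-create
sudo -u postgres pgbackrest --stanza=demo start
```

This discards the old repo contents and re-initializes the stanza against the current cluster's system identifier, so a fresh backup should be taken immediately afterwards.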
This seems like something you should suggest at https://github.com/CrunchyData/postgres-operator. Actually, that pretty much goes for all of this.
Please provide the following information when submitting an issue (feature requests or general comments can skip this):
pgBackRest version: 2.25
PostgreSQL version: 12.3
Operating system: CentOS 7
Installation method: package (CrunchyData Postgres Operator)
Stanza-create fails with this:
I am having trouble understanding how one can restore in the following situation:
I didn't find information in the Guide for this situation.
How do I create the stanza and restore to the new machine?