ERROR: Error creating replica using method pgbackrest #574
Manually running …
It seems like the code …
Commenting out this line results in the backup being restored from the archive on startup with a new volume.
Edit: this restore results in a non-working standby node. I don't know what I am doing wrong. Need help...
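For anyone trying to reproduce the manual test mentioned above, the restore can be exercised by hand roughly like this. This is a hedged sketch: the stanza name `poddb` is an assumption (a common default in this chart), not taken from this thread.

```sh
# Hypothetical manual test of the pgbackrest restore path.
# Stanza name "poddb" is an assumption; check your chart values.

# 1. Verify the repository is reachable and lists backups:
pgbackrest --stanza=poddb info

# 2. Restore into the data directory (--delta reuses existing files):
pgbackrest --stanza=poddb --delta restore
```

If step 1 already fails, the replica-creation method cannot work either, which narrows the problem down to repository configuration rather than the script itself.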
@paulfantom I see a lot of backup/restore related commits from you in December. Maybe something changed which I do not understand? Or maybe there is a regression?
+1
Funny, you're posting this just at the moment when I'm struggling again and have no clue what's going on. I don't know how much time I have to investigate, because I am not really getting paid for this stuff anymore. Do you mind elaborating on your …
Essentially I have the same issue: backups are not being triggered on the replicas, even though …
The backup jobs themselves (to S3) are also continuously running (but I'm not sure that's an issue):
I don't see how your issue is related. This ticket was about creating replicas, but you're talking about making backups. Furthermore, it looks like your backup Job is running. Backups always run on the master, never on replicas/slaves.
According to the docs, every new replica will attempt to copy from an S3 backup (if available), but on creation of the pod I get this:
So the replicas aren't being created using pgbackrest, caused by …
Yes, that looks more like the problem. I can confirm this behaviour here as well. The replica creates itself from the master, or from another replica if available, as a fallback. This takes way longer (in my case), which can be a pain on large instances.
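For context, this fallback order is what Patroni calls `create_replica_methods`: each listed method is tried in order until one succeeds. A minimal sketch of what the relevant Patroni fragment roughly looks like, assuming the script path mentioned later in this thread and otherwise standard Patroni option names (not verified against the chart's rendered config):

```yaml
# Hypothetical Patroni fragment: pgbackrest is tried first,
# basebackup (streaming from the primary) is the fallback.
postgresql:
  create_replica_methods:
    - pgbackrest
    - basebackup
  pgbackrest:
    command: /etc/timescaledb/scripts/pgbackrest_restore.sh
    keep_data: true
    no_params: true
```

When the `pgbackrest` method errors out (as in this issue), Patroni silently moves on to `basebackup`, which is why the replica still comes up, just much more slowly.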
Turns out I forgot to bump my chart, but I am reverting my local changes to fix that issue. Testing with the most recent one now. So it is supposed to be fixed in …
oof, they never released the …
To use the most recent changes with chart …
@mathisve why so sloppy? :( Also see #596. Don't leave the community behind...
I'm using …
That seems to have done it for me, @mindrunner 👍
It doesn't make sense to me why the new script works but the existing one (`/etc/timescaledb/scripts/pgbackrest_restore.sh`) doesn't, when the only change I can see is just that the …
See PR for explanation |
Ah, I see: sourcing in the env_file is required to access …
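The sourcing point is easy to demonstrate in isolation: variables written to an env file are invisible to a script until it sources that file. A minimal self-contained sketch (the file path and variable name are made up for illustration, not taken from the chart):

```shell
#!/bin/sh
# Illustration of why the restore script must source its env file.
# "/tmp/pgbackrest_env" and PGBACKREST_STANZA are hypothetical names.
cat > /tmp/pgbackrest_env <<'EOF'
export PGBACKREST_STANZA=poddb
EOF

# Before sourcing, the variable is not set in this shell:
echo "before: '${PGBACKREST_STANZA:-unset}'"

# After sourcing, the variable is available to this shell and its children:
. /tmp/pgbackrest_env
echo "after: ${PGBACKREST_STANZA}"
```

The same applies inside the chart's restore script: without the `. env_file` line, any `pgbackrest` invocation it makes runs without the repository settings.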
What did you do?
When starting a pod with empty storage, it usually restores from the Azure backup. This works without issues for the first pod in the statefulset after installing the chart. However, on every subsequent pod, I only see the error:
and patroni restores the database with …
which comes with several downsides (e.g. very slow compared to pgbackrest, error-prone, the WAL volume fills up, etc.).
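To make the trade-off concrete, the two replica-creation paths compare roughly as follows. This is a hedged sketch: the stanza name, primary host, and replication user are assumptions for illustration only.

```sh
# Hypothetical comparison of the two replica-creation paths.

# pgbackrest restores from the object-store repository; it does not load
# the primary, and --delta lets interrupted restores resume:
pgbackrest --stanza=poddb --delta restore

# The basebackup fallback streams the entire cluster from the primary
# over the replication protocol; on large instances this is much slower,
# and WAL accumulates on the primary for the duration:
pg_basebackup -h primary-host -U replicator -D "$PGDATA" -X stream
```

This matches the downsides listed above: the fallback works, but ties up the primary and can fill the WAL volume while it runs.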
This used to be different. Every new pod was restored by pgbackrest without any issues.
I am not sure if a config change on my side is the problem or if the chart might have a bug.
Environment
values.yaml