[bitnami/postgresql-ha] Cloning data from primary node fails due to liveness/readiness probes #3556
Comments
Hi, thanks for using this Bitnami chart. Did you try modifying the probes' parameters instead of disabling them? Maybe this action is taking so long that you need to increase the probes' parameters, see https://github.com/bitnami/charts/blob/master/bitnami/postgresql-ha/values.yaml#L189. Apart from that, what error appears in the logs when the issue is hit?
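For illustration, a minimal sketch of what increasing those parameters could look like, assuming the probes are exposed under `postgresql.livenessProbe` / `postgresql.readinessProbe` as in the linked values.yaml and that the release is named `pg-ha` (both are assumptions, not confirmed in this thread):

```bash
# Sketch: give the new replica more headroom before the kubelet considers it dead.
# A container is only restarted after roughly
#   initialDelaySeconds + failureThreshold * periodSeconds
# of consecutive liveness-probe failures, so large values buy time for the initial clone.
helm upgrade pg-ha bitnami/postgresql-ha \
  --reuse-values \
  --set postgresql.livenessProbe.initialDelaySeconds=600 \
  --set postgresql.livenessProbe.periodSeconds=30 \
  --set postgresql.livenessProbe.failureThreshold=120 \
  --set postgresql.readinessProbe.initialDelaySeconds=600 \
  --set postgresql.readinessProbe.periodSeconds=30 \
  --set postgresql.readinessProbe.failureThreshold=120
```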
If I tune the readiness/liveness probes, I cannot predict even approximately when a 1 TB database will finish replicating, so it's better to turn them off. My current issue for now is #3563
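For context, the workaround described above (turning the probes off) would look roughly like this, again assuming the `enabled` flags from the same values.yaml and an illustrative release name:

```bash
# Sketch: disable both probes while the large replica clones.
# Trade-off: Kubernetes will no longer restart a genuinely hung pod, and the
# replica is added to the Service endpoints before it is actually ready.
helm upgrade pg-ha bitnami/postgresql-ha \
  --reuse-values \
  --set postgresql.livenessProbe.enabled=false \
  --set postgresql.readinessProbe.enabled=false
```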
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Hello, I have exactly the same issue as @Antiarchitect. The primary node has 300Gi of data and I want to add a second replica (1 -> 2), but the liveness timeout is too short during the data sync. Do you have any solution? Changing/disabling the liveness probe seems strange; on the other hand, it is logical that the replica should not be considered live until the data is fully synchronized.
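A few hedged diagnostics that can show how far the clone has progressed and whether the kubelet is killing the pod on a failed liveness probe (the pod name comes from the report below; the container name `postgresql` is an assumption):

```bash
# Follow the replica while it reports "Cloning data from primary node..."
kubectl logs -f pg-ha-postgresql-1 -c postgresql

# Check the pod's last termination state and the configured probes
kubectl describe pod pg-ha-postgresql-1

# Probe failures surface as "Unhealthy" events
kubectl get events --field-selector involvedObject.name=pg-ha-postgresql-1 \
  --sort-by=.lastTimestamp
```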
Which chart:
bitnami/postgresql-ha 3.5.9
Describe the bug
Upscaling fails at "Cloning data from primary node..." with a large database (18 GB), probably due to the liveness/readiness probes.
To Reproduce
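(The original report does not list the steps. A plausible sketch, assuming the replica count is controlled by `postgresql.replicaCount` and an existing database of ~18 GB:)

```bash
# Sketch: scale the PostgreSQL StatefulSet from 1 to 2 replicas on a release
# that already holds a large data set, keeping the default probe settings.
helm upgrade pg-ha bitnami/postgresql-ha \
  --reuse-values \
  --set postgresql.replicaCount=2

# The new pod (pg-ha-postgresql-1) then gets stuck in a restart loop while
# it is still at "Cloning data from primary node...".
kubectl get pods -w
```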
Expected behavior
Some mechanism to avoid this.
P.S. If I turn off liveness/readiness probes, everything works and both replicas end with these lines in their logs:
But when I try to turn the liveness/readiness probes back on, the second pod (pg-ha-postgresql-1) has to fully resync for some reason and starts failing again because the probes are enabled again.
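Two hedged checks that can help explain the full resync: whether the replica's PersistentVolumeClaim survived the probe-induced restarts, and how many times the container has been restarted (the pod name is taken from the report; the JSONPath is illustrative):

```bash
# If the PVC backing pg-ha-postgresql-1 is still Bound, the data directory
# should normally be reused instead of being cloned from scratch.
kubectl get pvc

# How often the kubelet has restarted the replica's container
kubectl get pod pg-ha-postgresql-1 \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'
```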
Version of Helm and Kubernetes:
helm version:
kubectl version:
Additional context
NONE