[bitnami/postgresql-ha] Cloning huge data from primary node fails due to livenessProbe #29948
Conversation
Thank you for initiating this pull request. We appreciate your effort. This is just a friendly reminder that signing your commits is important. Your signature certifies that you either authored the patch or have the necessary rights to contribute to the changes. You can find detailed information on how to do this in the “Sign your work” section of our contributing guidelines. Feel free to reach out if you have any questions or need assistance with the signing process.
…s are in "data standby clone" Signed-off-by: Axel <[email protected]>
Signed-off-by: Bitnami Containers <[email protected]>
@carrodher Hi, I re-submitted with a signed commit :) Is it ok?
Signed-off-by: Axel <[email protected]>
Here you can find some tips on how to sign the previous commits.
Signed-off-by: Axel <[email protected]>
Signed-off-by: Bitnami Containers <[email protected]>
@carrodher "All checks have passed" :) I don't know when you will merge, but since time flies, conflicts have appeared on CHANGELOG and Chart.yaml.
Signed-off-by: Carlos Rodríguez Hernández <[email protected]>
Signed-off-by: Bitnami Containers <[email protected]>
Ideally, this step of syncing data on standby nodes should be done in an init container so it doesn't affect the probes.
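As a rough sketch of that suggestion (the `initContainers`/probe fields are standard Kubernetes; the container names, image, and clone command are illustrative and not the chart's actual templates), the clone could run before the main container starts, so the probes never observe the long clone phase:

```yaml
# Illustrative sketch only, not the chart's real manifest.
spec:
  initContainers:
    - name: standby-clone              # hypothetical name
      image: bitnami/postgresql-repmgr # clone runs to completion here
      command:
        - bash
        - -ec
        # Hypothetical clone step; the real chart wires hosts and
        # credentials through its own env vars and scripts.
        - repmgr standby clone --force
  containers:
    - name: postgresql
      image: bitnami/postgresql-repmgr
      # Probes only start once the init container has finished cloning.
      livenessProbe:
        exec:
          command:
            - bash
            - -ec
            - PGPASSWORD=$POSTGRES_PASSWORD psql -U postgres -c 'SELECT 1'
```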
Description of the change
Update the livenessProbe to check whether the process is in the "data standby clone" step. If the synchronization from the primary takes longer than the livenessProbe allows, the livenessProbe fails and the replicas restart in a loop. This works for small datasets, since the clone is fast, but not with more than 20 GB (I have 300 GB on the primary, on SSD, and the sync took around 25 minutes).
Maybe PostgreSQL provides a dedicated command to check this, but I haven't found one.
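Lacking a dedicated PostgreSQL command, the probe has to inspect the process list instead. A minimal sketch of that logic, assuming the probe can see the container's processes (the `probe` function and the injected process listing are illustrative; a real probe would use something like `pgrep -f` and then run the usual database check):

```shell
#!/bin/bash
# Sketch of the probe decision described above (illustrative, not the
# chart's actual probe script).

probe() {
  # $1: a process listing, injected here to keep the sketch testable;
  # the real probe would capture this with e.g. `pgrep -af repmgr`.
  local processes="$1"
  if echo "$processes" | grep -q "standby clone"; then
    # The standby is still cloning from the primary: report alive so
    # the kubelet doesn't restart the pod mid-clone.
    echo "alive: clone in progress"
    return 0
  fi
  # Clone finished (or never ran): fall back to the normal DB check.
  # Placeholder here; the real probe would run `psql -c 'SELECT 1'`.
  echo "alive: fallback db check"
  return 0
}

probe "repmgr -h primary-node standby clone"
```

With this shape, a 25-minute clone of a 300 GB primary no longer trips the liveness check, and once the clone process exits the probe reverts to the ordinary database query.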
Personal thanks to @exename (#4894 (comment))
Benefits
Allows flexibility to add replicas during the run with existing data.
Applicable issues
Checklist
- Chart version bumped in Chart.yaml according to semver. This is not necessary when the changes only affect README.md files.