Improvements and simplification of the cluster scaling process. (#290)
Playbook add_pgnode.yml

- Previously, before adding a new node to the cluster, you had to add it to the pg_hba.conf file on every node of the existing cluster yourself. Now this is done automatically (an example of the formerly manual step is shown after this list).

- Previously, to make the add_pgnode.yml playbook run only against the new server, you had to put the node being added to the existing cluster in the [replica] group and at the same time remove all other nodes from that group in the inventory.
Now you do not need to do this: just set the variable new_node=true for the server you are adding to the cluster, and the playbook will be executed only on this server (see the inventory sketch below).
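
For illustration, here is the formerly manual step that is now automated. A typical pg_hba.conf entry for a new node with the IP address 10.128.64.144 could look like this (the replication user and auth method below are placeholders, not values taken from this commit):

```
# pg_hba.conf on every existing node (now added automatically by the playbook).
# "replicator" and "md5" are example values; adjust them to your configuration.
host  replication  replicator  10.128.64.144/32  md5
host  all          all         10.128.64.144/32  md5
```

The change was then applied with a Patroni reload on any cluster node, e.g. `patronictl reload <cluster-name>` (see `patronictl reload --help`).

A minimal inventory sketch of the new workflow (the same example appears in the updated README below):

```
[master]
10.128.64.140 hostname=pgnode01 postgresql_exists='true'

[replica]
10.128.64.142 hostname=pgnode02 postgresql_exists='true'
10.128.64.143 hostname=pgnode03 postgresql_exists='true'
10.128.64.144 hostname=pgnode04 postgresql_exists='false' new_node=true
```

With this inventory, `ansible-playbook add_pgnode.yml` touches only the host marked new_node=true.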

Playbook add_balancer.yml

- Previously, configuration files were copied from the server in the [master] group. This is inconvenient when, for example, the load balancers are placed on separate servers. Now the files are copied from the first server specified in the balancers group in the inventory file.

- As with add_pgnode.yml, specify the variable new_node=true for the balancer server that you are adding to the cluster, and the playbook will be executed only on this server (see the sketch after this list).
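
A minimal inventory sketch (the same example appears in the updated README below); with it, `ansible-playbook add_balancer.yml` configures only the host marked new_node=true:

```
[balancers]
10.128.64.140
10.128.64.142
10.128.64.143
10.128.64.144 new_node=true
```

And a hypothetical sketch of the "copy from the first balancer" pattern, assuming a standard Ansible fetch task; this is an illustration of the idea, not the playbook's actual code:

```yaml
# Hypothetical illustration (not verbatim from add_balancer.yml):
# grab haproxy.cfg from the first host in the "balancers" group
# so it can later be pushed to the new balancer node.
- name: Fetch haproxy config from the first balancer
  fetch:
    src: /etc/haproxy/haproxy.cfg
    dest: files/haproxy.cfg
    flat: true
  delegate_to: "{{ groups['balancers'][0] }}"
  run_once: true
```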
vitabaks authored Mar 29, 2023
1 parent 59b6100 commit 53c8a2e
Showing 9 changed files with 257 additions and 78 deletions.
77 changes: 39 additions & 38 deletions README.md
@@ -36,9 +36,8 @@ In addition to deploying new clusters, this playbook also supports the deployment
 - [Deployment: quick start](#deployment-quick-start)
 - [Variables](#variables)
 - [Cluster Scaling](#cluster-scaling)
-  - [Preparation:](#preparation)
-  - [Steps to add a new node:](#steps-to-add-a-new-node)
-  - [Steps to add a new banlancer node:](#steps-to-add-a-new-banlancer-node)
+  - [Steps to add a new postgres node](#steps-to-add-a-new-postgres-node)
+  - [Steps to add a new balancer node](#steps-to-add-a-new-balancer-node)
 - [Restore and Cloning](#restore-and-cloning)
   - [Create cluster with pgBackRest:](#create-cluster-with-pgbackrest)
   - [Create cluster with WAL-G:](#create-cluster-with-wal-g)
@@ -287,67 +286,69 @@ See the vars/[main.yml](./vars/main.yml), [system.yml](./vars/system.yml) and ([


 ## Cluster Scaling
-Add new postgresql node to existing cluster
-<details><summary>Click here to expand...</summary><p>
 
 After you successfully deployed your PostgreSQL HA cluster, you may need to scale it further. \
 Use the `add_pgnode.yml` playbook for this.
 
-> :grey_exclamation: This playbook does not scaling the etcd cluster and haproxy balancers.
-During the run this playbook, the new nodes will be prepared in the same way as when first deployment the cluster. But unlike the initial deployment, all the necessary **configuration files will be copied from the master server**.
-
-###### Preparation:
-
-1. Add a new node (*or subnet*) to the `pg_hba.conf` file on all nodes in your cluster
-2. Apply pg_hba.conf for all PostgreSQL (see `patronictl reload --help`)
-
-###### Steps to add a new node:
-
-3. Go to the playbook directory
-4. Edit the inventory file
-
-Specify the ip address of one of the nodes of the cluster in the [master] group, and the new node (which you want to add) in the [replica] group.
-
-5. Edit the variable files
-
-Variables that should be the same on all cluster nodes: \
-`with_haproxy_load_balancing`,` postgresql_version`, `postgresql_data_dir`,` postgresql_conf_dir`.
-
-6. Run playbook:
-
-`ansible-playbook add_pgnode.yml`
+<details><summary>Add new postgresql node to existing cluster</summary><p>
+
+> This playbook does not scale the etcd cluster or the consul cluster.
+
+While this playbook runs, the new nodes are prepared in the same way as during the initial deployment of the cluster. But unlike the initial deployment, all the necessary **configuration files will be copied from the master server**.
+
+###### Steps to add a new Postgres node:
+
+1. Add a new node to the inventory file with the variable `new_node=true`
+2. Run the `add_pgnode.yml` playbook
+
+In this example, we add a node with the IP address 10.128.64.144:
+
+```
+[master]
+10.128.64.140 hostname=pgnode01 postgresql_exists='true'
+
+[replica]
+10.128.64.142 hostname=pgnode02 postgresql_exists='true'
+10.128.64.143 hostname=pgnode03 postgresql_exists='true'
+10.128.64.144 hostname=pgnode04 postgresql_exists='false' new_node=true
+```
+
+Run playbook:
+
+```
+ansible-playbook add_pgnode.yml
+```
 
 </p></details>

-Add new haproxy balancer node
-<details><summary>Click here to expand...</summary><p>
-
-Use the `add_balancer.yml` playbook for this.
-
-During the run this playbook, the new balancer node will be prepared in the same way as when first deployment the cluster. But unlike the initial deployment, **all necessary configuration files will be copied from the server specified in the [master] group**.
+<details><summary>Add new haproxy balancer node</summary><p>
+
+Note: Used if the `with_haproxy_load_balancing` variable is set to `true`
+
+While this playbook runs, the new balancer node is prepared in the same way as during the initial deployment of the cluster. But unlike the initial deployment, all necessary **configuration files will be copied from the first server specified in the inventory file in the "balancers" group**.
 
 > :heavy_exclamation_mark: Please test it in your test environment before using it in production.
 
-###### Steps to add a new banlancer node:
+###### Steps to add a new balancer node:
 
-1. Go to the playbook directory
-
-2. Edit the inventory file
-
-Specify the ip address of one of the existing balancer nodes in the [master] group, and the new balancer node (which you want to add) in the [balancers] group.
+1. Add a new node to the inventory file with the variable `new_node=true`
+
+2. Run the `add_balancer.yml` playbook
 
 > :heavy_exclamation_mark: Attention! The list of firewall ports is determined dynamically, based on the group in which the host is specified. \
 If you are adding a new haproxy balancer node to one of the existing nodes from the [etcd_cluster] or [master]/[replica] groups, you may overwrite the iptables rules! \
 See the firewall_allowed_tcp_ports_for.balancers variable in the system.yml file.
 
-3. Edit the `main.yml` variable file
-
-Specify `with_haproxy_load_balancing: true`
+In this example, we add a balancer node with the IP address 10.128.64.144:
+
+```
+[balancers]
+10.128.64.140
+10.128.64.142
+10.128.64.143
+10.128.64.144 new_node=true
+```
 
-4. Run playbook:
+Run playbook:
 
-`ansible-playbook add_balancer.yml`
+```
+ansible-playbook add_balancer.yml
+```
 
 </p></details>

89 changes: 78 additions & 11 deletions add_balancer.yml
@@ -1,6 +1,6 @@
 ---
 
-- name: Add haproxy balancer node
+- name: Add haproxy balancer node (to the cluster "{{ patroni_cluster_name }}")
   hosts: balancers
   become: true
   become_method: sudo
@@ -9,8 +9,6 @@
   vars_files:
     - vars/main.yml
     - vars/system.yml
-  vars:
-    add_balancer: true
 
   pre_tasks:
     - name: Include OS-specific variables
@@ -25,16 +23,36 @@
       when: ansible_os_family == 'Rocky' or ansible_os_family == 'AlmaLinux'
       tags: always
 
-    - name: Checking Linux distribution
+    - name: '[Pre-Check] Checking Linux distribution'
       fail:
         msg: "{{ ansible_distribution }} is not supported"
       when: ansible_distribution not in os_valid_distributions
 
-    - name: Checking version of OS Linux
+    - name: '[Pre-Check] Checking version of OS Linux'
       fail:
         msg: "{{ ansible_distribution_version }} of {{ ansible_distribution }} is not supported"
       when: ansible_distribution_version is version_compare(os_minimum_versions[ansible_distribution], '<')
 
+    - name: '[Pre-Check] Check if there is a node with new_node set to true'
+      set_fact:
+        new_nodes: "{{ new_nodes | default([]) + [item] }}"
+      when: hostvars[item]['new_node'] | default(false) | bool
+      loop: "{{ groups['balancers'] }}"
+      tags: always
+
+    # Stop if no nodes were found with the new_node variable
+    - name: "Pre-Check error. No nodes found with new_node set to true"
+      run_once: true
+      fail:
+        msg: "Please specify the new_node=true variable for the new balancer server to add it to the existing cluster."
+      when: new_nodes | default([]) | length < 1
+
+    - name: Print a list of new balancer nodes
+      run_once: true
+      debug:
+        var: new_nodes
+      tags: always

     - name: Update apt cache
       apt:
         update_cache: true
@@ -44,7 +62,10 @@
       delay: 5
       retries: 3
       environment: "{{ proxy_env | default({}) }}"
-      when: ansible_os_family == "Debian" and installation_method == "repo"
+      when:
+        - new_node | default(false) | bool
+        - ansible_os_family == "Debian"
+        - installation_method == "repo"
 
     - name: Make sure the gnupg and apt-transport-https packages are present
       apt:
@@ -57,20 +78,27 @@
       delay: 5
       retries: 3
       environment: "{{ proxy_env | default({}) }}"
-      when: ansible_os_family == "Debian" and installation_method == "repo"
+      when:
+        - new_node | default(false) | bool
+        - ansible_os_family == "Debian"
+        - installation_method == "repo"
 
     - name: Build a firewall_ports_dynamic_var
       set_fact:
         firewall_ports_dynamic_var: "{{ firewall_ports_dynamic_var | default([]) + (firewall_allowed_tcp_ports_for[item]) }}"
       loop: "{{ hostvars[inventory_hostname].group_names }}"
-      when: firewall_enabled_at_boot|bool
+      when:
+        - new_node | default(false) | bool
+        - firewall_enabled_at_boot | bool
       tags: firewall
 
     - name: Build a firewall_rules_dynamic_var
       set_fact:
         firewall_rules_dynamic_var: "{{ firewall_rules_dynamic_var | default([]) + (firewall_additional_rules_for[item]) }}"
       loop: "{{ hostvars[inventory_hostname].group_names }}"
-      when: firewall_enabled_at_boot|bool
+      when:
+        - new_node | default(false) | bool
+        - firewall_enabled_at_boot | bool
       tags: firewall

   roles:
@@ -79,12 +107,51 @@
       vars:
         firewall_allowed_tcp_ports: "{{ firewall_ports_dynamic_var | unique }}"
         firewall_additional_rules: "{{ firewall_rules_dynamic_var | unique }}"
-      when: firewall_enabled_at_boot|bool
+      when:
+        - new_node | default(false) | bool
+        - firewall_enabled_at_boot | bool
       tags: firewall
 
     - role: sysctl
+      when:
+        - new_node | default(false) | bool
 
+  tasks:
+    - name: Add host to group new_balancer (in-memory inventory)
+      add_host:
+        name: "{{ item }}"
+        groups: new_balancer
+      loop: "{{ new_nodes }}"
+      changed_when: false
+      tags: always

+- hosts: new_balancer
+  become: true
+  become_method: sudo
+  gather_facts: true
+  any_errors_fatal: true
+  vars_files:
+    - vars/main.yml
+    - vars/system.yml
+  vars:
+    add_balancer: true
+
+  pre_tasks:
+    - name: Include OS-specific variables
+      include_vars: "vars/{{ ansible_os_family }}.yml"
+      when: not ansible_os_family == 'Rocky' and not ansible_os_family == 'AlmaLinux'
+      tags: always
+
+    # For compatibility with old Ansible versions
+    # (support for RockyLinux and AlmaLinux was added in Ansible 2.11)
+    - name: Include OS-specific variables
+      include_vars: "vars/RedHat.yml"
+      when: ansible_os_family == 'Rocky' or ansible_os_family == 'AlmaLinux'
+      tags: always
+
+  roles:
+    - role: hostname
+    - role: resolv_conf
+    - role: sysctl
+
+    - role: haproxy
+      when: with_haproxy_load_balancing|bool