diff --git a/README.md b/README.md
index 2f60fc283..40d51402d 100644
--- a/README.md
+++ b/README.md
@@ -36,9 +36,8 @@ In addition to deploying new clusters, this playbook also support the deployment
 - [Deployment: quick start](#deployment-quick-start)
 - [Variables](#variables)
 - [Cluster Scaling](#cluster-scaling)
-  - [Preparation:](#preparation)
-  - [Steps to add a new node:](#steps-to-add-a-new-node)
-  - [Steps to add a new banlancer node:](#steps-to-add-a-new-banlancer-node)
+  - [Steps to add a new postgres node](#steps-to-add-a-new-postgres-node)
+  - [Steps to add a new balancer node](#steps-to-add-a-new-balancer-node)
 - [Restore and Cloning](#restore-and-cloning)
   - [Create cluster with pgBackRest:](#create-cluster-with-pgbackrest)
   - [Create cluster with WAL-G:](#create-cluster-with-wal-g)
@@ -287,67 +286,69 @@ See the vars/[main.yml](./vars/main.yml), [system.yml](./vars/system.yml) and ([
 
 ## Cluster Scaling
 
-Add new postgresql node to existing cluster
-<details><summary>Click here to expand...</summary>
-
 After you successfully deployed your PostgreSQL HA cluster, you may need to scale it further. \
 Use the `add_pgnode.yml` playbook for this.
 
-> :grey_exclamation: This playbook does not scaling the etcd cluster and haproxy balancers.
-
-During the run this playbook, the new nodes will be prepared in the same way as when first deployment the cluster. But unlike the initial deployment, all the necessary **configuration files will be copied from the master server**.
+<details><summary>Add new postgresql node to existing cluster</summary>
 
-###### Preparation:
+> This playbook does not scale the etcd cluster or the consul cluster.
 
-1. Add a new node (*or subnet*) to the `pg_hba.conf` file on all nodes in your cluster
-2. Apply pg_hba.conf for all PostgreSQL (see `patronictl reload --help`)
+While this playbook runs, the new nodes are prepared in the same way as during the initial deployment of the cluster. But unlike the initial deployment, all the necessary **configuration files will be copied from the master server**.
 
-###### Steps to add a new node:
+###### Steps to add a new Postgres node:
 
-3. Go to the playbook directory
-4. Edit the inventory file
+1. Add a new node to the inventory file with the variable `new_node=true`
+2. Run the `add_pgnode.yml` playbook
 
-Specify the ip address of one of the nodes of the cluster in the [master] group, and the new node (which you want to add) in the [replica] group.
+In this example, we add a node with the IP address 10.128.64.144
 
-5. Edit the variable files
+```
+[master]
+10.128.64.140 hostname=pgnode01 postgresql_exists='true'
 
-Variables that should be the same on all cluster nodes: \
-`with_haproxy_load_balancing`,` postgresql_version`, `postgresql_data_dir`,` postgresql_conf_dir`.
+[replica]
+10.128.64.142 hostname=pgnode02 postgresql_exists='true'
+10.128.64.143 hostname=pgnode03 postgresql_exists='true'
+10.128.64.144 hostname=pgnode04 postgresql_exists='false' new_node=true
+```
 
-6. Run playbook:
+Run playbook:
 
-`ansible-playbook add_pgnode.yml`
+```
+ansible-playbook add_pgnode.yml
+```
 
 </details>
 
-Add new haproxy balancer node
-<details><summary>Click here to expand...</summary>
-
-Use the `add_balancer.yml` playbook for this.
+<details><summary>Add new haproxy balancer node</summary>
 
-During the run this playbook, the new balancer node will be prepared in the same way as when first deployment the cluster. But unlike the initial deployment, **all necessary configuration files will be copied from the server specified in the [master] group**.
-
-> :heavy_exclamation_mark: Please test it in your test enviroment before using in a production.
+While this playbook runs, the new balancer node is prepared in the same way as during the initial deployment of the cluster. But unlike the initial deployment, all necessary **configuration files will be copied from the first server specified in the inventory file in the "balancers" group**.
 
-###### Steps to add a new banlancer node:
+###### Steps to add a new balancer node:
 
-1. Go to the playbook directory
+Note: Used only if the `with_haproxy_load_balancing` variable is set to `true`
 
-2. Edit the inventory file
+1. Add a new node to the inventory file with the variable `new_node=true`
 
-Specify the ip address of one of the existing balancer nodes in the [master] group, and the new balancer node (which you want to add) in the [balancers] group.
+2. Run the `add_balancer.yml` playbook
 
-> :heavy_exclamation_mark: Attention! The list of Firewall ports is determined dynamically based on the group in which the host is specified. \
-If you adding a new haproxy balancer node to one of the existing nodes from the [etcd_cluster] or [master]/[replica] groups, you can rewrite the iptables rules! \
-See firewall_allowed_tcp_ports_for.balancers variable in the system.yml file.
-3. Edit the `main.yml` variable file
+In this example, we add a balancer node with the IP address 10.128.64.144
 
-Specify `with_haproxy_load_balancing: true`
+```
+[balancers]
+10.128.64.140
+10.128.64.142
+10.128.64.143
+10.128.64.144 new_node=true
+```
 
-4. Run playbook:
+Run playbook:
 
-`ansible-playbook add_balancer.yml`
+```
+ansible-playbook add_balancer.yml
+```
 
 </details>
diff --git a/add_balancer.yml b/add_balancer.yml
index ee04eb3eb..cf35e4a70 100644
--- a/add_balancer.yml
+++ b/add_balancer.yml
@@ -1,6 +1,6 @@
 ---
-- name: Add haproxy balancer node
+- name: Add haproxy balancer node (to the cluster "{{ patroni_cluster_name }}")
   hosts: balancers
   become: true
   become_method: sudo
@@ -9,8 +9,6 @@
   vars_files:
     - vars/main.yml
     - vars/system.yml
-  vars:
-    add_balancer: true
 
   pre_tasks:
     - name: Include OS-specific variables
@@ -25,16 +23,36 @@
       when: ansible_os_family == 'Rocky' or ansible_os_family == 'AlmaLinux'
       tags: always
 
-    - name: Checking Linux distribution
+    - name: '[Pre-Check] Checking Linux distribution'
      fail:
        msg: "{{ ansible_distribution }} is not supported"
      when: ansible_distribution not in os_valid_distributions
 
-    - name: Checking version of OS Linux
+    - name: '[Pre-Check] Checking version of OS Linux'
      fail:
        msg: "{{ ansible_distribution_version }} of {{ ansible_distribution }} is not supported"
      when: ansible_distribution_version is version_compare(os_minimum_versions[ansible_distribution], '<')
 
+    - name: '[Pre-Check] Check if there is a node with new_node set to true'
+      set_fact:
+        new_nodes: "{{ new_nodes | default([]) + [item] }}"
+      when: hostvars[item]['new_node'] | default(false) | bool
+      loop: "{{ groups['balancers'] }}"
+      tags: always
+
+    # Stop, if no nodes found with new_node variable
+    - name: "Pre-Check error. No nodes found with new_node set to true"
+      run_once: true
+      fail:
+        msg: "Please specify the new_node=true variable for the new balancer server to add it to the existing cluster."
+      when: new_nodes | default([]) | length < 1
+
+    - name: Print a list of new balancer nodes
+      run_once: true
+      debug:
+        var: new_nodes
+      tags: always
+
     - name: Update apt cache
       apt:
         update_cache: true
@@ -44,7 +62,10 @@
       delay: 5
       retries: 3
       environment: "{{ proxy_env | default({}) }}"
-      when: ansible_os_family == "Debian" and installation_method == "repo"
+      when:
+        - new_node | default(false) | bool
+        - ansible_os_family == "Debian"
+        - installation_method == "repo"
 
     - name: Make sure the gnupg and apt-transport-https packages are present
       apt:
@@ -57,20 +78,27 @@
       delay: 5
       retries: 3
       environment: "{{ proxy_env | default({}) }}"
-      when: ansible_os_family == "Debian" and installation_method == "repo"
+      when:
+        - new_node | default(false) | bool
+        - ansible_os_family == "Debian"
+        - installation_method == "repo"
 
     - name: Build a firewall_ports_dynamic_var
       set_fact:
        firewall_ports_dynamic_var: "{{ firewall_ports_dynamic_var | default([]) + (firewall_allowed_tcp_ports_for[item]) }}"
      loop: "{{ hostvars[inventory_hostname].group_names }}"
-      when: firewall_enabled_at_boot|bool
+      when:
+        - new_node | default(false) | bool
+        - firewall_enabled_at_boot | bool
       tags: firewall
 
     - name: Build a firewall_rules_dynamic_var
       set_fact:
        firewall_rules_dynamic_var: "{{ firewall_rules_dynamic_var | default([]) + (firewall_additional_rules_for[item]) }}"
      loop: "{{ hostvars[inventory_hostname].group_names }}"
-      when: firewall_enabled_at_boot|bool
+      when:
+        - new_node | default(false) | bool
+        - firewall_enabled_at_boot | bool
       tags: firewall
 
   roles:
@@ -79,12 +107,51 @@
       vars:
         firewall_allowed_tcp_ports: "{{ firewall_ports_dynamic_var | unique }}"
         firewall_additional_rules: "{{ firewall_rules_dynamic_var | unique }}"
-      when: firewall_enabled_at_boot|bool
+      when:
+        - new_node | default(false) | bool
+        - firewall_enabled_at_boot | bool
       tags: firewall
 
+    - role: sysctl
+      when:
+        - new_node | default(false) | bool
+
+  tasks:
+    - name: Add host to group new_balancer (in-memory inventory)
+      add_host:
+        name: "{{ item }}"
+        groups: new_balancer
+      loop: "{{ new_nodes }}"
+      changed_when: false
+      tags: always
+
+- hosts: new_balancer
+  become: true
+  become_method: sudo
+  gather_facts: true
+  any_errors_fatal: true
+  vars_files:
+    - vars/main.yml
+    - vars/system.yml
+  vars:
+    add_balancer: true
+
+  pre_tasks:
+    - name: Include OS-specific variables
+      include_vars: "vars/{{ ansible_os_family }}.yml"
+      when: not ansible_os_family == 'Rocky' and not ansible_os_family == 'AlmaLinux'
+      tags: always
+
+    # For compatibility with Ansible old versions
+    # (support for RockyLinux and AlmaLinux has been added to Ansible 2.11)
+    - name: Include OS-specific variables
+      include_vars: "vars/RedHat.yml"
+      when: ansible_os_family == 'Rocky' or ansible_os_family == 'AlmaLinux'
+      tags: always
+
+  roles:
     - role: hostname
     - role: resolv_conf
-    - role: sysctl
 
     - role: haproxy
       when: with_haproxy_load_balancing|bool
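The change above relies on Ansible's `add_host` module to build an in-memory group (`new_balancer`) during the run, so that the follow-up play only touches the servers flagged with `new_node=true`. A minimal standalone sketch of that pattern is shown below; the group and host names are illustrative, not taken from the playbook:

```yaml
---
# sketch: collect hosts flagged with new_node=true into an in-memory group,
# then run a second play against that group only
- hosts: balancers
  gather_facts: false
  tasks:
    - name: Collect hosts flagged as new
      set_fact:
        new_nodes: "{{ new_nodes | default([]) + [item] }}"
      when: hostvars[item]['new_node'] | default(false) | bool
      loop: "{{ groups['balancers'] }}"
      run_once: true

    - name: Register them in the in-memory group "new_balancer"
      add_host:
        name: "{{ item }}"
        groups: new_balancer
      loop: "{{ new_nodes | default([]) }}"
      run_once: true
      changed_when: false

- hosts: new_balancer
  gather_facts: true
  tasks:
    - name: Only the newly added hosts reach this play
      debug:
        msg: "configuring {{ inventory_hostname }}"
```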
diff --git a/add_pgnode.yml b/add_pgnode.yml
index 89e0a169f..d442ccbdb 100644
--- a/add_pgnode.yml
+++ b/add_pgnode.yml
@@ -1,11 +1,13 @@
 ---
-- name: PostgreSQL High-Availability Cluster Scaling (add replica node)
-  hosts: replica
+- name: PostgreSQL High-Availability Cluster Scaling (add a replica node to the cluster "{{ patroni_cluster_name }}")
+  hosts: postgres_cluster
   become: true
   become_method: sudo
   any_errors_fatal: true
   gather_facts: true
 
+  handlers:
+    - include: roles/patroni/handlers/main.yml
   vars_files:
     - vars/main.yml
     - vars/system.yml
@@ -23,16 +25,42 @@
       when: ansible_os_family == 'Rocky' or ansible_os_family == 'AlmaLinux'
       tags: always
 
-    - name: Checking Linux distribution
+    - name: '[Pre-Check] Checking Linux distribution'
      fail:
        msg: "{{ ansible_distribution }} is not supported"
      when: ansible_distribution not in os_valid_distributions
 
-    - name: Checking version of OS Linux
+    - name: '[Pre-Check] Checking version of OS Linux'
      fail:
        msg: "{{ ansible_distribution_version }} of {{ ansible_distribution }} is not supported"
      when: ansible_distribution_version is version_compare(os_minimum_versions[ansible_distribution], '<')
 
+    - name: '[Pre-Check] Check if there is a node with new_node set to true'
+      set_fact:
+        new_nodes: "{{ new_nodes | default([]) + [item] }}"
+      when: hostvars[item]['new_node'] | default(false) | bool
+      loop: "{{ groups['replica'] }}"
+      tags: always
+
+    # Stop, if no nodes found with new_node variable
+    - name: "Pre-Check error. No nodes found with new_node set to true"
+      run_once: true
+      fail:
+        msg: "Please specify the new_node=true variable for the new server to add it to the existing cluster."
+      when: new_nodes | default([]) | length < 1
+
+    - name: Print a list of new nodes
+      run_once: true
+      debug:
+        var: new_nodes
+      tags: always
+
+    - name: Add a new node to pg_hba.conf on existing cluster nodes
+      include_role:
+        name: patroni/config
+        tasks_from: pg_hba
+      when: not new_node | default(false) | bool
+
     - name: Update apt cache
       apt:
         update_cache: true
@@ -42,7 +70,10 @@
       delay: 5
       retries: 3
       environment: "{{ proxy_env | default({}) }}"
-      when: ansible_os_family == "Debian" and installation_method == "repo"
+      when:
+        - new_node | default(false) | bool
+        - ansible_os_family == "Debian"
+        - installation_method == "repo"
 
     - name: Make sure the gnupg and apt-transport-https packages are present
       apt:
@@ -55,20 +86,27 @@
       delay: 5
       retries: 3
       environment: "{{ proxy_env | default({}) }}"
-      when: ansible_os_family == "Debian" and installation_method == "repo"
+      when:
+        - new_node | default(false) | bool
+        - ansible_os_family == "Debian"
+        - installation_method == "repo"
 
     - name: Build a firewall_ports_dynamic_var
       set_fact:
        firewall_ports_dynamic_var: "{{ firewall_ports_dynamic_var | default([]) + (firewall_allowed_tcp_ports_for[item]) }}"
      loop: "{{ hostvars[inventory_hostname].group_names }}"
-      when: firewall_enabled_at_boot|bool
+      when:
+        - new_node | default(false) | bool
+        - firewall_enabled_at_boot | bool
       tags: firewall
 
     - name: Build a firewall_rules_dynamic_var
       set_fact:
        firewall_rules_dynamic_var: "{{ firewall_rules_dynamic_var | default([]) + (firewall_additional_rules_for[item]) }}"
      loop: "{{ hostvars[inventory_hostname].group_names }}"
-      when: firewall_enabled_at_boot|bool
+      when:
+        - new_node | default(false) | bool
+        - firewall_enabled_at_boot | bool
       tags: firewall
 
   roles:
@@ -77,9 +115,51 @@
      vars:
        firewall_allowed_tcp_ports: "{{ firewall_ports_dynamic_var | unique }}"
        firewall_additional_rules: "{{ firewall_rules_dynamic_var | unique }}"
-      when: firewall_enabled_at_boot|bool
+      when:
+        - new_node | default(false) | bool
+        - firewall_enabled_at_boot | bool
       tags: firewall
 
+    - role: sysctl
+      when:
+        - new_node | default(false) | bool
+
+    - role: ssh-keys
+      when:
+        - enable_ssh_key_based_authentication | default(false) | bool
+
+  tasks:
+    - name: Add host to group new_replica (in-memory inventory)
+      add_host:
+        name: "{{ item }}"
+        groups: new_replica
+      loop: "{{ new_nodes }}"
+      changed_when: false
+      tags: always
+
+- hosts: new_replica
+  become: true
+  become_method: sudo
+  gather_facts: true
+  any_errors_fatal: true
+  vars_files:
+    - vars/main.yml
+    - vars/system.yml
+
+  pre_tasks:
+    - name: Include OS-specific variables
+      include_vars: "vars/{{ ansible_os_family }}.yml"
+      when: not ansible_os_family == 'Rocky' and not ansible_os_family == 'AlmaLinux'
+      tags: always
+
+    # For compatibility with Ansible old versions
+    # (support for RockyLinux and AlmaLinux has been added to Ansible 2.11)
+    - name: Include OS-specific variables
+      include_vars: "vars/RedHat.yml"
+      when: ansible_os_family == 'Rocky' or ansible_os_family == 'AlmaLinux'
+      tags: always
+
+  roles:
     - role: hostname
     - role: resolv_conf
     - role: etc_hosts
@@ -87,17 +167,15 @@
     - role: packages
     - role: sudo
     - role: swap
-    - role: sysctl
     - role: transparent_huge_pages
     - role: pam_limits
     - role: io-scheduler
     - role: locales
     - role: timezone
     - role: ntp
-    - role: ssh-keys
     - role: copy
 
-- hosts: pgbackrest:postgres_cluster
+- hosts: pgbackrest:new_replica
   become: true
   become_method: sudo
   gather_facts: true
@@ -124,7 +202,7 @@
       when: dcs_type == "consul"
       tags: consul
 
-- hosts: replica
+- hosts: new_replica
   become: true
   become_method: sudo
   gather_facts: true
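Before running `add_pgnode.yml` against a freshly added inventory entry, it can help to confirm that Ansible actually sees the `new_node` and `postgresql_exists` variables for that host. One illustrative check (the IP address is the example host used above; adjust the inventory path to your own):

```
ansible-inventory -i inventory --host 10.128.64.144
# prints the host's variables as JSON; it should contain
# "new_node": true (or "true") and "postgresql_exists": "false"
```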
diff --git a/inventory b/inventory
index c17e2dd20..5e6ced838 100644
--- a/inventory
+++ b/inventory
@@ -3,6 +3,7 @@
 # "postgresql_exists='true'" if PostgreSQL is already exists and running
 # "hostname=" variable is optional (used to change the server name)
+# "new_node=true" to add a new server to an existing cluster using the add_pgnode.yml playbook
 
 # In this example, all components will be installed on PostgreSQL nodes.
 # You can deploy the haproxy balancers and the etcd or consul cluster on other dedicated servers (recomended).
@@ -26,6 +27,7 @@
 10.128.64.140
 10.128.64.142
 10.128.64.143
+#10.128.64.144 new_node=true
 
 # PostgreSQL nodes
 [master]
@@ -34,6 +36,7 @@
 [replica]
 10.128.64.142 hostname=pgnode02 postgresql_exists='false'
 10.128.64.143 hostname=pgnode03 postgresql_exists='false'
+#10.128.64.144 hostname=pgnode04 postgresql_exists='false' new_node=true
 
 [postgres_cluster:children]
 master
diff --git a/roles/confd/tasks/main.yml b/roles/confd/tasks/main.yml
index faec49548..7cf05e5df 100644
--- a/roles/confd/tasks/main.yml
+++ b/roles/confd/tasks/main.yml
@@ -57,7 +57,7 @@
   tags: confd_conf, confd
 
 - block:  # for add_balancer.yml
-    - name: Fetch confd.toml, haproxy.toml, haproxy.tmpl conf files from master
+    - name: "Fetch confd.toml, haproxy.toml, haproxy.tmpl conf files from {{ groups.balancers[0] }}"
       run_once: true
       fetch:
         src: "{{ item }}"
@@ -68,7 +68,7 @@
         - /etc/confd/confd.toml
         - /etc/confd/conf.d/haproxy.toml
         - /etc/confd/templates/haproxy.tmpl
-      delegate_to: "{{ groups.master[0] }}"
+      delegate_to: "{{ groups.balancers[0] }}"
 
     - name: Copy confd.toml, haproxy.toml, haproxy.tmpl conf files to replica
       copy:
@@ -82,6 +82,17 @@
         label: "{{ item.dest }}"
       notify: "restart confd"
 
+    - name: Remove confd.toml, haproxy.toml, haproxy.tmpl files from localhost
+      run_once: true
+      file:
+        path: "files/{{ item }}"
+        state: absent
+      loop:
+        - confd.toml
+        - haproxy.toml
+        - haproxy.tmpl
+      delegate_to: localhost
+
     - name: Prepare haproxy.tmpl template file (replace "bind" for stats)
       lineinfile:
         path: /etc/confd/templates/haproxy.tmpl
diff --git a/roles/haproxy/tasks/main.yml b/roles/haproxy/tasks/main.yml
index 062448314..fc5833515 100644
--- a/roles/haproxy/tasks/main.yml
+++ b/roles/haproxy/tasks/main.yml
@@ -400,7 +400,7 @@
   tags: haproxy, haproxy_service, load_balancing
 
 - block:  # for add_balancer.yml
-    - name: Fetch haproxy.cfg file from master
+    - name: "Fetch haproxy.cfg file from {{ groups.balancers[0] }}"
       run_once: true
       fetch:
         src: /etc/haproxy/haproxy.cfg
@@ -408,7 +408,7 @@
         validate_checksum: true
         flat: true
       notify: "restart haproxy"
-      delegate_to: "{{ groups.master[0] }}"
+      delegate_to: "{{ groups.balancers[0] }}"
 
     - name: Copy haproxy.cfg file to replica
       copy:
@@ -418,6 +418,13 @@
         group: haproxy
       notify: "restart haproxy"
 
+    - name: Remove haproxy.cfg file from localhost
+      run_once: true
+      file:
+        path: files/haproxy.cfg
+        state: absent
+      delegate_to: localhost
+
     - name: Prepare haproxy.cfg conf file (replace "bind")
       lineinfile:
         path: /etc/haproxy/haproxy.cfg
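The confd and haproxy changes above (and the keepalived change that follows) all apply the same pattern: fetch a config file from the first host in the `balancers` group to the Ansible controller, push it to the new node, then delete the temporary copy from the controller. A minimal generic sketch of that pattern, with placeholder file paths rather than the roles' real ones:

```yaml
---
# sketch: copy /etc/example/app.conf from the first balancer to the new node,
# then clean up the temporary copy left on the Ansible controller
- hosts: new_balancer
  become: true
  tasks:
    - name: Fetch the config from the first balancer to the controller
      fetch:
        src: /etc/example/app.conf
        dest: files/app.conf
        flat: true
        validate_checksum: true
      run_once: true
      delegate_to: "{{ groups.balancers[0] }}"

    - name: Copy the fetched config to the new node
      copy:
        src: files/app.conf
        dest: /etc/example/app.conf

    - name: Remove the temporary copy from the controller
      file:
        path: files/app.conf
        state: absent
      run_once: true
      delegate_to: localhost
```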
diff --git a/roles/keepalived/tasks/main.yml b/roles/keepalived/tasks/main.yml
index a0cba6df8..1ca1fde05 100644
--- a/roles/keepalived/tasks/main.yml
+++ b/roles/keepalived/tasks/main.yml
@@ -51,14 +51,14 @@
   tags: keepalived_conf, keepalived
 
 - block:  # for add_balancer.yml
-    - name: Fetch keepalived.conf conf file from master
+    - name: "Fetch keepalived.conf conf file from {{ groups.balancers[0] }}"
       run_once: true
       fetch:
         src: /etc/keepalived/keepalived.conf
         dest: files/keepalived.conf
         validate_checksum: true
         flat: true
-      delegate_to: "{{ groups.master[0] }}"
+      delegate_to: "{{ groups.balancers[0] }}"
 
     - name: Copy keepalived.conf conf file to replica
       copy:
@@ -66,6 +66,13 @@
         dest: /etc/keepalived/keepalived.conf
       notify: "restart keepalived"
 
+    - name: Remove keepalived.conf file from localhost
+      run_once: true
+      file:
+        path: files/keepalived.conf
+        state: absent
+      delegate_to: localhost
+
     - name: Prepare keepalived.conf conf file (replace "interface")
       lineinfile:
         path: /etc/keepalived/keepalived.conf
diff --git a/roles/patroni/config/tasks/main.yml b/roles/patroni/config/tasks/main.yml
index d9a3cfb49..1b9ad71fc 100644
--- a/roles/patroni/config/tasks/main.yml
+++ b/roles/patroni/config/tasks/main.yml
@@ -39,16 +39,6 @@
   notify: "reload patroni"
   tags: patroni, patroni_conf
 
-- name: Update pg_hba.conf
-  template:
-    src: ../templates/pg_hba.conf.j2
-    dest: "{{ postgresql_conf_dir }}/pg_hba.conf"
-    owner: postgres
-    group: postgres
-    mode: 0640
-  notify: "reload postgres"
-  tags: patroni, patroni_conf, pg_hba, pg_hba_generate
-
 - block:
     - name: Update postgresql parameters in DCS
       uri:
@@ -79,4 +69,8 @@
       when: item.value == "null"
   tags: patroni, patroni_conf
 
+# Update pg_hba.conf
+- import_tasks: pg_hba.yml
+  tags: patroni, patroni_conf, pg_hba, pg_hba_generate
+
 ...
diff --git a/roles/patroni/config/tasks/pg_hba.yml b/roles/patroni/config/tasks/pg_hba.yml
new file mode 100644
index 000000000..02f8a10ad
--- /dev/null
+++ b/roles/patroni/config/tasks/pg_hba.yml
@@ -0,0 +1,11 @@
+---
+- name: Update pg_hba.conf
+  template:
+    src: ../templates/pg_hba.conf.j2
+    dest: "{{ postgresql_conf_dir }}/pg_hba.conf"
+    owner: postgres
+    group: postgres
+    mode: 0640
+  notify: "reload postgres"
+  tags: pg_hba, pg_hba_generate
+...
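Moving the pg_hba.conf template task into its own `pg_hba.yml` file is what lets `add_pgnode.yml` update client authentication on the existing cluster members before the new replica is bootstrapped, without pulling in the rest of the patroni/config role. The invocation is the `include_role`/`tasks_from` call already shown in the playbook diff above:

```yaml
- name: Add a new node to pg_hba.conf on existing cluster nodes
  include_role:
    name: patroni/config
    tasks_from: pg_hba
  when: not new_node | default(false) | bool
```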