The goal of this project is to explore methods for improving the performance of web applications. Python 3.10, FastAPI, and Docker Compose V2 are used.
How to:
To start the project
Build the project:
bash build.sh
Run containers:
docker compose up
Get the nginx IP address:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nginx-proxy
Append the nginx address to the hosts file (writing to /etc/hosts requires root):
echo "<nginx-ip-address> climber-net" >> /etc/hosts
Get the gateway IP address:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.Gateway}}{{end}}' nginx-proxy
Set the gateway address in the proxy_pass directive in proxy/default.conf.
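For orientation, the relevant block in proxy/default.conf might look roughly like this; the listen port, backend port, and exact layout are assumptions, only the proxy_pass target comes from the step above:

server {
    listen 80;
    server_name climber-net;

    location / {
        # substitute the gateway address obtained with docker inspect
        proxy_pass http://<gateway-ip-address>:8000;
    }
}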
Then open it in a browser:
http://climber-net/docs
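You can also check from the command line that the stack responds (this assumes the hosts entry added above):
curl http://climber-net/docs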
To fill the database with fake users
(note: this also creates the user table):
- create a virtual environment in hl_utils
- activate the virtual environment
cd hl_utils
pip install -r requirements.txt
python user_faker.py --users xxx
(where xxx is the number of users; a sketch of what such a script might look like follows after this list)
- delete db_backend_mysql or pg_data if it exists, depending on which database is used
cd ..
docker compose up
(or just start the db container)
- wait until the data from the SQL file has been populated
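For orientation only, a minimal sketch of what a generator script like user_faker.py could look like, assuming the Faker library and a plain user table; the real script and its column names may differ:

# Hypothetical sketch, not the repo's actual user_faker.py
import argparse

from faker import Faker

parser = argparse.ArgumentParser()
parser.add_argument("--users", type=int, required=True, help="number of fake users to generate")
args = parser.parse_args()

fake = Faker()
with open("users.sql", "w") as f:
    f.write('CREATE TABLE IF NOT EXISTS "user" (id serial PRIMARY KEY, name text, email text);\n')
    for _ in range(args.users):
        name = fake.name().replace("'", "''")  # escape single quotes for SQL
        f.write(f'INSERT INTO "user" (name, email) VALUES (\'{name}\', \'{fake.email()}\');\n')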
If you don't want to create fake users:
- build the project
bash build.sh
- run containers
docker compose up
- attach to the backend container
docker compose exec backend bash
- run the command
bash create_tables_and_superuser.sh
- now you can exit the backend container
exit
To start the project with DB sharding capability
Run containers:
docker compose -f docker-compose-citus.yml up
- then attach to the citus master:
docker compose exec master bash
- connect to the db:
psql -U ${POSTGRES_USER} -d ${POSTGRES_DB}
- create the reference table (reference tables are replicated in full to every worker node):
SELECT create_reference_table('user');
- create a distributed table from the "dialogmessage" table, sharded by the id column:
SELECT create_distributed_table('dialogmessage', 'id');
- You can change the number of shards (the default is 32):
SELECT alter_distributed_table('dialogmessage', shard_count := 6, cascade_to_colocated := true);
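To verify the new shard count for the table, the same citus_shards view used below can be queried (a small check, not part of the original steps):
SELECT count(DISTINCT shardid) FROM citus_shards WHERE table_name = 'dialogmessage'::regclass;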
If you need to increase the number of workers,
run the command below with the desired number of workers in the --scale worker= parameter:
POSTGRES_USER=your_postgres_user POSTGRES_PASSWORD=your_postgres_password docker compose -f docker-compose_add_citus_workers.yml up --scale worker=2
POSTGRES_USER=your_postgres_user POSTGRES_PASSWORD=your_postgres_password docker compose -f docker-compose_add_citus_workers.yml restart
Then attach to the citus master again and check the active workers and how the shards are placed:
SELECT master_get_active_worker_nodes();
SELECT nodename, count(*) FROM citus_shards GROUP BY nodename;
Shard rebalancing moves data via logical replication, so set wal_level = logical on the coordinator and on all workers:
ALTER SYSTEM SET wal_level = logical;
SELECT run_command_on_workers('alter system set wal_level = logical');
Then detach from the citus master and restart citus so the new setting takes effect:
POSTGRES_USER=your_postgres_user POSTGRES_PASSWORD=your_postgres_password docker compose -f docker-compose_add_citus_workers.yml restart
Then attach again to the citus master, connect to the db, and check the setting:
show wal_level;
It should now report "logical".
Check that all workers are active:
SELECT master_get_active_worker_nodes();
Then start rebalancing:
SELECT citus_rebalance_start();
Monitor progress with:
SELECT * FROM citus_rebalance_status();
After rebalancing is completed, check that the shards are evenly distributed across the worker nodes:
SELECT nodename, count(*) FROM citus_shards GROUP BY nodename;
To disable inactive nodes, use the new worker number in --scale worker=:
POSTGRES_USER=your_postgres_user POSTGRES_PASSWORD=your_postgres_password docker compose -f docker-compose_add_citus_workers.yml up --scale worker=1
Then attach to the citus master and connect to the db as described above, and run:
SELECT * FROM citus_disable_node('name_of_your_inactive_node', 5432);
Scaling the websocket service
HAProxy is used here to load-balance the websocket service.
After a new instance of the websocket service has been started, add its host name and port to haproxy.cfg in the haproxy folder of the project (a sketch follows below). After that, HAProxy has to be reloaded, not restarted. Do it with the command
systemctl reload haproxy
inside the haproxy container.
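As a rough illustration (the backend name, server names, and ports are assumptions; match them to the real haproxy.cfg), registering a new instance means appending one server line to the websocket backend:

backend websocket_back
    balance roundrobin
    server ws1 websocket-1:8080 check
    # newly started instance added here
    server ws2 websocket-2:8080 check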
To make a RabbitMQ cluster
- docker compose up
- Go to the rabbit-master admin UI: open a browser and go to http://127.0.0.1:15672
- username "guest", password "guest" (without double quotes)
- attach to the slave node and join it to the cluster:
docker exec -it rmq-slave-1 bash
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbit
rabbitmqctl start_app
- To check the cluster operation, open another browser tab and go to http://127.0.0.1:15673
- Check that there are two nodes, rabbit@rabbit and rabbit@rabbit-slave
In a RabbitMQ cluster there is no concept of a "master" node. All nodes are equal and can be stopped and started in any order, with one exception: if all the cluster nodes are stopped one at a time, the node that was stopped last must be started first when the cluster is brought back up.
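Because the nodes are equal, a client may connect to any of them. A minimal sketch using the pika library, assuming the AMQP ports are published on the host as 5672 and 5673 (only the management ports 15672/15673 are confirmed above):

# Hypothetical sketch: connect to whichever cluster node is reachable
import pika

# list both nodes; pika tries them in order until one connection succeeds
params = [
    pika.ConnectionParameters(host="127.0.0.1", port=5672),  # rabbit@rabbit
    pika.ConnectionParameters(host="127.0.0.1", port=5673),  # rabbit@rabbit-slave
]
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="demo", durable=True)
channel.basic_publish(exchange="", routing_key="demo", body=b"ping")
connection.close()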