In this article we will learn how to deploy a project that contains a test monitoring environment.
The project will be deployed to AWS by Cluster.dev,
managed by a Kubernetes (k3s) cluster, and monitored by a community monitoring stack.
To use this manual without any customization, we need a client host running Ubuntu 20.04.
We should install Docker on the client host.
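A minimal sketch of the installation, assuming the docker.io package from the Ubuntu 20.04 repositories is sufficient for our purposes:
sudo apt update && sudo apt install -y docker.io
sudo usermod -aG docker "$USER" # log out and back in so the group change takes effect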
Log in to an existing AWS account or register a new one.
Select the AWS region in which to deploy the cluster.
Add a programmatic access key for a new or existing user.
Open a bash terminal on the client host.
Get the example environment file env to set our AWS credentials:
curl https://raw.githubusercontent.com/shalb/monitoring-examples/main/cdev/monitoring-cluster-blog/env > env
Add the programmatic access key to the environment file env:
editor env
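Assuming the example file uses the standard AWS environment variable names, the entries we fill in look roughly like this (the values below are placeholders):
AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY
AWS_SECRET_ACCESS_KEY=EXAMPLESECRETACCESSKEY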
Create a working directory for the project, move the environment file into it, and make the directory writable for the container:
mkdir -p cdev && mv env cdev/ && cd cdev && chmod 777 ./
Define an alias to run cdev inside the Cluster.dev container:
alias cdev='docker run -it -v $(pwd):/workspace/cluster-dev --env-file=env clusterdev/cluster.dev:v0.6.3'
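To check that the alias works and the container starts, we may print the built-in help (assuming the usual --help convention of the CLI):
cdev --help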
Create the project from the example template and download the prepared stack and project config files for this article:
cdev project create https://github.com/shalb/cdev-aws-k3s-test
curl https://raw.githubusercontent.com/shalb/monitoring-examples/main/cdev/monitoring-cluster-blog/stack.yaml > stack.yaml
curl https://raw.githubusercontent.com/shalb/monitoring-examples/main/cdev/monitoring-cluster-blog/project.yaml > project.yaml
Go to S3 and create a new bucket to store the project's state.
Replace the value of the state_bucket_name key in the config file project.yaml with the bucket's name:
editor project.yaml
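If we prefer the command line and have the AWS CLI configured, the bucket can also be created like this (the bucket name and region here are examples; for us-east-1 the --create-bucket-configuration flag must be omitted):
aws s3api create-bucket --bucket my-cdev-state-bucket --region eu-central-1 --create-bucket-configuration LocationConstraint=eu-central-1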
We will set all the needed project settings in the config file project.yaml.
We should customize all variables that have a # example comment at the end of the line.
We should replace the value of the region key in project.yaml with our region.
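To list every line that still needs customization, we may grep for the marker comment:
grep -n '# example' project.yaml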
By default we will use the cluster.dev domain as the root domain for cluster ingresses.
We should replace the value of the cluster_name key in project.yaml with a unique string, because the default ingress will use it in the resulting DNS name.
These commands may help us to generate a random name and check whether it is already in use:
CLUSTER_NAME=$(tr -dc a-z0-9 </dev/urandom | head -c 5)
dig +short "argocd.${CLUSTER_NAME}.cluster.dev" | grep -q . || echo "OK to use cluster_name: ${CLUSTER_NAME}"
We should see the message: OK to use cluster_name: ...
We need SSH access to the cluster nodes.
To add an existing SSH key, we should replace the value of the public_key key in the config file project.yaml.
If we have no SSH key, we should create one.
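For example, a new key pair can be generated with ssh-keygen, and the public part printed for copying into project.yaml (the path and empty passphrase here are just one option):
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ''
cat ~/.ssh/id_ed25519.pub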
Our project includes ArgoCD, which will help us to deploy our applications.
To secure ArgoCD we should replace the value of the argocd_server_admin_password key in project.yaml with a unique password.
The default value is a bcrypt-hashed password string.
To hash our custom password we may use an online tool, or run this command (replacing myPassword with the actual password):
docker run -it --entrypoint="" clusterdev/cluster.dev:v0.6.3 bash -c "apt update && apt install -y apache2-utils && htpasswd -bnBC 10 '' myPassword | tr -d ':\n'; echo"
We should also set a custom password for Grafana.
To secure Grafana we should replace the value of the grafana_password key in project.yaml with a unique password.
This command may help us to generate a random password:
echo "$(tr -dc a-zA-Z0-9,._! </dev/urandom | head -c 20)"
To avoid installing all the needed tools directly on the client host, we will run all commands inside the Cluster.dev container.
We should run the following commands to execute bash inside the cdev container and proceed with the deployment:
alias cdev_bash='docker run -it -v $(pwd):/workspace/cluster-dev --env-file=env --network=host --entrypoint="" clusterdev/cluster.dev:v0.6.3 bash'
cdev_bash
Now we can deploy our project to AWS via the cdev command:
cdev apply -l debug | tee apply.log
A successful deploy should print further instructions on how to access Kubernetes, along with the URLs of the ArgoCD and Grafana web UIs.
In some cases we may have to wait some time before those web UIs become accessible; DNS update delays can be the source of the problem.
In such a case we can forward all the needed services to the client host via kubectl:
kubectl port-forward svc/argocd-server -n argocd 18080:443 > /dev/null 2>&1 &
kubectl port-forward svc/monitoring-grafana -n monitoring 28080:80 > /dev/null 2>&1 &
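Because the forwards run in the background of our shell, we can list them with the jobs builtin and stop them later by job number:
jobs
kill %1 %2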
We may test our forwards via curl:
curl 127.0.0.1:18080
curl 127.0.0.1:28080
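ArgoCD serves its UI over TLS with a self-signed certificate by default, so when testing over HTTPS we may need to skip certificate verification:
curl -k https://127.0.0.1:18080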
If we see no errors from curl, then the client host should be able to access those same endpoints via any browser.
If we want to destroy our cluster, we should run the command:
cdev destroy -l debug | tee destroy.log
Now we are able to deploy and destroy a basic project with a monitoring stack using a few simple commands, which saves our time.
We can use this project as a test environment for monitoring-related articles
and test many useful monitoring cases before applying them to production environments.