eCarComparison

Badges: GitHub license · Build Status · code style: prettier

The application

(Screenshot: intro screen)

eCar comparison is a microservice-based web application developed as the capstone project of the Udacity Cloud Engineering Nanodegree. It allows users to register and log into a web client, upload a photo of themselves (using signed URLs provided by AWS S3), post new reviews about a car and update or delete their previous ones, rate other users' reviews, and compare the details of two cars of their choice.
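
As an illustration of the signed-URL upload flow mentioned above, the sketch below shows how a presigned PUT URL could be generated with the AWS SDK for JavaScript (v2). The function name, key format and expiry time are assumptions made for the example, not taken from the repo.

```ts
// Sketch only: issuing a presigned PUT URL so the client can upload a profile
// photo directly to S3. Names and the 5-minute expiry are illustrative.
import * as AWS from 'aws-sdk';

const s3 = new AWS.S3({
  region: process.env.AWS_REGION,
  signatureVersion: 'v4',
});

export function getUploadUrl(userId: string): string {
  // The client PUTs the image to this URL; it expires after 300 seconds.
  return s3.getSignedUrl('putObject', {
    Bucket: process.env.AWS_BUCKET as string,
    Key: `${userId}.jpg`,
    Expires: 300,
  });
}
```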

(Screenshot: car details)

The repo of the project is split into five parts:

  1. Front end: a React client bootstrapped with Create React App.
  2. Cars: a Node-Express microservice that serves the images and details of the cars.
  3. Reviews: a Node-Express microservice that handles CRUD operations on car reviews (a route sketch follows the screenshot below).
  4. Users: a Node-Express microservice that manages authentication and sign in / sign up.
  5. Reverse proxy: the Nginx reverse-proxy configuration, together with the Docker and Kubernetes settings.
(Screenshot: car reviews)
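
To give a feel for the reviews microservice, here is a minimal, self-contained sketch of the kind of Express routes it exposes. The paths, the Review shape and the in-memory store are assumptions made for the example; the real service would persist reviews in the project's MongoDB database.

```ts
// Sketch only: illustrative CRUD routes for car reviews. An in-memory array
// stands in for the database to keep the example self-contained.
import express, { Request, Response } from 'express';

interface Review {
  id: string;
  carId: string;
  author: string;
  text: string;
  rating: number;
}

const reviews: Review[] = [];
const router = express.Router();

// List all reviews for a given car.
router.get('/cars/:carId/reviews', (req: Request, res: Response) => {
  res.json(reviews.filter((r) => r.carId === req.params.carId));
});

// Create a new review for a car.
router.post('/cars/:carId/reviews', (req: Request, res: Response) => {
  const review: Review = { id: Date.now().toString(), carId: req.params.carId, ...req.body };
  reviews.push(review);
  res.status(201).json(review);
});

// Delete a previously posted review.
router.delete('/reviews/:id', (req: Request, res: Response) => {
  const idx = reviews.findIndex((r) => r.id === req.params.id);
  if (idx === -1) {
    res.status(404).send();
    return;
  }
  reviews.splice(idx, 1);
  res.status(204).send();
});

export default router;
```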

Prerequisites to install the app locally

  1. Node (LTS version) and Node Package Manager (NPM). Before continuing, you must download and install Node (NPM is included) from https://nodejs.org/en/download.
  2. The Ionic Command Line Interface. Instructions for installing the CLI can be found in the Ionic Framework Docs.
  3. Database: Create a MongoDB database on MongoDB Atlas. Set values for the shell / environment variables (prefixed with DB_); a connection sketch follows this list.
  4. S3: Create an AWS S3 bucket. Set values for the shell / environment variables (prefixed with AWS_).
  5. The environment variables mentioned above need to be set in deployment/docker/.env; they include the database and S3 connection details (see the 'Set up Docker Environment' section).
(Screenshot: car comparison)
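
As a rough sketch of how the DB_-prefixed variables from item 3 could be turned into a MongoDB Atlas connection, a Node service might do something like the following. The use of Mongoose and the exact variable handling are assumptions, not confirmed by the repo.

```ts
// Sketch only: building a MongoDB Atlas connection string from the DB_* variables.
// DB_PATH is assumed to hold the Atlas host and database path.
import mongoose from 'mongoose';

const { DB_UNAME, DB_PWD, DB_PATH } = process.env;
const uri = `mongodb+srv://${DB_UNAME}:${DB_PWD}@${DB_PATH}?retryWrites=true&w=majority`;

mongoose
  .connect(uri)
  .then(() => console.log('Connected to MongoDB Atlas'))
  .catch((err) => console.error('MongoDB connection failed', err));
```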

Travis

Set up Travis

The CI tool used for the project is Travis CI (you need to connect your repo to Travis on its website). Add a .travis.yml file with the appropriate settings; after each commit to the 'main' branch, a build process starts automatically.

(Screenshot: result of the build process on Travis)

Docker

Set up Docker Environment

You'll need to install Docker. Open a new terminal in the project directory and change to the Docker deployment folder:

cd deployment/docker

The following environment variables need to be set, with the appropriate values, in a .env file in this folder:

PORT=
DB_UNAME=
DB_PWD=
DB_PATH=
AWS_BUCKET=
AWS_REGION=
AWS_PROFILE=
AWS_ACCESS_KEY=
AWS_SECRET_KEY=
JWT_KEY=
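
The values in .env are consumed by docker-compose when building and running the containers. As one hedged example of where they end up, JWT_KEY would typically be the secret used by the users service to sign and verify tokens; the snippet below is a sketch with illustrative function names, not the repo's actual code.

```ts
// Sketch only: using JWT_KEY in the users service to issue and verify tokens.
// Function names and the one-day expiry are illustrative assumptions.
import * as jwt from 'jsonwebtoken';

const secret = process.env.JWT_KEY as string;

export function issueToken(email: string): string {
  return jwt.sign({ email }, secret, { expiresIn: '1d' });
}

export function verifyToken(token: string): string {
  // Throws if the token is invalid or expired.
  const payload = jwt.verify(token, secret) as { email: string };
  return payload.email;
}
```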

Build the images:

docker-compose -f docker-compose-build.yaml build --parallel

Push the images:

docker logout
docker login
docker-compose -f docker-compose-build.yaml push

Run the containers:

docker-compose up

After that, open a browser at http://localhost:3003/

Stop the containers:

docker-compose stop

On a Linux system, each of the Docker commands above should be run as root (e.g. sudo docker-compose up).

The public DockerHub images:

DockerHub

Kubernetes

Deploy to Kubernetes cluster

You'll need to install kubectl and the AWS CLI, and set up an EKS cluster with a corresponding node group on AWS.

Connect kubectl to the Kubernetes cluster created on AWS:

aws eks --region <aws-region> update-kubeconfig --name <project_name>

Set the correct values in the env-secret.yaml and env-configmap.yaml files, then go to the 'deployment/k8s' folder and run the commands below in the following order.

kubectl apply -f env-secret.yaml
kubectl apply -f env-configmap.yaml

kubectl apply -f client-deployment.yaml
kubectl apply -f client-service.yaml

kubectl apply -f users-deployment.yaml
kubectl apply -f users-service.yaml

kubectl apply -f reviews-deployment.yaml
kubectl apply -f reviews-service.yaml

kubectl apply -f cars-deployment.yaml
kubectl apply -f cars-service.yaml

kubectl apply -f revproxy-deployment.yaml
kubectl apply -f revproxy-service.yaml

Verify that every container has been deployed correctly, that the services have been set up, and that all pods are running:

kubectl get all
(Screenshot: output of the command above)
kubectl logs <pod_name>
(Screenshot: output of the command above)

CloudWatch

(Screenshots: CloudWatch)

Built with:

Front end: React (Create React App)

Back end: Node.js, Express, MongoDB Atlas, AWS S3

Full stack: Docker, Kubernetes (AWS EKS), Nginx, Travis CI