
Model-Service

Contains the wrapper service for an embedded ML model used for performing sentiment analysis on reviews for restaurants.


Pre-requisites

Before following the steps outlined in the upcoming sections, you need to download the latest trained model from the remla23-team08/model-training repository.

Usage

  1. Clone the repository onto your machine by executing one of the following commands:
# When using SSH keys (recommended)
git clone [email protected]:remla23-team08/model-service.git

# When using HTTPS
git clone https://github.com/remla23-team08/model-service.git
  2. While in the root folder of this repository, install the required packages by executing the following command (not recommended outside a virtual environment):
pip install -r requirements.txt

NOTE: It is, however, preferred to use a virtual environment to install the required packages. To do so, execute the following commands:

# Create a virtual environment
# The actual Python binary name may vary depending on your system
# We do recommend using at least Python 3.7
python -m venv venv

# Activate the virtual environment
source venv/bin/activate

# Install the required packages
pip install -r requirements.txt
  3. Run the application from the root folder by executing the following command:
python app.py

Available endpoints

After successfully following the instructions in the previous section, the following endpoints will be available:
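As an illustration of how such an endpoint might be called, here is a minimal client sketch. It assumes a hypothetical POST /predict route that accepts a JSON body with a review field and that the service listens on port 8080 (as in the Docker setup below); check app.py for the actual routes and port.

```python
import json
from urllib import request


def predict_sentiment(review: str, base_url: str = "http://localhost:8080") -> dict:
    """Send a review to the (hypothetical) /predict endpoint and return the JSON response."""
    payload = json.dumps({"review": review}).encode("utf-8")
    req = request.Request(
        f"{base_url}/predict",
        data=payload,  # the presence of data makes this a POST request
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

The endpoint name, request schema, and response schema here are assumptions for illustration only.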

Docker Setup

The model-service application is also available as a Docker image. To manually build the docker image from your local environment, follow the steps below:

docker build -t ghcr.io/remla23-team08/model-service:VERSION .

NOTE: Build the docker image while located in the root folder of this repository. The VERSION string is the desired tag and should be in the format of x.y.z, following the Semantic Versioning standard.

Once properly built, to run the docker image, execute the following command:

docker run -p 8080:8080 ghcr.io/remla23-team08/model-service:VERSION

NOTES:

  • The first port number in the -p flag is the port on your local machine; the second is the port inside the Docker container. The two should normally match, but if the local port is already in use you can map any other free port (consider the port numbering conventions of your operating system).
  • You can also run the docker image in detached mode by adding the -d flag to the docker run command. This is helpful when you want to issue other commands in the same terminal window.

Code Style

For coding style, we are following the PEP 8 -- Style Guide for Python Code. For automatic detection, we are using the flake8 tool. To install it, execute the following command:

pip install flake8

NOTE: It is preferred to use a venv to install packages, as mentioned in the Usage section.

To run the flake8 tool on the Python files using our custom configuration, execute the following command from within the root folder of this repository:

flake8 --config=config/flake8.cfg *.py
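For orientation, a flake8 configuration file typically has the following shape. This snippet is only an illustrative sketch with assumed values, not the actual contents of config/flake8.cfg:

```ini
[flake8]
# Illustrative values only; see config/flake8.cfg for the real settings
max-line-length = 88
exclude = venv,.git,__pycache__
```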

To automatically enforce the code style, we are using the black tool. To install it, execute the following command:

pip install black

Afterwards, simply run the following command from within the root folder of this repository:

black *.py

Versioning

Versioning of this repository is done automatically using GitHub Actions. The versioning follows the standard Semantic Versioning (SemVer) format. Version bumps happen automatically when a PR is merged to the main branch. To achieve this, we are using the GitVersion tool. For more information, see the GitVersion documentation.
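GitVersion's behavior is controlled by a configuration file in the repository root. The sketch below shows what such a file might contain; it is illustrative only, and the actual configuration in this repository may differ:

```yaml
# GitVersion.yml — illustrative sketch, not this repository's actual configuration
mode: ContinuousDelivery
branches:
  main:
    increment: Patch    # merged PRs bump the patch version by default
```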

Additional Information