Serving machine learning models production-ready, fast, easy and secure, powered by the great [FastAPI](https://github.com/tiangolo) by Sebastián Ramírez.
This repository contains a skeleton app that can be used to speed up your next machine learning project. The code is fully tested and comes with a preconfigured `tox` setup, so you can quickly expand this sample code.
To experiment and get a feeling for how to use this skeleton, a sample regression model for house price prediction is included in this project. Follow the installation and setup instructions to run the sample model and serve it as a RESTful API.
## Requirements
- Python 3.11+
- Poetry
## Installation
Install the required packages in your local environment (ideally virtualenv, conda, etc.).
```shell
poetry install
```
## Setup
- Duplicate the `.env.example` file and rename it to `.env`.
- In the `.env` file, configure the `API_KEY` entry. The key is used for authenticating our API.

A sample API key can be generated using the Python REPL:

```python
import uuid
print(str(uuid.uuid4()))
```
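However the skeleton wires up authentication internally, the server-side check ultimately compares the key presented by the client with the configured `API_KEY`. A minimal sketch of such a comparison (the helper name is hypothetical, not the skeleton's actual code):

```python
import hmac


def is_valid_api_key(presented: str, expected: str) -> bool:
    """Compare the client's key against the configured API_KEY.

    hmac.compare_digest runs in constant time, which avoids leaking
    information about the key through timing differences.
    """
    return hmac.compare_digest(presented.encode(), expected.encode())
```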
## Run

- Start your app with:

```shell
set -a
source .env
set +a
uvicorn fastapi_skeleton.main:app
```
- Go to http://localhost:8000/docs.
- Click `Authorize` and enter the API key created in the Setup step.
- You can use the sample payload from the `docs/sample_payload.json` file when trying out the house price prediction model using the API.
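Outside the interactive docs, you can call the API from any HTTP client. The sketch below builds an authenticated JSON request using only the standard library; the endpoint path and the `Authorization` header scheme are assumptions here, so adapt them to the routes the app actually exposes:

```python
import json
import urllib.request

API_URL = "http://localhost:8000/api/model/predict"  # assumed route


def build_prediction_request(api_key: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated JSON POST request for the prediction endpoint."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
        method="POST",
    )


# With the server running, send the request with:
# urllib.request.urlopen(build_prediction_request("<your API_KEY>", payload))
```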
## Linting

This skeleton code uses isort, mypy, flake8, black, and bandit for linting, formatting, and static analysis.
Run linting with:

```shell
./scripts/linting.sh
```
## Testing

Run your tests with:

```shell
./scripts/test.sh
```

This runs the tests with coverage for Python 3.11 and checks the code with Flake8, Autopep8, and Bandit.
## Changelog

### v1.0.0 - Initial release

- Base functionality for using FastAPI to serve ML models
- Full test coverage

### v1.1.0 - Update to Python 3.11, FastAPI 0.108.0

- Updated to Python 3.11
- Added linting script
- Updated to Pydantic 2.x
- Added Poetry as package manager