Clipper-test

Repository for testing dynamic inference with Clipper

Dynamic Inference (Serving Models in Production via REST)

Inference is the process of using a pre-trained model to make predictions on unseen data. Dynamic inference means making those predictions on demand, through a server.

This notebook is a walkthrough of how to serve a machine learning model using clipper.ai, a low-latency prediction serving system. Clipper can be hosted on your favorite cloud provider or on-premise.
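
As a rough sketch of the flow the notebook covers (not its exact code), the snippet below uses the clipper_admin Python package with a local Docker container manager to start a Clipper cluster, register an application, and deploy a toy Python closure as a model. The names simple-app and simple-model, the doubles input type, the latency objective, and the predict function are illustrative placeholders, not values taken from the notebook.

```python
from clipper_admin import ClipperConnection, DockerContainerManager
from clipper_admin.deployers import python as python_deployer

# Start a local, Docker-based Clipper cluster.
clipper_conn = ClipperConnection(DockerContainerManager())
clipper_conn.start_clipper()

# Register an application (the REST endpoint clients will query).
clipper_conn.register_application(
    name="simple-app",          # placeholder application name
    input_type="doubles",
    default_output="-1.0",      # returned if the model misses its SLO
    slo_micros=100000,          # 100 ms latency objective
)

# A toy prediction function standing in for a trained model.
# Clipper passes a batch of inputs and expects one string per input.
def predict(inputs):
    return [str(sum(x)) for x in inputs]

# Package the closure in a model container and deploy it as version 1.
python_deployer.deploy_python_closure(
    clipper_conn,
    name="simple-model",        # placeholder model name
    version=1,
    input_type="doubles",
    func=predict,
)

# Route the application's queries to the deployed model.
clipper_conn.link_model_to_app(app_name="simple-app",
                               model_name="simple-model")
```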

Overview

  • Model training
  • Clipper cluster creation (see the deployment sketch above)
  • App creation & model deployment (sketch above)
  • Model query (single row, multiple rows) via Python requests & curl (example after this list)
  • Model versioning update (sketch after this list)
  • Model versioning rollback (sketch after this list)
  • Model replication (sketch after this list)
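
Queries go to the application's REST endpoint at /<app-name>/predict. Continuing with the placeholder simple-app from the deployment sketch above, the example below sends a single row and then a batch using the Python requests library; an equivalent curl call is shown in a comment.

```python
import requests

# Address of Clipper's query frontend, e.g. "localhost:1337" for a
# default local Docker deployment.
addr = clipper_conn.get_query_addr()
url = "http://%s/simple-app/predict" % addr
headers = {"Content-Type": "application/json"}

# Single row: the payload carries one "input" vector.
single = requests.post(url, headers=headers,
                       json={"input": [1.1, 2.2, 3.3, 4.4]})
print(single.json())

# Multiple rows: "input_batch" carries a list of input vectors.
batch = requests.post(url, headers=headers,
                      json={"input_batch": [[1.1, 2.2, 3.3, 4.4],
                                            [5.5, 6.6, 7.7, 8.8]]})
print(batch.json())

# Equivalent single-row query with curl:
#   curl -X POST -H "Content-Type: application/json" \
#        -d '{"input": [1.1, 2.2, 3.3, 4.4]}' \
#        http://localhost:1337/simple-app/predict
```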

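Versioning and replication go through the same clipper_admin connection. In the sketch below (again using the placeholder names from above), deploying simple-model under a higher version shifts the linked application's traffic to the new version, set_model_version rolls traffic back, and set_num_replicas scales out the model container.

```python
# Deploy a new version; the linked app's queries are routed to the
# latest version once its container is running.
python_deployer.deploy_python_closure(
    clipper_conn,
    name="simple-model",
    version=2,
    input_type="doubles",
    func=predict,   # in practice, a retrained model's prediction function
)

# Rollback: route the app's traffic to an earlier version again.
clipper_conn.set_model_version(name="simple-model", version=1)

# Replication: run several replicas of the active model container.
clipper_conn.set_num_replicas(name="simple-model", num_replicas=3)
```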