Creating and running a Docker image
This documentation is unstable (see Issue #3). We believe some of these issues exist in BentoML itself, so this documentation should be used for experimental purposes only.
From the BentoML documentation, to create a Docker image and run it in a container:
# Find the local path of the latest version PyTorchModel saved bundle
saved_path=$(bentoml get PyTorchModel:latest --print-location --quiet)
echo $saved_path # your_path
# Build a docker image using the saved_path directory as the build context; replace
# {username} below with your Docker Hub account name
docker build -t {username}/pytorch_model $saved_path
# Run a container with the docker image built and expose port 5000
docker run -p 5000:5000 {username}/pytorch_model
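Once the container is running, you can sanity-check that the API server responds on the published port. The exact route depends on the APIs your BentoML service defines, so the URL below is only an illustrative assumption:
# Hypothetical check that the API server is up; adjust the path to one of your
# service's actual endpoints
curl -i http://localhost:5000/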
Troubleshooting

1. In your requirements.txt, located in the saved_path mentioned above, check whether the file contains torchvision==0.7.0+cpu. If it does, add the following line above it: --find-links https://download.pytorch.org/whl/torch_stable.html (an example excerpt is shown after item 2 below). See Issue #3 for a detailed response.
2. Likewise, check whether requirements.txt contains torch==1.6.0+cpu. If it does, add the same line above it: --find-links https://download.pytorch.org/whl/torch_stable.html
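For reference, after both edits the relevant lines of requirements.txt in saved_path should look roughly like this (other pinned dependencies omitted):
--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.6.0+cpu
torchvision==0.7.0+cpu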
3. I am getting this error when the container runs: ModuleNotFoundError: No module named 'transformers'
Check whether your requirements.txt, located in the saved_path mentioned above, contains transformers==3.2.0. If not, add it.
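A quick way to apply this from the command line, assuming the saved_path variable from the build steps above (a sketch, not the only way to edit the file):
# Append the transformers pin to the bundle's requirements.txt if it is missing
grep -qx "transformers==3.2.0" "$saved_path/requirements.txt" || \
    echo "transformers==3.2.0" >> "$saved_path/requirements.txt"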
4. It may seem silly, but what worked for us was adding the meta.bin file to all 3 directories in the saved_path directory. See Issue #3 for a detailed response.
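As a rough sketch only (the three target directories are not listed here; see Issue #3 for which ones need the file, and note that the source location of meta.bin below is a placeholder):
# Copy meta.bin into each subdirectory of the saved bundle; narrow the loop to
# the three directories referenced in Issue #3
for dir in "$saved_path"/*/; do
    cp /path/to/meta.bin "$dir"
done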
5. Adding scikit-learn==0.22 to the requirements.txt file in the saved_path directory fixes the issue.
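As with the transformers pin above, this can be appended directly (same assumption about saved_path):
# Append the scikit-learn pin to the bundle's requirements.txt
echo "scikit-learn==0.22" >> "$saved_path/requirements.txt"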
As it stands, our model consumes a lot of memory and cannot be deployed on Heroku's free tier (see Issue #5).