Image Captioning 📷 ➡️ 📝

An encoder-decoder model to caption images, built using PyTorch and deployed with Streamlit. The model uses Inception v3 as the encoder and LSTM layers as the decoder, and is trained on the Flickr30k dataset.
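The encoder-decoder idea can be sketched as follows. This is a minimal, illustrative version: a single linear layer stands in for the Inception v3 feature extractor, and the layer sizes and vocabulary size are hypothetical, not the repository's actual hyperparameters.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an image (here: a pre-extracted feature vector) to embed_size features.
    A linear layer stands in for the Inception v3 backbone the project uses."""
    def __init__(self, image_dim, embed_size):
        super().__init__()
        self.fc = nn.Linear(image_dim, embed_size)

    def forward(self, images):
        return self.fc(images)

class Decoder(nn.Module):
    """LSTM decoder: the image feature vector seeds the sequence, then
    embedded caption tokens follow; output is per-step vocabulary logits."""
    def __init__(self, embed_size, hidden_size, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size)  # sequence-first layout
        self.fc = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # features: (batch, embed_size); captions: (seq_len, batch) token ids
        inputs = torch.cat([features.unsqueeze(0), self.embed(captions)], dim=0)
        hiddens, _ = self.lstm(inputs)
        return self.fc(hiddens)  # (seq_len + 1, batch, vocab_size)

encoder = Encoder(image_dim=2048, embed_size=256)
decoder = Decoder(embed_size=256, hidden_size=512, vocab_size=5000)
images = torch.randn(4, 2048)                # a batch of image features
captions = torch.randint(0, 5000, (12, 4))   # (seq_len, batch) token ids
logits = decoder(encoder(images), captions)
print(logits.shape)  # torch.Size([13, 4, 5000])
```

At training time the caption is fed in teacher-forced; at inference time the decoder's own predictions are fed back step by step.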

Demo

Try it yourself here

Prediction: a man in wetsuit is surfing .

Prediction: a man in blue helmet is riding a dirt bike on a dirt track .

Prediction: a dog is running on the beach .

Running on a native machine

Dependencies

  • python3
  • python -m spacy download en - downloads the spaCy English model used to tokenize captions
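The tokenizer feeds a vocabulary that maps caption words to token ids. A minimal sketch of that preprocessing step is below; the project tokenizes with spaCy's English model, while a lowercase whitespace split stands in here, and the special tokens and freq_threshold are illustrative choices, not necessarily the repository's exact ones.

```python
class Vocabulary:
    """Word-to-id mapping with special tokens for padding, start, end, unknown."""
    def __init__(self, freq_threshold=2):
        self.freq_threshold = freq_threshold
        self.itos = {0: "<PAD>", 1: "<SOS>", 2: "<EOS>", 3: "<UNK>"}
        self.stoi = {tok: idx for idx, tok in self.itos.items()}

    @staticmethod
    def tokenize(text):
        # stand-in for spaCy's English tokenizer
        return text.lower().strip().split()

    def build(self, sentences):
        freq = {}
        for sentence in sentences:
            for word in self.tokenize(sentence):
                freq[word] = freq.get(word, 0) + 1
        for word, count in freq.items():
            if count >= self.freq_threshold and word not in self.stoi:
                idx = len(self.stoi)
                self.stoi[word] = idx
                self.itos[idx] = word

    def numericalize(self, text):
        unk = self.stoi["<UNK>"]
        return [self.stoi.get(tok, unk) for tok in self.tokenize(text)]

vocab = Vocabulary(freq_threshold=1)
vocab.build(["a man is surfing", "a dog is running"])
print(vocab.numericalize("a cat is running"))  # → [4, 3, 6, 9]  ("cat" is <UNK>)
```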

Pip packages

pip install -r requirements.txt

Steps to train your own model

Scripts

neuralnet/train.py - is used to train the model

engine.py - is used to perform inference

ui.py - is used to build the streamlit app

For more details, see these files for their script arguments and descriptions.
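At inference time, engine.py needs to turn an image into a word sequence without a ground-truth caption. A common way to do this, sketched below under assumptions, is greedy decoding: feed the image feature, then repeatedly pick the most likely next token until <EOS> or a length cap. The tiny untrained LSTM here is only to make the loop runnable; the real script decodes with its trained checkpoint and vocabulary, and the EOS id is a hypothetical value.

```python
import torch
import torch.nn as nn

EOS = 2  # assumed id of the <EOS> token

def greedy_caption(embed, lstm, fc, feature, max_len=20):
    """Greedily decode token ids from an image feature vector."""
    tokens = []
    inputs = feature.unsqueeze(0)  # (1, batch=1, embed_size): image seeds the LSTM
    state = None
    for _ in range(max_len):
        hidden, state = lstm(inputs, state)
        next_id = fc(hidden.squeeze(0)).argmax(dim=-1)  # most likely token
        if next_id.item() == EOS:
            break
        tokens.append(next_id.item())
        inputs = embed(next_id).unsqueeze(0)  # feed the prediction back in
    return tokens

torch.manual_seed(0)
vocab_size, embed_size, hidden_size = 50, 16, 32
embed = nn.Embedding(vocab_size, embed_size)
lstm = nn.LSTM(embed_size, hidden_size)
fc = nn.Linear(hidden_size, vocab_size)
caption_ids = greedy_caption(embed, lstm, fc, torch.randn(1, embed_size))
print(caption_ids)  # ids are gibberish here since the model is untrained
```

The id sequence would then be mapped back to words with the vocabulary's itos table before display in the Streamlit app.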

  1. Dataset
    i. Download the Flickr30k dataset
    ii. Remove the duplicate images folder and CSV file

  2. Training
    Use neuralnet/train.py to train the model
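The training step above can be sketched as a teacher-forced loop: the image feature and the caption (shifted by one) go in, and cross-entropy over the vocabulary, ignoring padding, comes out. The toy model and random data below are illustrative only; the real train.py has its own arguments, dataset loader, and checkpointing.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
PAD, vocab_size, embed_size, hidden_size = 0, 100, 32, 64

embed = nn.Embedding(vocab_size, embed_size)
lstm = nn.LSTM(embed_size, hidden_size)
fc = nn.Linear(hidden_size, vocab_size)

params = list(embed.parameters()) + list(lstm.parameters()) + list(fc.parameters())
optimizer = torch.optim.Adam(params, lr=3e-4)
criterion = nn.CrossEntropyLoss(ignore_index=PAD)  # padding does not count

features = torch.randn(1, 8, embed_size)           # stand-in image features
captions = torch.randint(1, vocab_size, (10, 8))   # (seq_len, batch) token ids

for step in range(5):
    # teacher forcing: image feature first, then all caption tokens but the last
    inputs = torch.cat([features, embed(captions[:-1])], dim=0)
    hiddens, _ = lstm(inputs)
    logits = fc(hiddens)                           # (seq_len, batch, vocab)
    loss = criterion(logits.reshape(-1, vocab_size), captions.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(round(loss.item(), 3))  # cross-entropy after a few steps
```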