
Welcome to the Hands-On QA framework


About this project


We explored recent studies on question answering (QA) systems and then tried out three different QA models (BERT and DistilBERT) for the sake of learning. Our steps were:

  1. First, we studied recent work on QA. More precisely, we studied Zylich et al.'s "Exploring Automated Question Answering Methods for Teaching Assistance", published at the AIED conference in 2020 (link). Our summary of the paper is uploaded here.

  2. After that, we studied BERT: its input-output format and how it works for QA. We then tried out a pretrained BERT model that had been fine-tuned on the SQuAD v1.1 dataset and inspected its output (a minimal inference sketch is shown after this list).

  3. Next, we studied DistilBERT, a distilled version of BERT that is smaller, faster, cheaper, and lighter. Unlike BERT, it does not use token type IDs, and it runs roughly 60% faster while giving results almost as accurate as BERT's. The model we used was pretrained and fine-tuned on the same dataset as the BERT model (SQuAD v1.1). We then compared its output with BERT's and verified the results (see the DistilBERT sketch after this list).
  4. Lastly, we took a pretrained DistilBERT model and fine-tuned it ourselves on the SQuAD v2.0 training set. We then evaluated the fine-tuned model on the SQuAD v2.0 dev set and checked its accuracy (a condensed fine-tuning sketch is shown below).
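
To make step 2 concrete, here is a minimal inference sketch for extractive QA with a BERT model fine-tuned on SQuAD v1.1. It assumes the Hugging Face transformers library and the publicly available bert-large-uncased-whole-word-masking-finetuned-squad checkpoint, which may differ from the exact checkpoint used in our notebooks.

```python
# Minimal extractive-QA sketch with a BERT model fine-tuned on SQuAD v1.1.
# Assumes the Hugging Face `transformers` library and the public
# `bert-large-uncased-whole-word-masking-finetuned-squad` checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

context = (
    "BERT is a transformer-based language model that can be fine-tuned "
    "for extractive question answering on datasets such as SQuAD."
)
question = "What can BERT be fine-tuned for?"

# The pipeline returns the predicted answer span and a confidence score.
result = qa(question=question, context=context)
print(result["answer"], result["score"])
```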
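
For step 3, the same extractive-QA call can be made with DistilBERT. The sketch below assumes the public distilbert-base-uncased-distilled-squad checkpoint (fine-tuned on SQuAD v1.1, not necessarily the exact one in our notebooks) and also shows that the DistilBERT tokenizer produces no token type IDs, unlike BERT.

```python
# Extractive-QA sketch with DistilBERT fine-tuned on SQuAD v1.1, assuming the
# public `distilbert-base-uncased-distilled-squad` checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "distilbert-base-uncased-distilled-squad"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "What is DistilBERT?"
context = "DistilBERT is a smaller, faster, lighter distilled version of BERT."

inputs = tokenizer(question, context, return_tensors="pt")
# Unlike BERT, DistilBERT's tokenizer produces no `token_type_ids` (segment IDs).
print("token_type_ids" in inputs)  # False

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the answer span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax()) + 1
answer = tokenizer.decode(inputs["input_ids"][0][start:end])
print(answer)
```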
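
For step 4, here is a condensed fine-tuning sketch on SQuAD v2.0. It assumes the Hugging Face transformers and datasets libraries and loads the dataset as squad_v2; it omits document striding and the official EM/F1 evaluation script for brevity, and hyperparameters such as max_length=384 and learning_rate=3e-5 are illustrative rather than our notebook's exact settings.

```python
# Condensed fine-tuning sketch: DistilBERT on SQuAD v2.0 (train split), evaluated
# on the dev split. Assumes `transformers` and `datasets`; a sketch, not the
# exact notebook code.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForQuestionAnswering,
                          TrainingArguments, Trainer, default_data_collator)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

squad = load_dataset("squad_v2")  # "train" and "validation" (dev) splits

def preprocess(examples):
    enc = tokenizer(examples["question"], examples["context"],
                    truncation="only_second", max_length=384,
                    padding="max_length", return_offsets_mapping=True)
    start_positions, end_positions = [], []
    for i, offsets in enumerate(enc["offset_mapping"]):
        answer = examples["answers"][i]
        if len(answer["answer_start"]) == 0:   # unanswerable question (SQuAD v2.0)
            start_positions.append(0)
            end_positions.append(0)
            continue
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        seq_ids = enc.sequence_ids(i)
        # Map the answer's character span onto token indices within the context.
        start_tok = end_tok = 0
        for idx, (s, e) in enumerate(offsets):
            if seq_ids[idx] != 1:
                continue
            if s <= start_char < e:
                start_tok = idx
            if s < end_char <= e:
                end_tok = idx
        start_positions.append(start_tok)
        end_positions.append(end_tok)
    enc["start_positions"] = start_positions
    enc["end_positions"] = end_positions
    enc.pop("offset_mapping")
    return enc

train_ds = squad["train"].map(preprocess, batched=True,
                              remove_columns=squad["train"].column_names)
eval_ds = squad["validation"].map(preprocess, batched=True,
                                  remove_columns=squad["validation"].column_names)

args = TrainingArguments(output_dir="distilbert-squad2", num_train_epochs=2,
                         per_device_train_batch_size=16, learning_rate=3e-5)

trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                  eval_dataset=eval_ds, data_collator=default_data_collator)
trainer.train()
print(trainer.evaluate())  # reports dev-set loss only
```

Exact-match and F1 on the dev set would be computed with the SQuAD v2.0 metric after extracting predicted answer spans; trainer.evaluate() here reports only the loss.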
