
# SQuAD Fine-Tuning

Fine-tuning BERT and RoBERTa on the SQuAD dataset using `pytorch-lightning`, `transformers`, and `nlp`.
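The three libraries fit together roughly as follows. This is a minimal sketch, not the repo's actual code; the checkpoint and variable names are illustrative:

```python
# Minimal sketch of the moving parts: `nlp` (the predecessor of `datasets`)
# fetches SQuAD, while `transformers` supplies the tokenizer and the
# question-answering model that pytorch-lightning then trains.
import nlp
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "distilroberta-base"  # any BERT/RoBERTa checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

squad = nlp.load_dataset("squad")  # provides "train" and "validation" splits
print(squad["train"][0]["question"])
```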

## Usage

Example usage:

```bash
python main.py --gpus 1, --qa_model distilroberta-base --workers 20 --bs 5 --max_epochs 10
```

Note the trailing comma in `--gpus 1,`: PyTorch Lightning parses a comma-separated value as a list of device indices, so this selects GPU 1 specifically rather than requesting any single GPU.

A few useful Weights & Biases (W&B) environment variables:

```bash
WANDB_MODE=dryrun
WANDB_ENTITY=nlp
```
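Setting `WANDB_MODE=dryrun` logs the run locally without syncing it to the W&B servers, and `WANDB_ENTITY` sets the team or user account the runs are reported under.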

## Install

```bash
pip install -r requirements.txt
```

## Features

- ⚡️ PyTorch Lightning goodies:
  - All `Trainer` flags exposed as command-line args (see the sketch below)
  - Multi-GPU support
- 🤗 Transformers: easy plug-and-play models
- 🤗 NLP: easy dataset handling
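The first bullet typically comes from PyTorch Lightning's argparse helpers. A hedged sketch of what `main.py` likely does; the `QAModel` name and the custom argument defaults are assumptions, not the repo's verified API:

```python
# Sketch of a Lightning CLI entry point: Trainer.add_argparse_args injects
# every Trainer flag (--gpus, --max_epochs, ...) into the parser, which is
# how the example command above works without any custom flag wiring.
from argparse import ArgumentParser

import pytorch_lightning as pl

parser = ArgumentParser()
parser.add_argument("--qa_model", default="distilroberta-base")
parser.add_argument("--workers", type=int, default=4)
parser.add_argument("--bs", type=int, default=8)
parser = pl.Trainer.add_argparse_args(parser)  # adds all Trainer flags
args = parser.parse_args()

trainer = pl.Trainer.from_argparse_args(args)
# trainer.fit(QAModel(args))  # QAModel: the repo's LightningModule (assumed name)
```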

## TODO

TBD