# ⚖️ Debiasing Language Models

Official code for *Debiasing Pre-Trained Language Models via Efficient Fine-Tuning*, published at the Second Workshop on Language Technology for Equality, Diversity, Inclusion at ACL 2022.

View Demo | View Presentation

This repository is currently a placeholder. The code will be polished and published soon! In the meantime, you can take a look at the old code.

## Dataset

Our fine-tuning dataset combines the WinoBias and CrowS-Pairs datasets. After cloning the Git submodules for the respective datasets, run:

```
python dataset/prepare.py
```

`prepare.py` combines the datasets from both repositories and splits them into training (80%), cross-validation (10%), and testing (10%) sets.
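
As a rough illustration only (not the actual contents of `prepare.py`), an 80/10/10 split of the combined examples could be produced along these lines; the `split_dataset` helper and the toy data below are hypothetical:

```python
# Illustrative sketch only -- not the real prepare.py. Demonstrates one way to
# shuffle a combined list of examples and split it 80/10/10.
import random


def split_dataset(examples, seed=42):
    """Shuffle the examples and return (train, validation, test) splits."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    train = shuffled[:n_train]
    validation = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, validation, test


if __name__ == "__main__":
    # Hypothetical stand-in for the combined WinoBias + CrowS-Pairs examples.
    combined = [f"example {i}" for i in range(100)]
    train, validation, test = split_dataset(combined)
    print(len(train), len(validation), len(test))  # 80 10 10
```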