Implementation for the ACL 2022 paper KinyaBERT: a Morphology-aware Kinyarwanda Language Model.
- a pre-trained KinyaBERT model and the morphological analyzer have been released under a new repository: https://github.com/anzeyimana/DeepKIN
KinyaBERT implements a two-tier BERT architecture for modeling morphologically rich languages (MRLs). The current implementation is tailored to Kinyarwanda, a language spoken by more than 12M people in Central and Eastern Africa. Due to the complex morphology expressed by MRLs such as Kinyarwanda, conventional tokenization algorithms such as byte pair encoding (BPE) are suboptimal at handling MRLs. KinyaBERT leverages a morphological analyzer to extract morphemes and incorporates them in a two-tier transformer encoder architecture to explicitly express morphological compositionality. Empirical experiments indicate that KinyaBERT outperforms baseline BERT models on natural language inference (NLI) and named entity recognition (NER) tasks.
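As a rough illustration of the two-tier idea (a morpheme-level encoder produces one vector per word, and a word-level encoder then operates on the resulting word vectors), here is a minimal dependency-free sketch. The toy embeddings, the mean-pooling, and the morpheme segmentation shown are all illustrative stand-ins, not the actual KinyaBERT model or real analyzer output:

```python
# Minimal sketch of two-tier composition (hypothetical shapes, no real
# model weights): tier 1 builds one vector per word from its morphemes,
# tier 2 then processes the sequence of word vectors.
import random

EMB = 8  # toy embedding size

def embed(token):
    # Deterministic toy embedding derived from the token string.
    rng = random.Random(token)
    return [rng.uniform(-1, 1) for _ in range(EMB)]

def morpheme_encoder(morphemes):
    # Stand-in for tier 1: pool morpheme embeddings into one word vector
    # (the real model uses a small transformer encoder, not mean-pooling).
    vecs = [embed(m) for m in morphemes]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def sentence_encoder(word_vectors):
    # Stand-in for tier 2: the real model applies a word-level transformer;
    # here the sequence is returned unchanged.
    return word_vectors

# Each word is a list of morphemes from the morphological analyzer;
# this segmentation is illustrative only, not real analyzer output.
sentence = [["tu", "ra", "kund", "a"], ["umu", "ana"]]
reps = sentence_encoder([morpheme_encoder(word) for word in sentence])
```

The point of the sketch is the data flow: morphological segmentation replaces flat BPE tokens, and the word-level encoder never sees individual morphemes, only composed word representations.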
- `code`: main Python codebase
- `conf`: vocabulary files for KinyaBERT
- `datasets`: evaluation datasets for the translated GLUE benchmark, named entity recognition (NER), and news categorization tasks
- `fairseq-tupe-tpu-pytorch-v1.9`: TPU-optimized fairseq code for the baseline models, customized to use TUPE-R positional encoding
- `lib`: shared library for Kinyarwanda morphological analysis and part-of-speech tagging
- `results`: fine-tuning results in raw format
- `scripts`: data pre-processing scripts
- `code/morpho_model.py`: KinyaBERT model implementation in PyTorch
- `code/kinlpmorpho.py`: CFFI interface to the morphological analyzer
- `code/morpho_data_loaders.py`: data loading utilities
- `code/train_exploratory_distributed_model.py`: KinyaBERT pre-training process
- `code/pretrained_kinyabert_model_fine_tune_eval.py`: KinyaBERT fine-tuning process
- `code/pretrained_roberta_model_fine_tune_eval.py`: baseline models' fine-tuning process
- `lib/libkinlp.so`: morphological analyzer/POS tagger shared library
- `results/FINAL_AVERAGED_RESULTS.xlsx`: all experimental results aggregated in a spreadsheet
- PyTorch version >= 1.8.0
- Python version >= 3.6
- NVIDIA apex for faster GPU training
- Progressbar2:
pip install progressbar2
- CFFI:
pip install cffi
- YouTokenToMe:
pip install youtokentome
The code in this repository is meant for adapting the KinyaBERT architecture to other languages and modeling scenarios. This experimental code is not intended to be used straight out of the repository; rather, it provides guidance for a custom implementation.
The code in this repository requires access to a morphological analyzer and cannot work without being adapted to one. The adaptation can be made by following the CFFI interface in code/kinlpmorpho.py and making other necessary adjustments related to vocabularies. The morphological analyzer for Kinyarwanda used in this work is closed-source proprietary software. The current plan is to make it available to researchers in a software-as-a-service model; currently, it can be obtained by contacting the first author by e-mail ([email protected]).
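The adaptation essentially means declaring the analyzer's C entry points and calling them from Python. The repository does this with CFFI in code/kinlpmorpho.py; as a standard-library illustration of the same pattern, here is a ctypes sketch in which the function names and signatures are entirely hypothetical placeholders (the real ones are defined by the CFFI cdef in the repository):

```python
import ctypes

def load_analyzer(lib_path="lib/libkinlp.so"):
    """Load the analyzer shared library and declare its signatures.

    The entry point below is a hypothetical placeholder; the actual
    functions exported by libkinlp.so are declared in code/kinlpmorpho.py.
    """
    lib = ctypes.CDLL(lib_path)  # raises OSError if the library is missing
    lib.analyze_sentence.argtypes = [ctypes.c_char_p]
    lib.analyze_sentence.restype = ctypes.c_char_p
    return lib

def morphemes(lib, text):
    # Hypothetical call: ask the analyzer to segment a sentence; the real
    # output format (and its parsing) depends on the proprietary library.
    raw = lib.analyze_sentence(text.encode("utf-8"))
    return raw.decode("utf-8").split()
```

Whichever binding is used, the vocabulary files under `conf` must also be regenerated to match the morpheme inventory of the new analyzer.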
Please cite as:
@inproceedings{nzeyimana-niyongabo-rubungo-2022-kinyabert,
title = "{K}inya{BERT}: a Morphology-aware {K}inyarwanda Language Model",
author = "Nzeyimana, Antoine and
Niyongabo Rubungo, Andre",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.367",
pages = "5347--5363",
}