ORIG: Towards Robust Personalized Dialogue Generation via Order-Insensitive Representation Regularization

The implementation of the ACL 2023 paper.

The repository is built on Microsoft DialoGPT, Hugging Face Transformers, and OpenAI GPT-2.

Setup & Installation (TL;DR)

Environment

Note: the environment file below may not cover everything; install any missing packages manually.

conda env create -f LSP-linux.yml -n LSP
conda activate LSP
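
A quick smoke test after activating the environment; the exact package versions come from LSP-linux.yml, and this snippet is only an illustration, not part of the repository's scripts:

```python
# Check that the core dependencies used by the training/decoding scripts import cleanly.
import torch
import transformers

print(torch.__version__, transformers.__version__, "CUDA available:", torch.cuda.is_available())
```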

Pipeline details

Training scripts

bash scripts/train_persona_gpt.sh
bash scripts/train_persona_gpt_kl.sh
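
train_persona_gpt_kl.sh adds the order-insensitive (KL) regularization described in the paper. Below is a minimal sketch of that idea, assuming a Hugging Face GPT-2 style model and that the two persona orderings keep the response tokens at the same positions; the function, argument names, and loss weight are illustrative, not the repository's actual implementation:

```python
import torch
import torch.nn.functional as F

def orig_kl_loss(model, input_ids_a, input_ids_b, labels, kl_weight=1.0):
    """LM loss on two persona orderings of the same dialogue, plus a symmetric KL
    term that pulls their output distributions together (order-insensitive regularizer)."""
    out_a = model(input_ids=input_ids_a, labels=labels)  # persona ordering A
    out_b = model(input_ids=input_ids_b, labels=labels)  # permuted persona ordering B

    log_p_a = F.log_softmax(out_a.logits, dim=-1)
    log_p_b = F.log_softmax(out_b.logits, dim=-1)

    # Symmetric KL between the two token distributions.
    kl = 0.5 * (
        F.kl_div(log_p_a, log_p_b, log_target=True, reduction="batchmean")
        + F.kl_div(log_p_b, log_p_a, log_target=True, reduction="batchmean")
    )
    return out_a.loss + out_b.loss + kl_weight * kl
```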

Model inference

bash scripts/decode_pipeline.sh # for DialoGPT
bash scripts/decode_pipeline_naive_gpt.sh # for GPT-2
bash scripts/decode_naive_gpt_permutations.sh # decode for all persona permutations

or run the decoding script directly:

python scripts/decdoing.py
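
decode_naive_gpt_permutations.sh decodes the same dialogue under every persona ordering. A rough illustration of that loop is below; the checkpoint name, persona separator, and generation settings are placeholders, and the real ones come from the repository's scripts:

```python
from itertools import permutations
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")      # placeholder checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

persona = ["i like to ski.", "i have two dogs.", "my mom is a doctor."]
history = "hi ! what do you do for fun ?"

for order in permutations(persona):
    prompt = " ".join(order) + " " + history            # placeholder separator
    inputs = tokenizer(prompt, return_tensors="pt")
    reply_ids = model.generate(**inputs, max_new_tokens=40, do_sample=False,
                               pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(reply_ids[0, inputs["input_ids"].shape[-1]:],
                             skip_special_tokens=True)
    print(order, "->", reply)
```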
Model evaluation

For the NLG metrics, refer to nlg-eval.

The Consistency metric is provided by PersonaClassifier.
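
A minimal example of scoring decoded outputs with nlg-eval, assuming the hypotheses and references have been written to plain-text files; the paths are placeholders:

```python
from nlgeval import compute_metrics

# Skip the embedding-based metrics to keep the example lightweight.
metrics = compute_metrics(hypothesis="outputs/hyp.txt",
                          references=["data/ref.txt"],
                          no_skipthoughts=True, no_glove=True)
print(metrics)  # BLEU, METEOR, ROUGE-L, CIDEr, ...
```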

Evaluation pipeline:

bash scripts/eval_pipeline.sh
bash scripts/eval_permutations_pipeline.sh
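
eval_permutations_pipeline.sh evaluates the outputs for every persona ordering. A hypothetical way to summarize order sensitivity from such per-permutation results is sketched below; the file layout and metric key are placeholders, not the pipeline's actual output format:

```python
import glob
import json
import statistics

scores = []
for path in glob.glob("results/permutation_*.json"):  # placeholder result files
    with open(path) as f:
        scores.append(json.load(f)["Bleu_1"])

# Lower variance across persona orderings indicates a more order-insensitive model.
print(f"BLEU-1 mean={statistics.mean(scores):.4f}  std={statistics.stdev(scores):.4f}")
```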

Citation

@misc{chen2023robust,
      title={Towards Robust Personalized Dialogue Generation via Order-Insensitive Representation Regularization}, 
      author={Liang Chen and Hongru Wang and Yang Deng and Wai-Chung Kwan and Zezhong Wang and Kam-Fai Wong},
      year={2023},
      eprint={2305.12782},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
