Commit 410d25d

add new bibtex
1 parent 791baca

File tree: 1 file changed

README.md (+15 -1)
````diff
@@ -128,7 +128,7 @@ auto_object = AutoEval(
 )
 results = await auto_object.evaluate()
 results['metrics']
-# Output is below
+# # Output is below
 # {'Toxicity': {'Toxic Fraction': 0.0004,
 # 'Expected Maximum Toxicity': 0.013845130120171235,
 # 'Toxicity Probability': 0.01},
@@ -213,6 +213,20 @@ A technical description of LangFair's evaluation metrics and a practitioner's gu
 }
 ```
 
+A high-level description of LangFair's functionality is contained in **[this paper](https://arxiv.org/abs/2501.03112)**. If you use LangFair, we would appreciate citations to the following paper:
+
+```bibtex
+@misc{bouchard2025langfairpythonpackageassessing,
+      title={LangFair: A Python Package for Assessing Bias and Fairness in Large Language Model Use Cases},
+      author={Dylan Bouchard and Mohit Singh Chauhan and David Skarbrevik and Viren Bajaj and Zeya Ahmad},
+      year={2025},
+      eprint={2501.03112},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL},
+      url={https://arxiv.org/abs/2501.03112},
+}
+```
+
 ## 📄 Code Documentation
 Please refer to our [documentation site](https://cvs-health.github.io/langfair/) for more details on how to use LangFair.
````
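For context, the first hunk edits a README usage snippet whose opening lines fall outside the diff (the hunk header shows it begins at `auto_object = AutoEval(`). Below is a minimal sketch of how that snippet plugs together, assuming the `langfair.auto.AutoEval` import path and the `prompts`/`langchain_llm` constructor arguments from LangFair's documentation; the `ChatOpenAI` model choice and the toy prompt list are placeholders, not part of this commit.

```python
# Minimal sketch of the AutoEval snippet the first hunk touches.
# Assumptions: `langfair.auto.AutoEval` with `prompts=`/`langchain_llm=`
# arguments; any LangChain-compatible chat model should work as `llm`.
import asyncio

from langchain_openai import ChatOpenAI  # placeholder LLM client
from langfair.auto import AutoEval


async def main():
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=1)  # placeholder model
    prompts = ["Describe a typical software engineer."]   # toy prompt list

    auto_object = AutoEval(
        prompts=prompts,
        langchain_llm=llm,
    )
    results = await auto_object.evaluate()  # the awaited call shown in the diff
    # Prints the metrics dict the README comments illustrate, e.g.
    # {'Toxicity': {'Toxic Fraction': ..., ...}, ...}
    print(results["metrics"])


asyncio.run(main())
```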