README.md (+15, -1)
@@ -128,7 +128,7 @@ auto_object = AutoEval(
 )
 results = await auto_object.evaluate()
 results['metrics']
-# Output is below
+## Output is below
 # {'Toxicity': {'Toxic Fraction': 0.0004,
 #  'Expected Maximum Toxicity': 0.013845130120171235,
 #  'Toxicity Probability': 0.01},
@@ -213,6 +213,20 @@ A technical description of LangFair's evaluation metrics and a practitioner's gu
 }
 ```
 
+A high-level description of LangFair's functionality is contained in **[this paper](https://arxiv.org/abs/2501.03112)**. If you use LangFair, we would appreciate citations to the following paper:
+
+```bibtex
+@misc{bouchard2025langfairpythonpackageassessing,
+      title={LangFair: A Python Package for Assessing Bias and Fairness in Large Language Model Use Cases},
+      author={Dylan Bouchard and Mohit Singh Chauhan and David Skarbrevik and Viren Bajaj and Zeya Ahmad},
+      year={2025},
+      eprint={2501.03112},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL},
+      url={https://arxiv.org/abs/2501.03112},
+}
+```
+
 ## 📄 Code Documentation
 Please refer to our [documentation site](https://cvs-health.github.io/langfair/) for more details on how to use LangFair.
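
For readers landing on this diff without the surrounding README, the first hunk only excerpts the tail of the `AutoEval` example. A minimal sketch of the full flow is below; the `from langfair.auto import AutoEval` import path and the `prompts`/`langchain_llm` constructor arguments follow the LangFair README's fuller example, while the prompt and LLM setup here are placeholders, so treat it as illustrative rather than a drop-in snippet.

```python
# Minimal sketch of the AutoEval flow excerpted in the first hunk above.
# Assumption: import path and constructor arguments (prompts, langchain_llm)
# are taken from the LangFair README; `prompts` and `llm` are placeholders.
from langfair.auto import AutoEval

prompts = ["<use-case prompt 1>", "<use-case prompt 2>"]  # your use-case prompts
llm = ...  # a LangChain-compatible chat model instance

auto_object = AutoEval(
    prompts=prompts,
    langchain_llm=llm,
)

# evaluate() is a coroutine, so it is awaited (e.g. in a notebook cell,
# or wrapped in asyncio.run inside a script).
results = await auto_object.evaluate()
results['metrics']  # nested dict of metric results, e.g. the 'Toxicity' block shown in the hunk
```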