diff --git a/README.md b/README.md
index 0307113..2c6ffb4 100644
--- a/README.md
+++ b/README.md
@@ -31,7 +31,7 @@
 pip install langfair
 ```
 
 ### Usage Examples
-Below are the code samples illustrating how to use LangFair to assess bias and fairness risks in text generation and summarization use cases. The below examples assumes the user has already defined a list of prompts from their use case `prompts`.
+Below are code samples illustrating how to use LangFair to assess bias and fairness risks in text generation and summarization use cases. The below examples assume the user has already defined a list of prompts from their use case, `prompts`.
 ##### Generate LLM responses
 To generate responses, we can use LangFair's `ResponseGenerator` class. First, we must create a `langchain` LLM object. Below we use `ChatVertexAI`, but **any of [LangChain’s LLM classes](https://js.langchain.com/docs/integrations/chat/) may be used instead**. Note that `InMemoryRateLimiter` is to used to avoid rate limit errors.
@@ -85,7 +85,7 @@
 stereo_result['metrics']
 # # Output is below
 # {'Stereotype Association': 0.3172750176745329,
 # 'Cooccurrence Bias': 0.44766333654278373,
-# 'Stereotype Fraction - gender': 0.15452}
+# 'Stereotype Fraction - gender': 0.08}
 ```
 ##### Generate counterfactual responses and compute metrics
@@ -118,7 +118,7 @@
 cf_result
 ```
 ##### Alternative approach: Semi-automated evaluation with `AutoEval`
-To streamline assessments for text generation and summarization use cases, the `AutoEval` class conducts that completes all of the aforementioned steps with two lines of code.
+To streamline assessments for text generation and summarization use cases, the `AutoEval` class conducts a multi-step process that completes all of the aforementioned steps with two lines of code.
 ```python
 from langfair.auto import AutoEval
 auto_object = AutoEval(
@@ -129,18 +129,18 @@
 results = await auto_object.evaluate()
 results
 # Output is below
-# {'Toxicity': {'Toxic Fraction': 0.0,
-# 'Expected Maximum Toxicity': 0.08870933699654415,
-# 'Toxicity Probability': 0},
-# 'Stereotype': {'Stereotype Association': 0.42777777777777776,
-# 'Cooccurrence Bias': 0.37655962458699777,
+# {'Toxicity': {'Toxic Fraction': 0.0004,
+# 'Expected Maximum Toxicity': 0.013845130120171235,
+# 'Toxicity Probability': 0.01},
+# 'Stereotype': {'Stereotype Association': 0.3172750176745329,
+# 'Cooccurrence Bias': 0.44766333654278373,
 # 'Stereotype Fraction - gender': 0.08,
-# 'Expected Maximum Stereotype - gender': 0.580355167388916,
-# 'Stereotype Probability - gender': 1},
-# 'Counterfactual': {'male-female': {'Cosine Similarity': 0.31671187,
-# 'RougeL Similarity': 0.2882948246689143,
-# 'Bleu Similarity': 0.13248873839336991,
-# 'Sentiment Bias': 0.0114}}}
+# 'Expected Maximum Stereotype - gender': 0.60355167388916,
+# 'Stereotype Probability - gender': 0.27036},
+# 'Counterfactual': {'male-female': {'Cosine Similarity': 0.8318708,
+# 'RougeL Similarity': 0.5195852482361165,
+# 'Bleu Similarity': 0.3278433712872481,
+# 'Sentiment Bias': 0.0009947145187601957}}}
 ```
 
 ## 📚 Example Notebooks
diff --git a/pyproject.toml b/pyproject.toml
index c1d9fc0..ed255a1 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "langfair"
-version = "0.2.0"
+version = "0.2.1"
 description = "LangFair is a Python library for conducting use-case level LLM bias and fairness assessments"
 readme = "README.md"
 authors = ["Dylan Bouchard ",
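For context, the `ResponseGenerator` step that the README prose in the first hunk describes is not itself part of this patch. A minimal sketch of that step is shown below, assuming a Vertex AI backend; the model name, rate-limit settings, and `count` value are illustrative placeholders rather than values taken from the repository:

```python
from langchain_core.rate_limiters import InMemoryRateLimiter
from langchain_google_vertexai import ChatVertexAI

from langfair.generator import ResponseGenerator

# InMemoryRateLimiter throttles requests to avoid rate limit errors;
# the values below are placeholders to tune for your own quota.
rate_limiter = InMemoryRateLimiter(
    requests_per_second=4.5,
    check_every_n_seconds=0.5,
    max_bucket_size=280,
)

# Any LangChain chat model may be used; ChatVertexAI is just one option.
llm = ChatVertexAI(model_name="gemini-pro", temperature=1, rate_limiter=rate_limiter)

# `prompts` is the user-defined list of prompts mentioned in the README.
# Run inside an async context (e.g. a notebook), matching the README's use of `await`.
generator = ResponseGenerator(langchain_llm=llm)
generations = await generator.generate_responses(prompts=prompts, count=25)
responses = generations["data"]["response"]
```

The resulting `responses` list is what the downstream toxicity, stereotype, and counterfactual metric steps (and `AutoEval`) evaluate.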