Commit 9d94d74: rename to chatvertexai and fix hyphenation

1 parent: 957fc67

1 file changed: +4 -4 lines changed


paper/paper.md (+4 -4)
```diff
@@ -46,7 +46,7 @@ Furthermore, LangFair is designed for real-world LLM-based systems that require
 
 
 # Generation of Evaluation Datasets
-The `langfair.generator` module offers two classes, `ResponseGenerator` and `CounterfactualGenerator`, which aim to enable user-friendly construction of evaluation datasets for text generation use cases.
+The `langfair.generator` module offers two classes, `ResponseGenerator` and \hyphenateddigits[\unknown]{`CounterfactualGenerator`}, which aim to enable user-friendly construction of evaluation datasets for text generation use cases.
 
 
 ### `ResponseGenerator` class
```
````diff
@@ -90,14 +90,14 @@ When LLMs are used to solve classification problems, traditional machine learnin
 # Semi-Automated Evaluation
 
 ### `AutoEval` class
-To streamline assessments for text generation use cases, the `AutoEval` class conducts a multi-step process (each step is described in detail above) for a comprehensive fairness assessment. Specifically, these steps include metric selection (based on whether FTU is satsified), evaluation dataset generation from user-provided prompts with a user-provided LLM, and computation of applicable fairness metrics. To implement, the user is required to supply a list of prompts and an instance of `langchain` LLM. Below we provide a basic example demonstrating the execution of `AutoEval.evaluate` with a `gemini-pro` instance.^[Note that this example assumes the user has already set up their VertexAI credentials and sampled a list of prompts from their use case prompts.]
+To streamline assessments for text generation use cases, the `AutoEval` class conducts a multi-step process (each step is described in detail above) for a comprehensive fairness assessment. Specifically, these steps include metric selection (based on whether FTU is satsified), evaluation dataset generation from user-provided prompts with a user-provided LLM, and computation of applicable fairness metrics. To implement, the user is required to supply a list of prompts and an instance of a `langchain` LLM. Below we provide a basic example demonstrating the execution of `AutoEval.evaluate` with a `gemini-pro` instance.^[Note that this example assumes the user has already set up their VertexAI credentials and sampled a list of prompts from their use case prompts.]
 
 
 ```python
-from langchain_google_vertexai import VertexAI
+from langchain_google_vertexai import ChatVertexAI
 from langfair.auto import AutoEval
 
-llm = VertexAI(model_name='gemini-pro')
+llm = ChatVertexAI(model_name='gemini-pro')
 auto_object = AutoEval(prompts=prompts, langchain_llm=llm)
 results = await auto_object.evaluate()
 ```
````
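A note on the unchanged last line of the hunk: `results = await auto_object.evaluate()` uses top-level `await`, which works in a notebook but not in a plain script. A minimal sketch of the script-side pattern, using a stand-in coroutine rather than LangFair's real API (`evaluate_stub` and its return value are illustrative assumptions; the real `AutoEval.evaluate` requires a configured LLM and prompts):

```python
import asyncio

# Stand-in for an async evaluation call such as AutoEval.evaluate();
# the real method needs VertexAI credentials and a list of prompts.
async def evaluate_stub():
    await asyncio.sleep(0)  # pretend to do async work
    return {"metrics": {"FTU": True}}

# In a script (unlike a notebook, where top-level `await` is allowed),
# drive the coroutine with an event loop:
results = asyncio.run(evaluate_stub())
print(results["metrics"]["FTU"])
```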

0 commit comments