Summary: create a **line summary** of your evaluation, in `src/lighteval/tasks/t
- `metric` (list), the metrics you want to use for your evaluation (see next section for a detailed explanation)
- `output_regex` (str), a regex string that will be used to filter your generation. (Generative metrics will only select tokens that are between the first and the second sequence matched by the regex. For example, for a regex matching `\n` and a generation `\nModel generation output\nSome other text`, the metric will only be fed `Model generation output`.)
- `frozen` (bool), for now set to False, but we will steadily pass all stable tasks to True.
- `trust_dataset` (bool), set to True if you trust the dataset.
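The `output_regex` filtering described above can be sketched as follows. This is a simplified illustration of the behavior, not lighteval's actual implementation, and the helper name is hypothetical:

```python
import re

def extract_between_matches(generation: str, output_regex: str) -> str:
    """Keep only the text between the first and second matches of output_regex.

    Sketch of the filtering described above; falls back to the full
    generation when fewer than two matches are found (an assumption).
    """
    matches = list(re.finditer(output_regex, generation))
    if len(matches) < 2:
        return generation
    # Slice from the end of the first match to the start of the second.
    return generation[matches[0].end():matches[1].start()]

# The example from the list above: regex matching `\n`.
print(extract_between_matches("\nModel generation output\nSome other text", r"\n"))
# prints: Model generation output
```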
Make sure you can launch your model with your new task using `--tasks lighteval|yournewtask|2|0`.