Perhaps this is a user error on my part, but I am having trouble evaluating models on the math:xxx subsets. For example, this command
accelerate launch --multi_gpu --num_processes=8 run_evals_accelerate.py \
--tasks="lighteval|math:algebra|5|0" \
--output_dir "./scratch/evals" \
--model_args "pretrained=Qwen/Qwen1.5-0.5B" \
--override_batch_size 1
throws the following error:
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
    main(args)
  File "/fsx/lewis/git/hf/lighteval/src/lighteval/logging/hierarchical_logger.py", line 144, in wrapper
    return fn(*args, **kwargs)
  File "/fsx/lewis/git/hf/lighteval/src/lighteval/main_accelerate.py", line 91, in main
    evaluation_tracker = evaluate(
  File "/fsx/lewis/git/hf/lighteval/src/lighteval/evaluator.py", line 64, in evaluate
    full_resps = lm.greedy_until(requests, override_bs=override_bs)
  File "/fsx/lewis/git/hf/lighteval/src/lighteval/models/base_model.py", line 346, in greedy_until
    dataset = GenerativeTaskDataset(requests=requests, dataset_splits=self.DATASET_SPLITS)
  File "/fsx/lewis/git/hf/lighteval/src/lighteval/data.py", line 44, in __init__
    sorted_enumerated_requests = sorted(enumerated_requests, key=lambda x: self._sorting_criteria(x[1]))
  File "/fsx/lewis/git/hf/lighteval/src/lighteval/data.py", line 44, in <lambda>
    sorted_enumerated_requests = sorted(enumerated_requests, key=lambda x: self._sorting_criteria(x[1]))
  File "/fsx/lewis/git/hf/lighteval/src/lighteval/data.py", line 198, in _sorting_criteria
    return -(len(toks) + gen_length)
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
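From the traceback, the failure happens in GenerativeTaskDataset._sorting_criteria in src/lighteval/data.py: the generation length attached to each request appears to be None for the math:* subsets, so -(len(toks) + gen_length) raises the TypeError above. As a rough sketch only (the request attribute names below are assumptions based on the traceback, not the actual lighteval source), treating a missing generation length as zero would at least let the sort key be computed:

# Hypothetical sketch of the sort key in src/lighteval/data.py; the request
# attribute names are assumptions, not the real lighteval implementation.
def _sorting_criteria(self, request):
    toks = request.tokenized_context      # assumed: tokenized prompt of the request
    gen_length = request.generation_size  # assumed: None for the math:* tasks
    # Fall back to 0 when no generation size is defined, which is what currently
    # raises "unsupported operand type(s) for +: 'int' and 'NoneType'".
    if gen_length is None:
        gen_length = 0
    return -(len(toks) + gen_length)

Of course, the proper fix is probably to make sure the math:* task configs define a generation size (or to handle the None upstream); the sketch is only meant to show where the None enters the computation.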