At least one of model_deployment and model must be specified #3184
Could you provide your command?

Generally speaking, there are two styles for specifying your model. You can either provide the model name in the run entry:

helm-run --run-entries mmlu:subject=anatomy,model=openai/gpt-4o-mini-2024-07-18 --suite debug -m 10

Or you can set the model to text in the run entry and pass the actual model with --models-to-run:

helm-run --run-entries mmlu:subject=anatomy,model=text --models-to-run openai/gpt-4o-mini-2024-07-18 --suite debug -m 10

The above also applies to the run entries in the conf file, if you're using one.
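A minimal sketch of the conf-file equivalent, assuming the run-entries format from the HELM documentation (the file name run_entries.conf here is just an example):

# run_entries.conf
entries: [
  {description: "mmlu:subject=anatomy,model=text", priority: 1}
]

helm-run --conf-paths run_entries.conf --models-to-run openai/gpt-4o-mini-2024-07-18 --suite debug -m 10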
When I got the error, the command that I used was this one:

I tried out both of your suggestions (using --run-entries), and this is the error I got:
This is the full run_spec code that I have:

# Imports as in recent crfm-helm run_specs modules (exact paths may vary by version):
from helm.benchmark.adaptation.adapter_spec import ADAPT_MULTIPLE_CHOICE_JOINT
from helm.benchmark.adaptation.common_adapter_specs import get_multiple_choice_adapter_spec
from helm.benchmark.metrics.common_metric_specs import get_exact_match_metric_specs
from helm.benchmark.run_spec import RunSpec, run_spec_function
from helm.benchmark.scenarios.scenario import ScenarioSpec


@run_spec_function("enem_challenge")
def get_enem_spec() -> RunSpec:
    scenario_spec = ScenarioSpec(
        class_name="helm.benchmark.scenarios.enem_challenge_scenario.ENEMChallengeScenario", args={}
    )
    adapter_spec = get_multiple_choice_adapter_spec(
        method=ADAPT_MULTIPLE_CHOICE_JOINT,
        instructions="Dê uma resposta selecionando uma letra entre as opções fornecidas. "
        "Se as opções forem A, B, C, D e E, "
        "sua resposta deve consistir em uma única letra que corresponde a resposta correta.\n"
        "Exemplo: Qual é a capital da França?\nA. Londres\nB. Paris\nC. Roma\nD. Berlim\nE. Sydney\n"
        "Resposta: B",
        input_noun="Pergunta",
        output_noun="Resposta",
        # model="maritaca-ai/sabia-7b",
    )
    return RunSpec(
        name="enem_challenge",
        scenario_spec=scenario_spec,
        adapter_spec=adapter_spec,
        metric_specs=get_exact_match_metric_specs(),
        groups=["enem_challenge"],
    )
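For reference, a minimal sketch of the first style from the reply above applied to this run spec, i.e. naming the model in the run entry rather than hardcoding it in the AdapterSpec (the suite name is just an example):

helm-run --run-entries enem_challenge:model=maritaca-ai/sabia-7b --suite testing -m 10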
Could you try this instead:

helm-run --run-entries enem_challenge:model=text --models-to-run maritaca-ai/sabia-7b -m 10 --suite testing

Edit: corrected the comment to use the right flag.
It worked!! I just needed to delete the "s" in "models" and it worked as expected! Final command line:
Thank you very much! Appreciate the patience and attention!
Sorry, looks like I made the same mistake in my previous comment as well. Glad you figured it out!
I'm trying to make a pull request to add a PT-BR benchmark to HELM, but before doing so I'm running into a curious error. When I looked at the run_specs files and all of the examples for each type of task, I realized that none of them sets a model name in the AdapterSpec, but for some reason that I don't understand, if I do not set a model I get an error and cannot run the benchmark. If I uncomment this line and set a model, I'm able to run the benchmark with that model.

I don't know if I missed something when reading the documentation, but I would like to know what I need to do to make sure I can run this benchmark with any other model without hitting this error again.
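A minimal illustrative sketch of the mechanism described in the replies above, assuming helm-run fills the model in from the run entry or --models-to-run. The names below are hypothetical stand-ins, not the actual crfm-helm internals:

from dataclasses import dataclass, replace


@dataclass(frozen=True)
class AdapterSpec:
    method: str = ""
    model: str = ""  # left empty in the run spec on purpose
    model_deployment: str = ""


# Hypothetical stand-in for HELM's model expansion step: "model=text" in a
# run entry is a placeholder, and each model named on the command line is
# substituted into a copy of the AdapterSpec before the run starts, which
# is why the run spec itself can stay model-agnostic.
def fill_model(adapter_spec: AdapterSpec, model_name: str) -> AdapterSpec:
    return replace(adapter_spec, model=model_name)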