At least one of model_deployment and model must be specified #3184

Open
thallysonjsa opened this issue Nov 25, 2024 · 5 comments

@thallysonjsa
Contributor

I'm preparing a pull request to add a PT-BR benchmark to HELM, but before opening it I'm running into a curious error. Looking at the run_specs files and the examples for each type of task, I noticed that none of them sets a model name in the AdapterSpec. Yet for some reason, if I don't set a model, I get an error and can't run the benchmark. If I uncomment the line below and set a model, I'm able to run the benchmark with that model.

adapter_spec = get_multiple_choice_adapter_spec(
    method=ADAPT_MULTIPLE_CHOICE_JOINT,
    instructions="Dê uma resposta selecionando uma letra entre as opções fornecidas. "
    "Se as opções forem A, B, C, D e E, "
    "sua resposta deve consistir em uma única letra que corresponde a resposta correta.\n"
    "Exemplo: Qual é a capital da França?\nA. Londres\nB. Paris\nC. Roma\nD. Berlim\nE. Sydney\n"
    "Resposta: B",
    input_noun="Pergunta",
    output_noun="Resposta",
    # model="maritaca-ai/sabia-7b",
    # ^ without this line (or with it commented out), I get the error below
)
Traceback (most recent call last):
  File "/home/thallyson/helm-experiments/helm-venv/bin/helm-run", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/thallyson/helm-experiments/helm/src/helm/common/hierarchical_logger.py", line 104, in wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/thallyson/helm-experiments/helm/src/helm/benchmark/run.py", line 319, in main
    run_specs = run_entries_to_run_specs(
                ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/thallyson/helm-experiments/helm/src/helm/benchmark/run.py", line 43, in run_entries_to_run_specs
    for run_spec in construct_run_specs(parse_object_spec(entry.description)):
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/thallyson/helm-experiments/helm/src/helm/benchmark/run_spec_factory.py", line 165, in construct_run_specs
    run_specs = [alter_run_spec(run_spec) for run_spec in run_specs]
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/thallyson/helm-experiments/helm/src/helm/benchmark/run_spec_factory.py", line 165, in <listcomp>
    run_specs = [alter_run_spec(run_spec) for run_spec in run_specs]
                 ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/thallyson/helm-experiments/helm/src/helm/benchmark/run_spec_factory.py", line 74, in alter_run_spec
    raise ValueError("At least one of model_deployment and model must be specified")
ValueError: At least one of model_deployment and model must be specified

I don't know if I missed something in the documentation, but what do I need to do so that I can run this benchmark with any model without hitting this error again?

@yifanmai
Collaborator

yifanmai commented Dec 2, 2024

Could you provide your helm-run command that is producing this error?

Generally speaking, there are two styles for specifying your model. You can either provide the model name in the run entry:

helm-run --run-entries mmlu:subject=anatomy,model=openai/gpt-4o-mini-2024-07-18 --suite debug -m 10

Or you can use model=text as a placeholder in the run entry and select the model with the --models-to-run flag:

helm-run --run-entries mmlu:subject=anatomy,model=text --models-to-run openai/gpt-4o-mini-2024-07-18 --suite debug -m 10

The above also applies to the run entries in the conf file if you're using --conf-paths instead of --run-entries.
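
If you go the conf route, the entries use the same syntax. As a rough sketch (the file name my_entries.conf is just an example), the file would look something like this:

entries: [
  # Same run entry as above; the model placeholder is selected via --models-to-run
  {description: "mmlu:subject=anatomy,model=text", priority: 1},
]

helm-run --conf-paths my_entries.conf --models-to-run openai/gpt-4o-mini-2024-07-18 --suite debug -m 10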

@thallysonjsa
Contributor Author

When I got the error, this is the command I used:

helm-run --run-specs enem_challenge --models-to-run maritaca-ai/sabia-7b -m 10 --suite testing

When I uncomment the line that sets the model, the same command works as expected (but only for the model hard-coded in the run spec) and doesn't return any error.

I tried both of your suggestions (using --run-entries), and this is the error I get:

Traceback (most recent call last):
  File "/home/thallyson/helm-experiments/helm-venv/bin/helm-run", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/thallyson/helm-experiments/helm/src/helm/common/hierarchical_logger.py", line 104, in wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/thallyson/helm-experiments/helm/src/helm/benchmark/run.py", line 319, in main
    run_specs = run_entries_to_run_specs(
                ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/thallyson/helm-experiments/helm/src/helm/benchmark/run.py", line 43, in run_entries_to_run_specs
    for run_spec in construct_run_specs(parse_object_spec(entry.description)):
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/thallyson/helm-experiments/helm/src/helm/benchmark/run_spec_factory.py", line 58, in construct_run_specs
    raise ValueError(f"Unknown run spec name: {name}")
ValueError: Unknown run spec name: enem_challenge,model=text

This is the full run_spec code I have:

@run_spec_function("enem_challenge")
def get_enem_spec() -> RunSpec:
    scenario_spec = ScenarioSpec(
        class_name="helm.benchmark.scenarios.enem_challenge_scenario.ENEMChallengeScenario", args={}
    )

    adapter_spec = get_multiple_choice_adapter_spec(
        method=ADAPT_MULTIPLE_CHOICE_JOINT,
        instructions="Dê uma resposta selecionando uma letra entre as opções fornecidas. "
        "Se as opções forem A, B, C, D e E, "
        "sua resposta deve consistir em uma única letra que corresponde a resposta correta.\n"
        "Exemplo: Qual é a capital da França?\nA. Londres\nB. Paris\nC. Roma\nD. Berlim\nE. Sydney\n"
        "Resposta: B",
        input_noun="Pergunta",
        output_noun="Resposta",
        # model="maritaca-ai/sabia-7b",
    )

    return RunSpec(
        name="enem_challenge",
        scenario_spec=scenario_spec,
        adapter_spec=adapter_spec,
        metric_specs=get_exact_match_metric_specs(),
        groups=["enem_challenge"],
    )

@yifanmai
Collaborator

yifanmai commented Dec 3, 2024

Could you try this instead (note the colon after the scenario name, rather than a comma):

helm-run --run-entries enem_challenge:model=text --models-to-run maritaca-ai/sabia-7b -m 10 --suite testing

Edit: Corrected this comment to use model= instead of models=

@thallysonjsa
Contributor Author

It worked!! I just needed to delete the "s" in models and it worked as expected! Final command:

helm-run --run-entries enem_challenge:model=text --models-to-run maritaca-ai/sabia-7b -m 10 --suite testing

Thank you very much! Appreciate the patience and attention!

@yifanmai
Collaborator

yifanmai commented Dec 3, 2024

Sorry, looks like I made the same mistake in my previous comment as well. Glad you figured it out!
