[FEA]: Update examples to use optimized config for nim_rag_eval_llm #202

@yczhang-nv

Description

Is this a new feature, an improvement, or a change to existing functionality?

Improvement

How would you describe the priority of this feature request?

Low (would be nice)

Please provide a clear description of the problem this feature solves

The AIQ examples should be updated to use the optimized config for nim_rag_eval_llm:

  nim_rag_eval_llm:
    _type: nim
    model_name: meta/llama-3.1-70b-instruct
    temperature: 0.0000001
    top_p: 0.0001
    max_tokens: 6

Ragas should work better with the config above.
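
For context, a sketch of how this entry would look inside a full example config. The top-level `llms:` section name is an assumption based on typical AIQ workflow config layout and is not stated in this issue; the comments on the tuned parameters are likewise explanatory assumptions:

  llms:
    nim_rag_eval_llm:
      _type: nim
      model_name: meta/llama-3.1-70b-instruct
      temperature: 0.0000001   # near-zero temperature keeps eval outputs close to deterministic
      top_p: 0.0001            # tiny top_p further restricts sampling to the top candidates
      max_tokens: 6            # assumes Ragas metric prompts only need very short responses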

Describe your ideal solution

  • Update the example configs as shown above
  • Update the related docs

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct
  • I have searched the open feature requests and have found no duplicates for this feature request
