Closed
- I have looked for existing issues (including closed) about this
Feature Request
Provide a unified LLM interface, or respect the CompletionModel in the crate wide.
Motivation
I am developing a CLI that translates natural language into executable commands. I would like to integrate multiple providers, but the problem I am facing is that rig has a different concrete Agent<M> type for each provider, which makes building a unified interface difficult. Here is a sample for reference:
pub fn get_client() -> Result<Agent<impl CompletionModel>, Error> {
    let model = std::env::var("MODEL")?;
    if !model.is_empty() {
        if let Ok(client) = std::panic::catch_unwind(|| providers::openai::Client::from_env()) {
            return Ok(client.agent(&model).build());
        }
        if let Ok(client) = std::panic::catch_unwind(|| providers::anthropic::Client::from_env()) {
            // Does not compile: the Anthropic agent's concrete CompletionModel
            // differs from the OpenAI one above, and `impl Trait` requires a
            // single concrete return type.
            return Ok(client.agent(&model).build());
        }
    }
    Err(Error::NoProvider) // placeholder; the original `Ok(())` does not type-check here
}
In this case, the compiler reports an error when I return a different provider's Agent. Maybe the CompletionModel trait should be respected across the crate?
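The restriction above is not specific to rig: a Rust function returning `impl Trait` must return one concrete type from every branch. A minimal self-contained sketch (stand-in trait and model types, not rig's actual API) shows the usual workaround of type erasure with `Box<dyn Trait>`:

```rust
// Stand-in trait mirroring the role of rig's CompletionModel (hypothetical).
trait CompletionModel {
    fn complete(&self, prompt: &str) -> String;
}

// Two distinct concrete types, like two providers' models.
struct OpenAiModel;
struct AnthropicModel;

impl CompletionModel for OpenAiModel {
    fn complete(&self, prompt: &str) -> String {
        format!("openai: {prompt}")
    }
}

impl CompletionModel for AnthropicModel {
    fn complete(&self, prompt: &str) -> String {
        format!("anthropic: {prompt}")
    }
}

// Returning `impl CompletionModel` here would not compile, because the two
// match arms produce different concrete types. Boxing erases the concrete
// type, so one signature covers every provider.
fn get_model(provider: &str) -> Box<dyn CompletionModel> {
    match provider {
        "openai" => Box::new(OpenAiModel),
        _ => Box::new(AnthropicModel),
    }
}

fn main() {
    let model = get_model("openai");
    println!("{}", model.complete("hi"));
}
```

This only works when the trait is object-safe, which is one way the feature request could be addressed crate-wide.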
Proposal
Maybe the CompletionModel trait could be respected crate-wide, so that agents built from different providers can share a single unified interface.
Alternatives
I haven't seen this discussed in existing issues.
sarocu