[model client enhancement] Support internally hosted models that can be accessed via requests #254
Labels
- [adalflow] suggest core feature: New core features in the base classes and optimization
- help wanted: Need help with input, discussion, review, and PR submission.
- P1: 2nd priority
Is your feature request related to a problem? Please describe.
Use case:
In our corporate environment, we use an internally hosted version of the ChatGPT-Omni model due to security restrictions, and we access it through a custom API. We send requests using requests.post, with headers containing an authorization token, and the prompt passed as params['messages'] in the format {'role': 'system', 'content': sys_prompt}, {'role': 'user', 'content': user_prompt}.
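For concreteness, here is a minimal sketch of the access pattern described above. The endpoint URL, model name, header layout, and response shape are all assumptions for illustration (the actual internal API may differ):

```python
import requests

# Hypothetical internal endpoint; replace with the real gateway URL.
INTERNAL_API_URL = "https://llm-gateway.internal.example.com/v1/chat/completions"


def call_internal_model(sys_prompt: str, user_prompt: str, token: str) -> str:
    headers = {"Authorization": f"Bearer {token}"}
    params = {
        "model": "chatgpt-omni",  # hypothetical internal model name
        "messages": [
            {"role": "system", "content": sys_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }
    response = requests.post(INTERNAL_API_URL, headers=headers, json=params)
    response.raise_for_status()
    # Assumes an OpenAI-style response body; adjust the parsing as needed.
    return response.json()["choices"][0]["message"]["content"]
```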
Describe the solution you'd like
There are two possible solutions: (1) a requests-based model client, or (2) since LiteLLM is built directly on HTTP requests, it most likely already supports this case.
We should choose whichever solution requires the least amount of work. Rough sketches of both options follow.
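For option (1), a rough sketch of a requests-based model client is below. The method names (convert_inputs_to_api_kwargs, call) mirror those of other adalflow clients, but the exact contract should be verified against the contributor guide before submitting a PR; the URL and token handling are placeholders.

```python
import requests

from adalflow.core.model_client import ModelClient
from adalflow.core.types import ModelType


class RequestsModelClient(ModelClient):
    """Sketch of a generic HTTP model client for internally hosted models."""

    def __init__(self, base_url: str, token: str):
        super().__init__()
        self.base_url = base_url
        self.headers = {"Authorization": f"Bearer {token}"}

    def convert_inputs_to_api_kwargs(self, input=None, model_kwargs=None, model_type=ModelType.LLM):
        # Map the rendered prompt into the {'role': ..., 'content': ...} format
        # described in the use case above.
        api_kwargs = dict(model_kwargs or {})
        api_kwargs["messages"] = [{"role": "system", "content": str(input)}]
        return api_kwargs

    def call(self, api_kwargs=None, model_type=ModelType.LLM):
        response = requests.post(self.base_url, headers=self.headers, json=api_kwargs)
        response.raise_for_status()
        return response.json()
```

For option (2), LiteLLM can already target an OpenAI-compatible endpoint through its api_base and api_key arguments, so no new client may be needed if the internal API follows the OpenAI schema. The URL, model name, and token below are placeholders:

```python
import litellm

response = litellm.completion(
    model="openai/chatgpt-omni",  # "openai/" prefix routes to an OpenAI-compatible endpoint
    api_base="https://llm-gateway.internal.example.com/v1",  # hypothetical internal URL
    api_key="YOUR_INTERNAL_TOKEN",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello"},
    ],
)
print(response.choices[0].message.content)
```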
Process
A contributor will need to post a short proposal in this issue, and then follow the guideline for adding a model_client in the contributor guide.