allows better flexibility for litellm endpoints #549
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
What about plugging in the hfh InferenceClient too (in addition to LiteLLM)? It should be very similar.
@julien-c Will be adding it in another PR!
* allows better flexibility for litellm
* add config file
* add doc
* add doc
* add doc
* add doc
* add lighteval imgs
* add lighteval imgs
* add doc
* Update docs/source/use-litellm-as-backend.mdx (Co-authored-by: Victor Muštar <[email protected]>)
* Update docs/source/use-litellm-as-backend.mdx

---------

Co-authored-by: Victor Muštar <[email protected]>
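The commit list above mentions adding a config file for the LiteLLM backend, but the file itself is not shown in this excerpt. As a rough illustration only (the field names below are hypothetical and are not taken from this PR), a LiteLLM-style backend config typically needs a model identifier with a provider prefix, an endpoint base URL, and generation parameters:

```yaml
# Hypothetical sketch of a LiteLLM backend config file.
# Field names are illustrative, not the schema this PR actually introduces.
model_parameters:
  # LiteLLM routes requests by the provider prefix in the model name,
  # e.g. "openai/...", "anthropic/...", "hosted_vllm/...".
  model_name: "openai/gpt-4o-mini"
  base_url: "http://localhost:4000/v1"   # any OpenAI-compatible endpoint
  api_key: "${LITELLM_API_KEY}"          # resolved from the environment
  generation_parameters:
    temperature: 0.0
    max_new_tokens: 512
```

Because LiteLLM dispatches on the provider prefix, the same config shape can point at OpenAI, Anthropic, a local vLLM server, or a LiteLLM proxy — which is the flexibility this PR is after.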
No description provided.