Your current environment

How would you like to use vllm

I am trying to serve LoRA adapters using the OpenAI-compatible vLLM server, following the steps indicated in the documentation (https://docs.vllm.ai/en/latest/models/lora.html):
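Concretely, the launch step looks roughly like this (a sketch of the command from the linked docs, driven from Python; the base model, the adapter name sql-lora, and the adapter path are the placeholders from the docs example, not my exact values):

```python
# Sketch: start the OpenAI-compatible server with LoRA serving enabled,
# mirroring the linked docs. Model, adapter name, and adapter path are
# placeholders from the docs example.
import subprocess
import sys

subprocess.run([
    sys.executable, "-m", "vllm.entrypoints.openai.api_server",
    "--model", "meta-llama/Llama-2-7b-hf",               # base model
    "--enable-lora",                                      # enable LoRA serving
    "--lora-modules",                                     # name=path of the adapter to expose
    "sql-lora=/path/to/yard1/llama-2-7b-sql-lora-test",
])
```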
Upon querying the /models endpoint, I can see the LoRA along with its base model.
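For example, listing the registered models (assuming the server runs on the default localhost:8000):

```python
# Sketch: list the models the server exposes; with LoRA enabled, the adapter
# name ("sql-lora" in the docs example) shows up next to the base model.
import requests

resp = requests.get("http://localhost:8000/v1/models")
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])
```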
But upon making a request to the LoRA endpoint, I always get an Internal Server Error (I have tried with my own LoRA adapters as well as with the example one from the documentation: same behavior).
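The failing request looks roughly like this (again a sketch, using the docs' placeholder adapter name sql-lora):

```python
# Sketch: a completion request addressed to the LoRA adapter by name
# ("sql-lora" is the docs placeholder registered via --lora-modules).
# This is the call that comes back with a 500 Internal Server Error.
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "sql-lora",            # adapter name instead of the base model
        "prompt": "San Francisco is a",
        "max_tokens": 32,
    },
)
print(resp.status_code)  # 500 in my case
print(resp.text)
```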
Many thanks for your help!