[Frontend] Support Tool and RAG #3971
Conversation
Hi, I wonder which version of the openai library supports this `documents` parameter? And we may need to add additional code to tokenize those RAG texts?
@leiwen83 Thanks for your feedback.
No, it's just a custom parameter in vLLM; we already have a few custom parameters in openai_server.
It seems that reusing
We leverage the
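Since `documents` is a vLLM-side extension rather than part of the official OpenAI API, a client would send it as an extra field in the request body. A minimal sketch of what such a payload could look like (the field shapes follow the OpenAI chat-completions format; `documents` itself is the custom parameter from this PR, and the document entries shown are illustrative):

```python
import json

def build_chat_request(model, messages, documents=None):
    """Build an OpenAI-style chat-completions payload.

    `documents` is a vLLM-specific extension, not an official
    OpenAI field, so it is only attached when provided.
    """
    body = {"model": model, "messages": messages}
    if documents is not None:
        body["documents"] = documents
    return body

payload = build_chat_request(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "What does the report conclude?"}],
    documents=[
        {"title": "Q3 report", "text": "Revenue grew 12% quarter over quarter."},
    ],
)
print(json.dumps(payload, indent=2))
```

With the official `openai` Python client, the same field could be passed through `extra_body={"documents": [...]}`, the client's escape hatch for non-standard request fields.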
Hi @esmeetu, maybe offering a Cohere-client-compatible API server, cohere_server, is a better choice?
@bohea Thanks! I think it's a bit early to consider adding another api_server. As we know, the Llama 3 model's tool-use and RAG performance is also good.
This is another function-tool implementation compared with #3237. This PR is simple and flexible, and the idea was inspired by https://github.com/huggingface/transformers/blob/main/src/transformers/models/cohere/tokenization_cohere_fast.py
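The Cohere tokenizer linked above turns a `documents` list into a grounded-generation prompt through the chat-template machinery. A toy sketch of that idea in plain Python (the prompt markers below are made up for illustration and are not the actual Cohere template; real implementations render a Jinja chat template instead):

```python
def render_grounded_prompt(messages, documents):
    """Toy renderer: inline each document first, then the conversation.

    Illustrative only; transformers chat templates do this with Jinja,
    and the <doc>/<role> markers here are invented for the sketch.
    """
    parts = []
    for i, doc in enumerate(documents):
        parts.append(f"<doc id={i} title={doc['title']}>\n{doc['text']}\n</doc>")
    for msg in messages:
        parts.append(f"<{msg['role']}>\n{msg['content']}\n</{msg['role']}>")
    return "\n".join(parts)

prompt = render_grounded_prompt(
    [{"role": "user", "content": "Summarize the document."}],
    [{"title": "notes", "text": "vLLM adds a custom documents parameter."}],
)
print(prompt)
```

Placing documents ahead of the conversation is the common grounded-generation layout, and it is also what makes the document block a reusable prompt prefix across requests.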
That leveraged transformers' rich chat template support.

Discussions
- Should the `tools` parameter follow OpenAI's design strictly, with its fixed JSON format, although this server is named openai_server?
- `tool_calls` in the OpenAI response is sometimes unnecessary. For example, with the prompt "Send email to Roy", my task is finished once the model returns the tool call, and I don't need `tool_calls` for a summary anymore.
- For the `documents` parameter, will we use this information for future features, like RAG speculative decoding or something else that accelerates RAG inference?

TODO
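On the point about `tool_calls` above: once the model returns a tool call, the client can execute it directly and stop, without a second model turn for a summary. A hedged sketch, assuming an OpenAI-style `tool_calls` message (the `send_email` handler is hypothetical):

```python
import json

def dispatch_tool_call(message, handlers):
    """Execute the first tool call in an assistant message and stop.

    `message` follows the OpenAI chat format; no follow-up model turn
    is made, matching the "Send email to Roy" scenario above.
    """
    call = message["tool_calls"][0]
    name = call["function"]["name"]
    args = json.loads(call["function"]["arguments"])
    return handlers[name](**args)

# Hypothetical handler; a real client would wire in an email API here.
def send_email(to, subject=""):
    return f"email sent to {to}"

message = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_0",
        "type": "function",
        "function": {"name": "send_email",
                     "arguments": json.dumps({"to": "Roy"})},
    }],
}
print(dispatch_tool_call(message, {"send_email": send_email}))  # email sent to Roy
```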
Further ideas
For RAG applications, it's also useful to have an indicator (`prefix_stop`) to adapt prefix caching.
`vllm-client`. This might not be related to this PR.
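On the prefix-caching idea above: if the documents are rendered at the front of every prompt, consecutive requests over the same documents share a long common token prefix, and an explicit `prefix_stop` marker would tell the cache exactly where that reusable span ends. A toy illustration with character-level "tokens":

```python
def shared_prefix_len(a, b):
    """Length of the common prefix of two token sequences."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

# Two requests over the same rendered document block (markers invented).
doc_block = list("<doc>RAG corpus text</doc>")
req1 = doc_block + list("<user>question one</user>")
req2 = doc_block + list("<user>question two</user>")

# Everything up to the end of the document block is reusable cache.
print(shared_prefix_len(req1, req2) >= len(doc_block))  # True
```

A `prefix_stop` indicator would let the server cache exactly the document span rather than re-detecting the common prefix per request pair.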