diff --git a/docs/source/tutorials/Qwen3_embedding.md b/docs/source/tutorials/Qwen3_embedding.md
index 667c1de61df..24f8114ffa1 100644
--- a/docs/source/tutorials/Qwen3_embedding.md
+++ b/docs/source/tutorials/Qwen3_embedding.md
@@ -30,13 +30,13 @@ Using the Qwen3-Embedding-8B model as an example, first run the docker container
 ### Online Inference
 
 ```bash
-vllm serve Qwen/Qwen3-Embedding-8B --runner pooling --host 127.0.0.1 --port 8888
+vllm serve Qwen/Qwen3-Embedding-8B --runner pooling
 ```
 
 Once your server is started, you can query the model with input prompts.
 
 ```bash
-curl http://127.0.0.1:8888/v1/embeddings -H "Content-Type: application/json" -d '{
+curl http://127.0.0.1:8000/v1/embeddings -H "Content-Type: application/json" -d '{
   "input": [
     "The capital of China is Beijing.",
     "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
@@ -48,7 +48,6 @@ curl http://127.0.0.1:8888/v1/embeddings -H "Content-Type: application/json" -d
 
 ```python
 import torch
-import vllm
 from vllm import LLM
 
 def get_detailed_instruct(task_description: str, query: str) -> str:
@@ -71,6 +70,7 @@ if __name__=="__main__":
     input_texts = queries + documents
 
     model = LLM(model="Qwen/Qwen3-Embedding-8B",
+                runner="pooling",
                 distributed_executor_backend="mp")
 
     outputs = model.embed(input_texts)
@@ -96,7 +96,7 @@ Refer to [vllm benchmark](https://docs.vllm.ai/en/latest/contributing/) for more
 Take `serve` as an example and run the following command.
 
 ```bash
-vllm bench serve --model Qwen3-Embedding-8B --backend openai-embeddings --dataset-name random --host 127.0.0.1 --port 8888 --endpoint /v1/embeddings --tokenizer /root/.cache/Qwen3-Embedding-8B --random-input 200 --save-result --result-dir ./
+vllm bench serve --model Qwen3-Embedding-8B --backend openai-embeddings --dataset-name random --endpoint /v1/embeddings --random-input 200 --save-result --result-dir ./
 ```
 
 After a few minutes, you will get the performance evaluation result.
 With the setup in this tutorial, the performance result is:
diff --git a/docs/source/tutorials/Qwen3_reranker.md b/docs/source/tutorials/Qwen3_reranker.md
index 44cffd6df9d..1fd3c5a6e1d 100644
--- a/docs/source/tutorials/Qwen3_reranker.md
+++ b/docs/source/tutorials/Qwen3_reranker.md
@@ -31,9 +31,9 @@ Using the Qwen3-Reranker-8B model as an example, first run the docker container
 ### Online Inference
 
 ```bash
-vllm serve Qwen/Qwen3-Reranker-8B --task score --host 127.0.0.1 --port 8888 --hf_overrides '{"architectures": ["Qwen3ForSequenceClassification"],"classifier_from_token": ["no", "yes"],"is_original_qwen3_reranker": true}'
+vllm serve Qwen/Qwen3-Reranker-8B --runner pooling --hf_overrides '{"architectures": ["Qwen3ForSequenceClassification"],"classifier_from_token": ["no", "yes"],"is_original_qwen3_reranker": true}'
 ```
 
 Once your server is started, you can send requests with the following examples.
 
 ### requests demo + formatting query & document
@@ -41,7 +41,7 @@ Once your server is started, you can send requests with the following examples.
 ```python
 import requests
 
-url = "http://127.0.0.1:8888/v1/rerank"
+url = "http://127.0.0.1:8000/v1/rerank"
 
 # Please use the query_template and document_template to format the query and
 # document for better reranker results.
@@ -150,7 +150,7 @@ if __name__ == "__main__":
 
     outputs = model.score(query_template.format(prefix=prefix, instruction=instruction, query=query), documents)
 
-    print([output.outputs[0].score for output in outputs])
+    print([output.outputs.score for output in outputs])
 ```
 
 If you run this script successfully, you will see a list of scores printed to the console, similar to this:
@@ -167,7 +167,7 @@ Refer to [vllm benchmark](https://docs.vllm.ai/en/latest/contributing/) for more
 Take `serve` as an example and run the following command.
 
 ```bash
-vllm bench serve --model Qwen3-Reranker-8B --backend vllm-rerank --dataset-name random-rerank --host 127.0.0.1 --port 8888 --endpoint /v1/rerank --tokenizer /root/.cache/Qwen3-Reranker-8B --random-input 200 --save-result --result-dir ./
+vllm bench serve --model Qwen3-Reranker-8B --backend vllm-rerank --dataset-name random-rerank --endpoint /v1/rerank --random-input 200 --save-result --result-dir ./
 ```
 
 After a few minutes, you will get the performance evaluation result.
 With the setup in this tutorial, the performance result is:
diff --git a/docs/source/tutorials/Qwen3_vl_embedding.md b/docs/source/tutorials/Qwen3_vl_embedding.md
new file mode 100644
index 00000000000..960c9f7b228
--- /dev/null
+++ b/docs/source/tutorials/Qwen3_vl_embedding.md
@@ -0,0 +1,117 @@
+# Qwen3-VL-Embedding
+
+## Introduction
+The Qwen3-VL-Embedding and Qwen3-VL-Reranker model series are the latest additions to the Qwen family, built upon the recently open-sourced and powerful Qwen3-VL foundation model. Specifically designed for multimodal information retrieval and cross-modal understanding, this suite accepts diverse inputs including text, images, screenshots, and videos, as well as inputs containing a mixture of these modalities. This guide describes how to run the model with vLLM Ascend.
+
+## Supported Features
+
+Refer to [supported features](../user_guide/support_matrix/supported_models.md) for the model's feature support matrix.
+
+## Environment Preparation
+
+### Model Weight
+
+- `Qwen3-VL-Embedding-8B` [Download model weight](https://www.modelscope.cn/models/Qwen/Qwen3-VL-Embedding-8B)
+- `Qwen3-VL-Embedding-2B` [Download model weight](https://www.modelscope.cn/models/Qwen/Qwen3-VL-Embedding-2B)
+
+It is recommended to download the model weights to a directory shared by all nodes, such as `/root/.cache/`.
+
+### Installation
+
+You can use our official docker image to run `Qwen3-VL-Embedding` series models.
+- Start the docker image on your node; refer to [using docker](../installation.md#set-up-using-docker).
+
+If you don't want to use the docker image above, you can also build everything from source:
+- Install `vllm-ascend` from source; refer to [installation](../installation.md).
+
+## Deployment
+
+Using the Qwen3-VL-Embedding-8B model as an example, first start the docker container as described above, then deploy the model with the following commands.
+
+### Online Inference
+
+```bash
+vllm serve Qwen/Qwen3-VL-Embedding-8B --runner pooling
+```
+
+Once your server is started, you can query the model with input prompts.
+
+```bash
+curl http://127.0.0.1:8000/v1/embeddings -H "Content-Type: application/json" -d '{
+    "input": [
+        "The capital of China is Beijing.",
+        "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
+    ]
+}'
+```
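+
+Since the endpoint is OpenAI-compatible, you can also query it from Python. Below is a minimal sketch using the `openai` client; it assumes the package is installed and that the server is running at the default address shown above.
+
+```python
+from openai import OpenAI
+
+# vllm serve does not check API keys by default, so any placeholder works.
+client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="EMPTY")
+
+resp = client.embeddings.create(
+    model="Qwen/Qwen3-VL-Embedding-8B",
+    input=[
+        "The capital of China is Beijing.",
+        "Gravity is a force that attracts two bodies towards each other.",
+    ],
+)
+# Each item in resp.data carries one embedding vector.
+print([len(d.embedding) for d in resp.data])
+```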
+
+### Offline Inference
+
+```python
+import torch
+from vllm import LLM
+
+def get_detailed_instruct(task_description: str, query: str) -> str:
+    return f'Instruct: {task_description}\nQuery:{query}'
+
+
+if __name__ == "__main__":
+    # Each query must come with a one-sentence instruction that describes the task
+    task = 'Given a web search query, retrieve relevant passages that answer the query'
+
+    queries = [
+        get_detailed_instruct(task, 'What is the capital of China?'),
+        get_detailed_instruct(task, 'Explain gravity')
+    ]
+    # No need to add instruction for retrieval documents
+    documents = [
+        "The capital of China is Beijing.",
+        "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
+    ]
+    input_texts = queries + documents
+
+    model = LLM(model="Qwen/Qwen3-VL-Embedding-8B",
+                runner="pooling",
+                distributed_executor_backend="mp")
+
+    outputs = model.embed(input_texts)
+    embeddings = torch.tensor([o.outputs.embedding for o in outputs])
+    scores = (embeddings[:2] @ embeddings[2:].T)
+    print(scores.tolist())
+```
+
+If you run this script successfully, you will see output like the following:
+
+```bash
+Adding requests: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 192.47it/s]
+Processed prompts:   0%|          | 0/4 [00:00<?, ?it/s]
+```
diff --git a/docs/source/tutorials/Qwen3_vl_reranker.md b/docs/source/tutorials/Qwen3_vl_reranker.md
new file mode 100644
--- /dev/null
+++ b/docs/source/tutorials/Qwen3_vl_reranker.md
+# Qwen3-VL-Reranker
+
+### requests demo + formatting query & document
+
+```python
+import requests
+
+url = "http://127.0.0.1:8000/v1/rerank"
+
+# Please use the query_template and document_template to format the query and
+# document for better reranker results.
+
+prefix = '<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n'
+suffix = "<|im_end|>\n<|im_start|>assistant\n"
+
+query_template = "{prefix}<Instruct>: {instruction}\n<Query>: {query}\n"
+document_template = "<Document>: {doc}{suffix}"
+
+instruction = (
+    "Given a search query, retrieve relevant candidates that answer the query."
+)
+
+query = "What is the capital of China?"
+
+documents = [
+    "The capital of China is Beijing.",
+    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
+]
+
+documents = [
+    document_template.format(doc=doc, suffix=suffix) for doc in documents
+]
+
+response = requests.post(url,
+                         json={
+                             "query": query_template.format(prefix=prefix, instruction=instruction, query=query),
+                             "documents": documents,
+                         }).json()
+
+print(response)
+```
+
+If you run this script successfully, you will see a list of scores printed to the console, similar to this:
+
+```bash
+TODO: add the output
+```
+
+### Offline Inference
+
+```python
+from vllm import LLM
+
+model_name = "Qwen/Qwen3-VL-Reranker-8B"
+
+# What is the difference between the official original version and one
+# that has been converted into a sequence classification model?
+# Qwen3-Reranker is a language model that performs reranking using the
+# logits of the "no" and "yes" tokens.
+# It needs to compute the logits over all 151,669 vocabulary tokens, making
+# this method extremely inefficient, not to mention incompatible with the
+# vllm score API.
+# A method for converting the original model into a sequence classification
+# model was proposed. See: https://huggingface.co/Qwen/Qwen3-Reranker-0.6B/discussions/3
+# Models converted offline using this method can not only be more efficient
+# and support the vllm score API, but also make the init parameters more
+# concise, for example:
+# model = LLM(model="Qwen/Qwen3-VL-Reranker-8B", runner="pooling")
+
+# If you want to load the official original version, the init parameters are
+# as follows.
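+# (The same overrides can also be passed to `vllm serve` via `--hf_overrides`
+# for online serving, as the Qwen3-Reranker tutorial shows for the text-only
+# model.)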
+
+model = LLM(
+    model=model_name,
+    runner="pooling",
+    hf_overrides={
+        # Manually route to the sequence classification architecture.
+        # This tells vLLM to use Qwen3VLForSequenceClassification instead of
+        # the default Qwen3VLForConditionalGeneration.
+        "architectures": ["Qwen3VLForSequenceClassification"],
+        # Specify which token logits to extract from the language model head.
+        # The original reranker uses the "no" and "yes" token logits for scoring.
+        "classifier_from_token": ["no", "yes"],
+        # Enable special handling for original Qwen3-Reranker models.
+        # This flag triggers conversion logic that transforms the two token
+        # vectors into a single classification vector.
+        "is_original_qwen3_reranker": True,
+    },
+)
+
+# Why we need hf_overrides for the official original version:
+# vLLM converts the model to Qwen3VLForSequenceClassification when it is
+# loaded, for better performance.
+# - First, `"architectures": ["Qwen3VLForSequenceClassification"]` manually
+#   routes the model to Qwen3VLForSequenceClassification.
+# - Then, `"classifier_from_token": ["no", "yes"]` extracts the vectors
+#   corresponding to these tokens from lm_head.
+# - Third, the two vectors are converted into a single classification vector.
+#   This conversion logic is enabled by `"is_original_qwen3_reranker": True`.
+
+# Please use the query_template and document_template to format the query and
+# document for better reranker results.
+
+prefix = '<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n'
+suffix = "<|im_end|>\n<|im_start|>assistant\n"
+
+query_template = "{prefix}<Instruct>: {instruction}\n<Query>: {query}\n"
+document_template = "<Document>: {doc}{suffix}"
+
+if __name__ == "__main__":
+    instruction = (
+        "Given a search query, retrieve relevant candidates that answer the query."
+    )
+
+    query = "What is the capital of China?"
+
+    documents = [
+        "The capital of China is Beijing.",
+        "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
+    ]
+
+    documents = [document_template.format(doc=doc, suffix=suffix) for doc in documents]
+
+    outputs = model.score(query_template.format(prefix=prefix, instruction=instruction, query=query), documents)
+
+    print([output.outputs.score for output in outputs])
+```
+
+If you run this script successfully, you will see a list of scores printed to the console, similar to this:
+
+```bash
+TODO:
+```
+
+## Performance
+
+Run the performance benchmark with `Qwen3-VL-Reranker-8B` as an example.
+Refer to [vllm benchmark](https://docs.vllm.ai/en/latest/contributing/) for more details.
+
+Take `serve` as an example and run the following command.
+
+```bash
+vllm bench serve --model Qwen/Qwen3-VL-Reranker-8B --backend vllm-rerank --dataset-name random-rerank --endpoint /v1/rerank --random-input 200 --save-result --result-dir ./
+```
+
+After a few minutes, you will get the performance evaluation result.
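+
+Because `--save-result` is passed, the run also writes a JSON summary into `--result-dir`. Below is a minimal sketch for loading it; the file name is timestamped and the exact field names can differ across vLLM versions, so treat the keys here as assumptions.
+
+```python
+import glob
+import json
+
+# Assumes the benchmark summary is the newest .json file in the result dir.
+path = sorted(glob.glob("./*.json"))[-1]
+with open(path) as f:
+    result = json.load(f)
+
+# Print a couple of common summary fields if they are present.
+for key in ("request_throughput", "mean_e2el_ms"):
+    print(key, result.get(key))
+```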
+With the setup in this tutorial, the performance result is:
+
+```bash
+TODO:
+```
diff --git a/docs/source/user_guide/support_matrix/supported_models.md b/docs/source/user_guide/support_matrix/supported_models.md
index 3821e1138e2..4295fd99b9a 100644
--- a/docs/source/user_guide/support_matrix/supported_models.md
+++ b/docs/source/user_guide/support_matrix/supported_models.md
@@ -48,7 +48,9 @@ Get the latest info here: https://github.com/vllm-project/vllm-ascend/issues/160
 | Model | Support | Note | BF16 | Supported Hardware | W8A8 | Chunked Prefill | Automatic Prefix Cache | LoRA | Speculative Decoding | Async Scheduling | Tensor Parallel | Pipeline Parallel | Expert Parallel | Data Parallel | Prefill-decode Disaggregation | Piecewise AclGraph | Fullgraph AclGraph | max-model-len | MLP Weight Prefetch | Doc |
 |-------------------------------|-----------|----------------------------------------------------------------------|------|--------------------|------|-----------------|------------------------|------|----------------------|------------------|-----------------|-------------------|-----------------|---------------|-------------------------------|--------------------|--------------------|---------------|---------------------|-----|
 | Qwen3-Embedding | ✅ | || A2/A3 |||||||||||||||| [Qwen3_embedding](../../tutorials/Qwen3_embedding.md)|
+| Qwen3-VL-Embedding | ✅ | || A2/A3 |||||||||||||||| [Qwen3_vl_embedding](../../tutorials/Qwen3_vl_embedding.md)|
 | Qwen3-Reranker | ✅ | || A2/A3 |||||||||||||||| [Qwen3_reranker](../../tutorials/Qwen3_reranker.md)|
+| Qwen3-VL-Reranker | ✅ | || A2/A3 |||||||||||||||| [Qwen3_vl_reranker](../../tutorials/Qwen3_vl_reranker.md)|
 | Molmo | ✅ | [1942](https://github.com/vllm-project/vllm-ascend/issues/1942) || A2/A3 |||||||||||||||||
 | XLM-RoBERTa-based | ✅ | || A2/A3 |||||||||||||||||
 | Bert | ✅ | || A2/A3 |||||||||||||||||