[Feature] support minicpm-v_2_6 for pytorch engine. #2767
Conversation
cc @grimoire
@@ -72,6 +72,7 @@ The TurboMind engine doesn't support window attention. Therefore, for models tha
 | DeepSeek-MoE | 16B | LLM | Yes | No | No | No | No |
 | DeepSeek-V2 | 16B, 236B | LLM | Yes | No | No | No | No |
 | MiniCPM3 | 4B | LLM | Yes | Yes | Yes | No | No |
+| MiniCPM-V-2_6 | 8B | LLM | Yes | No | No | No | No |
W4A16 was supported in my testing.
updated.
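For reference, a minimal sketch of the W4A16 path discussed above, assuming the model is first quantized with the existing `lmdeploy lite auto_awq` CLI and that the resulting directory loads through the PyTorch engine; the work dir, image path and prompt are illustrative:

```python
from lmdeploy import pipeline, PytorchEngineConfig
from lmdeploy.vl import load_image

# Hypothetical work dir produced beforehand by, e.g.:
#   lmdeploy lite auto_awq openbmb/MiniCPM-V-2_6 --work-dir ./minicpm-v-2_6-w4a16
pipe = pipeline('./minicpm-v-2_6-w4a16',
                backend_config=PytorchEngineConfig(session_len=8192))
image = load_image('demo.jpg')  # illustrative local image
print(pipe(('describe this image', image)).text)
```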
@@ -72,6 +72,7 @@ The TurboMind engine doesn't support window attention. Therefore, for models tha
 | DeepSeek-MoE | 16B | LLM | Yes | No | No | No | No |
 | DeepSeek-V2 | 16B, 236B | LLM | Yes | No | No | No | No |
 | MiniCPM3 | 4B | LLM | Yes | Yes | Yes | No | No |
+| MiniCPM-V-2_6 | 8B | LLM | Yes | No | No | No | Yes |
Could you update supported_models in the README as well?
https://github.com/InternLM/lmdeploy?tab=readme-ov-file#supported-models
MiniCPM-V-2_6 already exists in the README because the TurboMind engine supported it before, and the support list in the README does not distinguish between the TurboMind and PyTorch engines.
LGTM
* feature: support qwen2.5 function_call (#2737): support qwen2.5 tools_call; fix npe bug; fix template inconsistency; adopt review suggestions (x4); support multi-tool calling (x2); add '\n' between each tool; add ensure_ascii=False; fix rfind; rename tools_call to tool_calls; add toolName in tool_response; fix some '\n' errors; remove toolname; replace '\n' with self.separator; add doc for multiple tool calling; update doc; add qwen2.5 prompt template test; add qwen2.5 no-tool-call prompt test (Co-authored-by: gaozixiang <[email protected]>)
* Update supported models & Ascend doc (#2765): update ascend supported model list; fix markdown (x2); fix lint; update get_started.md (x2)
* [CI] Split vl testcases into turbomind and pytorch backend (#2751): update (x17)
* [Feature] support minicpm-v_2_6 for pytorch engine. (#2767): support minicpmv_2_6; update supported_models (x2)
* Support qwen2-vl AWQ quantization (#2787): update config.yaml (Co-authored-by: zhulinJulia24 <[email protected]>)
* [dlinfer] Fix qwenvl rope error for dlinfer backend (#2795)
* Optimize update_step_ctx on Ascend (#2804): opt update_ctx for ascend; fix lint

Co-authored-by: 逝夜长歌 <[email protected]>, gaozixiang <[email protected]>, jinminxi104 <[email protected]>, zhulinJulia24 <[email protected]>, zhoushenglong <[email protected]>, AllentDan <[email protected]>, Wei Tao <[email protected]>
* refactor VL modules for internvl and qwen2-vl (#2764): qwen2-vl; internvl; qwen2
* Refactor VL modules for glm4v, deepseek-vl, llava-hf, cogvlm (#2772): get image_tokens_per_patch for internvl2; deepseek-vl; cogvlm; glm4v; update internvl; internvl_llava; llava; llava_hf; rollback llava, internvl-llava
* Refactor VL modules for qwen-vl, llava and llava_next (#2773): refactor qwen; update internvl; update llava_hf; update qwen2-vl; llava_next; update llava_next; update llava (x3)
* Refactor VL modules for qwen2-vl (#2777): qwen2
* Fix side-effect to internvl (#2778): fix internvl
* Refactor VL modules for phi3-vision (#2779): phi3-vision
* Refactor VL modules for mllama and yi-vl (#2781): refactor yi-vl; refactor mllama
* Refactor VLM module for minicpm and molmo (#2794)
* Refactor VLM modules for xcomposer series (#2796)
* Refactor VLM modules for internvl-llava (#2797)
* Refactor VLM modules v2 (#2806): internvl2 v2; cogvlm; deepseek-vl; glm-4v; llava-hf; llava-next; llava; internvl-llava; mllama; phi3-vision; qwen; qwen2; yi-vl; xcomposer; minicpm; molmo; update (x2)
* Remove vl template (#2809)
* Resolve conflicts (#2811): re-applies the commits from #2737, #2765, #2751, #2767, #2787, #2795 and #2804 listed above
* PytorchEngine refactor multimodal (#2742): WIP; support mrope; support long context; support causal=false; fix mask; flash attn bound; optimize; Moskau, Moskau, wirf die Gläser an die Wand; YMCA; optimize mllama; update processor; support cogvlm; all work and no play make jack a dull boy; upgrade triton; support qwen2vl; support internvl; phi3-v WIP; glm4v WIP; support chatglm and cogvlm; use image tokens; support llava; support internvl-mono; phi3v, mllama; add llavanext; use img token ids; support multi-image chatglm, cogvlm; fix ut; minor-fix
* minor-fix (#2813): fix; fix mono; fix docs; read norm_type; super().collect_images -> self.collect_images; add note in supported models; define the parameters clearly; better streaming; fix molmo
* Fix vision model batch inference (#2868): remove forward from vl models that are not supported by tm; support max_batch_size; warn glm4v does not support multi images; unconst; fix deepseek-vl; fix internvl; fix llava; fix minicpm 2.6; fix callback; fix minicpm v2.5; fix minicpm v2.6; update llava_next.py; remove hardcode from xcomposer2.py; rollback supported_models; change to staticmethod; optimize tp; fix vlm quantization; update doc

Co-authored-by: q yao <[email protected]>
For video input, the user needs to extract frames and pass them in. @ransheng11
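Since video is not consumed directly, here is a minimal frame-extraction sketch along the lines described above, assuming OpenCV is available for decoding; the sampling interval, file names and prompt are illustrative:

```python
import cv2  # assumed available for video decoding
from PIL import Image
from lmdeploy import pipeline, PytorchEngineConfig

def sample_frames(video_path: str, interval: int = 30) -> list:
    """Keep every `interval`-th frame of the video as a PIL image."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % interval == 0:
            # OpenCV decodes to BGR; convert to RGB before building the PIL image
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
        idx += 1
    cap.release()
    return frames

pipe = pipeline('openbmb/MiniCPM-V-2_6', backend_config=PytorchEngineConfig())
frames = sample_frames('demo.mp4')
# a list of images can be passed alongside the prompt for multi-image input
print(pipe(('describe what happens in the video', frames)).text)
```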
Motivation
Support the minicpm-v_2_6 model in the PyTorch engine.
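A minimal usage sketch of the new backend path, assuming the Hugging Face model id openbmb/MiniCPM-V-2_6; the image path is illustrative:

```python
from lmdeploy import pipeline, PytorchEngineConfig
from lmdeploy.vl import load_image

# PytorchEngineConfig routes the request through the PyTorch engine added
# here, rather than TurboMind, which supported this model already.
pipe = pipeline('openbmb/MiniCPM-V-2_6',
                backend_config=PytorchEngineConfig(session_len=8192))
image = load_image('demo.jpg')  # illustrative local image
print(pipe(('describe this image', image)).text)
```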
Modification