Mistral Large 3 NVFP4 TRTLLM MoE support #15049
Conversation
Summary of Changes

Hello @elvischenv, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request integrates TensorRT-LLM (TRTLLM) support for Mixture-of-Experts (MoE) layers, specifically targeting Mistral Large 3 models with NVFP4 quantization. The primary goal is to enhance inference performance, which has been validated by benchmark results showing a substantial increase in output token throughput. The changes adapt the quantization pipeline to prepare weights for TRTLLM kernels and modify the MoE layer to leverage these optimized operations, maintaining both speed and accuracy.
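The summary describes two coordinated pieces: preparing weights for TRTLLM kernels at load time, and adding a TRTLLM execution path to the MoE layer. A minimal sketch of what that layer-level branch could look like follows; the names (`FusedMoELayer`, `use_trtllm_kernels`, `_forward_trtllm`, `_forward_cutlass`) are illustrative assumptions, not the PR's actual identifiers.

```python
# Hedged sketch only: a backend branch in a fused-MoE layer. All names are
# assumptions; the PR's real classes and methods may differ.
import torch


class FusedMoELayer(torch.nn.Module):
    def __init__(self, use_trtllm_kernels: bool):
        super().__init__()
        # Decided once at load time, e.g. from --moe-runner-backend.
        self.use_trtllm_kernels = use_trtllm_kernels

    def forward(self, hidden_states: torch.Tensor, router_logits: torch.Tensor):
        if self.use_trtllm_kernels:
            # New path in this PR: NVFP4 weights pre-shuffled for TRTLLM kernels.
            return self._forward_trtllm(hidden_states, router_logits)
        # Existing default path: CUTLASS fused-MoE kernels.
        return self._forward_cutlass(hidden_states, router_logits)

    def _forward_trtllm(self, hidden_states, router_logits):
        raise NotImplementedError("placeholder for the TRTLLM kernel call")

    def _forward_cutlass(self, hidden_states, router_logits):
        raise NotImplementedError("placeholder for the CUTLASS kernel call")
```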
Code Review
This pull request adds support for Mistral Large 3 with NVFP4 quantization on the TRTLLM backend, which is a valuable enhancement. The changes are well-structured, primarily introducing a new execution path for this specific configuration. The refactoring of prepare_static_weights_for_kernel into a reusable utility function is a notable improvement in code organization. The performance and accuracy benchmarks provided in the pull request description are very helpful and demonstrate significant gains. My review includes one suggestion to improve code clarity and maintainability regarding a hardcoded value.
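For context, the refactor mentioned above, hoisting `prepare_static_weights_for_kernel` into a shared utility, might take roughly this shape. The tensor names, the named constant, and the transform itself are assumptions for illustration; the real kernel-specific layout shuffle is elided.

```python
# Hedged sketch of a module-level weight-prep utility shared by MoE quant
# methods. The interleaving the TRTLLM kernels require is elided; only the
# structure of the refactor is shown.
import torch

# A named constant in place of a hardcoded value, per the review suggestion.
# The actual value and its meaning in the PR may differ.
TRTLLM_WEIGHT_ALIGNMENT = 128  # assumed alignment for the kernel layout


def prepare_static_weights_for_kernel(
    w13_weight: torch.Tensor,  # packed expert weights, gate + up projections
    w2_weight: torch.Tensor,   # packed expert weights, down projection
) -> tuple[torch.Tensor, torch.Tensor]:
    """Reorder packed expert weights into the layout the TRTLLM MoE kernels
    expect, once at load time instead of on every forward pass."""
    # Real code would apply the kernel-specific row/column shuffle here;
    # .contiguous() stands in for that transform in this sketch.
    return w13_weight.contiguous(), w2_weight.contiguous()
```

Keeping this at module level lets multiple quant methods share one load-time code path, which is the maintainability win the review highlights.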
/tag-and-rerun-ci
…n3_pp * 'main' of https://github.com/sgl-project/sglang: (74 commits)
[bug fix][pp] fix inconsistent latency between tp (sgl-project#15379)
Fix warp illegal instruction in kimi k2 thinking PCG (sgl-project#15306)
Fix gpt-oss yarn with `truncate` argument (sgl-project#14270)
Monkey patch deepseek-ocr's `v_head_dim` (sgl-project#15384)
[model-gateway] Replace PolicyRegistry RwLock with DashMap for lock-free policy lookups (sgl-project#15361)
[PP] Fix dynamic chunking strategy for PP (sgl-project#15372)
Fix issue: ENABLE_BELOW_SM90 cannot be enabled on aarch64 CPU (sgl-project#12967)
Split test_piecewise_cuda_graph.py to optimize CI resource usage (sgl-project#15290)
unified management of environment variables for vlm cuda ipc transport (sgl-project#14501)
Mistral Large 3 NVFP4 TRTLLM MoE support (sgl-project#15049)
fix: adjust time for test_epd_disaggregation.py (sgl-project#15354)
Add doc for qwen3 next (sgl-project#15337)
feat: DeepSeek-V3.2 Streaming tool call output (sgl-project#15278)
Feature/trtllm mha workspace size configurable sgl-project#15089 (sgl-project#15131)
[VLM] Support cos sin cache for Qwen3-VL & GLM-4.1V (sgl-project#15205)
[Deepseek V3.2] Support Overlap Spec + NSA (sgl-project#15307)
Add request-level timestamp for when prefill finishes (sgl-project#14860)
[CI] Migrate LoRA tests to test/registered/lora/ (sgl-project#15176)
Reserve more memory for DeepSeekOCR model and adjust server start timeout for DeepGEMM to reduce flakiness (sgl-project#15277)
Fix condition check for require_gathered_buffer (sgl-project#15328)
...
Motivation
Support the TRTLLM MoE runner backend for Mistral Large 3 with NVFP4 quantization.
Tested with the command from #14485 plus `--moe-runner-backend flashinfer_trtllm` (a hedged launch sketch follows the results below).

Accuracy
TRTLLM MoE:
Default cutlass MoE:
Perf
TRTLLM MoE:
Default cutlass MoE:
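For reproducibility, here is a hedged sketch of exercising the new flag through sglang's offline engine. `MODEL_PATH` and `tp_size` are placeholders, and the full command with its remaining flags lives in #14485 and is not reproduced here.

```python
# Hedged sketch: exercising --moe-runner-backend flashinfer_trtllm via
# sglang's offline engine. MODEL_PATH and tp_size are placeholders.
import sglang as sgl

llm = sgl.Engine(
    model_path="MODEL_PATH",                 # NVFP4 Mistral Large 3 checkpoint (placeholder)
    moe_runner_backend="flashinfer_trtllm",  # the backend this PR adds support for
    tp_size=8,                               # assumed parallelism for a model this size
)
print(llm.generate("Hello", {"max_new_tokens": 16}))
```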
Modifications
Add support for a Compressed Tensors W4A4 NVFP4 MoE method backed by TRTLLM kernels (sketched below).
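A minimal sketch of how such a method could slot into quant-method selection, assuming illustrative class and function names rather than the PR's actual code:

```python
# Hedged sketch: dispatching to a TRTLLM-backed NVFP4 MoE method. Class and
# function names are illustrative; only the shape of the change is shown.


class CompressedTensorsW4A4Fp4MoECutlassMethod:
    """Stand-in for the existing default CUTLASS-backed method."""


class CompressedTensorsW4A4Fp4MoETrtllmMethod:
    """Stand-in for the method this PR adds: the same W4A4 NVFP4 scheme, but
    weights are prepared for, and executed by, TRTLLM kernels."""


def select_w4a4_nvfp4_moe_method(runner_backend: str):
    # New branch from this PR: route NVFP4 MoE through TRTLLM when the
    # flashinfer_trtllm runner backend is selected.
    if runner_backend == "flashinfer_trtllm":
        return CompressedTensorsW4A4Fp4MoETrtllmMethod()
    return CompressedTensorsW4A4Fp4MoECutlassMethod()
```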