
Add Conv2dLayer/Conv3dLayer to fix PyTorch 2.9.1 CuDNN Conv3d bug#20282

Merged
mickqian merged 1 commit into sgl-project:main from yhyang201:conv-layer-abstraction
Mar 15, 2026
Conversation

@yhyang201
Collaborator

@yhyang201 yhyang201 commented Mar 10, 2026

Motivation

Many thanks for the benchmark data in this PR: #19788

  • Add Conv2dLayer/Conv3dLayer in sglang/srt/layers/conv.py. Conv3dLayer enables the unfold+linear path by default to avoid the PyTorch 2.9.1 + CuDNN < 9.15 Conv3d
    bug (Severe Performance Regression in Conv3D / bf16 in PyTorch 2.9.1, pytorch/pytorch#168167). Conv2dLayer is a drop-in replacement for nn.Conv2d with the linear optimization available as an opt-in.
  • Migrate 3 Conv3d models (qwen3_vl, qwen2_vl, glm4v) and 12 Conv2d patch embedding models to use the new layers.
  • Remove check_torch_2_9_1_cudnn_compatibility() from server_args.py.
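The workaround rests on a standard identity: a convolution whose kernel exactly tiles the input (kernel_size == stride, no padding, dilation, or groups) is just a linear layer applied to non-overlapping patches, so the buggy CuDNN Conv3d kernel never needs to run. A minimal sketch of the idea for the Conv3d case (illustrative only; the helper name and shapes are assumptions, not the PR's actual code):

```python
import torch
import torch.nn.functional as F

def patch_conv3d_as_linear(x, weight, bias=None):
    # Valid only when kernel_size == stride, padding == 0, dilation == 1,
    # groups == 1: each output position sees one non-overlapping patch.
    out_ch = weight.shape[0]
    kd, kh, kw = weight.shape[2:]
    n, c, d, h, w = x.shape
    # Split each spatial dim into (num_patches, patch_size).
    x = x.view(n, c, d // kd, kd, h // kh, kh, w // kw, kw)
    # Order per-patch elements as (c, kd, kh, kw) to match weight.view(out_ch, -1).
    x = x.permute(0, 2, 4, 6, 1, 3, 5, 7).reshape(n, -1, c * kd * kh * kw)
    # One GEMM instead of a Conv3d kernel call.
    return F.linear(x, weight.view(out_ch, -1), bias)  # (n, num_patches, out_ch)

x = torch.randn(2, 3, 4, 8, 8)
w = torch.randn(16, 3, 2, 2, 2)
b = torch.randn(16)
ref = F.conv3d(x, w, b, stride=(2, 2, 2)).flatten(2).transpose(1, 2)
fast = patch_conv3d_as_linear(x, w, b)
print(torch.allclose(ref, fast, atol=1e-5))  # True
```

Patch embeddings in ViT-style vision towers satisfy exactly these constraints, which is why the fast path can be the default for Conv3dLayer.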

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

…mization

Add unified Conv2d/Conv3d abstraction in sglang/srt/layers/conv.py that
automatically uses unfold+F.linear when kernel_size == stride, padding == 0,
dilation == 1, groups == 1. This is ~4-14x faster for patch embeddings and
avoids the PyTorch 2.9.1 + CuDNN < 9.15 Conv3d bug.

Migrate 15 vision models to use the new layers and remove the global
check_torch_2_9_1_cudnn_compatibility() check from server_args.py.

Co-Authored-By: wili-65535 <wili-65535@users.noreply.github.com>
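The eligibility condition the commit message describes (kernel_size == stride, padding == 0, dilation == 1, groups == 1) can be sketched as a simple predicate; the function name and argument shapes here are illustrative, not the PR's exact code:

```python
def can_use_unfold_linear(kernel_size, stride, padding, dilation, groups):
    # The fast unfold+linear path is valid only when patches tile the
    # input exactly, so every output element sees one disjoint patch.
    return (
        tuple(kernel_size) == tuple(stride)
        and all(p == 0 for p in padding)
        and all(d == 1 for d in dilation)
        and groups == 1
    )

# Typical ViT patch embedding: eligible.
print(can_use_unfold_linear((14, 14), (14, 14), (0, 0), (1, 1), 1))  # True
# Ordinary 3x3 stride-1 padded conv: not eligible, fall back to nn.Conv2d.
print(can_use_unfold_linear((3, 3), (1, 1), (1, 1), (1, 1), 1))      # False
```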
@github-actions github-actions bot added the Multi-modal multi-modal language model label Mar 10, 2026
@yhyang201 yhyang201 marked this pull request as ready for review March 10, 2026 12:52
@yhyang201 yhyang201 changed the title from "Add Conv2dLayer/Conv3dLayer to fix PyTorch 2.9.1 CuDNN Conv3d bug" to "[DO NOT MERGE] Add Conv2dLayer/Conv3dLayer to fix PyTorch 2.9.1 CuDNN Conv3d bug" Mar 10, 2026
@yhyang201
Collaborator Author

/tag-and-rerun-ci

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces custom 2D and 3D convolution layers designed to enhance performance and stability, particularly for patch embeddings in multimodal models. The primary motivation is to resolve a critical bug in PyTorch 2.9.1's nn.Conv3d with specific CuDNN versions, which caused significant performance degradation. By implementing an optimized unfold + F.linear path for eligible convolutions, the new layers provide a robust solution, allowing for the removal of a previously necessary compatibility warning.

Highlights

  • New Convolution Layers: Added Conv2dLayer and Conv3dLayer to sglang/srt/layers/conv.py to provide optimized convolution operations.
  • Bug Fix for PyTorch 2.9.1 CuDNN Conv3d: Conv3dLayer now defaults to an unfold+linear optimization path, which circumvents a known performance and memory bug in PyTorch 2.9.1 when used with CuDNN versions older than 9.15.
  • Model Migration: Migrated 3 Conv3d models (qwen3_vl, qwen2_vl, glm4v) and 12 Conv2d patch embedding models to utilize the newly introduced Conv2dLayer and Conv3dLayer.
  • Removed Compatibility Check: The check_torch_2_9_1_cudnn_compatibility() function was removed from server_args.py as the new convolution layers address the underlying issue.


Changelog
  • python/sglang/srt/layers/conv.py
    • Added Conv2dLayer and Conv3dLayer classes, which provide unfold+linear optimization for specific convolution configurations and address a PyTorch 2.9.1 CuDNN bug.
  • python/sglang/srt/models/clip.py
    • Imported Conv2dLayer.
    • Updated self.patch_embedding to use Conv2dLayer instead of nn.Conv2d.
  • python/sglang/srt/models/dots_vlm_vit.py
    • Imported Conv2dLayer.
    • Updated self.proj to use Conv2dLayer instead of nn.Conv2d.
  • python/sglang/srt/models/glm4v.py
    • Imported Conv3dLayer.
    • Replaced nn.Conv3d with Conv3dLayer for self.proj.
    • Removed the copy_conv3d_weight_to_linear method and related linear conversion logic.
    • Simplified dtype and device properties by removing conditional checks for self.linear.
    • Removed the call to self.visual.patch_embed.copy_conv3d_weight_to_linear() from load_weights.
  • python/sglang/srt/models/idefics2.py
    • Imported Conv2dLayer.
    • Updated self.patch_embedding to use Conv2dLayer instead of nn.Conv2d.
  • python/sglang/srt/models/internvl.py
    • Imported Conv2dLayer.
    • Updated self.patch_embedding to use Conv2dLayer instead of nn.Conv2d.
  • python/sglang/srt/models/kimi_k25.py
    • Imported Conv2dLayer.
    • Updated self.proj to use Conv2dLayer instead of nn.Conv2d.
  • python/sglang/srt/models/kimi_vl_moonvit.py
    • Imported Conv2dLayer.
    • Updated self.proj to use Conv2dLayer instead of nn.Conv2d.
  • python/sglang/srt/models/midashenglm.py
    • Imported Conv2dLayer.
    • Updated self.proj to use Conv2dLayer instead of nn.Conv2d.
  • python/sglang/srt/models/paddleocr_vl.py
    • Imported Conv2dLayer.
    • Updated self.patch_embedding to use Conv2dLayer instead of nn.Conv2d.
  • python/sglang/srt/models/pixtral.py
    • Imported Conv2dLayer.
    • Updated self.patch_conv to use Conv2dLayer instead of nn.Conv2d.
  • python/sglang/srt/models/qwen2_vl.py
    • Imported Conv3dLayer.
    • Updated self.proj to use Conv3dLayer instead of nn.Conv3d.
  • python/sglang/srt/models/qwen3_vl.py
    • Imported Conv3dLayer.
    • Updated self.proj to use Conv3dLayer instead of nn.Conv3d.
  • python/sglang/srt/models/siglip.py
    • Imported Conv2dLayer.
    • Updated self.patch_embedding to use Conv2dLayer instead of nn.Conv2d.
  • python/sglang/srt/models/step3_vl.py
    • Imported Conv2dLayer.
    • Updated self.patch_embedding to use Conv2dLayer instead of nn.Conv2d.
  • python/sglang/srt/models/step3_vl_10b.py
    • Imported Conv2dLayer.
    • Updated self.conv1 to use Conv2dLayer instead of nn.Conv2d.
  • python/sglang/srt/server_args.py
    • Removed the check_torch_2_9_1_cudnn_compatibility method and its invocation from check_server_args.
  • test/unit/test_conv_layer.py
    • Added unit tests for Conv2dLayer and Conv3dLayer, covering basic functionality, linear optimization, padding modes, and error handling.
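The kind of equivalence the new unit tests assert can be reproduced for the 2D case with F.unfold; this is a sketch of the check, not the contents of test/unit/test_conv_layer.py:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A patch-embedding-style conv: kernel_size == stride, no padding.
conv = nn.Conv2d(3, 8, kernel_size=4, stride=4, bias=True)
x = torch.randn(2, 3, 16, 16)

ref = conv(x)                                   # (2, 8, 4, 4)

# Equivalent unfold + linear path: extract the 16 disjoint 4x4 patches,
# then apply the flattened conv weight as a single matmul.
patches = F.unfold(x, kernel_size=4, stride=4)  # (2, 3*4*4, 16)
out = F.linear(patches.transpose(1, 2), conv.weight.view(8, -1), conv.bias)
out = out.transpose(1, 2).view_as(ref)

print(torch.allclose(ref, out, atol=1e-5))  # True
```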
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces Conv2dLayer and Conv3dLayer as optimized replacements for PyTorch's native convolution layers, primarily to address a bug in PyTorch 2.9.1 with older CuDNN versions. The use of an unfold+linear optimization is a clever workaround. The migration of various models to these new layers is well-executed, and the refactoring in glm4v.py is a notable improvement in code clarity. The addition of comprehensive unit tests is also commendable. I have a couple of suggestions to enhance the readability and robustness of the new convolution layers.

@yhyang201
Collaborator Author

/rerun-failed-ci

@yhyang201 yhyang201 changed the title from "[DO NOT MERGE] Add Conv2dLayer/Conv3dLayer to fix PyTorch 2.9.1 CuDNN Conv3d bug" to "Add Conv2dLayer/Conv3dLayer to fix PyTorch 2.9.1 CuDNN Conv3d bug" Mar 11, 2026
@yhyang201
Collaborator Author

yhyang201 commented Mar 11, 2026

@mickqian @JustinTong0323 what do you think of this pr?

Collaborator

@mickqian mickqian left a comment


we need some performance and accuracy report for this one

@yhyang201
Collaborator Author

accuracy:

uv run python eval.py ocrbench --model kimi/your-model-id     --think-mode kimi --max-tokens 8192 --stream

Before:
ocrbench_scorer
accuracy         0.896
stderr           0.010

After:
ocrbench_scorer
accuracy         0.903
stderr           0.009

performance:

python3 -m sglang.bench_serving   --backend sglang-oai-chat  --model Qwen/Qwen3-VL-8B-Instruct --dataset-name image   --num-prompts 128   --random-input-len 4000   --random-output-len 200   --random-range-ratio 1.0   --image-count 10   --image-resolution 720p   --image-content random --max-concurrency 16 --seed 123  --warmup-requests 0 


Before:
============ Serving Benchmark Result ============
Backend:                                 sglang-oai-chat
Traffic request rate:                    inf       
Max request concurrency:                 16        
Successful requests:                     128       
Benchmark duration (s):                  173.95    
Total input tokens:                      1668829   
Total input text tokens:                 539869    
Total input vision tokens:               1128960   
Total generated tokens:                  25600     
Total generated tokens (retokenized):    22829     
Request throughput (req/s):              0.74      
Input token throughput (tok/s):          9593.59   
Output token throughput (tok/s):         147.17    
Peak output token throughput (tok/s):    1168.00   
Peak concurrent requests:                32        
Total token throughput (tok/s):          9740.75   
Concurrency:                             15.73     
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   21381.68  
Median E2E Latency (ms):                 21479.66  
P90 E2E Latency (ms):                    23069.17  
P99 E2E Latency (ms):                    23342.17  
---------------Time to First Token----------------
Mean TTFT (ms):                          13388.04  
Median TTFT (ms):                        13553.09  
P99 TTFT (ms):                           19540.12  
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          40.17     
Median TPOT (ms):                        39.47     
P99 TPOT (ms):                           71.67     
---------------Inter-Token Latency----------------
Mean ITL (ms):                           42.98     
Median ITL (ms):                         13.72     
P95 ITL (ms):                            14.45     
P99 ITL (ms):                            55.83     
Max ITL (ms):                            11697.02  
==================================================

After:

============ Serving Benchmark Result ============
Backend:                                 sglang-oai-chat
Traffic request rate:                    inf       
Max request concurrency:                 16        
Successful requests:                     128       
Benchmark duration (s):                  172.38    
Total input tokens:                      1668813   
Total input text tokens:                 539853    
Total input vision tokens:               1128960   
Total generated tokens:                  25600     
Total generated tokens (retokenized):    22238     
Request throughput (req/s):              0.74      
Input token throughput (tok/s):          9681.27   
Output token throughput (tok/s):         148.51    
Peak output token throughput (tok/s):    1184.00   
Peak concurrent requests:                32        
Total token throughput (tok/s):          9829.78   
Concurrency:                             15.75     
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   21216.08  
Median E2E Latency (ms):                 21315.23  
P90 E2E Latency (ms):                    22811.43  
P99 E2E Latency (ms):                    23098.89  
---------------Time to First Token----------------
Mean TTFT (ms):                          13328.89  
Median TTFT (ms):                        13509.19  
P99 TTFT (ms):                           18717.53  
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          39.63     
Median TPOT (ms):                        38.49     
P99 TPOT (ms):                           71.98     
---------------Inter-Token Latency----------------
Mean ITL (ms):                           43.21     
Median ITL (ms):                         13.73     
P95 ITL (ms):                            27.51     
P99 ITL (ms):                            55.83     
Max ITL (ms):                            11864.45  
==================================================

@mickqian
Collaborator

vlm affected only, bypassing

@mickqian mickqian merged commit 1c456a0 into sgl-project:main Mar 15, 2026
361 of 386 checks passed
Wangzheee pushed a commit to Wangzheee/sglang that referenced this pull request Mar 21, 2026
…gl-project#20282)

Co-authored-by: wili-65535 <wili-65535@users.noreply.github.com>
0-693 pushed a commit to 0-693/sglang that referenced this pull request Mar 25, 2026
…gl-project#20282)

Co-authored-by: wili-65535 <wili-65535@users.noreply.github.com>

Labels

Multi-modal multi-modal language model run-ci
