# [1/N] Polish deployment skills - Add a debug loop for unsupported models #1236

Status: Merged · 3 commits

New file: `.claude/skills/deployment/references/unsupported-models.md` (+70 lines)

# Deploying Unsupported Models

When deploying a model not in the validated support matrix (`support-matrix.md`), expect failures. This guide covers the iterative debug loop for getting unsupported models running on vLLM, SGLang, or TRT-LLM.

## Step 1 — Run and collect the error

Submit the deployment job. When it fails, read the full log — focus on the **first** error traceback (not "See root cause above" wrappers). Identify the file and line number in the framework source.
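
A minimal sketch for jumping straight to the first traceback in a long log; the log filename is a placeholder:

```bash
# Print the first traceback plus ~40 lines of context, rather than the final wrapper error
LOG=deploy-job.log   # placeholder; substitute the actual job log
START=$(grep -n -m1 "Traceback (most recent call last)" "$LOG" | cut -d: -f1)
sed -n "${START},$((START + 40))p" "$LOG"
```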

## Step 2 — Diagnose the root cause

Fetch the framework source at the failing line (use `gh api` for the tagged version, or `find` inside the container; see the sketch after the table below). Common error categories:

| Category | Symptoms | Examples |
|----------|----------|----------|
| **Weight key mismatch** | `KeyError`, `Unexpected key`, `Missing key` during weight loading | Checkpoint uses `model.language_model.layers.*` but framework expects `model.layers.*`. See [vllm#39406](https://github.com/vllm-project/vllm/pull/39406) |
| **Quantized/unquantized layer confusion** | Wrong layer type loaded, dtype errors, shape mismatches | Framework tries to load unquantized layers with an FP4 kernel due to overly broad `quantization_config.ignore` patterns or missing ignore entries. See [sglang#18937](https://github.com/sgl-project/sglang/pull/18937) |
| **Missing architecture support** | `NoneType is not iterable`, `KeyError` on model type, unknown architecture | Framework's model handler doesn't recognize the text backbone type (e.g., `ministral3` not handled in vLLM's `mistral3.py` init). Fix: extend the model type mapping |
| **Transformers version mismatch** | `ImportError`, `KeyError` on config fields | Framework ships with an older transformers version that doesn't know the model type. Fix: upgrade transformers after installing the framework |
| **Kernel-level issues** | CUDA errors, `triton` import failures, unsupported ops | Framework lacks kernel support for this model + quantization combo |
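
A hedged sketch of fetching the source for the failing file from a tagged release; the repo, tag, and file path are illustrative, so substitute the version and path reported in the traceback:

```bash
# Fetch the file at a specific tag via the GitHub contents API and show the first lines
gh api "repos/vllm-project/vllm/contents/vllm/model_executor/models/mistral3.py?ref=v0.8.5" \
  --jq '.content' | base64 -d | sed -n '1,120p'
```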

## Step 3 — Apply a targeted fix

Focus on **small, targeted patches** to the framework source. Do not modify `config.json` or the checkpoint — fix the framework's handling instead.

### Weight key mismatches and architecture mapping gaps

Patch the framework source in the run script using `sed` or a Python one-liner. Keep patches minimal — change only what's needed to unblock the current error.

```bash
# Example: extend model type mapping in vLLM mistral3.py
FRAMEWORK_FILE=$(find /usr/local/lib -path "*/vllm/model_executor/models/mistral3.py" 2>/dev/null | head -1)
sed -i 's/old_pattern/new_pattern/' "${FRAMEWORK_FILE}"
```
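
When the edit is awkward to express as a `sed` substitution, the same patch can be applied as a Python one-liner; this sketch reuses `FRAMEWORK_FILE` from the snippet above, and the patterns are placeholders:

```bash
# Apply the same placeholder substitution via Python instead of sed
python3 -c "
import pathlib
p = pathlib.Path('${FRAMEWORK_FILE}')
p.write_text(p.read_text().replace('old_pattern', 'new_pattern'))
"
```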

> **Tip**: when locating framework source files inside containers, use `find` instead of Python import — some frameworks print log messages to stdout during import that can corrupt captured paths.

### Speeding up debug iterations (vLLM)

When iterating on fixes, use these flags to shorten the feedback loop (a sketch of both follows the list):

- **`--load-format dummy`** — skip loading actual model weights. Useful for testing whether the model initializes, config is parsed correctly, and weight keys match without waiting for the full checkpoint load.
- **`VLLM_USE_PRECOMPILED=1 pip install --editable .`** — when patching vLLM source directly (instead of `sed`), this rebuilds only Python code without recompiling C++/CUDA extensions.
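
A sketch of a quick iteration using both of these, assuming vLLM's `vllm serve` entrypoint; the model name is a placeholder and the extra flag is optional:

```bash
# Serve with dummy weights: initialization and config parsing still run, so most
# Python-level errors reproduce without waiting for the real checkpoint to load
vllm serve org/unsupported-model --load-format dummy --max-model-len 4096

# When patching a local vLLM checkout instead of sed-patching the installed copy,
# run this from the checkout root to rebuild only the Python code
VLLM_USE_PRECOMPILED=1 pip install --editable .
```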

### Quantized/unquantized layer confusion

Check `hf_quant_config.json` ignore patterns against the framework's weight loading logic. The framework may try to load layers listed in `ignore` with quantized kernels, or vice versa. Fix by adjusting the framework's layer filtering logic.
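
A small sketch for checking whether a suspect layer actually matches the checkpoint's exclude patterns; the checkpoint path, the layer name, and shell-style (`fnmatch`) matching are assumptions:

```bash
CKPT=/path/to/checkpoint   # placeholder
python3 -c "
import json, fnmatch
cfg = json.load(open('${CKPT}/hf_quant_config.json'))
patterns = cfg['quantization'].get('exclude_modules', [])
layer = 'model.layers.0.self_attn.q_proj'   # placeholder: the layer from the error message
print('exclude patterns:', patterns)
print('matching patterns:', [p for p in patterns if fnmatch.fnmatch(layer, p)])
"
```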

### Kernel-level issues

These require framework kernel team involvement. Do NOT attempt to patch kernels. Instead:

1. Document the exact error (model, format, framework version, GPU type); a sketch for collecting these details follows the list
2. Inform the user: *"This model + quantization combination requires kernel support that isn't available in {framework} v{version}. I'd suggest reaching out to the {framework} kernel team or trying a different framework."*
3. Suggest trying an alternative framework (vLLM → SGLang → TRT-LLM)
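
A sketch for gathering the details in item 1; it assumes an NVIDIA GPU environment with vLLM installed (adapt for SGLang or TRT-LLM):

```bash
# GPU model and driver version
nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
# Framework, PyTorch, and CUDA versions
python3 -c "import vllm, torch; print('vllm', vllm.__version__, '| torch', torch.__version__, '| cuda', torch.version.cuda)"
```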

## Step 4 — Re-run and iterate

After applying a fix, resubmit the job. Each iteration may reveal a new error (e.g., fixing the init error exposes a weight loading error). Continue the loop: **run → read error → diagnose → patch → re-run**.

Typical iteration count: 1-3 for straightforward fixes, 3-5 for models requiring multiple patches.

## Step 5 — Know when to stop

**Stop patching and escalate** when:

- The error is in compiled CUDA kernels or triton ops (not Python-level)
- The fix requires changes to core framework abstractions (not just model handlers)
- You've done 5+ iterations without the server starting

In these cases, inform the user and suggest: trying a different framework, checking for a newer framework version, or filing an issue with the framework team.

New file (+86 lines):

# Post-Quantization Checkpoint Validation

Verify the exported checkpoint's quantization pattern matches the recipe used. Quantization config patterns may silently miss layers if the model uses non-standard naming — this only surfaces later as deployment failures when the serving framework tries to load unquantized weights as quantized.

## Expected quantization patterns by recipe

| Recipe (`--qformat`) | What should be quantized | What should be excluded |
|----------------------|--------------------------|-------------------------|
| `nvfp4` | All linear layers | lm_head, routers, norms, embeddings |
| `nvfp4_mlp_only` | MLP layers (including MoE experts) | Attention layers, lm_head, routers |
| `nvfp4_experts_only` | MoE expert layers only | Dense MLP, attention, lm_head, routers |
| `nvfp4_omlp_only` | MLP + o_proj layers | Other attention layers, lm_head, routers |
| `fp8` | All linear layers | lm_head, norms, embeddings |
| `int4_awq` | All linear layers | lm_head, norms, embeddings |
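
For a quick first check before running the full script below, the recorded recipe and exclude list can be printed straight from `hf_quant_config.json`; the key layout follows what the validation script assumes, and the path is a placeholder:

```bash
CKPT=/path/to/checkpoint   # placeholder
python3 -c "
import json
q = json.load(open('${CKPT}/hf_quant_config.json'))['quantization']
print('quant_algo:      ', q.get('quant_algo'))
print('kv cache algo:   ', q.get('kv_cache_quant_algo'))
print('exclude_modules: ', q.get('exclude_modules', []))
"
```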

## Validation script

Run against the exported checkpoint to check every linear layer is either quantized (has scale params) or explicitly excluded:

```bash
python3 -c "
import json, fnmatch

output = '<output_path>'
idx = json.load(open(f'{output}/model.safetensors.index.json'))
cfg = json.load(open(f'{output}/hf_quant_config.json'))
excludes = cfg['quantization']['exclude_modules']

all_keys = set(idx['weight_map'].keys())
# Identify linear weight params (skip norms, embeddings, scalars, scales)
skip_suffixes = ('_scale', '_scale_2', 'layernorm', 'layer_norm', 'norm.weight', 'embed', 'scalar')
linear_weights = sorted(k for k in all_keys
                        if k.endswith('.weight') and not any(s in k.lower() for s in skip_suffixes))

# Check which have quantization scales
quantized, excluded, unexpected = [], [], []
for w in linear_weights:
    base = w.rsplit('.weight', 1)[0]
    has_scales = any(f'{base}.{s}' in all_keys for s in ['weight_scale', 'input_scale'])
    is_excluded = any(fnmatch.fnmatch(w, p) or fnmatch.fnmatch(base, p) for p in excludes)

    if has_scales:
        quantized.append(w)
    elif is_excluded:
        excluded.append(w)
    else:
        unexpected.append(w)

print(f'Quantized layers: {len(quantized)}')
print(f'Excluded layers (in exclude_modules): {len(excluded)}')
if unexpected:
    print(f'\nWARNING: {len(unexpected)} layers have NO scales and are NOT in exclude list:')
    # Group by module type for readability
    groups = {}
    for w in unexpected:
        parts = w.split('.')
        module_type = next((p for p in parts if p in
                            ('self_attn', 'mlp', 'experts', 'router', 'lm_head', 'embed_tokens', 'vision_tower')), 'other')
        groups.setdefault(module_type, []).append(w)
    for mtype, weights in sorted(groups.items()):
        print(f'  {mtype}: {len(weights)} weights (e.g., {weights[0]})')
    print()
    print('These layers were silently skipped during quantization.')
    print('Likely cause: quantization config patterns did not match these module names.')
    print('This WILL cause deployment failures (framework loads them as quantized but they are BF16).')
    print('Fix: add missing patterns to the config, or add to exclude_modules if intentionally unquantized.')
else:
    print('\nAll layers are either quantized or explicitly excluded. Checkpoint is consistent.')
"
```

## Common pattern gaps

Layers silently skipped because the quantization config patterns don't match the model's naming:

| Model | Module path | Missed by pattern | Fix |
|-------|-------------|-------------------|-----|
| Gemma4 MoE | `layers.N.experts.*` | `*mlp*`, `*block_sparse_moe*` | Add `*.experts.*` (PR #1219) |
| Custom MoE | `layers.N.moe_block.experts.*` | `*mlp*` | Add matching pattern |
| VLM projector | `multi_modal_projector.*` | — | Usually excluded; verify |
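
To check whether a given pattern would have caught a module path from the table above, a quick sketch; it assumes shell-style matching as in the validation script, and the paths are illustrative:

```bash
python3 -c "
import fnmatch
path = 'model.layers.0.experts.3.gate_proj'   # illustrative MoE expert module path
for pattern in ['*mlp*', '*block_sparse_moe*', '*.experts.*']:
    print(f'{pattern:>22}  matches: {fnmatch.fnmatch(path, pattern)}')
"
```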

## What to do when warnings appear

- **Layers should have been quantized** (e.g., MoE experts with `nvfp4_mlp_only`): the quantization config patterns missed them. Fix by adding the missing pattern to the config and re-running PTQ. Check if ModelOpt already has a plugin for the model in `modelopt/torch/quantization/plugins/huggingface.py`.

- **Layers are intentionally unquantized** (e.g., attention layers with `nvfp4_mlp_only`): they should be in the `exclude_modules` list but the export didn't add them. Add them manually to both `hf_quant_config.json` and `config.json` `quantization_config.ignore` in the checkpoint to prevent deployment failures (a sketch follows).
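
A hedged sketch of that manual exclude-list edit; the checkpoint path and pattern are placeholders, it assumes both files already contain their respective quantization sections, and the files should be backed up first:

```bash
CKPT=/path/to/checkpoint   # placeholder
python3 -c "
import json
edits = [('hf_quant_config.json', 'quantization', 'exclude_modules'),
         ('config.json', 'quantization_config', 'ignore')]
for fname, section, key in edits:
    path = f'${CKPT}/{fname}'
    cfg = json.load(open(path))
    entries = cfg[section].setdefault(key, [])
    for pattern in ['*self_attn*']:   # placeholder: the intentionally unquantized modules
        if pattern not in entries:
            entries.append(pattern)
    json.dump(cfg, open(path, 'w'), indent=2)
    print(f'{fname}: {key} now has {len(entries)} entries')
"
```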