
[Model][GGUF] Add Gemma4 GGUF serving glue #41589

Closed
lesj0610 wants to merge 1 commit into vllm-project:main from lesj0610:lesj/gemma4-gguf-serving-glue-main

Conversation

lesj0610 (Contributor) commented May 4, 2026

Summary

Add the Gemma4-specific GGUF serving glue needed to load local Gemma4 GGUF repositories with sibling HF config/tokenizer files:

  • prefer a sibling config.json for local GGUF files when available instead of forcing transformers' GGUF parser (see the sketch after this list)
  • add Gemma4 GGUF tensor name mappings that are missing from gguf-py's current tables
  • reshape Gemma4 vision patch embedding GGUF tensors to vLLM's parameter layout
  • patch Gemma4 GGUF tokenizer/processor special-token fields from GGUF metadata
  • keep multimodal Gemma4 per-layer embeddings on the language-model device/dtype path
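
A minimal sketch of the first item, assuming a hypothetical helper (load_config_for_local_gguf is an illustrative name, not the function added in this PR):

import json
from pathlib import Path

def load_config_for_local_gguf(gguf_path: str):
    """Prefer a sibling config.json next to a local GGUF file, if present."""
    gguf_file = Path(gguf_path)
    sibling = gguf_file.parent / "config.json"
    if gguf_file.is_file() and sibling.is_file():
        # A full HF config shipped alongside the weights is more reliable
        # than one reconstructed from GGUF metadata.
        with sibling.open() as f:
            return json.load(f)
    # Otherwise fall back to the transformers GGUF parser path.
    return None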

This PR intentionally contains no AutoRound quant runtime changes and no MoE activation changes.

Duplicate-work check

Checked open PRs before opening.

Tests

.venv/bin/python -m py_compile \
  tests/models/test_gguf_download.py \
  vllm/model_executor/model_loader/gguf_loader.py \
  vllm/model_executor/models/gemma4_mm.py \
  vllm/tokenizers/registry.py \
  vllm/transformers_utils/config.py \
  vllm/transformers_utils/gguf_utils.py \
  vllm/transformers_utils/processor.py

pre-commit run ruff-check --files \
  tests/models/test_gguf_download.py \
  vllm/model_executor/model_loader/gguf_loader.py \
  vllm/model_executor/models/gemma4_mm.py \
  vllm/tokenizers/registry.py \
  vllm/transformers_utils/config.py \
  vllm/transformers_utils/gguf_utils.py \
  vllm/transformers_utils/processor.py

pre-commit run ruff-format --files \
  tests/models/test_gguf_download.py \
  vllm/model_executor/model_loader/gguf_loader.py \
  vllm/model_executor/models/gemma4_mm.py \
  vllm/tokenizers/registry.py \
  vllm/transformers_utils/config.py \
  vllm/transformers_utils/gguf_utils.py \
  vllm/transformers_utils/processor.py

.venv/bin/python -m pytest tests/models/test_gguf_download.py -q

Result: 8 passed.

AI assistance was used to prepare and split this change; I reviewed the resulting diff and validation output.

Teach the GGUF loader and tokenizer/processor setup about Gemma4-specific tensor names, local config handling, and tokenizer special IDs.

Keep this independent from the Gemma4 quant runtime and MoE activation changes.

Co-authored-by: OpenAI Codex <codex@openai.com>

Signed-off-by: lesj0610 <lesj0610@users.noreply.github.com>

gemini-code-assist (Bot) left a comment

Code Review

This pull request introduces support for Gemma4 GGUF models, including manual tensor mappings, tokenizer patching for special token IDs, and improved configuration loading that prioritizes local JSON files. It also updates the Gemma4 multimodal model's embedding initialization. A critical bug was identified in the GGUF loader where missing '.weight' suffixes in the manual mapping for MoE expert weights would lead to a runtime error.

Comment on lines +171 to +180
add_mapping(
    f"blk.{idx}.ffn_gate_up_exps.weight",
    f"{layer_prefix}.moe.gate_up_proj.weight",
    handled_name=f"{layer_prefix}.experts.gate_up_proj",
)
add_mapping(
    f"blk.{idx}.ffn_down_exps.weight",
    f"{layer_prefix}.moe.down_proj.weight",
    handled_name=f"{layer_prefix}.experts.down_proj",
)

Severity: high

The handled_name for Gemma4 MoE expert weights (gate/up and down) is missing the .weight suffix. Since normalized_state_names (built at line 441) includes the suffix for weight tensors, this mismatch will cause the manual mapping to be skipped. Consequently, these parameters will be flagged as unmapped in the final check (line 509), leading to a RuntimeError during model loading.
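
A hypothetical illustration of the mismatch (names shortened; not the loader's actual code):

# normalized_state_names keeps the ".weight" suffix for weight tensors,
# so a handled_name lacking the suffix never matches and the manual
# mapping is silently skipped.
normalized_state_names = {"model.layers.0.experts.gate_up_proj.weight"}
handled_name = "model.layers.0.experts.gate_up_proj"
assert handled_name not in normalized_state_names  # left unmapped -> RuntimeError later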

Suggested change
-add_mapping(
-    f"blk.{idx}.ffn_gate_up_exps.weight",
-    f"{layer_prefix}.moe.gate_up_proj.weight",
-    handled_name=f"{layer_prefix}.experts.gate_up_proj",
-)
-add_mapping(
-    f"blk.{idx}.ffn_down_exps.weight",
-    f"{layer_prefix}.moe.down_proj.weight",
-    handled_name=f"{layer_prefix}.experts.down_proj",
-)
+add_mapping(
+    f"blk.{idx}.ffn_gate_up_exps.weight",
+    f"{layer_prefix}.moe.gate_up_proj.weight",
+    handled_name=f"{layer_prefix}.experts.gate_up_proj.weight",
+)
+add_mapping(
+    f"blk.{idx}.ffn_down_exps.weight",
+    f"{layer_prefix}.moe.down_proj.weight",
+    handled_name=f"{layer_prefix}.experts.down_proj.weight",
+)

lesj0610 commented May 4, 2026

Closing to correct the author workflow: this split will be handled in fork-only PRs before any upstream submission.

lesj0610 closed this May 4, 2026