[Bugfix] add small vocab table for eagle qwen2 #6903
Swipe4057 wants to merge 6 commits into sgl-project:main
Conversation
Hello @Swipe4057, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello team, gemini-code-assist here to provide a summary of this pull request. This PR, authored by @Swipe4057, introduces support for a 'small vocabulary table' (also referred to as 'hot vocabulary') specifically for the Qwen2 Eagle model within the SGLang framework. The change follows the pattern established in a previous PR (#3822) which implemented a similar feature for Llama models. The primary goal is to potentially optimize memory usage or performance by allowing the model's language model head to work with a reduced vocabulary size if configured, falling back to the full vocabulary otherwise. This change is also intended to address issue #6863.
Highlights
- Feature: Small Vocabulary Table: Adds support for using a potentially smaller, 'hot' vocabulary size for the Qwen2 Eagle model's language model head, mirroring functionality previously added for Llama models.
- Model Support: Specifically targets the qwen2_eagle.py model implementation.
- Issue Resolution: Aims to resolve issue #6863.
Changelog
- python/sglang/srt/models/qwen2_eagle.py
  - Modified the initialization of the ParallelLMHead in the Qwen2EagleModel class to use getattr(config, "hot_vocab_size", config.vocab_size) instead of just config.vocab_size (line 129). This allows the model to use a configured 'hot' vocabulary size if available, otherwise defaulting to the full vocabulary size (see the sketch below).
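As a rough illustration of the fallback behavior described above, here is a minimal, self-contained sketch. DummyLMHead and DummyEagleModel are hypothetical stand-ins (the real ParallelLMHead needs sglang's model-parallel setup to construct), and the config values are illustrative, not taken from the PR:

```python
from types import SimpleNamespace

class DummyLMHead:
    """Hypothetical stand-in for ParallelLMHead (illustration only)."""
    def __init__(self, num_embeddings: int, embedding_dim: int):
        self.num_embeddings = num_embeddings
        self.embedding_dim = embedding_dim

class DummyEagleModel:
    def __init__(self, config):
        # The PR's one-line change: prefer config.hot_vocab_size when the
        # draft-model config defines it, else fall back to the full vocab.
        self.lm_head = DummyLMHead(
            getattr(config, "hot_vocab_size", config.vocab_size),
            config.hidden_size,
        )

# Illustrative values only (not taken from the PR).
cfg = SimpleNamespace(vocab_size=152064, hidden_size=3584, hot_vocab_size=32000)
assert DummyEagleModel(cfg).lm_head.num_embeddings == 32000  # hot table used
```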
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Vocabularies vast,
Or maybe just a hot list,
Code adapts with speed.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This pull request introduces a small vocabulary table for Qwen2 EAGLE models, similar to what was done for Llama EAGLE models. The change is well-targeted and uses a standard Python approach to access an optional configuration attribute.
The modification in python/sglang/srt/models/qwen2_eagle.py to use getattr(config, "hot_vocab_size", config.vocab_size) for the ParallelLMHead is clear and directly implements the intended feature. This allows the EAGLE variant of Qwen2 to use a potentially smaller "hot" vocabulary if hot_vocab_size is specified in its configuration, falling back to the full vocab_size otherwise. This is a common pattern for EAGLE-style architectures.
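To make the fallback explicit, here is a quick check of the getattr pattern in isolation (the helper name and the config values are illustrative choices, not from the PR; only the getattr expression itself mirrors the change):

```python
from types import SimpleNamespace

def lm_head_vocab_size(config) -> int:
    # Same expression the PR passes as the ParallelLMHead's vocab size.
    return getattr(config, "hot_vocab_size", config.vocab_size)

full_only = SimpleNamespace(vocab_size=152064)
with_hot = SimpleNamespace(vocab_size=152064, hot_vocab_size=32000)

assert lm_head_vocab_size(full_only) == 152064  # falls back to full vocab
assert lm_head_vocab_size(with_hot) == 32000    # uses the smaller hot table
```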
The code is clean and the change is minimal and effective. Well done!
Summary of Findings
- Code Quality: No issues of medium, high, or critical severity were found in the reviewed code diff. The change is clear, concise, and correctly implements the described feature using idiomatic Python.
Merge Readiness
The pull request appears to be in good shape. The change is straightforward and aligns with the stated objective. Based on this review, the code seems ready for merging, pending any further project-specific checks or tests. As an AI assistant, I am not authorized to approve pull requests, so please ensure it undergoes the standard review and approval process by human reviewers.
@Swipe4057 would you please share some testing results after this change? Maybe something like before and after? The change is only one line and it makes sense to me; once the test results are attached we can merge it, thanks!
I am following the original PR https://github.com/sgl-project/sglang/pull/3822/files, which introduced the small vocab table and modified the script for Llama models in python/sglang/srt/models/llama_eagle.py; this PR adjusts the code for the Qwen2 series of models in the same way. This should also resolve the issue I recently opened: #6863.