Remove unused logic in models/mistral.py#33095

Merged
vllm-bot merged 1 commit into vllm-project:main from andylolu2:andy/fix-mistral
Jan 26, 2026
Conversation

@andylolu2 (Contributor) commented on Jan 26, 2026

Some unused logic was added in #32780; this PR cleans it up.

Signed-off-by: Andy Lo <andy@mistral.ai>

@gemini-code-assist bot left a comment


Code Review

The pull request effectively removes unused logic related to quant_config and quantization scaling fusion within the MistralDecoderLayer's __init__ method. This cleanup improves code maintainability and reduces unnecessary complexity, aligning with the stated objective of the pull request. The super().__init__ call already handles the necessary quant_config initialization for attention and MLP layers, making the removed lines redundant.
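The redundancy described above can be illustrated with a minimal sketch. The class and attribute names below are simplified stand-ins, not the actual vLLM code: the point is only that re-assigning `quant_config` in a subclass `__init__` duplicates work the parent `__init__` already performs.

```python
# Hypothetical sketch of the kind of redundancy removed in this PR.
# Names are illustrative placeholders, not the real vLLM classes.

class BaseDecoderLayer:
    def __init__(self, quant_config=None):
        # The base __init__ already propagates quant_config to the
        # attention and MLP sub-modules.
        self.attn_quant_config = quant_config
        self.mlp_quant_config = quant_config

class MistralDecoderLayerBefore(BaseDecoderLayer):
    def __init__(self, quant_config=None):
        super().__init__(quant_config)
        # Redundant: super().__init__ has already done this wiring.
        self.attn_quant_config = quant_config
        self.mlp_quant_config = quant_config

class MistralDecoderLayerAfter(BaseDecoderLayer):
    def __init__(self, quant_config=None):
        # The cleaned-up version simply defers to the parent.
        super().__init__(quant_config)

# Both variants end up in the same state, so the extra lines can go.
before = MistralDecoderLayerBefore(quant_config="fp8")
after = MistralDecoderLayerAfter(quant_config="fp8")
assert before.attn_quant_config == after.attn_quant_config == "fp8"
assert before.mlp_quant_config == after.mlp_quant_config == "fp8"
```

Since both variants produce identical state, the removal is behavior-preserving, which is why it can merge as a pure cleanup.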

@robertgshaw2-redhat robertgshaw2-redhat enabled auto-merge (squash) January 26, 2026 14:50
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Jan 26, 2026
@vllm-bot vllm-bot merged commit d56afd4 into vllm-project:main Jan 26, 2026
55 of 57 checks passed
apd10 pushed a commit to apd10/vllm that referenced this pull request Jan 31, 2026
Signed-off-by: Andy Lo <andy@mistral.ai>
ItzDEXX pushed a commit to ItzDEXX/vllm that referenced this pull request Feb 19, 2026
Signed-off-by: Andy Lo <andy@mistral.ai>

Labels

ready ONLY add when PR is ready to merge/full CI is needed

Projects

None yet

Development


3 participants