[AMD] Add Qwen3-Coder-Next accuracy and functionality test scripts for MI35x 8-GPU (#18608)
Conversation
Summary of Changes

Hello @yichiche, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the testing infrastructure by integrating new CI tests for the Qwen3-Coder-Next model on AMD MI35x 8-GPU hardware. The added tests ensure the correct operation and performance of the model's advanced features, such as its attention backend, KV cache quantization, and prefill strategies, across different operational modes. This expansion of test coverage is crucial for maintaining stability and validating the model's capabilities on AMD platforms.

Highlights
Code Review
This pull request introduces two new test scripts to validate the functionality and accuracy of the Qwen3-Coder-Next model on AMD MI35x GPUs, which is a valuable addition for CI coverage. The code is well-structured, but I have identified a few areas for improvement concerning code duplication, maintainability, and test correctness. My main suggestions are to refactor the code to eliminate duplicated logic by reusing existing utilities, simplify model configurations, and ensure all test assertions are active to properly validate the model's performance.
Force-pushed from c9603c2 to e6f4803
/tag-and-rerun-ci
HaiShaw left a comment
Largely okay; please take care of a few comments.
Force-pushed from e6f4803 to 7a246aa
Thanks @HaiShaw, I have incorporated the feedback and completed the refactoring. Everything is now ready.
…ulative decoding cuda graph replay

Guard `_use_mla_ps_kernel` checks with `self.use_mla` in `init_forward_metadata_replay_cuda_graph` for the `is_target_verify()` and `is_draft_extend()` branches, matching the existing pattern in `init_forward_metadata_capture_cuda_graph`.

`_use_mla_ps_kernel` is a global variable set to True when any MLA-enabled `AiterAttnBackend` instance is initialized, while `self.max_split_per_batch` is only set inside `if self.use_mla:` during `__init__`. When `HybridLinearAttnBackend` iterates over its backend list (for hybrid attention models such as Qwen3-Coder-Next), a non-MLA `AiterAttnBackend` instance would enter the `if _use_mla_ps_kernel:` block and crash with:

AttributeError: 'AiterAttnBackend' object has no attribute 'max_split_per_batch'

This was exposed by the Qwen3-Coder-Next MTP test added in #18608.
Motivation
Add CI test coverage for the Qwen3-Coder-Next model on AMD MI35x (gfx950) 8-GPU systems. This model features a hybrid architecture combining full attention (GQA) with linear attention (GDN/Gated Delta Net) and MoE (512 experts), requiring dedicated test scripts to validate accuracy and performance on AMD hardware. The tests ensure the aiter attention backend, fp8 KV cache quantization, and chunked prefill work correctly with this model's unique architecture across both basic and MTP (speculative decoding) configurations.
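A server launch exercising the features listed above might look like the following sketch. This is not the PR's actual test script: the model path, chunked-prefill size, and flag combination are assumptions for illustration, using standard sglang `launch_server` arguments.

```shell
# Illustrative 8-GPU sglang launch for an MI35x accuracy test.
# Model path and numeric values are assumptions, not the PR's script contents.
python3 -m sglang.launch_server \
  --model-path Qwen/Qwen3-Coder-Next \
  --tp 8 \
  --attention-backend aiter \
  --kv-cache-dtype fp8_e4m3 \
  --chunked-prefill-size 8192
```

An accuracy test would then point a benchmark harness (e.g. a GSM8K-style few-shot evaluation) at the running server and assert the score stays above a model-specific threshold; the MTP variant would additionally pass the speculative-decoding flags.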
Modifications
Accuracy Tests
Benchmarking and Profiling
Checklist
Review Process
/tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci