
The environment variable DEEPEP_HCCL_BUFFSIZE is added#329

Merged
Yael-X merged 1 commit into sgl-project:main from zzx-study:addEnv
Jan 22, 2026

Conversation

@zzx-study
Contributor

  • Scenario 1 (both DEEPEP_HCCL_BUFFSIZE and HCCL_BUFFSIZE set): the value of DEEPEP_HCCL_BUFFSIZE is used.
  • Scenario 2 (no environment variables set): the default value 200 is used.
  • Scenario 3 (only HCCL_BUFFSIZE set): the value of HCCL_BUFFSIZE is used.
  • Scenario 4 (only DEEPEP_HCCL_BUFFSIZE set): the value of DEEPEP_HCCL_BUFFSIZE is used.

@gemini-code-assist
Contributor

Summary of Changes

Hello @zzx-study, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the configuration flexibility and optimizes memory management within the deep learning framework, particularly for Mixture of Experts (MoE) operations. It introduces a new environment variable, 'DEEPEP_HCCL_BUFFSIZE', to provide finer-grained control over communication buffer sizes, while also updating build system requirements to a newer CMake version. Significant changes were made to buffer offset calculations and synchronization mechanisms in MoE-related kernels, aiming to improve efficiency and stability during multi-round data processing.

Highlights

  • New Environment Variable for HCCL Buffer Size: Introduced 'DEEPEP_HCCL_BUFFSIZE' to allow more specific control over the HCCL buffer size, with a priority system that falls back to 'HCCL_BUFFSIZE' or a default value if not set.
  • CMake Version Updates: The minimum required CMake version has been upgraded from 3.12/3.16 to 3.20 across various 'CMakeLists.txt' files and generated build scripts.
  • Memory Offset and Buffer Size Adjustments: Reduced 'NOTIFY_DISPATCH_WIN_OFFSET' and 'NOTIFY_DISPATCH_BUFF_OFFSET' by half, and refined buffer size calculations in MoE-related tiling functions to optimize memory usage, especially for multi-round processing with double buffering.
  • Improved Error Handling and Configuration: Enhanced error logging for 'HCCL_BUFFSIZE' parsing with specific exception types (invalid_argument, out_of_range) and updated 'ASCEND_CANN_PACKAGE_PATH' in 'CMakePresets.json' to use the 'ASCEND_HOME_PATH' environment variable for better flexibility.
  • Synchronization and Processing Logic Refinements: Modified synchronization barriers ('PipeBarrier<PIPE_ALL>()' removed, 'SyncFunc<AscendC::HardEvent::MTE2_S>()' added) and adjusted the multi-round processing loop in 'cam_moe_combine_normal_multi_round.h' to potentially improve performance and correctness.


Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/pr-test-npu.yml



@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces support for a new environment variable DEEPEP_HCCL_BUFFSIZE to control buffer sizes, which now takes precedence over the existing HCCL_BUFFSIZE. The changes are extensive, affecting build configurations, host-side tiling logic, and kernel code. While the implementation correctly prioritizes the new environment variable, I've identified a critical bug in buffer size calculation, an incorrect log message that could hinder debugging, and a recurring performance issue in how environment variables are read. Addressing these points will improve the correctness and robustness of the code.

I am having trouble creating individual review comments, so my feedback is included below.

csrc/deepep/ops/op_host/cam_moe_combine_normal_tiling.cc (539)

critical

There appears to be a bug in this line. The value of tokenNeedSizeCombine calculated on the previous line is completely overwritten with a value derived from maxRound. This will result in an incorrect (and likely much smaller) buffer size calculation, potentially causing runtime failures.

The intention seems to be to double the buffer size for multi-round scenarios, similar to the logic in cam_moe_dispatch_normal_tiling.cc. The calculation should modify the existing tokenNeedSizeCombine value, not replace it.

    if (maxRound > 1) { tokenNeedSizeCombine *= 2; }

csrc/deepep/ops/op_host/cam_moe_combine_normal_tiling.cc (547)

high

The log message for NEEDED_HCCL_BUFFSIZE is inconsistent with the changes in this pull request. It hardcodes 204MB for the NOTIFY_DISPATCH_WIN_OFFSET, but this constant has been updated to 102MB in tiling_args.h. This can be misleading when debugging buffer size issues.

Additionally, the formula in the log ...tokenNeedSizeCombine * 2... is confusing. After fixing the bug in tokenNeedSizeCombine calculation, its value will already include the factor of 2 when maxRound > 1. The log message should be updated to reflect the correct constant and a clearer formula.

                "((realBs * k * tokenNeedSizeCombine)) + 4MB + 102MB) * 2) = %luMB, "

csrc/deepep/ops/op_host/fused_deep_moe_tiling.cpp (41-54)

medium

The logic for reading the buffer size environment variable can be made more efficient. The current implementation calls getenv() multiple times for the same environment variable, which is unnecessary. You can store the result of getenv() in a variable to avoid these redundant calls.

This pattern of inefficiently calling getenv() is repeated in several other files modified in this PR. Applying this optimization across all occurrences would improve code quality.

        uint16_t defaultWindowSize = 200;
        const char* buffSizeStr = getenv("DEEPEP_HCCL_BUFFSIZE");
        if (buffSizeStr == nullptr) {
            buffSizeStr = getenv("HCCL_BUFFSIZE");
        }

        if (buffSizeStr == nullptr) {
            OP_LOGD("", "Env DEEPEP_HCCL_BUFFSIZE and HCCL_BUFFSIZE are not set, using default.");
        } else {
            try {
                std::string envStr(buffSizeStr);
                defaultWindowSize = std::stoi(envStr);
            } catch (const std::invalid_argument &ia) {
                OP_LOGE("", "Invalid argument when parsing HCCL_BUFFSIZE: %s", ia.what());
            } catch (const std::out_of_range &oor) {
                OP_LOGE("", "Out of range when parsing HCCL_BUFFSIZE: %s", oor.what());
            }
        }

@zzx-study
Contributor Author

The test results match the four scenarios described above: DEEPEP_HCCL_BUFFSIZE takes precedence when both variables are set, HCCL_BUFFSIZE is used when it alone is set, and the default value 200 is used when neither is set.

@Yael-X Yael-X merged commit 21c8eb1 into sgl-project:main Jan 22, 2026
4 checks passed
Yael-X added a commit to Yael-X/sgl-kernel-npu that referenced this pull request Jan 26, 2026
* 'main' of https://github.com/sgl-project/sgl-kernel-npu: (24 commits)
  [Doc] Improved README.md content and English grammar and integrated the DeepWiki badge for Ask AI (sgl-project#345)
  (test) add solve_tril from upstream (sgl-project#339)
  Add AscendC triangular inverse (sgl-project#332)
  support the situation that topk maybe -1 on machine A3 (sgl-project#313)
  chunk_gated_delta_rule_npu output final state (sgl-project#341)
  The environment variable DEEPEP_HCCL_BUFFSIZE is added, and the priority of DEEPEP_HCCL_BUFFSIZE is higher than that of HCCL_BUFFSIZE. (sgl-project#329)
  Added the low_latency operator API documentation. (sgl-project#337)
  Added the verification of num_max_dispatch_tokens_per_rank to the decode operator adaptation layer. (sgl-project#330)
  Document get_dispatch_layout API (sgl-project#338)
  【Doc】add fused deep moe doc (sgl-project#335)
  add deepep normal api doc (sgl-project#336)
  remove the limit that A2 internode only support topk 8 (sgl-project#323)
  Optimize the performance of the Combine Ant Moving function and the use of HCCL buffer (sgl-project#314)
  deepep adapt custom cann installation path (sgl-project#327)
  [Chore] CANN version bump to 8.5.0 (sgl-project#326)
  add dfx for operator FusedDeepMoe (sgl-project#317)
  Integrate ccache for faster compilation (sgl-project#318)
  Modify contribution guide (sgl-project#315)
  fix bmm transpose in cann 8.5 (sgl-project#316)
  fix little batchsize and int8 quant on ci (sgl-project#302)
  ...
zhuyutong332 added a commit to zhuyutong332/sgl-kernel-npu that referenced this pull request Jan 27, 2026
* upstream/main:
  add function for deep-ep tests (sgl-project#301)
  ...
AndyKong2020 pushed a commit to AndyKong2020/sgl-kernel-npu that referenced this pull request Mar 24, 2026
…ity of DEEPEP_HCCL_BUFFSIZE is higher than that of HCCL_BUFFSIZE. (sgl-project#329)