
Conversation

@oagniqgnat (Contributor) commented Sep 16, 2025

Introduction

This feature is primarily used to detect slow-communication anomalies in the dispatch and combine operations within super nodes (reference #311).
It maintains a statistical matrix of average receive/send wait times: Matrix[src_rank, dst_rank], where each row represents a source rank and each column represents a destination rank.
Specifically:

  • The dispatch phase records the cumulative time each rank waits to receive tokens from other ranks.
  • The combine phase records the cumulative time each rank spends sending tokens to other ranks.
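In other words, each rank contributes one length-`num_ranks` vector per phase, and stacking the per-rank vectors yields the `num_ranks × num_ranks` matrix. A minimal illustration with fabricated data (not from the PR; in a real job the per-rank vectors would first be gathered across ranks, e.g. with an all-gather):

```python
import numpy as np

# Illustration only: fake per-rank wait-time vectors standing in for the
# real stats tensors collected on each rank.
num_ranks = 4
per_rank_stats = [np.full(num_ranks, 10 * (r + 1)) for r in range(num_ranks)]

# Stacking one vector per rank yields the num_ranks x num_ranks matrix
# that diagnose_matrix consumes, like the tensors printed under Test below.
matrix = np.stack(per_rank_stats)
print(matrix.shape)  # (4, 4)
```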

Usage

  • When invoking the dispatch operator, pass a tensor named dispatch_wait_recv_cost_stats with num_ranks elements.
  • When invoking the combine operator, pass a tensor named combine_send_cost_stats with num_ranks elements.
    The receive/send waiting times are accumulated into dispatch_wait_recv_cost_stats and combine_send_cost_stats respectively.
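A hypothetical usage sketch: the dispatch/combine calls themselves follow deep_ep's buffer.py API (their full signatures are elided here); only the two stats keyword tensors come from this PR, and their names are taken from the description above. The tensors are allocated on CPU for clarity, but in practice they must live on the NPU device used for communication.

```python
import torch

num_ranks = 16

# One slot per peer rank; the kernels accumulate wait times into these.
dispatch_wait_recv_cost_stats = torch.zeros(num_ranks, dtype=torch.int32)
combine_send_cost_stats = torch.zeros(num_ranks, dtype=torch.int32)

# Sketch of the calls (other arguments elided, see buffer.py):
# recv_x, ... = buffer.dispatch(
#     x, ..., dispatch_wait_recv_cost_stats=dispatch_wait_recv_cost_stats)
# combined_x, ... = buffer.combine(
#     recv_x, ..., combine_send_cost_stats=combine_send_cost_stats)
```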

Then use the diagnose_matrix interface in utils.py to identify the anomalous ranks.

def diagnose_matrix(
    mat,
    thres_col=3.0,
    thres_row=3.0,
    thres_point=5.0,
    suppress_points_in_strong_rowscols=True,
):
    ...  # implementation elided
    return {
        "abnormal_cols": abnormal_cols,
        "abnormal_rows": abnormal_rows,
        "abnormal_points": abnormal_points,
    }
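The elided body can be sketched as follows. This is an assumption-laden reconstruction, not the repository's implementation: the scores in the example output below appear consistent with a simple mean-ratio score (row/column mean, or point value, divided by the global mean), so that is what this sketch uses; the exact statistic in utils.py may differ.

```python
import numpy as np

def diagnose_matrix(mat, thres_col=3.0, thres_row=3.0, thres_point=5.0,
                    suppress_points_in_strong_rowscols=True):
    """Flag rows/cols/points whose mean (or value) exceeds a multiple of the
    global mean. Sketch only; scoring is an assumption, not the repo's code."""
    mat = np.asarray(mat, dtype=np.float64)
    overall = mat.mean()
    abnormal_rows = [[i, mat[i].mean(), mat[i].mean() / overall]
                     for i in range(mat.shape[0])
                     if mat[i].mean() / overall >= thres_row]
    abnormal_cols = [[j, mat[:, j].mean(), mat[:, j].mean() / overall]
                     for j in range(mat.shape[1])
                     if mat[:, j].mean() / overall >= thres_col]
    strong_rows = {r[0] for r in abnormal_rows}
    strong_cols = {c[0] for c in abnormal_cols}
    abnormal_points = []
    for i in range(mat.shape[0]):
        for j in range(mat.shape[1]):
            score = mat[i, j] / overall
            if score < thres_point:
                continue
            # Optionally skip points already explained by a whole slow row/col.
            if suppress_points_in_strong_rowscols and (
                    i in strong_rows or j in strong_cols):
                continue
            abnormal_points.append([i, j, int(mat[i, j]), score])
    return {
        "abnormal_cols": abnormal_cols,
        "abnormal_rows": abnormal_rows,
        "abnormal_points": abnormal_points,
    }
```

For example, a 4×4 matrix of ones with row 1 set to 10 has a global mean of 3.25, so row 1 scores 10 / 3.25 ≈ 3.08 and is flagged at the default thres_row=3.0.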

Test

python3 tests/python/deepep/test_intranode.py --enable-diagnose

Example: with thres_col=1.5, thres_row=1.5, thres_point=1.5,

dispatch phase:

Dispatch wait recv cost stats:
tensor([[148845, 148189, 148516, 148917, 149175, 148609, 149042, 148811, 148254,
         148008, 146934, 146877, 147090, 147280, 148317, 148802],
        [240242, 241228, 241062, 241474, 241554, 240883, 241086, 241024, 240456,
         239860, 239368, 239191, 239388, 239835, 240636, 241514],
        [ 45442,  45366,  44954,  45977,  45931,  45695,  46021,  45750,  45697,
          45162,  44936,  44789,  44868,  45008,  45422,  45949],
        [ 11140,  11426,  11561,  11026,  11676,  11420,  11483,  11832,  12081,
          11950,  11967,  11922,  11587,  11169,  11304,  11705],
        [ 83886,  84607,  84710,  84953,  83828,  84524,  84867,  84652,  84224,
          83438,  82855,  82966,  82970,  83433,  83871,  84772],
        [185506, 185759, 186452, 186777, 186820, 186829, 186591, 186585, 185800,
         185113, 184907, 184779, 185300, 185426, 186420, 186875],
        [  7645,   7618,   7595,   8124,   7528,   7554,   7579,   7601,   7886,
           7863,   7955,   7904,   7878,   7605,   7524,   7527],
        [ 21623,  21667,  21708,  22076,  21894,  21682,  21439,  21199,  22296,
          22129,  22211,  22162,  21832,  21554,  21535,  21750],
        [ 96494,  96721,  97053,  97367,  97448,  97107,  97435,  97041,  97332,
          96661,  96218,  96125,  96277,  96447,  97058,  97533],
        [119295, 119816, 120021, 120098, 120300, 119958, 120291, 120264, 120291,
         120341, 119125, 118995, 119060, 119290, 119858, 120357],
        [ 30058,  30362,  30790,  31232,  31321,  30845,  31011,  31000,  30801,
          30036,  30816,  29784,  29867,  30126,  30843,  31290],
        [ 71937,  72230,  72502,  72596,  72577,  72504,  72557,  72633,  72734,
          72443,  72361,  72867,  72308,  72199,  72428,  72626],
        [ 27953,  28087,  28163,  28825,  28389,  28169,  28187,  28117,  28279,
          27957,  27800,  27879,  28065,  27840,  28082,  28213],
        [235805, 236157, 236579, 237217, 237379, 236157, 236902, 237044, 236624,
         235669, 235336, 234708, 234761, 236550, 235884, 237156],
        [ 19989,  20316,  20501,  20995,  20905,  20428,  20690,  20461,  20383,
          19853,  19794,  19750,  19749,  20279,  20317,  20787],
        [ 36047,  35886,  36091,  36106,  36167,  36144,  36098,  36205,  36308,
          36053,  36103,  36243,  36171,  35978,  36202,  36319]],
       device='npu:0', dtype=torch.int32)

Calculated abnormal ranks:

[Diagnose Dispatch wait recv cost] 
abnormal_rows [[0, 148229.125, 1.7111639000711096], [1, 240550.0625, 2.7769210882803845], [5, 185996.1875, 2.14714862278825], [13, 236245.5, 2.7272290189546036]], 
abnormal_cols [], 
abnormal_points []

combine phase:

Combine send cost stats:
tensor([[ 17535,  67014, 501899, 394013, 380369, 421478, 683155, 592656, 425904,
         403497, 414773, 514238, 724636, 708598, 437623, 394191],
        [ 60848,  15988, 502524, 474103, 513227, 420470, 539840, 485127, 464411,
         415381, 365740, 450094, 651227, 522109, 400606, 363934],
        [491042, 483476,   9967,  56094, 442099, 495325, 497743, 463416, 457214,
         412039, 388844, 479295, 469587, 457530, 428783, 352615],
        [510904, 510450,  62957,  13052, 429710, 480017, 512519, 392768, 393602,
         374604, 375297, 538673, 549245, 416121, 360176, 321353],
        [399463, 617925, 543199, 454542,  12542,  65434, 614647, 424236, 398032,
         388118, 329475, 481818, 758660, 564984, 382908, 330476],
        [478183, 578249, 585834, 523876,  66545,  15937, 784692, 577432, 436999,
         393947, 363914, 445326, 726018, 710072, 436198, 403622],
        [464003, 609982, 501194, 402355, 306554, 321554,  15766,  64817, 459165,
         437715, 432248, 571087, 713795, 506062, 342844, 288210],
        [479972, 512734, 474235, 423101, 391073, 437962,  60894,  11336, 448688,
         469527, 456334, 475064, 512557, 463246, 395979, 348762],
        [527042, 592813, 461439, 375299, 362390, 466999, 593294, 442352,  13326,
          62114, 460688, 587821, 678208, 437098, 320981, 355646],
        [558229, 662661, 570484, 479456, 336138, 440521, 743352, 534235,  61249,
          13436, 430387, 579100, 821487, 718146, 411456, 386770],
        [435739, 613712, 583898, 478798, 315877, 361587, 721543, 519360, 429456,
         389244,  12301,  68793, 876324, 615263, 376605, 358820],
        [517460, 597047, 592513, 486437, 323178, 395890, 762980, 579128, 420911,
         393029,  67082,  26066, 868488, 581504, 382181, 332309],
        [553443, 484408, 526832, 489950, 411433, 440452, 500062, 495189, 511627,
         430218, 403097, 516341,  10905,  58856, 482921, 509737],
        [545272, 500621, 481087, 422905, 387628, 459312, 499742, 466025, 458888,
         391166, 419738, 543670,  60611,  12215, 436960, 484767],
        [560601, 660786, 570498, 412770, 318250, 437199, 721577, 472127, 372920,
         354698, 422344, 537366, 826942, 683831,  10575,  66856],
        [521747, 593666, 606340, 518684, 361110, 425254, 726548, 621850, 482865,
         432528, 374748, 476399, 739924, 756717,  71028,  17964]],
       device='npu:0', dtype=torch.int32)

Calculated abnormal ranks:

[Diagnose Combine send cost] 
abnormal_rows [], 
abnormal_cols [], 
abnormal_points [[0, 6, 683155, 1.5760364591299412], [0, 12, 724636, 1.671732997047645], [0, 13, 708598, 1.6347333809553586], [1, 12, 651227, 1.5023786624848157], [4, 12, 758660, 1.750226259170351], [5, 6, 784692, 1.810282002162894], [5, 12, 726018, 1.6749212667470803], [5, 13, 710072, 1.6381338943685044], [6, 12, 713795, 1.6467228437831185], [8, 12, 678208, 1.564623745524221], [9, 1, 662661, 1.5287568649040202], [9, 6, 743352, 1.7149107508064203], [9, 12, 821487, 1.895167952662687], [9, 13, 718146, 1.6567605872434963], [10, 6, 721543, 1.664597455672571], [10, 12, 876324, 2.0216767410186365], [11, 6, 762980, 1.760192485727196], [11, 12, 868488, 2.003599113403026], [14, 1, 660786, 1.524431245738723], [14, 6, 721577, 1.6646758935667683], [14, 12, 826942, 1.9077526206875919], [14, 13, 683831, 1.5775959890263362], [15, 6, 726548, 1.6761439750978042], [15, 12, 739924, 1.707002365473813], [15, 13, 756717, 1.745743764216659]]

Performance

  1. Diagnosis disabled:
[tuning] Dispatch (BF16) 114.91 GB/s (HCCS), avg_t: 4067.52 us
[tuning] Combine 100.52 GB/s (HCCS), avg_t: 4649.90 us
  2. Diagnosis enabled:
[tuning] Dispatch (BF16) 114.22 GB/s (HCCS), avg_t: 4091.92 us
[tuning] Combine 99.57 GB/s (HCCS), avg_t: 4693.97 us

After multiple tests, the performance loss during the dispatch and combine stages is less than 1%.
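The sub-1% claim can be checked directly from the avg_t numbers above:

```python
# Relative overhead derived from the avg_t values reported above (us).
dispatch_overhead = (4091.92 - 4067.52) / 4067.52
combine_overhead = (4693.97 - 4649.90) / 4649.90

print(f"dispatch: {dispatch_overhead:.2%}, combine: {combine_overhead:.2%}")
```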


@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @oagniqgnat, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the deep_ep library by integrating diagnostic capabilities into its core dispatch and combine operations. These new modules enable the collection of crucial performance metrics related to token communication waiting times, providing valuable insights for identifying and resolving bottlenecks in distributed deep learning models. The changes span across C++ and Python components, ensuring comprehensive support for performance analysis.

Highlights

  • Diagnostic Modules for Dispatch: Introduced an optional diagnostic module to the intranode_dispatch function, allowing it to record the time the current rank takes to receive all tokens from each peer rank during the dispatch phase. This data is stored in a torch.Tensor.
  • Diagnostic Modules for Combine: Added an optional diagnostic module to the intranode_combine function, which records the time the current rank takes to send all tokens to the other ranks during the combine phase. This information is also captured in a torch.Tensor.
  • C++ API and Kernel Integration: The C++ API (deep_ep.hpp, deep_ep.cpp) and underlying kernel operations (cam_moe_dispatch_normal, cam_moe_combine_normal) have been updated to accept and process these new diagnostic tensors, including modifications to their tiling functions and op_api definitions.
  • Python Binding and Testing: The Python bindings (buffer.py) for dispatch and combine now expose these diagnostic parameters. A new test case (test_intranode.py) has been added to validate the diagnostic functionality, including a diagnose_matrix utility to analyze the collected statistics.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces diagnostic modules to the dispatch and combine functions, allowing for performance monitoring by collecting cost statistics. The changes are propagated through the entire stack, from the Python API down to the C++ kernels. While the overall approach is sound and the necessary plumbing is in place, there are critical flaws in the core profiling logic within the kernels. Specifically, the implementation incorrectly accumulates timestamps instead of time durations, which will lead to incorrect metrics. Additionally, there are a few areas for improvement regarding code clarity, such as a magic number for time conversion and misleading variable names in a utility function. Addressing these issues will ensure the new diagnostic feature provides accurate and meaningful data.

@oagniqgnat oagniqgnat force-pushed the deepxtrace_npu branch 5 times, most recently from 75a1731 to ca5d42f Compare September 17, 2025 01:25
@oagniqgnat oagniqgnat changed the title Add diagnostic modules to dispatch and combine [Feature] Add diagnostic modules to dispatch and combine Sep 17, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a valuable diagnostic feature to detect communication anomalies in dispatch and combine operations by measuring wait times within the kernels. The changes are comprehensive, spanning C++ kernels, op definitions, and Python bindings, and include a new Python utility for analyzing the collected data. My review focuses on improving the accuracy of the time measurement and the efficiency of the Python utility function.

@Yael-X Yael-X merged commit ff12c20 into sgl-project:main Sep 19, 2025
4 checks passed