
Conversation

@jhaotingc
Contributor

@jhaotingc jhaotingc commented Jul 24, 2025

Overview:

Tested functionality on EOS (8xH100 cluster).

Details:

Where should the reviewer start?

Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)

  • closes GitHub issue: #xxx

Summary by CodeRabbit

  • New Features

    • Added new configuration options for the Eagle One Model in the Llama 4 backend, including settings for aggregation, decoding, and prefill engines with support for speculative decoding and advanced parallelism.
  • Documentation

    • Introduced a new section detailing the Eagle3-one-model setup, including configuration requirements and deployment notes.
    • Provided an example request and response for using the Llama 4 Maverick Instruct model with Eagle speculative decoding.

@copy-pr-bot

copy-pr-bot bot commented Jul 24, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@github-actions

👋 Hi jhaotingc! Thank you for contributing to ai-dynamo/dynamo.

Just a reminder: The NVIDIA Test Github Validation CI runs an essential subset of the testing framework to quickly catch errors. Your PR reviewers may elect to test the changes comprehensively before approving your changes.

🚀

@github-actions github-actions bot added the external-contribution (Pull request is from an external contributor) label Jul 24, 2025
@coderabbitai
Contributor

coderabbitai bot commented Jul 24, 2025

Walkthrough

This update introduces three new YAML configuration files for the Eagle One Model under the Llama4 backend in the TRTLLM engine, covering prefill, decode, and aggregate scenarios with Eagle3 one-model speculative decoding. Additionally, documentation is expanded to describe these configurations and provide example usage instructions.
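For orientation, the sketch below shows the rough shape such a TRT-LLM engine config takes. Only `eagle3_one_model` and `pytorch_weights_path` are confirmed by the review comments further down; every other key and value here is an illustrative assumption, not a copy of the files added in this PR.

```yaml
# Hypothetical Eagle3 one-model engine config (aggregated case); values are assumptions.
backend: pytorch                 # assumed backend value
tensor_parallel_size: 8          # sized for a single 8xH100 node
max_num_tokens: 2048             # per-iteration token budget quoted in the review below
max_seq_len: 8704                # 8192 input + 512 output tokens, per the review below
kv_cache_config:
  free_gpu_memory_fraction: 0.5  # placeholder fraction
speculative_config:
  decoding_type: Eagle           # assumed key/value for the decoding mode
  max_draft_len: 3               # assumed draft length
  pytorch_weights_path: nvidia/Llama-4-Maverick-17B-128E-Eagle3
  eagle3_one_model: true
```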

Changes

| File(s) | Change Summary |
| --- | --- |
| `.../engine_configs/llama4/eagle_one_model/eagle_agg.yml`, `.../eagle_decode.yaml`, `.../eagle_prefill.yaml` | Added new YAML configs for Eagle One Model with Eagle3 one-model speculative decoding, specifying backend, parallelism, batch/token limits, KV cache, CUDA graph, and logging settings. |
| `.../llama4_plus_eagle.md` | Extended documentation with Eagle3-one-model section, requirements, usage notes, and example request/response. |

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Possibly related PRs

Poem

In YAML fields the configs grow,
For Eagle3, the specs now show.
Prefill, decode, and aggregate too,
With docs to guide what you should do.
The rabbit hops with cheerful pride—
Eagle One Model, now supplied! 🐇✨

Note

⚡️ Unit Test Generation is now available in beta!

Learn more here, or try it out under "Finishing Touches" below.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🔭 Outside diff range comments (1)
components/backends/trtllm/llama4_plus_eagle.md (1)

65-78: Update paths to new one-model configs

The examples still point to .../eagle/eagle_*.yaml while the new files live under eagle_one_model/ (and one has a .yml extension). Using these commands verbatim will raise a file-not-found error.

Please sync the doc paths with the actual filenames.

🧹 Nitpick comments (6)
components/backends/trtllm/engine_configs/llama4/eagle_one_model/eagle_prefill.yaml (2)

25-31: Unify boolean casing for eagle3_one_model across configs

eagle3_one_model is set to True here, while the same flag is lowercase (true) in eagle_agg.yml. YAML parsers usually accept both, but the mixed style hurts readability and can trip up simple grep/templating tools.

-  eagle3_one_model: True
+  eagle3_one_model: true

1-35: Extension inconsistency may confuse users

This file uses .yaml, while eagle_agg.yml uses .yml. Pick one extension for the whole tree to avoid broken glob patterns in deployment scripts.

components/backends/trtllm/engine_configs/llama4/eagle_one_model/eagle_agg.yml (1)

1-41: File-extension differs from sibling configs

This file is .yml; the other two are .yaml. Align for consistency (rename to .yaml or vice-versa).

components/backends/trtllm/engine_configs/llama4/eagle_one_model/eagle_decode.yaml (1)

25-31: Match boolean style with other configs

Same flag/style issue as pre-fill config.

-  eagle3_one_model: True
+  eagle3_one_model: true
components/backends/trtllm/llama4_plus_eagle.md (2)

37-41: Fix typos & markdown-lint warnings

  • “congis” → “configs”
  • Bullet list uses * but rest of doc uses -
  • Grammar: “may got ran” → “might be run”
-* The congis in `engine_configs/llama4/eagle_one_model` are tested with 8xH100 cluster. Be sure to change the `NUM_GPUS_PER_NODE` accordingly or change TP/EP size in config. 1 8xH100 node for aggregated .yml file, 2 8xH100 for prefill/decode .yml file.
-* The current `./multinode/start_frontend_services.sh` may got ran `NUM_GPUS_PER_NODE` times depending on how srun/mpi is launched, beware that the frontend service only needs to be ran once.
+- The configs in `engine_configs/llama4/eagle_one_model` were validated on an 8×H100 cluster.  Adjust `NUM_GPUS_PER_NODE` or the TP/EP sizes as needed (1 node for aggregated, 2 nodes for prefill/decode).
+- The current `./multinode/start_frontend_services.sh` might be run `NUM_GPUS_PER_NODE` times depending on how *srun* / MPI is launched; the frontend service only needs to start once.

86-98: Add language tag to fenced code block

Markdown-lint (MD040) flags the block; adding bash improves rendering and syntax highlighting.

-```
+```bash
 ...
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 13d3cc1 and 8d19829.

📒 Files selected for processing (4)
  • components/backends/trtllm/engine_configs/llama4/eagle_one_model/eagle_agg.yml (1 hunks)
  • components/backends/trtllm/engine_configs/llama4/eagle_one_model/eagle_decode.yaml (1 hunks)
  • components/backends/trtllm/engine_configs/llama4/eagle_one_model/eagle_prefill.yaml (1 hunks)
  • components/backends/trtllm/llama4_plus_eagle.md (2 hunks)
🧰 Additional context used
🧠 Learnings (4)
components/backends/trtllm/engine_configs/llama4/eagle_one_model/eagle_prefill.yaml (1)

Learnt from: ptarasiewiczNV
PR: #2027
File: container/deps/vllm/install_vllm.sh:0-0
Timestamp: 2025-07-22T10:22:28.972Z
Learning: The --torch-backend=auto flag works with vLLM installations via uv pip install, even though it's not a standard pip option. This flag is processed by vLLM's build system during installation to automatically match PyTorch distribution with container CUDA versions.

components/backends/trtllm/engine_configs/llama4/eagle_one_model/eagle_agg.yml (2)

Learnt from: ptarasiewiczNV
PR: #2027
File: container/deps/vllm/install_vllm.sh:0-0
Timestamp: 2025-07-22T10:22:28.972Z
Learning: The --torch-backend=auto flag works with vLLM installations via uv pip install, even though it's not a standard pip option. This flag is processed by vLLM's build system during installation to automatically match PyTorch distribution with container CUDA versions.

Learnt from: tanmayv25
PR: #1391
File: examples/tensorrt_llm/common/base_engine.py:171-176
Timestamp: 2025-06-05T01:10:51.865Z
Learning: In examples/tensorrt_llm/common/base_engine.py, the _init_engine method is called only once during initialization, so direct mutation of the _default_sampling_params object during setup is safe and appropriate.

components/backends/trtllm/engine_configs/llama4/eagle_one_model/eagle_decode.yaml (1)

Learnt from: ptarasiewiczNV
PR: #2027
File: container/deps/vllm/install_vllm.sh:0-0
Timestamp: 2025-07-22T10:22:28.972Z
Learning: The --torch-backend=auto flag works with vLLM installations via uv pip install, even though it's not a standard pip option. This flag is processed by vLLM's build system during installation to automatically match PyTorch distribution with container CUDA versions.

components/backends/trtllm/llama4_plus_eagle.md (2)

Learnt from: tanmayv25
PR: #1391
File: examples/tensorrt_llm/common/base_engine.py:171-176
Timestamp: 2025-06-05T01:10:51.865Z
Learning: In examples/tensorrt_llm/common/base_engine.py, the _init_engine method is called only once during initialization, so direct mutation of the _default_sampling_params object during setup is safe and appropriate.

Learnt from: ptarasiewiczNV
PR: #2027
File: container/deps/vllm/install_vllm.sh:0-0
Timestamp: 2025-07-22T10:22:28.972Z
Learning: The --torch-backend=auto flag works with vLLM installations via uv pip install, even though it's not a standard pip option. This flag is processed by vLLM's build system during installation to automatically match PyTorch distribution with container CUDA versions.

🪛 LanguageTool
components/backends/trtllm/llama4_plus_eagle.md

[grammar] ~39-~39: Ensure spelling is correct
Context: ...TensorRT-LLM/commits/v1.0.0rc2/). * The congis in `engine_configs/llama4/eagle_one_mod...

(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)

🪛 markdownlint-cli2 (0.17.2)
components/backends/trtllm/llama4_plus_eagle.md

38-38: Unordered list style
Expected: dash; Actual: asterisk

(MD004, ul-style)


39-39: Unordered list style
Expected: dash; Actual: asterisk

(MD004, ul-style)


40-40: Unordered list style
Expected: dash; Actual: asterisk

(MD004, ul-style)


86-86: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: pre-merge-rust (lib/runtime/examples)
  • GitHub Check: pre-merge-rust (.)
  • GitHub Check: Build and Test - vllm
  • GitHub Check: pre-merge-rust (lib/bindings/python)
🔇 Additional comments (3)
components/backends/trtllm/engine_configs/llama4/eagle_one_model/eagle_prefill.yaml (1)

16-24: Consider adding cuda_graph_config for parity

The pre-fill engine is missing a cuda_graph_config block present in the decode/agg configs. If this omission is intentional (e.g., graphs offer no benefit for batch-size 1), leave a short comment so future maintainers don’t assume it was forgotten.

components/backends/trtllm/engine_configs/llama4/eagle_one_model/eagle_agg.yml (1)

25-31: Path & weight availability check

pytorch_weights_path: nvidia/Llama-4-Maverick-17B-128E-Eagle3 assumes the model is public and the runtime has HF creds. If the weight is gated/private, add a comment or doc note; otherwise deployment will fail silently.
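A sketch of how that assumption could be surfaced directly in the config (the comment text and the `HF_TOKEN` workflow are suggestions, not part of this PR):

```yaml
speculative_config:
  # Weights are fetched from Hugging Face at engine start-up. If the repo is gated or
  # private, export HF_TOKEN (or pre-download the checkpoint) before launching;
  # otherwise initialization fails when the weights cannot be pulled.
  pytorch_weights_path: nvidia/Llama-4-Maverick-17B-128E-Eagle3
  eagle3_one_model: true
```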

components/backends/trtllm/engine_configs/llama4/eagle_one_model/eagle_decode.yaml (1)

19-22: Validate max_seq_len comment

The inline comment says 8704 = 8192 ISL + 512 OSL; confirm the serving stack actually supports 512 OSL tokens when max_num_tokens is capped at 2048. If not, adjust max_seq_len or add an explanatory comment.
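For reference, the relationship the comment asks to validate, using only the numbers quoted above; whether 512 output tokens are actually reachable under the 2048-token cap depends on how the scheduler budgets prompt and draft tokens (e.g. whether chunked prefill is enabled):

```yaml
# Numbers quoted in the review comment above; the check requested:
max_seq_len: 8704       # documented as 8192 ISL + 512 OSL
max_num_tokens: 2048    # per-iteration cap; an 8192-token prompt only fits if it can be
                        # chunked, so confirm the OSL headroom is really usable in practice
```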

@jhaotingc jhaotingc force-pushed the add_eagle3_one_model_example branch from 8d19829 to 51936e7 on July 26, 2025 00:21
@jhaotingc jhaotingc changed the title from "add Llama4 eagle3 one model example and configs" to "docs: add Llama4 eagle3 one model example and configs" Jul 28, 2025
@github-actions github-actions bot added the docs label Jul 28, 2025
@jhaotingc jhaotingc force-pushed the add_eagle3_one_model_example branch from 51936e7 to 2aab137 on July 28, 2025 00:08
Signed-off-by: Jhao-Ting Chen <[email protected]>
Signed-off-by: Jhao-Ting Chen <[email protected]>
@jhaotingc jhaotingc force-pushed the add_eagle3_one_model_example branch from 2aab137 to 0942166 on July 28, 2025 17:03
@richardhuo-nv richardhuo-nv merged commit 708d7c3 into ai-dynamo:main Jul 28, 2025
10 checks passed
@jhaotingc jhaotingc deleted the add_eagle3_one_model_example branch July 28, 2025 20:36

Labels

docs, external-contribution (Pull request is from an external contributor), size/L

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants