
Conversation


@KrishnanPrash KrishnanPrash commented Jul 9, 2025

Overview:

  • Example of Llama 4 + Eagle 3 using speculative decoding (Multi-Token Prediction)

Summary by CodeRabbit

  • New Features
    • Added configuration files for LLaMA4 Eagle model deployment, including settings for aggregation, decoding, and prefill engines.
    • Introduced YAML files for both aggregated and disaggregated deployment modes, supporting resource allocation, batch sizes, token limits, and speculative decoding.
    • Provided options for model serving endpoints, GPU allocation, and advanced features such as CUDA graph optimizations and FP8/FP4 precision support.


copy-pr-bot bot commented Jul 9, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@KrishnanPrash
Contributor Author

TODO: Add commands to run example to examples/tensorrt_llm/README.md


coderabbitai bot commented Jul 9, 2025

Walkthrough

Five new YAML configuration files have been added to the TensorRT-LLM examples directory for the LLaMA4 Eagle model. These files define engine parameters, runtime settings, and deployment configurations for aggregation, decoding, prefill, and both aggregated and disaggregated model serving setups, including resource allocation and speculative decoding options.

Changes

  • .../agg_config.yaml, .../decode_config.yaml, .../prefill_config.yaml: Added engine configuration files specifying backend, parallelism, batch sizes, token limits, key-value cache settings, speculative decoding, CUDA graph options, and logging for LLaMA4 Eagle.
  • .../mtp_agg.yaml: Added aggregated deployment configuration with Frontend and TensorRTLLMWorker sections, model serving parameters, and resource allocation.
  • .../mtp_disagg.yaml: Added disaggregated deployment configuration with Frontend, TensorRTLLMWorker, and TensorRTLLMPrefillWorker sections, supporting modular and distributed serving.
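
For orientation, here is a minimal sketch of the overall shape a disaggregated deployment file like mtp_disagg.yaml might take, assembled only from the section and field names mentioned in this PR (Frontend, TensorRTLLMWorker, TensorRTLLMPrefillWorker, ServiceArgs, per-worker resources); the actual keys and values in the file may differ:

  Frontend:
    # model serving endpoint settings (omitted in this sketch)
  TensorRTLLMWorker:
    # decode-side engine; resources are specified per worker, not as totals
    ServiceArgs:
      workers: 1
      resources:
        gpu: '8'
  TensorRTLLMPrefillWorker:
    # prefill-side engine, present only in the disaggregated mode
    ServiceArgs:
      workers: 1
      resources:
        gpu: '8'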

Poem

A warren of configs, fresh and new,
For LLaMA4 Eagle, the models grew.
Prefill, decode, and aggregate too,
With YAMLs to guide what the engines do.
Rabbits rejoice—deployment is neat!
🐇✨ Now LLM servings are hard to beat!


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (3)
examples/tensorrt_llm/configs/llama4/eagle/engine_configs/agg_config.yaml (1)

50-50: kv_cache_dtype: fp8 is experimental – guard with capability check
FP8 cache is currently limited to Hopper/Blackwell class GPUs and recent TRT-LLM nightly builds. Add an inline comment or pre-run capability probe to avoid silent precision down-casting on older cards.
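
If the example keeps fp8 here, the guard can be as simple as an inline comment; a minimal sketch, reusing only the kv_cache_dtype key quoted in this comment:

  # fp8 KV cache is experimental: it requires Hopper/Blackwell-class GPUs (SM90+)
  # and a recent TRT-LLM build. On older cards, switch to a supported dtype
  # (or add a pre-run capability probe) to avoid silent precision down-casting.
  kv_cache_dtype: fp8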

examples/tensorrt_llm/configs/llama4/eagle/engine_configs/prefill_config.yaml (1)

28-33: Inconsistent KV-cache memory policy across engines
Prefill leaves 75 % of memory free, whereas decode/agg leave 15–70 %. Such disparity may starve the prefill worker when sharing a node with decode workers. Align the fractions or document the reasoning to avoid debugging surprises.
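
A sketch of what aligning the fractions might look like, using the free_gpu_memory_fraction key referenced elsewhere in this review; the nesting under kv_cache_config and the exact value are assumptions, not taken from the files:

  # shared policy for prefill_config.yaml and decode_config.yaml when the
  # workers share a node; document any intentional divergence instead
  kv_cache_config:
    free_gpu_memory_fraction: 0.30   # illustrative value only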

examples/tensorrt_llm/configs/llama4/eagle/engine_configs/decode_config.yaml (1)

26-31: max_num_tokens: 512 sits exactly at the formula boundary – leave head-room
max_num_tokens >= max_batch(256) × (layers+1)(2) ⇒ 512. Any future bump of batch-sizes or num_nextn_predict_layers will break graph capture. Recommend 640 or 768 to stay safe.
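
A sketch of the suggested head-room, reusing the keys named in this comment (values illustrative):

  max_batch_size: 256
  # keep comfortably above max_batch_size * (num_nextn_predict_layers + 1) = 512
  # so future bumps to batch size or predict layers don't break graph capture
  max_num_tokens: 768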

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6835dd7 and 7b4c32a.

📒 Files selected for processing (5)
  • examples/tensorrt_llm/configs/llama4/eagle/engine_configs/agg_config.yaml (1 hunks)
  • examples/tensorrt_llm/configs/llama4/eagle/engine_configs/decode_config.yaml (1 hunks)
  • examples/tensorrt_llm/configs/llama4/eagle/engine_configs/prefill_config.yaml (1 hunks)
  • examples/tensorrt_llm/configs/llama4/eagle/mtp_agg.yaml (1 hunks)
  • examples/tensorrt_llm/configs/llama4/eagle/mtp_disagg.yaml (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: tanmayv25
PR: ai-dynamo/dynamo#1391
File: examples/tensorrt_llm/common/base_engine.py:171-176
Timestamp: 2025-06-05T01:10:51.865Z
Learning: In examples/tensorrt_llm/common/base_engine.py, the _init_engine method is called only once during initialization, so direct mutation of the _default_sampling_params object during setup is safe and appropriate.
examples/tensorrt_llm/configs/llama4/eagle/mtp_agg.yaml (1)
Learnt from: nnshah1
PR: ai-dynamo/dynamo#1444
File: tests/fault_tolerance/configs/agg_tp_1_dp_8.yaml:31-38
Timestamp: 2025-07-01T15:33:53.262Z
Learning: In fault tolerance test configurations, the `resources` section under `ServiceArgs` specifies resources per individual worker, not total resources for all workers. So `workers: 8` with `gpu: '1'` means 8 workers × 1 GPU each = 8 GPUs total.
examples/tensorrt_llm/configs/llama4/eagle/mtp_disagg.yaml (1)
Learnt from: nnshah1
PR: ai-dynamo/dynamo#1444
File: tests/fault_tolerance/configs/agg_tp_1_dp_8.yaml:31-38
Timestamp: 2025-07-01T15:33:53.262Z
Learning: In fault tolerance test configurations, the `resources` section under `ServiceArgs` specifies resources per individual worker, not total resources for all workers. So `workers: 8` with `gpu: '1'` means 8 workers × 1 GPU each = 8 GPUs total.
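
To make the per-worker semantics from these learnings concrete, a small sketch of the pattern they describe (8 workers at 1 GPU each = 8 GPUs total):

  ServiceArgs:
    workers: 8
    resources:
      gpu: '1'   # per individual worker, not a total across all workers
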
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build and Test - vllm
🔇 Additional comments (5)
examples/tensorrt_llm/configs/llama4/eagle/engine_configs/agg_config.yaml (2)

30-30: free_gpu_memory_fraction looks aggressive for 8 K-token, 256-batch workloads
With max_num_tokens: 8448, batch size 256, and kv_cache_dtype: fp8, leaving 30 % of the GPU memory unused may still push large sequences into OOM on 24 GB cards. Please double-check with the target HW and consider lowering the fraction (e.g. 0.15–0.20) or tightening max_batch_size.
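
A sketch of the first mitigation suggested above (lowering the fraction); the alternative is to tighten max_batch_size instead. The nesting and exact value are assumptions:

  kv_cache_config:
    free_gpu_memory_fraction: 0.20   # within the 0.15-0.20 range suggested above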


32-36: Speculative MTP enabled on the aggregate engine – validate downstream decode logic
decoding_type: MTP + num_nextn_predict_layers: 1 assumes the aggregated engine will internally handle MTP. In most TRT-LLM deployments, MTP is applied only on the decode stage, while prefill stays vanilla. Verify that the runtime really supports MTP in aggregated mode; otherwise move this block to the decode-only config.
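
If aggregated-mode MTP turns out to be unsupported, the block would move to the decode-only config; a sketch using the two fields quoted in this comment (the enclosing speculative_config key is an assumption):

  # decode_config.yaml
  speculative_config:
    decoding_type: MTP
    num_nextn_predict_layers: 1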

examples/tensorrt_llm/configs/llama4/eagle/engine_configs/prefill_config.yaml (1)

34-38: Confirm that MTP on the prefill stage is intended
Prefill traditionally computes only ISL; speculative tokens are generated during decode. Enabling MTP here will allocate extra scratch buffers without benefit. Consider removing the block unless you have a custom prefill-MTP kernel.
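
A sketch of the suggested removal, shown as the block commented out of the prefill config (field names from the comments above; the enclosing key is an assumption):

  # prefill_config.yaml
  # speculative_config:             # removed: prefill computes ISL only;
  #   decoding_type: MTP            # speculative tokens are generated at decode
  #   num_nextn_predict_layers: 1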

examples/tensorrt_llm/configs/llama4/eagle/engine_configs/decode_config.yaml (1)

32-34: Extremely high free_gpu_memory_fraction (0.85)
Only 15 % of memory is kept for the KV cache, while decode is usually the most memory-hungry phase. Please double-check; values >0.5 commonly lead to runtime allocation failures.
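
A sketch with a value at the 0.5 threshold mentioned above (nesting assumed, value illustrative):

  # decode_config.yaml
  kv_cache_config:
    free_gpu_memory_fraction: 0.50   # the comment reports allocation failures above 0.5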

examples/tensorrt_llm/configs/llama4/eagle/mtp_agg.yaml (1)

29-31: GPU allocation doesn’t match TP×EP world size
tensor_parallel_size:4 and moe_expert_parallel_size:4 imply 16 ranks, yet only 8 GPUs are allocated (workers:1, gpu: 8). Either halve EP/TP or request 16 GPUs to avoid NCCL topology errors.
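
A sketch of the second fix offered above, requesting a GPU count that matches the 16 ranks implied by TP 4 x EP 4; the alternative is to halve moe_expert_parallel_size to 2 and keep 8 GPUs. Keys come from this comment and the quoted learnings; resources remain per worker:

  TensorRTLLMWorker:
    ServiceArgs:
      workers: 1
      resources:
        gpu: '16'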

@KrishnanPrash KrishnanPrash changed the title TRTLLM Example of Llama4+Eagle3 (MTP) docs: TRTLLM Example of Llama4+Eagle3 (MTP) Jul 9, 2025
@github-actions github-actions bot added the docs label Jul 9, 2025

@Tabrizian Tabrizian left a comment


Please update the TRT-LLM commit as well.

KrishnanPrash and others added 3 commits July 10, 2025 09:04
Co-authored-by: Iman Tabrizian <[email protected]>
Signed-off-by: KrishnanPrash <[email protected]>
Co-authored-by: Iman Tabrizian <[email protected]>
Signed-off-by: KrishnanPrash <[email protected]>
Co-authored-by: Iman Tabrizian <[email protected]>
Signed-off-by: KrishnanPrash <[email protected]>
richardhuo-nv previously approved these changes Jul 11, 2025
@richardhuo-nv richardhuo-nv dismissed their stale review July 11, 2025 23:30

Accidentally clicked approve; please resolve comments.


richardhuo-nv commented Jul 11, 2025

We should add how to run these configs in the README.

You can reference the guide for MTP.

@KrishnanPrash KrishnanPrash changed the title docs: TRTLLM Example of Llama4+Eagle3 (MTP) docs: TRTLLM Example of Llama4+Eagle3 (Speculative Decoding) Jul 12, 2025
@KrishnanPrash KrishnanPrash merged commit ef59ac8 into main Jul 14, 2025
7 checks passed
@KrishnanPrash KrishnanPrash deleted the kprashanth/trtllm-example branch July 14, 2025 23:16
ZichengMa pushed a commit that referenced this pull request Jul 17, 2025
ZichengMa added a commit that referenced this pull request Jul 18, 2025
ZichengMa added a commit that referenced this pull request Jul 21, 2025
ZichengMa added a commit that referenced this pull request Jul 21, 2025