
Commit ef59ac8

docs: TRTLLM Example of Llama4+Eagle3 (Speculative Decoding) (#1828)
Signed-off-by: KrishnanPrash <[email protected]> Co-authored-by: Iman Tabrizian <[email protected]>
1 parent 053041e commit ef59ac8

File tree: 6 files changed, +260 −0 lines changed

examples/tensorrt_llm/README.md

Lines changed: 48 additions & 0 deletions
@@ -350,3 +350,51 @@ unset TRTLLM_USE_NIXL_KVCACHE
export TRTLLM_USE_UCX_KVCACHE=1
```

### Example architectures for Llama 4 Maverick Instruct + Eagle Speculative Decoding

#### Notes

* Testing for this example used:
  * One GB200x4 node for aggregated serving
  * Two GB200x4 nodes for disaggregated serving
* To run Eagle Speculative Decoding with Llama 4, ensure the container meets the following criteria:
  * Built with a version of TensorRT-LLM based on the [0.21 release](https://github.com/NVIDIA/TensorRT-LLM/tree/release/0.21)
  * Includes the changes from [this TensorRT-LLM PR](https://github.com/NVIDIA/TensorRT-LLM/pull/5975)
* If you need to download model weights from Hugging Face, run `huggingface-cli login` first and make sure you have access to the necessary gated models.

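The gated weights can also be fetched ahead of time so the first launch does not block on downloads. A minimal sketch, assuming `huggingface-cli` is installed and that you authenticate via the `HF_TOKEN` environment variable rather than the interactive login prompt (the model IDs are the ones used in the configs below):

```shell
# Assumption: HF_TOKEN replaces the interactive `huggingface-cli login` prompt.
export HF_TOKEN="<your-hf-access-token>"
BASE_MODEL="nvidia/Llama-4-Maverick-17B-128E-Instruct-FP8"
DRAFT_MODEL="nvidia/Llama-4-Maverick-17B-128E-Eagle3"
# Pre-download the base model and the Eagle3 draft weights into the HF cache.
if command -v huggingface-cli >/dev/null 2>&1; then
  huggingface-cli download "$BASE_MODEL"
  huggingface-cli download "$DRAFT_MODEL"
fi
```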
##### Aggregated Serving

```bash
cd /workspace/examples/tensorrt_llm
dynamo serve graphs.agg:Frontend -f configs/llama4/eagle/eagle_agg.yaml
```

* Known issue: in aggregated serving, setting `max_num_tokens` to higher values (e.g. `max_num_tokens: 8448`) can lead to out-of-memory (OOM) errors. This is being investigated by the TRTLLM team.

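Once the graph is up, a quick smoke test is an OpenAI-style chat completion against the frontend. This is a hedged sketch: port 8000 comes from the Frontend config, while the `/v1/chat/completions` route is an assumption about the frontend's OpenAI-compatible API, so adjust if your deployment differs:

```shell
# Assumption: the frontend exposes an OpenAI-compatible API on port 8000
# (the port configured for Frontend in eagle_agg.yaml).
PAYLOAD='{
  "model": "nvidia/Llama-4-Maverick-17B-128E-Instruct-FP8",
  "messages": [{"role": "user", "content": "Say hello in one sentence."}],
  "max_tokens": 32
}'
if command -v curl >/dev/null 2>&1; then
  curl -s http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD" || echo "frontend not reachable yet"
fi
```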
##### Disaggregated Serving

###### Head Node

Start nats/etcd:

```bash
nats-server -js &
etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:2379 --data-dir /tmp/etcd &
```

Launch the graph of Frontend and TensorRTLLMWorker (decode) on the head node:

```bash
cd /workspace/examples/tensorrt_llm
dynamo serve graphs.agg:Frontend -f configs/llama4/eagle/eagle_disagg.yaml &
```

387+
388+
###### Worker Node(s)
389+
Set environment variables pointing at the etcd/nats endpoints on the head node.
390+
```bash
391+
export HEAD_NODE_IP="<head-node-ip>"
392+
export NATS_SERVER="nats://${HEAD_NODE_IP}:4222"
393+
export ETCD_ENDPOINTS="${HEAD_NODE_IP}:2379"
394+
```
395+
396+
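Before starting workers, it can save debugging time to confirm the head-node services are reachable from this node. A sketch assuming `curl` is available and using a hypothetical head-node address; etcd answers `GET /health` when it is up:

```shell
# Hypothetical head-node address for illustration; substitute your own.
export HEAD_NODE_IP="10.0.0.1"
export NATS_SERVER="nats://${HEAD_NODE_IP}:4222"
export ETCD_ENDPOINTS="${HEAD_NODE_IP}:2379"
# etcd responds on /health (e.g. {"health":"true"}) when it is serving.
if command -v curl >/dev/null 2>&1; then
  curl -s --max-time 3 "http://${ETCD_ENDPOINTS}/health" || echo "etcd not reachable"
fi
```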
Deploy a prefill worker:

```bash
cd /workspace/examples/tensorrt_llm
dynamo serve components.prefill_worker:TensorRTLLMPrefillWorker -f configs/llama4/eagle/eagle_disagg.yaml --service-name TensorRTLLMPrefillWorker &
```
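After the prefill worker comes up, it should register itself with etcd on the head node, and listing the keys there is a quick way to confirm. A sketch assuming `etcdctl` (v3 API) is installed on the worker node; the exact key layout Dynamo uses is not shown here, so simply scan the listing for the worker name:

```shell
# Assumption: etcdctl is installed; HEAD_NODE_IP is set as in the step above.
ENDPOINT="http://${HEAD_NODE_IP:-localhost}:2379"
if command -v etcdctl >/dev/null 2>&1; then
  # List every key so the newly registered TensorRTLLMPrefillWorker shows up.
  ETCDCTL_API=3 etcdctl --endpoints "$ENDPOINT" get "" --prefix --keys-only
fi
```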
Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Frontend:
  # This is the client-facing model name; you can set this to anything you'd like.
  served_model_name: "nvidia/Llama-4-Maverick-17B-128E-Instruct-FP8"
  endpoint: dynamo.TensorRTLLMWorker.generate
  port: 8000
  router: round-robin

TensorRTLLMWorker:
  served_model_name: "nvidia/Llama-4-Maverick-17B-128E-Instruct-FP8"
  model-path: "nvidia/Llama-4-Maverick-17B-128E-Instruct-FP8"
  extra-engine-args: "configs/llama4/eagle/engine_configs/agg_config.yaml"
  router: round-robin
  ServiceArgs:
    workers: 1
    resources:
      gpu: 4
Lines changed: 44 additions & 0 deletions
@@ -0,0 +1,44 @@
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Frontend:
  served_model_name: "nvidia/Llama-4-Maverick-17B-128E-Instruct-FP8"
  endpoint: dynamo.TensorRTLLMWorker.generate
  port: 8000
  router: round-robin

TensorRTLLMWorker:
  served_model_name: "nvidia/Llama-4-Maverick-17B-128E-Instruct-FP8"
  model-path: "nvidia/Llama-4-Maverick-17B-128E-Instruct-FP8"
  # Path to a YAML file containing additional keyword arguments to pass to the TRTLLM engine.
  # The fields in `extra-engine-args` take precedence over the TRTLLM engine fields above.
  extra-engine-args: "configs/llama4/eagle/engine_configs/decode_config.yaml"
  router: round-robin
  enable-disagg: true
  ServiceArgs:
    workers: 1
    resources:
      gpu: 4

TensorRTLLMPrefillWorker:
  model-path: "nvidia/Llama-4-Maverick-17B-128E-Instruct-FP8"
  # Path to a YAML file containing additional keyword arguments to pass to the TRTLLM engine.
  # The fields in `extra-engine-args` take precedence over the TRTLLM engine fields above.
  extra-engine-args: "configs/llama4/eagle/engine_configs/prefill_config.yaml"
  router: round-robin
  ServiceArgs:
    workers: 1
    resources:
      gpu: 4
Lines changed: 51 additions & 0 deletions
@@ -0,0 +1,51 @@
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

backend: pytorch
tensor_parallel_size: 4
moe_expert_parallel_size: 4
max_batch_size: 256
# Setting max_num_tokens to higher values can cause OOM issues;
# to be investigated in the future with the TRTLLM team.
max_num_tokens: 1024
max_seq_len: 8448
autotuner_enabled: false
disable_overlap_scheduler: true

# Enable Speculative Decoding in the model engine
speculative_config:
  decoding_type: Eagle
  max_draft_len: 1
  pytorch_weights_path: nvidia/Llama-4-Maverick-17B-128E-Eagle3
  eagle3_one_model: False

kv_cache_config:
  free_gpu_memory_fraction: 0.5
  enable_block_reuse: false

use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
- 64
- 128
- 256
print_iter_log: true
kv_cache_dtype: fp8
Lines changed: 50 additions & 0 deletions
@@ -0,0 +1,50 @@
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

backend: pytorch
tensor_parallel_size: 4
moe_expert_parallel_size: 4
max_batch_size: 256
max_num_tokens: 512
# 8704 = 8192 ISL + 512 OSL
max_seq_len: 8704
disable_overlap_scheduler: true
autotuner_enabled: false

# Enable Speculative Decoding in the model engine
speculative_config:
  decoding_type: Eagle
  max_draft_len: 1
  pytorch_weights_path: nvidia/Llama-4-Maverick-17B-128E-Eagle3
  eagle3_one_model: False

kv_cache_config:
  free_gpu_memory_fraction: 0.5
  enable_block_reuse: false

use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
- 64
- 128
- 256
print_iter_log: true
kv_cache_dtype: fp8
Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

backend: pytorch
tensor_parallel_size: 4
moe_expert_parallel_size: 4
max_batch_size: 1
max_num_tokens: 8192
max_seq_len: 8192
print_iter_log: true
kv_cache_dtype: fp8
disable_overlap_scheduler: true
autotuner_enabled: false

# Enable Speculative Decoding in the model engine
speculative_config:
  decoding_type: Eagle
  max_draft_len: 1
  pytorch_weights_path: nvidia/Llama-4-Maverick-17B-128E-Eagle3
  eagle3_one_model: False

kv_cache_config:
  free_gpu_memory_fraction: 0.5
  enable_block_reuse: false
