Add support for HuggingFace and NeMo LoRA adapters with KV head configs #6275

Conversation
Signed-off-by: Venky Ganesh <[email protected]>
Walkthrough

This update introduces robust support for both HuggingFace and NeMo LoRA adapter checkpoints, including per-layer and uniform key-value (KV) head configurations, improved error handling, and comprehensive test coverage. It enhances loader logic, model configuration, and test utilities, and adds validation and warnings for missing or inconsistent LoRA matrices.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Executor
    participant LoraManager
    participant Loader as Loader (HF/NeMo)
    participant Model
    User->>Executor: Submit LoRA request (with lora_ckpt_source)
    Executor->>LoraManager: load_from_ckpt(..., ckpt_source)
    LoraManager->>Loader: load_torch_lora/lora (based on ckpt_source)
    Loader->>LoraManager: Return loaded weights and config
    LoraManager->>Model: Update model with LoRA weights
    Model-->>User: Ready for inference/generation
```
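To make the flow above concrete, here is a minimal sketch of submitting a LoRA request through the LLM API. The LoRARequest fields mirror what this review describes (lora_ckpt_source defaulting to "hf"); the LLM constructor arguments and import paths are assumptions for illustration, not taken verbatim from the PR.

```python
# Minimal sketch (assumed API surface) of the request flow in the diagram:
# a LoRARequest tagged with a checkpoint source is handed to the executor,
# which routes loading to the HF or NeMo loader.
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.executor.request import LoRARequest  # module path per this review

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", enable_lora=True)

lora_req = LoRARequest(
    lora_name="my-nemo-adapter",
    lora_int_id=1,
    lora_path="/path/to/adapter.nemo",
    lora_ckpt_source="nemo",  # defaults to "hf" when omitted
)

output = llm.generate(
    "The capital of France is",
    sampling_params=SamplingParams(max_tokens=10, temperature=0.0),
    lora_request=lora_req,
)
print(output.outputs[0].text)
```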
Estimated code review effort: 5 (~150 minutes)
Actionable comments posted: 0
🧹 Nitpick comments (3)
tensorrt_llm/_torch/model_config.py (1)
301-342: Comprehensive per-layer KV heads support implementation.

The implementation excellently handles both uniform and per-layer KV head configurations with proper fallbacks and validation. The LoRA compatibility check is particularly important and well-placed.
Consider breaking line 321 to comply with the 120-character limit:

```diff
- # For uniform models, check: num_key_value_heads (standard) -> num_query_groups (NeMo) -> num_attention_heads
+ # For uniform models, check: num_key_value_heads (standard) ->
+ # num_query_groups (NeMo) -> num_attention_heads
```
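As a rough sketch of the fallback chain that comment documents (attribute names follow HF and NeMo config conventions; this is illustrative, not the actual model_config.py code):

```python
# Sketch of resolving KV head count with the described fallback chain.
# num_key_value_heads is the standard HF name, num_query_groups the NeMo
# name, and num_attention_heads the final (MHA) fallback.
def resolve_num_kv_heads(cfg) -> int:
    for attr in ("num_key_value_heads", "num_query_groups", "num_attention_heads"):
        value = getattr(cfg, attr, None)
        if value is not None:
            return value
    raise ValueError("Config defines no attention head count")
```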
tests/unittest/llmapi/test_llm_pytorch.py (1)

493-567: Comprehensive GQA NeMo LoRA integration test.

The test effectively validates NeMo LoRA loading with grouped query attention and verifies the adapter's effect on generation. The deterministic setup with seed=42 and temperature=0.0 ensures reproducibility.

Consider adding a comment explaining why the specific expected output "Paris. The capital of France is Paris. The" is expected with seed=42, as this might be fragile if the underlying model or random number generation changes.
tensorrt_llm/lora_manager.py (1)

350-387: Good documentation improvements for NemoLoraLoader.

The comprehensive docstring and the note about the misleading 'lora_dirs' parameter name are helpful. Consider creating a tracking issue to rename this parameter to 'lora_paths' in a future version to avoid confusion.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- tensorrt_llm/_torch/model_config.py (3 hunks)
- tensorrt_llm/_torch/models/modeling_llama.py (1 hunks)
- tensorrt_llm/_torch/models/modeling_nemotron_nas.py (1 hunks)
- tensorrt_llm/_torch/models/modeling_utils.py (1 hunks)
- tensorrt_llm/_torch/pyexecutor/_util.py (3 hunks)
- tensorrt_llm/executor/request.py (2 hunks)
- tensorrt_llm/executor/worker.py (1 hunks)
- tensorrt_llm/lora_manager.py (15 hunks)
- tests/integration/test_lists/test-db/l0_a100.yml (1 hunks)
- tests/unittest/llmapi/lora_test_utils.py (2 hunks)
- tests/unittest/llmapi/test_llm_pytorch.py (2 hunks)
- tests/unittest/test_lora_manager.py (1 hunks)
🧠 Learnings (7)
📓 Common learnings
Learnt from: amitz-nv
PR: NVIDIA/TensorRT-LLM#5616
File: tensorrt_llm/executor/worker.py:375-384
Timestamp: 2025-07-17T09:01:27.374Z
Learning: In tensorrt_llm/executor/worker.py, the LoRA adapter cache optimization logic that checks `is_adapter_in_cpu_cache()` and conditionally passes None for weights/config has a known race condition issue that cannot be solved with simple error handling or verification checks. This is a known limitation that requires a more comprehensive solution.
The same learning is also applied to: tensorrt_llm/executor/worker.py, tensorrt_llm/_torch/models/modeling_utils.py, tensorrt_llm/executor/request.py, tensorrt_llm/_torch/models/modeling_llama.py, tensorrt_llm/_torch/pyexecutor/_util.py, and tensorrt_llm/lora_manager.py.
🧬 Code Graph Analysis (3)

tensorrt_llm/executor/worker.py (1)
- tensorrt_llm/executor/request.py (3): adapter_id (39-40), adapter_id (70-71), ckpt_source (51-52)

tensorrt_llm/_torch/pyexecutor/_util.py (4)
- tensorrt_llm/lora_manager.py (1): load_torch_lora (488-507)
- tensorrt_llm/_torch/models/modeling_phi4mm.py (1): lora_config (242-262)
- tensorrt_llm/logger.py (1): warning (131-132)
- tensorrt_llm/runtime/generation.py (2): hidden_size (1152-1154), num_heads (1148-1149)

tensorrt_llm/_torch/model_config.py (3)
- tensorrt_llm/_torch/distributed/communicator.py (1): tp_size (46-47)
- tensorrt_llm/_torch/models/modeling_phi4mm.py (1): lora_config (242-262)
- tensorrt_llm/functional.py (2): max (438-442), max (3228-3250)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/model_config.py
321-321: Line too long (129 > 120)
(E501)
tests/unittest/llmapi/test_llm_pytorch.py
498-498: Line too long (124 > 120)
(E501)
499-499: Line too long (123 > 120)
(E501)
503-503: Line too long (122 > 120)
(E501)
🔇 Additional comments (28)
tests/integration/test_lists/test-db/l0_a100.yml (1)
18-18: LGTM! Test integration looks good.

The addition of unittest/test_lora_manager.py to the l0_a100 test configuration properly integrates the new LoRA manager tests into the CI pipeline for A100 GPU environments.

tensorrt_llm/executor/worker.py (1)

362-363: LGTM! Checkpoint source parameter properly propagated.

The addition of ckpt_source=lora_request.ckpt_source correctly passes the checkpoint source information from the LoRA request to the LoRA manager, enabling source-aware loading logic for different LoRA formats (HF vs NeMo).

tensorrt_llm/executor/request.py (3)

28-28: LGTM! Good default value choice.

The default value "hf" for lora_ckpt_source ensures backward compatibility while supporting the new checkpoint source functionality.

33-36: LGTM! Proper validation logic.

The validation in __post_init__ ensures only valid checkpoint sources ("hf" or "nemo") are accepted, providing clear error messages for invalid values.

50-52: LGTM! Clean property interface.

The ckpt_source property provides a clean, read-only interface to access the checkpoint source information.
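Putting the three observations together, a minimal sketch of such a dataclass might look like the following; only lora_ckpt_source, the validation, and the ckpt_source property are described by the review, and the remaining field names are assumptions:

```python
# Sketch of the validated checkpoint-source field described above.
# Fields other than lora_ckpt_source are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class LoRARequest:
    lora_name: str
    lora_int_id: int
    lora_path: str
    lora_ckpt_source: str = "hf"  # backward-compatible default

    def __post_init__(self):
        if self.lora_ckpt_source not in ("hf", "nemo"):
            raise ValueError(
                f"lora_ckpt_source must be 'hf' or 'nemo', "
                f"got {self.lora_ckpt_source!r}")

    @property
    def ckpt_source(self) -> str:
        # Read-only view of the checkpoint source.
        return self.lora_ckpt_source
```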
tensorrt_llm/_torch/models/modeling_utils.py (1)

367-373: LGTM! Proper source-aware loading logic.

The conditional check if config.lora_config.lora_ckpt_source == "hf" appropriately restricts custom lm_head loading to HuggingFace LoRA checkpoints only. This prevents inappropriate loading attempts with NeMo checkpoints, which likely have different structures.

tensorrt_llm/_torch/models/modeling_nemotron_nas.py (1)

195-201: LGTM! Consistent source-aware loading pattern.

The conditional check if model_config.lora_config.lora_ckpt_source == "hf" properly restricts custom vocabulary loading to HuggingFace LoRA checkpoints. This follows the same pattern as other model files and prevents inappropriate loading attempts with NeMo checkpoints.

tensorrt_llm/_torch/models/modeling_llama.py (1)

706-712: LGTM!

The conditional check correctly restricts custom vocabulary loading to HuggingFace LoRA checkpoints only, which aligns with the PR's objective of supporting both HF and NeMo LoRA formats distinctly.
tensorrt_llm/_torch/pyexecutor/_util.py (2)
17-17: Import updated to use unified LoRA loader.

The import change from load_torch_hf_lora to load_torch_lora correctly reflects the new unified loading approach that routes based on checkpoint source.

454-465: Per-layer KV heads support looks good with appropriate safeguards.

The implementation correctly handles non-uniform KV heads across layers by using the maximum value and includes a clear warning about the untested code path. The tracking ticket reference (TRTLLM-6561) is helpful for future validation efforts.
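A small sketch of the max-over-layers behavior described here, assuming a per-layer list of KV head counts (the variable name is illustrative, not the _util.py code):

```python
import warnings

# Sketch: collapse per-layer KV head counts to a single value for sizing.
# num_kv_heads_per_layer is an assumed name for the per-layer list.
def effective_num_kv_heads(num_kv_heads_per_layer: list[int]) -> int:
    if len(set(num_kv_heads_per_layer)) > 1:
        warnings.warn(
            "Non-uniform KV heads across layers; using the maximum. "
            "This code path is not yet fully validated (see TRTLLM-6561).")
    return max(num_kv_heads_per_layer)
```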
tests/unittest/llmapi/lora_test_utils.py (1)
124-234: Excellent test utility implementation!

The create_mock_nemo_lora_checkpoint function is well-designed with:
- Comprehensive parameter validation
- Support for deterministic testing via seeding
- Correct handling of GQA dimensions
- Clear warning about the hardcoded coefficient dependency
- Proper NeMo archive structure
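A hypothetical usage of this helper in a test; the keyword arguments are guesses based on the bullets above (GQA dimensions, seeding), not the function's actual signature in lora_test_utils.py:

```python
# Hypothetical test using the mock-checkpoint helper; the import path and
# keyword arguments are assumptions for illustration only.
from lora_test_utils import create_mock_nemo_lora_checkpoint

def test_loads_mock_nemo_adapter(tmp_path):
    ckpt_path = create_mock_nemo_lora_checkpoint(
        tmp_path,            # where the .nemo archive is written
        hidden_size=2048,
        num_layers=16,
        lora_rank=8,
        seed=42,             # deterministic weights for reproducible tests
    )
    # The helper is expected to produce a NeMo-style archive on disk.
    assert ckpt_path.exists()
```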
tests/unittest/test_lora_manager.py (4)
221-274: Comprehensive test coverage for missing matrices handling.

The test effectively validates graceful handling of missing matrices across both HF and NeMo formats with appropriate warning verification. Good use of subTest for parameterized testing.

350-391: Well-crafted test for NeMo tensor dimension validation.

The test accurately verifies NeMo-specific tensor dimensions, especially the 3x factor for fused QKV in the 'out' matrix. The mocking approach is clean and effective.

392-452: Excellent test for NeMo rank derivation logic.

The test properly validates the rank derivation hierarchy (config > existing tensors > default) with a well-structured custom checkpoint creation and precise verification.
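For reference, the rank-derivation hierarchy that test exercises could be sketched as follows (names, shapes, and the default are illustrative, not the lora_manager.py code):

```python
import torch

# Sketch of the hierarchy: explicit config value, else infer from an
# existing tensor, else fall back to a default. DEFAULT_RANK is assumed.
DEFAULT_RANK = 8

def derive_rank(adapter_config: dict, weights: dict) -> int:
    if adapter_config.get("rank") is not None:
        return adapter_config["rank"]
    for tensor in weights.values():
        # LoRA "in" (A) matrices have shape (rank, in_features).
        if isinstance(tensor, torch.Tensor) and tensor.dim() == 2:
            return tensor.shape[0]
    return DEFAULT_RANK
```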
453-477: Good regression test with clear documentation.

The test effectively prevents regression of the original TypeError bug with clear comments about the issue. The error handling properly distinguishes between the specific regression and other potential errors.
tensorrt_llm/_torch/model_config.py (1)
416-419: Robust handling of None values in ffn_mult.

The updated implementation safely handles None values using a conditional expression, preventing potential AttributeErrors.
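A minimal, self-contained sketch of the None-safe conditional expression being praised (names and the default are assumptions):

```python
# Hypothetical illustration: only use ffn_mult when it is present,
# rather than doing arithmetic on None and hitting an AttributeError.
def effective_ffn_mult(ffn_mult, default: float = 1.0) -> float:
    return ffn_mult if ffn_mult is not None else default

assert effective_ffn_mult(2.5) == 2.5
assert effective_ffn_mult(None) == 1.0
```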
tests/unittest/llmapi/test_llm_pytorch.py (3)
8-8: LGTM!

The import of create_mock_nemo_lora_checkpoint is appropriate for testing NeMo LoRA functionality.

432-467: Well-structured parameterized test for NeMo LoRA loading.

The test effectively covers different rank configurations and validates the expected module mappings for NeMo LoRA checkpoints.

469-491: Good negative test case for unsupported module validation.

The test properly validates that unsupported NeMo LoRA modules raise appropriate errors, improving robustness.
tensorrt_llm/lora_manager.py (9)
5-10: LGTM! Appropriate imports for enhanced functionality.

The added imports support warning messages, caching optimization, and improved type annotations.

27-189: Excellent documentation improvements!

The added type annotations and comprehensive docstrings significantly enhance code clarity and maintainability. The documentation clearly explains parameters, return values, and potential exceptions.

281-348: Well-designed caching mechanism for .nemo file discovery!

The implementation effectively uses LRU caching on individual paths to maximize cache efficiency when the same paths appear in different collections. The error handling is comprehensive with clear error messages for various failure scenarios.
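The per-path caching idea could be sketched like this; find_nemo_files is a name from this PR, while the private helper and its exact behavior are assumptions:

```python
import functools
from pathlib import Path

# Sketch: cache the single-path helper rather than whole collections, so
# repeated paths hit the cache even across different input lists.
@functools.lru_cache(maxsize=None)
def _find_nemo_file(path_str: str) -> str:
    path = Path(path_str)
    if path.is_file() and path.suffix == ".nemo":
        return str(path)
    if path.is_dir():
        matches = sorted(path.glob("*.nemo"))
        if matches:
            return str(matches[0])
    raise ValueError(f"No .nemo file found at {path_str}")

def find_nemo_files(paths: list[str]) -> list[str]:
    return [_find_nemo_file(p) for p in paths]
```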
443-508: Well-structured PyTorch LoRA loaders!

The implementation provides clean separation between HuggingFace and NeMo checkpoint loading with proper validation and informative error messages. The router pattern effectively handles checkpoint source routing.
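The router pattern mentioned could be sketched as a thin dispatcher; load_torch_lora and load_torch_hf_lora are names from this PR, while load_torch_nemo_lora and the simplified signature are assumptions:

```python
# Sketch of the checkpoint-source router over the two format-specific
# loaders (signatures simplified relative to lora_manager.py).
def load_torch_lora(lora_config):
    if lora_config.lora_ckpt_source == "hf":
        return load_torch_hf_lora(lora_config)
    if lora_config.lora_ckpt_source == "nemo":
        return load_torch_nemo_lora(lora_config)
    raise ValueError(
        f"Unsupported lora_ckpt_source: {lora_config.lora_ckpt_source!r} "
        "(expected 'hf' or 'nemo')")
```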
611-647: Good enhancement to unpack_nemo_weights function.

The function now returns both model config and weights, enabling better rank determination in the loading logic. The type annotations and error handling improvements are valuable.

773-784: Good integration with the file discovery mechanism.

The changes properly utilize the new find_nemo_files function to support both file and directory inputs for NeMo checkpoints.

819-961: Excellent enhancement to NeMo LoRA loading robustness!

The implementation significantly improves error handling and graceful degradation:
- Smart rank determination from config with tensor-based fallback
- Comprehensive handling of missing matrices with zero tensor creation
- Processing all expected layers regardless of weight availability
- Informative warning messages for debugging
This aligns perfectly with the PR objective of robust checkpoint loading.
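The zero-fill behavior in the second bullet could be sketched like this; the helper name and shapes are assumptions, though tensorrt_llm.logger's warning is the logging facility referenced by this review:

```python
import torch
from tensorrt_llm.logger import logger

# Sketch of graceful degradation: when one of a module's LoRA matrices is
# absent, substitute zeros so the adapter still loads. A zero A or B matrix
# makes the LoRA delta a no-op for that module.
def get_or_zero(weights: dict, key: str, shape: tuple,
                dtype=torch.float16) -> torch.Tensor:
    if key in weights:
        return weights[key]
    logger.warning(f"LoRA matrix '{key}' missing; substituting zeros {shape}")
    return torch.zeros(shape, dtype=dtype)
```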
1100-1137: Good consistency with NeMo loader improvements.

The HF loader now also gracefully handles missing matrices by creating zero tensors with appropriate warnings, improving robustness for incomplete checkpoints.

1232-1234: Helpful success message for debugging.

The success message listing loaded UIDs aids in debugging and verification.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
run

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

- --reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
- --disable-fail-fast (OPTIONAL): Disable fail fast on build/tests/infra failures.
- --skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- --post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
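For example, a typical pre-merge invocation posted as a PR comment (option values here are illustrative) might be: /bot run --disable-fail-fast --gpu-type "A30, H100_PCIe"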
kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.