
Conversation


@willkill07 willkill07 commented Oct 7, 2025

Description

  • The NIM model name has been switched to Title Case.
  • The directions for installing NAT together with the example workflow were slightly unclear and have been clarified.

Closes

By Submitting this PR I confirm:

  • I am familiar with the Contributing Guidelines.
  • We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
    • Any contribution which contains commits that are not Signed-Off will not be accepted.
  • When the PR is ready for review, new or existing tests cover these changes.
  • When the PR is ready for review, the documentation is up to date with these changes.

Summary by CodeRabbit

  • Documentation
    • Corrected model references to nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 across NIM and vLLM examples.
    • Added the NVIDIA NeMo Agent toolkit (NAT) to the installation guidance alongside the simple web query example.
    • Updated install commands to use the "uv pip" workflow with steps for editable installs at project root and example.
    • Harmonized setup instructions across NIM and vLLM for consistent, clearer guidance.

@willkill07 willkill07 self-assigned this Oct 7, 2025
@willkill07 willkill07 requested a review from a team as a code owner October 7, 2025 20:06
@willkill07 willkill07 added the doc (Improvements or additions to documentation) and non-breaking (Non-breaking change) labels Oct 7, 2025

coderabbitai bot commented Oct 7, 2025

Walkthrough

Documentation updated: model_name values standardized to nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1, instructions now reference installing the NVIDIA NeMo Agent toolkit (NAT), and installation commands changed to use uv with editable installs (uv pip install -e . and uv pip install -e examples/getting_started/simple_web_query).
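
For reference, the updated installation flow described in this walkthrough would look roughly like the following; this is a sketch that assumes the commands are run from the repository root inside an active uv-managed virtual environment:

    # Install the NeMo Agent toolkit itself as an editable package (repo root)
    uv pip install -e .

    # Install the simple web query example workflow as an editable package
    uv pip install -e examples/getting_started/simple_web_query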

Changes

Cohort / File(s): Docs: Local LLM usage guide (docs/source/workflows/llms/using-local-llms.md)
Summary:
  • Updated nim_llm.model_name and vllm_llm.model_name to nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 (sketched below).
  • Added mention of installing the NVIDIA NeMo Agent toolkit (NAT).
  • Replaced pip install -e examples/getting_started/simple_web_query with uv pip install -e . and uv pip install -e examples/getting_started/simple_web_query in the NIM and vLLM sections.
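
As an illustration of the model_name change, the relevant LLM entries in the workflow configuration would look roughly like this. The enclosing llms: section is an assumption based on typical NeMo Agent toolkit workflow configs, and only the model_name values come from the change summary; the other settings are placeholders:

    llms:
      nim_llm:
        model_name: nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1
        # ...other NIM settings (endpoint, temperature, etc.) unchanged
      vllm_llm:
        model_name: nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1
        # ...other vLLM settings (base URL, etc.) unchanged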

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The title “docs: update Using Local LLMs (model name and directions)” succinctly describes the documentation changes, stays under the 72-character limit, and employs the imperative mood appropriate for commit messages.
  • Docstring Coverage: ✅ Passed. No functions found in the changes; the docstring coverage check was skipped.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ebf8776 and cfc0a7b.

📒 Files selected for processing (1)
  • docs/source/workflows/llms/using-local-llms.md (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • docs/source/workflows/llms/using-local-llms.md


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6461d86 and ebf8776.

📒 Files selected for processing (1)
  • docs/source/workflows/llms/using-local-llms.md (3 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
docs/source/**/*.md

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

docs/source/**/*.md:
  • Use the official naming throughout documentation: first use “NVIDIA NeMo Agent toolkit”, subsequent “NeMo Agent toolkit”; never use deprecated names (Agent Intelligence toolkit, aiqtoolkit, AgentIQ, AIQ/aiq).
  • Documentation sources are Markdown files under docs/source; images belong in docs/source/_static.
  • Keep docs in sync with code; the documentation pipeline must pass Sphinx and link checks; avoid TODOs/FIXMEs/placeholders; avoid offensive/outdated terms; ensure spelling correctness.
  • Do not use words listed in ci/vale/styles/config/vocabularies/nat/reject.txt; accepted terms in accept.txt are allowed.

Files:

  • docs/source/workflows/llms/using-local-llms.md
**/*

⚙️ CodeRabbit configuration file

**/*: # Code Review Instructions

  • Ensure the code follows best practices and coding standards.
  • For Python code, follow PEP 20 and PEP 8 for style guidelines.
  • Check for security vulnerabilities and potential issues.
  • Python methods should use type hints for all parameters and return values. Example:

        def my_function(param1: int, param2: str) -> bool:
            pass

  • For Python exception handling, ensure proper stack trace preservation (a sketch of this pattern follows after this block):
    • When re-raising exceptions: use bare raise statements to maintain the original stack trace, and use logger.error() (not logger.exception()) to avoid duplicate stack trace output.
    • When catching and logging exceptions without re-raising: always use logger.exception() to capture the full stack trace information.

Documentation Review Instructions

  • Verify that documentation and comments are clear and comprehensive.
  • Verify that the documentation doesn't contain any TODOs, FIXMEs, or placeholder text like "lorem ipsum".
  • Verify that the documentation doesn't contain any offensive or outdated terms.
  • Verify that documentation and comments are free of spelling mistakes; ensure the documentation doesn't contain any words listed in the ci/vale/styles/config/vocabularies/nat/reject.txt file. Words that might appear to be spelling mistakes but are listed in the ci/vale/styles/config/vocabularies/nat/accept.txt file are OK.

Misc.

  • All code (except .mdc files that contain Cursor rules) should be licensed under the Apache License 2.0 and should contain an Apache License 2.0 header comment at the top of each file.
  • Confirm that copyright years are up to date whenever a file is changed.
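
To make the exception-handling guidance above concrete, here is a minimal, hypothetical Python sketch of the two cases it describes; the function names process_item and handle_request are illustrative, not from the toolkit:

    import logging

    logger = logging.getLogger(__name__)


    def process_item(item: dict) -> str:
        try:
            return item["name"].upper()
        except KeyError:
            # Re-raising: log the message with logger.error() and use a bare
            # raise so the original stack trace is preserved and not duplicated.
            logger.error("Item is missing the 'name' field: %s", item)
            raise


    def handle_request(item: dict) -> str | None:
        try:
            return process_item(item)
        except KeyError:
            # Catching without re-raising: logger.exception() records the full
            # stack trace along with the message.
            logger.exception("Failed to process item")
            return None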

Files:

  • docs/source/workflows/llms/using-local-llms.md
docs/source/**/*

⚙️ CodeRabbit configuration file

This directory contains the source code for the documentation. All documentation should be written in Markdown format. Any image files should be placed in the docs/source/_static directory.

Files:

  • docs/source/workflows/llms/using-local-llms.md
🧠 Learnings (1)
📓 Common learnings
Learnt from: CR
PR: NVIDIA/NeMo-Agent-Toolkit#0
File: .cursor/rules/general.mdc:0-0
Timestamp: 2025-09-23T18:39:15.023Z
Learning: Applies to docs/source/**/*.md : Use the official naming throughout documentation: first use “NVIDIA NeMo Agent toolkit”, subsequent “NeMo Agent toolkit”; never use deprecated names (Agent Intelligence toolkit, aiqtoolkit, AgentIQ, AIQ/aiq)

@willkill07
Member Author

/merge

1 similar comment
@willkill07
Member Author

/merge

@rapids-bot rapids-bot bot merged commit 17aa6c8 into NVIDIA:release/1.3 Oct 7, 2025
17 checks passed
@willkill07 willkill07 deleted the wkk_update-local-llm-docs branch October 23, 2025 18:16