
Commit a48b9b8

Perform vale spelling checks on notebooks (#896)
* Exports notebooks to markdown files in a temporary directory, and then runs vale on those
* Removes the out-of-date exclusion of the `nv_internal` directory

## By Submitting this PR I confirm:

- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
  - Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.

## Summary by CodeRabbit

- Documentation
  - Polished example notebooks: corrected wording/capitalization, standardized terminology (e.g., LlamaIndex, FastAPI), improved code/reference formatting, and clarified the GPU sizing notebook intro and notes.
  - Removed certain in-notebook execution snippets to streamline guidance.
  - Expanded documentation vocabulary to reduce linting false positives.
- Chores
  - Documentation linting now includes converted notebooks for more comprehensive checks.
  - Improved robustness of docs checks with clearer error handling and temporary file management.
  - Added nbconvert to development dependencies to support notebook conversion.

Authors:
- David Gardner (https://github.com/dagardner-nv)

Approvers:
- Will Killian (https://github.com/willkill07)
- https://github.com/Salonijain27

URL: #896
1 parent 4908e35 commit a48b9b8
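
The commit message above mentions dropping the `nv_internal` exclusion from the documentation-file filter; the surviving `DOC_FILES` filter can be exercised against a sample file list. This is a sketch only — in the real script the list comes from `git ls-files "*.md" "*.rst"`, and the sample filenames here are made up for illustration:

```shell
# Apply the DOC_FILES filter from documentation_checks.sh to a sample
# file list (hypothetical names); CI feeds this from `git ls-files`.
SAMPLE='README.md
CHANGELOG.md
LICENSE.md
docs/index.rst'
DOC_FILES=$(printf '%s\n' "${SAMPLE}" | grep -v -E '^(CHANGELOG|LICENSE)\.md$')
echo "${DOC_FILES}"
```

`CHANGELOG.md` and `LICENSE.md` are dropped; everything else passes through to `vale`.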

File tree

8 files changed: +61 −29 lines


ci/scripts/documentation_checks.sh

Lines changed: 31 additions & 2 deletions

```diff
@@ -17,8 +17,37 @@
 set +e

 # Intentionally excluding CHANGELOG.md as it immutable
-DOC_FILES=$(git ls-files "*.md" "*.rst" | grep -v -E '^(CHANGELOG|LICENSE)\.md$' | grep -v -E '^nv_internal/')
+DOC_FILES=$(git ls-files "*.md" "*.rst" | grep -v -E '^(CHANGELOG|LICENSE)\.md$')
+NOTEBOOK_FILES=$(git ls-files "*.ipynb")

-vale ${DOC_FILES}
+if [[ -v ${WORKSPACE_TMP} ]]; then
+    MKTEMP_ARGS=""
+else
+    MKTEMP_ARGS="--tmpdir=${WORKSPACE_TMP}"
+fi
+
+EXPORT_DIR=$(mktemp -d ${MKTEMP_ARGS} nat_converted_notebooks.XXXXXX)
+if [[ ! -d "${EXPORT_DIR}" ]]; then
+    echo "ERROR: Failed to create temporary directory" >&2
+    exit 1
+fi
+
+jupyter nbconvert -y --log-level=WARN --to markdown --output-dir ${EXPORT_DIR} ${NOTEBOOK_FILES}
+if [[ $? -ne 0 ]]; then
+    echo "ERROR: Failed to convert notebooks" >&2
+    rm -rf "${EXPORT_DIR}"
+    exit 1
+fi
+
+CONVERTED_NOTEBOOK_FILES=$(find ${EXPORT_DIR} -type f -name "*.md")
+
+vale ${DOC_FILES} ${CONVERTED_NOTEBOOK_FILES}
 RETVAL=$?
+
+if [[ "${PRESERVE_TMP}" == "1" ]]; then
+    echo "Preserving temporary directory: ${EXPORT_DIR}"
+else
+    rm -rf "${EXPORT_DIR}"
+fi
+
 exit $RETVAL
```
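
The temporary-directory selection in the script can be exercised in isolation. A minimal sketch of what the branch appears intended to do — pass `--tmpdir` only when `WORKSPACE_TMP` is set (note that bash's `[[ -v VAR ]]` takes the variable *name*, unexpanded):

```shell
# Honor WORKSPACE_TMP when set; otherwise let mktemp pick the location.
# This is a standalone sketch of the intended behavior, not the committed code.
if [[ -v WORKSPACE_TMP ]]; then
    MKTEMP_ARGS="--tmpdir=${WORKSPACE_TMP}"
else
    MKTEMP_ARGS=""
fi

EXPORT_DIR=$(mktemp -d ${MKTEMP_ARGS} nat_converted_notebooks.XXXXXX)
echo "Created: ${EXPORT_DIR}"
rm -rf "${EXPORT_DIR}"
```

Setting `PRESERVE_TMP=1` in the environment keeps the converted notebooks around for inspection after a CI run.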

ci/vale/styles/config/vocabularies/nat/accept.txt

Lines changed: 4 additions & 2 deletions

```diff
@@ -31,7 +31,7 @@ CMake
 Conda
 concurrencies
 config
-Configurability
+[Cc]onfigurability
 [Cc]oroutine(s?)
 CPython
 [Cc]ryptocurrenc[y|ies]
@@ -114,6 +114,7 @@ Pydantic
 PyPI
 pytest
 [Rr]edis
+[Rr]eimplement(ing)?
 [Rr]einstall(s?)
 [Rr]eplatform(ing)?
 [Rr]epo
@@ -135,6 +136,7 @@ Tavily
 [Tt]okenization
 [Tt]okenizer(s?)
 triages
+[Uu]ncomment
 [Uu]nencrypted
 [Uu]nittest(s?)
 [Uu]nprocessable
@@ -150,4 +152,4 @@ zsh
 Zep
 Optuna
 [Oo]ptimizable
-[Cc]heckpointed
+[Cc]heckpointed
```
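
Entries in Vale's `accept.txt` are case-aware regular expressions matched against flagged tokens. The bracketed-alternation style of the new entries can be sanity-checked with `grep -E` — a rough approximation, since Vale anchors vocabulary entries against whole words itself:

```shell
# Count sample words accepted by the two new vocabulary patterns.
# ^...$ approximates Vale's whole-word matching.
MATCHES=$(printf 'Reimplementing\nreimplement\nUncomment\nuncomment\n' \
  | grep -c -E '^([Rr]eimplement(ing)?|[Uu]ncomment)$')
echo "${MATCHES}"  # all four sample words match
```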

examples/notebooks/1_getting_started.ipynb

Lines changed: 8 additions & 8 deletions

```diff
@@ -60,7 +60,7 @@
 "\n",
 "We'll walk you through how to achieve this.\n",
 "\n",
-"To demonstrate this, let's say that you have the following simple LangChain/LangGraph agent that answers generic user queries about current events by performing a web search using Tavily. We will show you how to bring this agent into the NeMo-Agent-Toolkit and benefit from the configurability, resuability, and easy user experience.\n",
+"To demonstrate this, let's say that you have the following simple LangChain/LangGraph agent that answers generic user queries about current events by performing a web search using Tavily. We will show you how to bring this agent into the NeMo-Agent-Toolkit and benefit from the configurability, reusability, and easy user experience.\n",
 "\n",
 "Run the following two cells to create the LangChain/LangGraph agent and run it with an example input."
 ]
@@ -182,11 +182,11 @@
 "\n",
 "The NeMo Agent toolkit provides several ways to run/host an workflow. These are called `front_end` plugins. Some examples are:\n",
 "\n",
-"console: `nat run` (or long version nat start console …). This is useful when performing local testing and debugging. It allows you to pass inputs defined as arguments directly into the workflow. This is show already in the notebook.\n",
+"console: `nat run` (or long version `nat start console …`). This is useful when performing local testing and debugging. It allows you to pass inputs defined as arguments directly into the workflow. This is show already in the notebook.\n",
 "\n",
-"Fastapi: `nat serve`(or long version nat start fastapi …). This is useful when hosting your workflow as a REST and websockets endpoint.\n",
+"FastAPI: `nat serve`(or long version `nat start fastapi …`). This is useful when hosting your workflow as a REST and WebSockets endpoint.\n",
 "\n",
-"MCP: `nat mcp` (or long version nat start mcp …). This is useful when hosting the workflow and/or any function as an MCP server\n",
+"MCP: `nat mcp` (or long version `nat start mcp …`). This is useful when hosting the workflow and/or any function as an MCP server\n",
 "\n",
 "While these are the built in front-end components, the system is extensible with new user defined front-end plugins.\n",
 "\n",
@@ -256,9 +256,9 @@
 "```python\n",
 "tools = await builder.get_tools(config.tool_names, wrapper_type=LLMFrameworkEnum.LANGCHAIN)\n",
 "```\n",
-"> **Note**: This allows you to bring in tools from other frameworks like llama index as well and wrap them with langchain since you are implementing your agent in langchain.\n",
+"> **Note**: This allows you to bring in tools from other frameworks like LlamaIndex as well and wrap them with LangChain since you are implementing your agent in LangChain.\n",
 "\n",
-"In a similar way, you can initialize your llm by utilizing the parameters from the configuration object in the following way:\n",
+"In a similar way, you can initialize your LLM by utilizing the parameters from the configuration object in the following way:\n",
 "```python\n",
 "llm = await builder.get_llm(config.llm_name, wrapper_type=LLMFrameworkEnum.LANGCHAIN)\n",
 "```"
@@ -268,7 +268,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"For each tool or reusable plugin, there are potentially multiple optional parameters with default values that can be overridden. The `nat info components` command can be used to list all available parameters. For example, to list all available parameters for the LLM nim type run:\n",
+"For each tool or reusable plugin, there are potentially multiple optional parameters with default values that can be overridden. The `nat info components` command can be used to list all available parameters. For example, to list all available parameters for the LLM NIM type run:\n",
 "\n",
 "```bash\n",
 "nat info components -t llm_provider -q nim\n",
@@ -281,7 +281,7 @@
 "source": [
 "#### Reusing the Inbuilt Tavily Search Function\n",
 "\n",
-"We can also make use of some of many example functions that the toolkit provides for common use cases. In this agent example, rather than reimplementing the tavily search, we will use the inbuilt function for internet search which is built on top of LangChain/LangGraph's tavily search API. You can list available functions using the following:"
+"We can also make use of some of many example functions that the toolkit provides for common use cases. In this agent example, rather than reimplementing the Tavily search, we will use the inbuilt function for internet search which is built on top of LangChain/LangGraph's Tavily search API. You can list available functions using the following:"
 ]
 },
 {
```

examples/notebooks/2_add_tools_and_agents.ipynb

Lines changed: 10 additions & 10 deletions

```diff
@@ -31,7 +31,7 @@
 "metadata": {},
 "source": [
 "> **Note**: \n",
-"> All source code for this example can be found at [./retail_sales_agent](./retail_sales_agent/)"
+"> All source code for this example can be found at [`./retail_sales_agent`](./retail_sales_agent/)"
 ]
 },
 {
@@ -68,7 +68,7 @@
 "\n",
 "All new functions (tools and agents) that you want to be a part of this agent system can be created inside this directory for easier grouping of plugins. The only necessity for discovery by the toolkit is to import all new files/functions or simply define them in the `register.py` function.\n",
 "\n",
-"The example referenced in this notebook has already been created in the [retail_sales_agent](./retail_sales_agent/) uisng the following command:\n",
+"The example referenced in this notebook has already been created in the [`retail_sales_agent`](./retail_sales_agent/) using the following command:\n",
 "```bash\n",
 "nat workflow create --workflow-dir . retail_sales_agent\n",
 "```"
@@ -116,7 +116,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Adding a Retrieval Tool using Llamaindex"
+"### Adding a Retrieval Tool using LlamaIndex"
 ]
 },
 {
@@ -127,7 +127,7 @@
 "\n",
 "Refer to the code for the `product_catalog_rag` tool in [llama_index_rag_tool.py](./retail_sales_agent/src/nat_retail_sales_agent/llama_index_rag_tool.py). This can use a Milvus vector store for GPU-accelerated indexing. \n",
 "\n",
-"It requires the addition of an embedder section the [config_with_rag.yml](./retail_sales_agent/configs/config_with_rag.yml). This section follows a the same structure as the llms section and serves as a way to separate the embedding models from the LLM models. In our example, we are using the `nvidia/nv-embedqa-e5-v5` model.\n",
+"It requires the addition of an embedder section the [config_with_rag.yml](./retail_sales_agent/configs/config_with_rag.yml). This section follows a the same structure as the `llms` section and serves as a way to separate the embedding models from the LLM models. In our example, we are using the `nvidia/nv-embedqa-e5-v5` model.\n",
 "\n",
 "\n",
 "You can test this workflow with the following command:"
@@ -217,17 +217,17 @@
 "source": [
 "Besides using inbuilt agents in the workflows, we can also create custom agents using LangGraph or any other framework and bring them into a workflow. We demonstrate this by swapping out the `react_agent` used by the data visualization expert for a custom agent that has human-in-the-loop capability (utilizing a reusable plugin for HITL in the NeMo-Agent-Toolkit). The agent will ask the user whether they would like a summary of graph content.\n",
 "\n",
-"The code can be found in [data_visualization_agent.py](examples/retail_sales_agent/src/nat_retail_sales_agent/data_visualization_agent.py)\n",
+"The code can be found in [`data_visualization_agent.py`](examples/retail_sales_agent/src/nat_retail_sales_agent/data_visualization_agent.py)\n",
 "\n",
 "This agent has an agent node, a tools node, a node to accept human input and a summarizer node.\n",
 "\n",
-"Agent → generates tool calls → conditional_edge routes to tools\n",
+"Agent → generates tool calls → `conditional_edge` routes to tools\n",
 "\n",
-"Tools → execute → edge routes back to data_visualization_agent\n",
+"Tools → execute → edge routes back to `data_visualization_agent`\n",
 "\n",
-"Agent → detects ToolMessage → creates summary AIMessage → conditional_edge routes to check_hitl_approval\n",
+"Agent → detects ToolMessage → creates summary `AIMessage` → `conditional_edge` routes to `check_hitl_approval`\n",
 "\n",
-"HITL → approval → conditional_edge routes to summarize or end\n",
+"HITL → approval → `conditional_edge` routes to summarize or end\n",
 "\n",
 "\n",
 "#### Human-in-the-Loop Plugin\n",
@@ -258,7 +258,7 @@
 " # Rest of the function\n",
 "```\n",
 "\n",
-"As we see above, requesting user input using NeMo Agent toolkit is straightforward. We can use the user_input_manager to prompt the user for input. The user's response is then processed to determine the next steps in the workflow. This can occur in any tool or function in the workflow, allowing for dynamic interaction with the user as needed."
+"As we see above, requesting user input using NeMo Agent toolkit is straightforward. We can use the `user_input_manager` to prompt the user for input. The user's response is then processed to determine the next steps in the workflow. This can occur in any tool or function in the workflow, allowing for dynamic interaction with the user as needed."
 ]
 },
 {
```

examples/notebooks/3_observability_evaluation_and_profiling.ipynb

Lines changed: 3 additions & 3 deletions

```diff
@@ -185,9 +185,9 @@
 "\n",
 "- `prompt_caching_prefixes`: Identify common prompt prefixes. This is helpful for identifying if you have commonly repeated prompts that can be pre-populated in KV caches\n",
 "\n",
-"- `bottleneck_analysis`: Analyze workflow performance measures such as bottlenecks, latency, and concurrency spikes. This can be set to simple_stack for a simpler analysis. Nested stack will provide a more detailed analysis identifying nested bottlenecks like tool calls inside other tools calls.\n",
+"- `bottleneck_analysis`: Analyze workflow performance measures such as bottlenecks, latency, and concurrency spikes. This can be set to `simple_stack` for a simpler analysis. Nested stack will provide a more detailed analysis identifying nested bottlenecks like tool calls inside other tools calls.\n",
 "\n",
-"- `concurrency_spike_analysis`: Analyze concurrency spikes. This will identify if there are any spikes in the number of concurrent tool calls. At a spike_threshold of 7, the profiler will identify any spikes where the number of concurrent running functions is greater than or equal to 7. Those are surfaced to the user in a dedicated section of the workflow profiling report."
+"- `concurrency_spike_analysis`: Analyze concurrency spikes. This will identify if there are any spikes in the number of concurrent tool calls. At a `spike_threshold` of `7`, the profiler will identify any spikes where the number of concurrent running functions is greater than or equal to `7`. Those are surfaced to the user in a dedicated section of the workflow profiling report."
 ]
 },
 {
@@ -210,7 +210,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"This will, based on the above configuration, produce the following files in the output_dir specified in the configuration file:\n",
+"This will, based on the above configuration, produce the following files in the `output_dir` specified in the configuration file:\n",
 "\n",
 "- `all_requests_profiler_traces.json`: This file contains the raw usage statistics collected by the profiler. Includes raw traces of LLM and tool input, runtimes, and other metadata.\n",
 "\n",
```

examples/notebooks/launchables/GPU_Cluster_Sizing_with_NeMo_Agent_Toolkit.ipynb

Lines changed: 2 additions & 4 deletions

```diff
@@ -6,7 +6,7 @@
 "source": [
 "# Size a GPU Cluster With NVIDIA NeMo Agent Toolkit\n",
 "\n",
-"This notebook demonstrates how to use the NVIDIA NeMo Agent toolkit's sizing calculator to estimate the GPU cluster size required to accommodate a target number of users with a target response time. The estimation is based on the performance of the workflow at different concurrency levels.\n",
+"This notebook demonstrates how to use the sizing calculator example to estimate the GPU cluster size required to accommodate a target number of users with a target response time. The estimation is based on the performance of the workflow at different concurrency levels.\n",
 "\n",
 "The sizing calculator uses the [evaluation](https://docs.nvidia.com/nemo/agent-toolkit/latest/workflows/evaluate.html) and [profiling](https://docs.nvidia.com/nemo/agent-toolkit/latest/workflows/profiler.html) systems in the NeMo Agent toolkit.\n",
 "\n",
@@ -420,9 +420,7 @@
 "\n",
 "The configuration should include a `base_url` parameter for your cluster. You can edit the file manually yourself, or use the below interactive configuration editor.\n",
 "\n",
-"<div class=\"alert alert-block alert-success\">\n",
-"    <b>NOTE:</b> You can bring your own config file! Simply replace <b>source_config</b> below with a path to your uploaded config file in the <b>NeMo-Agent-Toolkit</b> repo. \n",
-"</div>"
+"> **NOTE:** You can bring your own config file! Simply replace `source_config` below with a path to your uploaded config file in the *NeMo-Agent-Toolkit* repo. \n"
 ]
 },
 {
```

pyproject.toml

Lines changed: 1 addition & 0 deletions

```diff
@@ -221,6 +221,7 @@ dev = [
     "httpx-sse~=0.4",
     "ipython~=8.31",
     "myst-parser~=4.0",
+    "nbconvert", # Version determined by jupyter
     "nbsphinx~=0.9",
     "nvidia-nat_test",
     "nvidia-sphinx-theme>=0.0.7",
```

uv.lock

Lines changed: 2 additions & 0 deletions (generated file; diff not rendered)
