Merged
4 changes: 2 additions & 2 deletions README.md
@@ -81,13 +81,13 @@ pip install nvidia-nat
NeMo Agent Toolkit has many optional dependencies that can be installed alongside the core package. Optional dependencies are grouped by framework. For example, to install the LangChain/LangGraph plugin, run the following:

```bash
pip install nvidia-nat[langchain]
pip install "nvidia-nat[langchain]"
```

Or for all optional dependencies:

```bash
pip install nvidia-nat[all]
pip install "nvidia-nat[all]"
```

The full list of optional dependencies can be found [here](./docs/source/quick-start/installing.md#framework-integrations).
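For context on why the quotes are needed: in POSIX-style shells, square brackets are glob characters, so an unquoted extras specifier like `nvidia-nat[langchain]` can be expanded (or, in zsh, rejected) before `pip` ever sees it. A minimal sketch of the behavior, using a hypothetical `nat[abc]` pattern rather than a real package name:

```shell
# [ ] is a filename-globbing pattern in POSIX shells; an unquoted
# bracket expression is matched against files in the current directory
# before the command runs.
touch natc                 # create a file that the pattern can match
echo nat[abc]              # glob matches the file -> prints: natc
echo "nat[abc]"            # quoted -> always the literal string: nat[abc]
rm natc
```

In bash an unmatched pattern is passed through literally (so the unquoted form often works by accident), but zsh fails outright with `no matches found` when nothing matches, which is the error the quoting in this change guards against.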
18 changes: 9 additions & 9 deletions docs/source/extend/telemetry-exporters.md
@@ -266,14 +266,14 @@ Before creating a custom exporter, check if your observability service is already
| Service | Type | Installation | Configuration |
|---------|------|-------------|---------------|
| **File** | `file` | `pip install nvidia-nat` | local file or directory |
| **Langfuse** | `langfuse` | `pip install nvidia-nat[opentelemetry]` | endpoint + API keys |
| **LangSmith** | `langsmith` | `pip install nvidia-nat[opentelemetry]` | endpoint + API key |
| **OpenTelemetry Collector** | `otelcollector` | `pip install nvidia-nat[opentelemetry]` | endpoint + headers |
| **Patronus** | `patronus` | `pip install nvidia-nat[opentelemetry]` | endpoint + API key |
| **Galileo** | `galileo` | `pip install nvidia-nat[opentelemetry]` | endpoint + API key |
| **Phoenix** | `phoenix` | `pip install nvidia-nat[phoenix]` | endpoint |
| **RagaAI/Catalyst** | `catalyst` | `pip install nvidia-nat[ragaai]` | API key + project |
| **Weave** | `weave` | `pip install nvidia-nat[weave]` | project name |
| **Langfuse** | `langfuse` | `pip install "nvidia-nat[opentelemetry]"` | endpoint + API keys |
| **LangSmith** | `langsmith` | `pip install "nvidia-nat[opentelemetry]"` | endpoint + API key |
| **OpenTelemetry Collector** | `otelcollector` | `pip install "nvidia-nat[opentelemetry]"` | endpoint + headers |
| **Patronus** | `patronus` | `pip install "nvidia-nat[opentelemetry]"` | endpoint + API key |
| **Galileo** | `galileo` | `pip install "nvidia-nat[opentelemetry]"` | endpoint + API key |
| **Phoenix** | `phoenix` | `pip install "nvidia-nat[phoenix]"` | endpoint |
| **RagaAI/Catalyst** | `catalyst` | `pip install "nvidia-nat[ragaai]"` | API key + project |
| **Weave** | `weave` | `pip install "nvidia-nat[weave]"` | project name |

### Simple Configuration Example

@@ -412,7 +412,7 @@ class CustomSpanExporter(SpanExporter[Span, dict]):
> **Note**: OpenTelemetry exporters require the `nvidia-nat-opentelemetry` subpackage. Install it with:

> ```bash
> pip install nvidia-nat[opentelemetry]
> pip install "nvidia-nat[opentelemetry]"
> ```

For most OTLP-compatible services, use the pre-built `OTLPSpanAdapterExporter`:
4 changes: 2 additions & 2 deletions docs/source/quick-start/installing.md
@@ -92,13 +92,13 @@ pip install nvidia-nat
NeMo Agent toolkit has many optional dependencies that can be installed alongside the core package. Optional dependencies are grouped by framework. For example, to install the LangChain/LangGraph plugin, run the following:

```bash
pip install nvidia-nat[langchain]
pip install "nvidia-nat[langchain]"
```

Or for all optional dependencies:

```bash
pip install nvidia-nat[all]
pip install "nvidia-nat[all]"
```

The full list of optional dependencies can be found [here](../quick-start/installing.md#framework-integrations).
2 changes: 1 addition & 1 deletion docs/source/reference/api-server-endpoints.md
@@ -61,7 +61,7 @@ result back to the client. The transaction schema is defined by the workflow.
## Asynchronous Generate
The asynchronous generate endpoint allows clients to submit a workflow to run in the background; the endpoint immediately returns a response with a unique identifier for the workflow. This identifier can be used to query the status and results of the workflow at a later time. This is useful for long-running workflows, which would otherwise cause the client to time out.

This endpoint is only available when the `async_endpoints` optional dependency extra is installed. For users installing from source, this can be done by running `uv pip install -e '.[async_endpoints]'` from the root directory of the NeMo Agent toolkit library. Similarly, for users installing from PyPI, this can be done by running `pip install 'nvidia-nat[async_endpoints]'`.
This endpoint is only available when the `async_endpoints` optional dependency extra is installed. For users installing from source, this can be done by running `uv pip install -e '.[async_endpoints]'` from the root directory of the NeMo Agent toolkit library. Similarly, for users installing from PyPI, this can be done by running `pip install "nvidia-nat[async_endpoints]"`.

Asynchronous jobs are managed using [Dask](https://docs.dask.org/en/stable/). By default, a local Dask cluster is created at start time; however, you can also configure the server to connect to an existing Dask scheduler by setting the `scheduler_address` configuration parameter. The Dask scheduler is used to manage the execution of asynchronous jobs, and can be configured to run on a single machine or across a cluster of machines. Job history and metadata are stored in a SQL database using [SQLAlchemy](https://www.sqlalchemy.org/). By default, a temporary SQLite database is created at start time; however, you can also configure the server to use a persistent database by setting the `db_url` configuration parameter. Refer to the [SQLAlchemy documentation](https://docs.sqlalchemy.org/en/20/core/engines.html#database-urls) for the format of the `db_url` parameter. Any database supported by [SQLAlchemy's Asynchronous I/O extension](https://docs.sqlalchemy.org/en/20/orm/extensions/asyncio.html) can be used. Refer to [SQLAlchemy's Dialects](https://docs.sqlalchemy.org/en/20/dialects/index.html) for a complete list (many, but not all, of these support Asynchronous I/O).

2 changes: 1 addition & 1 deletion docs/source/reference/evaluate-api.md
@@ -20,7 +20,7 @@ limitations under the License.
It is recommended that the [Evaluating NeMo Agent toolkit Workflows](./evaluate.md) guide be read before proceeding with this detailed documentation.
:::

The evaluation endpoint can be used to start evaluation jobs on a remote NeMo Agent toolkit server. This endpoint is only available when the `async_endpoints` optional dependency extra is installed. For users installing from source, this can be done by running `uv pip install -e '.[async_endpoints]'` from the root directory of the NeMo Agent toolkit library. Similarly, for users installing from PyPI, this can be done by running `pip install 'nvidia-nat[async_endpoints]'`.
The evaluation endpoint can be used to start evaluation jobs on a remote NeMo Agent toolkit server. This endpoint is only available when the `async_endpoints` optional dependency extra is installed. For users installing from source, this can be done by running `uv pip install -e '.[async_endpoints]'` from the root directory of the NeMo Agent toolkit library. Similarly, for users installing from PyPI, this can be done by running `pip install "nvidia-nat[async_endpoints]"`.

## Evaluation Endpoint Overview
```{mermaid}
2 changes: 1 addition & 1 deletion docs/source/workflows/evaluate.md
@@ -34,7 +34,7 @@ uv pip install -e '.[profiling]'

If you are installing from a package, you can install the sub-package by running the following command:
```bash
uv pip install nvidia-nat[profiling]
uv pip install "nvidia-nat[profiling]"
```

## Evaluating a Workflow
2 changes: 1 addition & 1 deletion docs/source/workflows/mcp/index.md
@@ -21,7 +21,7 @@ NeMo Agent toolkit [Model Context Protocol (MCP)](https://modelcontextprotocol.i
* An [MCP client](./mcp-client.md) to connect to and use tools served by remote MCP servers.
* An [MCP server](./mcp-server.md) to publish tools using MCP to be used by any MCP client.

**Note:** MCP client functionality requires the `nvidia-nat-mcp` package. Install it with `uv pip install nvidia-nat[mcp]`.
**Note:** MCP client functionality requires the `nvidia-nat-mcp` package. Install it with `uv pip install "nvidia-nat[mcp]"`.


```{toctree}
2 changes: 1 addition & 1 deletion docs/source/workflows/mcp/mcp-client.md
@@ -28,7 +28,7 @@ This guide will cover how to use a NeMo Agent toolkit workflow as a MCP host wit
MCP client functionality requires the `nvidia-nat-mcp` package. Install it with:

```bash
uv pip install nvidia-nat[mcp]
uv pip install "nvidia-nat[mcp]"
```
## Accessing Protected MCP Servers
NeMo Agent toolkit can access protected MCP servers via the MCP client auth provider. For more information, see the [MCP Authentication](./mcp-auth.md) documentation.
2 changes: 1 addition & 1 deletion docs/source/workflows/mcp/mcp-server.md
@@ -62,7 +62,7 @@ nat mcp serve --config_file examples/getting_started/simple_calculator/configs/c

To list the tools published by the MCP server you can use the `nat mcp client tool list` command. This command acts as an MCP client and connects to the MCP server running on the specified URL (defaults to `http://localhost:9901/mcp` for streamable-http, with backwards compatibility for `http://localhost:9901/sse`).

**Note:** The `nat mcp client` commands require the `nvidia-nat-mcp` package. If you encounter an error about missing MCP client functionality, install it with `uv pip install nvidia-nat[mcp]`.
**Note:** The `nat mcp client` commands require the `nvidia-nat-mcp` package. If you encounter an error about missing MCP client functionality, install it with `uv pip install "nvidia-nat[mcp]"`.

```bash
nat mcp client tool list
2 changes: 1 addition & 1 deletion docs/source/workflows/profiler.md
@@ -41,7 +41,7 @@ uv pip install -e ".[profiling]"

If you are installing from a package, you need to install the `nvidia-nat[profiling]` package by running the following command:
```bash
uv pip install nvidia-nat[profiling]
uv pip install "nvidia-nat[profiling]"
```

## Current Profiler Architecture
2 changes: 1 addition & 1 deletion examples/MCP/simple_auth_mcp/README.md
@@ -26,7 +26,7 @@ It is recommended to read the [MCP Authentication](../../../docs/source/workflow
1. **Agent toolkit**: Ensure you have the Agent toolkit installed. If you have not already done so, follow the instructions in the [Install Guide](../../../docs/source/quick-start/installing.md#install-from-source) to create the development environment and install NeMo Agent Toolkit.
2. **MCP Server**: Access to an MCP server that requires authentication (e.g., corporate Jira system)

**Note**: If you installed NeMo Agent toolkit from source, MCP client functionality is already included. If you installed from PyPI, you may need to install the MCP client package separately with `uv pip install nvidia-nat[mcp]`.
**Note**: If you installed NeMo Agent toolkit from source, MCP client functionality is already included. If you installed from PyPI, you may need to install the MCP client package separately with `uv pip install "nvidia-nat[mcp]"`.

## Install this Workflow

@@ -13,7 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.


memory:
saas_memory:
_type: mem0_memory
@@ -56,7 +55,7 @@ functions:
The question should be about user preferences which will help you format your response.
For example: "How does the user like responses formatted?"

# To use these tools you will need to install the nvidia-nat[langchain] package
# To use these tools you will need to install the "nvidia-nat[langchain]" package
web_search_tool:
_type: tavily_internet_search
max_results: 5
@@ -85,11 +84,11 @@ embedders:
workflow:
_type: react_agent
tool_names:
- cuda_retriever_tool
- mcp_retriever_tool
- add_memory
- get_memory
- web_search_tool
- code_generation_tool
- cuda_retriever_tool
- mcp_retriever_tool
- add_memory
- get_memory
- web_search_tool
- code_generation_tool
verbose: true
llm_name: nim_llm
11 changes: 5 additions & 6 deletions examples/RAG/simple_rag/configs/milvus_rag_tools_config.yml
@@ -13,7 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.


retrievers:
cuda_retriever:
_type: milvus_retriever
@@ -38,7 +37,7 @@ functions:
retriever: mcp_retriever
topic: Retrieve information about Model Context Protocol (MCP)

# To use these tools you will need to install the nvidia-nat[langchain] package
# To use these tools you will need to install the "nvidia-nat[langchain]" package
web_search_tool:
_type: tavily_internet_search
max_results: 5
@@ -67,10 +66,10 @@ embedders:
workflow:
_type: react_agent
tool_names:
- cuda_retriever_tool
- mcp_retriever_tool
- web_search_tool
- code_generation_tool
- cuda_retriever_tool
- mcp_retriever_tool
- web_search_tool
- code_generation_tool
verbose: true
llm_name: nim_llm
additional_instructions: "If a tool call results in code or other artifacts being returned, you MUST include that in your thoughts and response."
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -66,7 +66,7 @@ maintainers = [{ name = "NVIDIA Corporation" }]


[project.optional-dependencies]
# Optional dependencies are things that users would want to install with NAT. i.e. `uv pip install nvidia-nat[langchain]`
# Optional dependencies are things that users would want to install with NAT. i.e. `uv pip install "nvidia-nat[langchain]"`
# Keep sorted!!!
all = ["nvidia-nat-all"] # meta-package
adk = ["nvidia-nat-adk"]
4 changes: 2 additions & 2 deletions src/nat/agent/prompt_optimizer/register.py
@@ -51,7 +51,7 @@ async def prompt_optimizer_function(config: PromptOptimizerConfig, builder: Buil
from .prompt import mutator_prompt
except ImportError as exc:
raise ImportError("langchain-core is not installed. Please install it to use MultiLLMPlanner.\n"
"This error can be resolve by installing nvidia-nat[langchain]") from exc
"This error can be resolved by installing \"nvidia-nat[langchain]\".") from exc

llm = await builder.get_llm(config.optimizer_llm, wrapper_type=LLMFrameworkEnum.LANGCHAIN)

@@ -111,7 +111,7 @@ async def prompt_recombiner_function(config: PromptRecombinerConfig, builder: Bu
from langchain_core.prompts import PromptTemplate
except ImportError as exc:
raise ImportError("langchain-core is not installed. Please install it to use MultiLLMPlanner.\n"
"This error can be resolve by installing nvidia-nat[langchain].") from exc
"This error can be resolved by installing \"nvidia-nat[langchain]\".") from exc

llm = await builder.get_llm(config.optimizer_llm, wrapper_type=LLMFrameworkEnum.LANGCHAIN)

2 changes: 1 addition & 1 deletion src/nat/profiler/decorators/framework_wrapper.py
@@ -123,7 +123,7 @@ async def wrapper(workflow_config, builder):
except ImportError as e:
logger.warning(
"ADK profiler not available. " +
"Install NAT with ADK extras: pip install 'nvidia-nat[adk]'. Error: %s",
"Install NAT with ADK extras: pip install \"nvidia-nat[adk]\". Error: %s",
e)
else:
handler = ADKProfilerHandler()
2 changes: 1 addition & 1 deletion src/nat/profiler/forecasting/models/linear_model.py
@@ -36,7 +36,7 @@ def __init__(self):
except ImportError:
logger.error(
"scikit-learn is not installed. Please install scikit-learn to use the LinearModel "
"profiling model or install `nvidia-nat[profiler]` to install all necessary profiling packages.")
"profiling model or install \"nvidia-nat[profiler]\" to install all necessary profiling packages.")

raise

@@ -36,7 +36,7 @@ def __init__(self):
except ImportError:
logger.error(
"scikit-learn is not installed. Please install scikit-learn to use the RandomForest "
"profiling model or install `nvidia-nat[profiler]` to install all necessary profiling packages.")
"profiling model or install \"nvidia-nat[profiler]\" to install all necessary profiling packages.")

raise

@@ -304,7 +304,7 @@ def save_gantt_chart(all_nodes: list[CallNode], output_path: str) -> None:
import matplotlib.pyplot as plt
except ImportError:
logger.error("matplotlib is not installed. Please install matplotlib to use generate plots for the profiler "
"or install `nvidia-nat[profiler]` to install all necessary profiling packages.")
"or install \"nvidia-nat[profiler]\" to install all necessary profiling packages.")

raise

@@ -212,7 +212,7 @@ def run_prefixspan(sequences_map: dict[int, list[PrefixCallNode]],
from prefixspan import PrefixSpan
except ImportError:
logger.error("prefixspan is not installed. Please install prefixspan to run the prefix analysis in the "
"profiler or install `nvidia-nat[profiler]` to install all necessary profiling packages.")
"profiler or install \"nvidia-nat[profiler]\" to install all necessary profiling packages.")

raise
