4 changes: 2 additions & 2 deletions README.md
@@ -81,13 +81,13 @@ pip install nvidia-nat
NeMo Agent Toolkit has many optional dependencies, grouped by framework, that can be installed alongside the core package. For example, to install the LangChain/LangGraph plugin, run the following:

```bash
-pip install nvidia-nat[langchain]
+pip install "nvidia-nat[langchain]"
```

Or for all optional dependencies:

```bash
-pip install nvidia-nat[all]
+pip install "nvidia-nat[all]"
```
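
The quotes are the substance of this change: in `zsh` (the default shell on macOS) square brackets are glob characters, so an unquoted extra can fail before `pip` ever runs. A minimal sketch of the failure mode and the fix:

```bash
# In zsh, unquoted brackets are parsed as a filename glob; when nothing
# matches, the shell aborts the command before pip starts:
#   $ pip install nvidia-nat[langchain]
#   zsh: no matches found: nvidia-nat[langchain]
# Quoting passes the requirement string through to pip verbatim:
pip install "nvidia-nat[langchain]"
```

In `bash` the unquoted form only breaks when a file in the working directory happens to match the bracket pattern, which makes the failure intermittent; quoting is safe in both shells.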

The full list of optional dependencies can be found [here](./docs/source/quick-start/installing.md#framework-integrations).
18 changes: 9 additions & 9 deletions docs/source/extend/telemetry-exporters.md
@@ -266,14 +266,14 @@ Before creating a custom exporter, check if your observability service is already supported
| Service | Type | Installation | Configuration |
|---------|------|-------------|---------------|
| **File** | `file` | `pip install nvidia-nat` | local file or directory |
-| **Langfuse** | `langfuse` | `pip install nvidia-nat[opentelemetry]` | endpoint + API keys |
-| **LangSmith** | `langsmith` | `pip install nvidia-nat[opentelemetry]` | endpoint + API key |
-| **OpenTelemetry Collector** | `otelcollector` | `pip install nvidia-nat[opentelemetry]` | endpoint + headers |
-| **Patronus** | `patronus` | `pip install nvidia-nat[opentelemetry]` | endpoint + API key |
-| **Galileo** | `galileo` | `pip install nvidia-nat[opentelemetry]` | endpoint + API key |
-| **Phoenix** | `phoenix` | `pip install nvidia-nat[phoenix]` | endpoint |
-| **RagaAI/Catalyst** | `catalyst` | `pip install nvidia-nat[ragaai]` | API key + project |
-| **Weave** | `weave` | `pip install nvidia-nat[weave]` | project name |
+| **Langfuse** | `langfuse` | `pip install "nvidia-nat[opentelemetry]"` | endpoint + API keys |
+| **LangSmith** | `langsmith` | `pip install "nvidia-nat[opentelemetry]"` | endpoint + API key |
+| **OpenTelemetry Collector** | `otelcollector` | `pip install "nvidia-nat[opentelemetry]"` | endpoint + headers |
+| **Patronus** | `patronus` | `pip install "nvidia-nat[opentelemetry]"` | endpoint + API key |
+| **Galileo** | `galileo` | `pip install "nvidia-nat[opentelemetry]"` | endpoint + API key |
+| **Phoenix** | `phoenix` | `pip install "nvidia-nat[phoenix]"` | endpoint |
+| **RagaAI/Catalyst** | `catalyst` | `pip install "nvidia-nat[ragaai]"` | API key + project |
+| **Weave** | `weave` | `pip install "nvidia-nat[weave]"` | project name |

### Simple Configuration Example

@@ -412,7 +412,7 @@ class CustomSpanExporter(SpanExporter[Span, dict]):
> **Note**: OpenTelemetry exporters require the `nvidia-nat-opentelemetry` subpackage. Install it with:

> ```bash
-> pip install nvidia-nat[opentelemetry]
+> pip install "nvidia-nat[opentelemetry]"
> ```
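
To confirm that the subpackage actually landed in your environment, one option is `pip show` (a quick sketch; the distribution name `nvidia-nat-opentelemetry` is taken from the note above):

```bash
# Prints package metadata when the subpackage is installed; the fallback
# branch runs only if pip show exits non-zero (package missing).
pip show nvidia-nat-opentelemetry \
  || echo 'missing: run pip install "nvidia-nat[opentelemetry]"'
```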

For most OTLP-compatible services, use the pre-built `OTLPSpanAdapterExporter`:
4 changes: 2 additions & 2 deletions docs/source/quick-start/installing.md
@@ -92,13 +92,13 @@ pip install nvidia-nat
NeMo Agent toolkit has many optional dependencies, grouped by framework, that can be installed alongside the core package. For example, to install the LangChain/LangGraph plugin, run the following:

```bash
-pip install nvidia-nat[langchain]
+pip install "nvidia-nat[langchain]"
```

Or for all optional dependencies:

```bash
-pip install nvidia-nat[all]
+pip install "nvidia-nat[all]"
```

The full list of optional dependencies can be found [here](../quick-start/installing.md#framework-integrations).
2 changes: 1 addition & 1 deletion docs/source/reference/api-server-endpoints.md
@@ -61,7 +61,7 @@ result back to the client. The transaction schema is defined by the workflow.
## Asynchronous Generate
The asynchronous generate endpoint allows clients to submit a workflow to run in the background; the server responds immediately with a unique identifier for the workflow, which can be used to query the job's status and results at a later time. This is useful for long-running workflows that would otherwise cause the client to time out.

-This endpoint is only available when the `async_endpoints` optional dependency extra is installed. For users installing from source, this can be done by running `uv pip install -e '.[async_endpoints]'` from the root directory of the NeMo Agent toolkit library. Similarly, for users installing from PyPI, this can be done by running `pip install 'nvidia-nat[async_endpoints]'`.
+This endpoint is only available when the `async_endpoints` optional dependency extra is installed. For users installing from source, this can be done by running `uv pip install -e '.[async_endpoints]'` from the root directory of the NeMo Agent toolkit library. Similarly, for users installing from PyPI, this can be done by running `pip install "nvidia-nat[async_endpoints]"`.

Asynchronous jobs are managed using [Dask](https://docs.dask.org/en/stable/). By default, a local Dask cluster is created at start time; however, you can also configure the server to connect to an existing Dask scheduler by setting the `scheduler_address` configuration parameter. The Dask scheduler manages the execution of asynchronous jobs and can run on a single machine or across a cluster of machines.

Job history and metadata are stored in a SQL database using [SQLAlchemy](https://www.sqlalchemy.org/). By default, a temporary SQLite database is created at start time; however, you can also configure the server to use a persistent database by setting the `db_url` configuration parameter. Refer to the [SQLAlchemy documentation](https://docs.sqlalchemy.org/en/20/core/engines.html#database-urls) for the format of the `db_url` parameter. Any database supported by [SQLAlchemy's Asynchronous I/O extension](https://docs.sqlalchemy.org/en/20/orm/extensions/asyncio.html) can be used; refer to [SQLAlchemy's Dialects](https://docs.sqlalchemy.org/en/20/dialects/index.html) for a complete list (many, but not all, support Asynchronous I/O).
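
As a concrete illustration of the `db_url` format (the names below are hypothetical placeholders, not toolkit defaults): SQLAlchemy URLs follow the `dialect+driver://user:password@host/dbname` pattern, and the async endpoints need a driver from the asyncio extension, installed as an ordinary pip package:

```bash
# Async drivers are plain pip packages; install whichever dialect the
# db_url points at (example URLs shown in the comments):
pip install aiosqlite   # db_url: sqlite+aiosqlite:///nat_jobs.db
pip install asyncpg     # db_url: postgresql+asyncpg://nat_user:secret@db-host:5432/nat_jobs
```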

2 changes: 1 addition & 1 deletion docs/source/reference/evaluate-api.md
@@ -20,7 +20,7 @@ limitations under the License.
It is recommended that the [Evaluating NeMo Agent toolkit Workflows](./evaluate.md) guide be read before proceeding with this detailed documentation.
:::

-The evaluation endpoint can be used to start evaluation jobs on a remote NeMo Agent toolkit server. This endpoint is only available when the `async_endpoints` optional dependency extra is installed. For users installing from source, this can be done by running `uv pip install -e '.[async_endpoints]'` from the root directory of the NeMo Agent toolkit library. Similarly, for users installing from PyPI, this can be done by running `pip install 'nvidia-nat[async_endpoints]'`.
+The evaluation endpoint can be used to start evaluation jobs on a remote NeMo Agent toolkit server. This endpoint is only available when the `async_endpoints` optional dependency extra is installed. For users installing from source, this can be done by running `uv pip install -e '.[async_endpoints]'` from the root directory of the NeMo Agent toolkit library. Similarly, for users installing from PyPI, this can be done by running `pip install "nvidia-nat[async_endpoints]"`.

## Evaluation Endpoint Overview
2 changes: 1 addition & 1 deletion docs/source/workflows/evaluate.md
@@ -34,7 +34,7 @@ uv pip install -e '.[profiling]'

If you are installing from a package, you can install the sub-package by running the following command:
```bash
-uv pip install nvidia-nat[profiling]
+uv pip install "nvidia-nat[profiling]"
```

## Evaluating a Workflow
2 changes: 1 addition & 1 deletion docs/source/workflows/mcp/index.md
@@ -21,7 +21,7 @@ NeMo Agent toolkit [Model Context Protocol (MCP)](https://modelcontextprotocol.i
* An [MCP client](./mcp-client.md) to connect to and use tools served by remote MCP servers.
* An [MCP server](./mcp-server.md) to publish tools using MCP to be used by any MCP client.

-**Note:** MCP client functionality requires the `nvidia-nat-mcp` package. Install it with `uv pip install nvidia-nat[mcp]`.
+**Note:** MCP client functionality requires the `nvidia-nat-mcp` package. Install it with `uv pip install "nvidia-nat[mcp]"`.


2 changes: 1 addition & 1 deletion docs/source/workflows/mcp/mcp-client.md
@@ -28,7 +28,7 @@ This guide will cover how to use a NeMo Agent toolkit workflow as a MCP host wit
MCP client functionality requires the `nvidia-nat-mcp` package. Install it with:

```bash
-uv pip install nvidia-nat[mcp]
+uv pip install "nvidia-nat[mcp]"
```
## Accessing Protected MCP Servers
NeMo Agent toolkit can access protected MCP servers via the MCP client auth provider. For more information, see the [MCP Authentication](./mcp-auth.md) documentation.
2 changes: 1 addition & 1 deletion docs/source/workflows/mcp/mcp-server.md
@@ -62,7 +62,7 @@ nat mcp serve --config_file examples/getting_started/simple_calculator/configs/c

To list the tools published by the MCP server you can use the `nat mcp client tool list` command. This command acts as an MCP client and connects to the MCP server running on the specified URL (defaults to `http://localhost:9901/mcp` for streamable-http, with backwards compatibility for `http://localhost:9901/sse`).

-**Note:** The `nat mcp client` commands require the `nvidia-nat-mcp` package. If you encounter an error about missing MCP client functionality, install it with `uv pip install nvidia-nat[mcp]`.
+**Note:** The `nat mcp client` commands require the `nvidia-nat-mcp` package. If you encounter an error about missing MCP client functionality, install it with `uv pip install "nvidia-nat[mcp]"`.

```bash
nat mcp client tool list
```
2 changes: 1 addition & 1 deletion docs/source/workflows/profiler.md
@@ -41,7 +41,7 @@ uv pip install -e ".[profiling]"

If you are installing from a package, you need to install the `nvidia-nat[profiling]` package by running the following command:
```bash
-uv pip install nvidia-nat[profiling]
+uv pip install "nvidia-nat[profiling]"
```

## Current Profiler Architecture
2 changes: 1 addition & 1 deletion examples/MCP/simple_auth_mcp/README.md
@@ -26,7 +26,7 @@ It is recommended to read the [MCP Authentication](../../../docs/source/workflow
1. **Agent toolkit**: Ensure you have the Agent toolkit installed. If you have not already done so, follow the instructions in the [Install Guide](../../../docs/source/quick-start/installing.md#install-from-source) to create the development environment and install NeMo Agent Toolkit.
2. **MCP Server**: Access to an MCP server that requires authentication (e.g., a corporate Jira system)

-**Note**: If you installed NeMo Agent toolkit from source, MCP client functionality is already included. If you installed from PyPI, you may need to install the MCP client package separately with `uv pip install nvidia-nat[mcp]`.
+**Note**: If you installed NeMo Agent toolkit from source, MCP client functionality is already included. If you installed from PyPI, you may need to install the MCP client package separately with `uv pip install "nvidia-nat[mcp]"`.

## Install this Workflow

@@ -13,7 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.


memory:
  saas_memory:
    _type: mem0_memory
@@ -56,7 +55,7 @@ functions:
The question should be about user preferences which will help you format your response.
For example: "How does the user like responses formatted?"

-  # To use these tools you will need to install the nvidia-nat[langchain] package
+  # To use these tools you will need to install the "nvidia-nat[langchain]" package
  web_search_tool:
    _type: tavily_internet_search
    max_results: 5
@@ -85,11 +84,11 @@ embedders:
workflow:
  _type: react_agent
  tool_names:
-  - cuda_retriever_tool
-  - mcp_retriever_tool
-  - add_memory
-  - get_memory
-  - web_search_tool
-  - code_generation_tool
+    - cuda_retriever_tool
+    - mcp_retriever_tool
+    - add_memory
+    - get_memory
+    - web_search_tool
+    - code_generation_tool
  verbose: true
  llm_name: nim_llm
11 changes: 5 additions & 6 deletions examples/RAG/simple_rag/configs/milvus_rag_tools_config.yml
@@ -13,7 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.


retrievers:
  cuda_retriever:
    _type: milvus_retriever
@@ -38,7 +37,7 @@ functions:
    retriever: mcp_retriever
    topic: Retrieve information about Model Context Protocol (MCP)

-  # To use these tools you will need to install the nvidia-nat[langchain] package
+  # To use these tools you will need to install the "nvidia-nat[langchain]" package
  web_search_tool:
    _type: tavily_internet_search
    max_results: 5
@@ -67,10 +66,10 @@ embedders:
workflow:
  _type: react_agent
  tool_names:
-  - cuda_retriever_tool
-  - mcp_retriever_tool
-  - web_search_tool
-  - code_generation_tool
+    - cuda_retriever_tool
+    - mcp_retriever_tool
+    - web_search_tool
+    - code_generation_tool
  verbose: true
  llm_name: nim_llm
  additional_instructions: "If a tool call results in code or other artifacts being returned, you MUST include that in your thoughts and response."
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -66,7 +66,7 @@ maintainers = [{ name = "NVIDIA Corporation" }]


[project.optional-dependencies]
-# Optional dependencies are things that users would want to install with NAT. i.e. `uv pip install nvidia-nat[langchain]`
+# Optional dependencies are things that users would want to install with NAT. i.e. `uv pip install "nvidia-nat[langchain]"`
# Keep sorted!!!
all = ["nvidia-nat-all"] # meta-package
adk = ["nvidia-nat-adk"]
7 changes: 4 additions & 3 deletions src/nat/agent/prompt_optimizer/register.py
@@ -21,7 +21,8 @@
from nat.cli.register_workflow import register_function
from nat.data_models.component_ref import LLMRef
from nat.data_models.function import FunctionBaseConfig
-from nat.profiler.parameter_optimization.prompt_optimizer import PromptOptimizerInputSchema
+from nat.profiler.parameter_optimization.prompt_optimizer import \
+    PromptOptimizerInputSchema


class PromptOptimizerConfig(FunctionBaseConfig, name="prompt_init"):
@@ -51,7 +52,7 @@ async def prompt_optimizer_function(config: PromptOptimizerConfig, builder: Buil
from .prompt import mutator_prompt
except ImportError as exc:
raise ImportError("langchain-core is not installed. Please install it to use MultiLLMPlanner.\n"
"This error can be resolve by installing nvidia-nat[langchain]") from exc
"This error can be resolve by installing \"nvidia-nat[langchain]\".") from exc

llm = await builder.get_llm(config.optimizer_llm, wrapper_type=LLMFrameworkEnum.LANGCHAIN)

@@ -111,7 +112,7 @@ async def prompt_recombiner_function(config: PromptRecombinerConfig, builder: Bu
from langchain_core.prompts import PromptTemplate
except ImportError as exc:
raise ImportError("langchain-core is not installed. Please install it to use MultiLLMPlanner.\n"
"This error can be resolve by installing nvidia-nat[langchain].") from exc
"This error can be resolve by installing \"nvidia-nat[langchain]\".") from exc

llm = await builder.get_llm(config.optimizer_llm, wrapper_type=LLMFrameworkEnum.LANGCHAIN)

26 changes: 16 additions & 10 deletions src/nat/profiler/decorators/framework_wrapper.py
@@ -17,8 +17,7 @@

import functools
import logging
-from collections.abc import AsyncIterator
-from collections.abc import Callable
+from collections.abc import AsyncIterator, Callable
from contextlib import AbstractAsyncContextManager as AsyncContextManager
from contextlib import asynccontextmanager
from contextvars import ContextVar
@@ -72,13 +71,15 @@ async def wrapper(workflow_config, builder):
if LLMFrameworkEnum.LANGCHAIN in frameworks:
# Always set a fresh handler in the current context so callbacks
# route to the active run. Only register the hook once globally.
-from nat.profiler.callbacks.langchain_callback_handler import LangchainProfilerHandler
+from nat.profiler.callbacks.langchain_callback_handler import \
+    LangchainProfilerHandler

handler = LangchainProfilerHandler()
callback_handler_var.set(handler)

if not _library_instrumented["langchain"]:
-from langchain_core.tracers.context import register_configure_hook
+from langchain_core.tracers.context import \
+    register_configure_hook
register_configure_hook(callback_handler_var, inheritable=True)
_library_instrumented["langchain"] = True
logger.debug("LangChain/LangGraph callback hook registered")
@@ -87,30 +88,34 @@
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager

-from nat.profiler.callbacks.llama_index_callback_handler import LlamaIndexProfilerHandler
+from nat.profiler.callbacks.llama_index_callback_handler import \
+    LlamaIndexProfilerHandler

handler = LlamaIndexProfilerHandler()
Settings.callback_manager = CallbackManager([handler])
logger.debug("LlamaIndex callback handler registered")

if LLMFrameworkEnum.CREWAI in frameworks and not _library_instrumented["crewai"]:
-from nat.plugins.crewai.crewai_callback_handler import CrewAIProfilerHandler
+from nat.plugins.crewai.crewai_callback_handler import \
+    CrewAIProfilerHandler

handler = CrewAIProfilerHandler()
handler.instrument()
_library_instrumented["crewai"] = True
logger.debug("CrewAI callback handler registered")

if LLMFrameworkEnum.SEMANTIC_KERNEL in frameworks and not _library_instrumented["semantic_kernel"]:
-from nat.profiler.callbacks.semantic_kernel_callback_handler import SemanticKernelProfilerHandler
+from nat.profiler.callbacks.semantic_kernel_callback_handler import \
+    SemanticKernelProfilerHandler

handler = SemanticKernelProfilerHandler(workflow_llms=workflow_llms)
handler.instrument()
_library_instrumented["semantic_kernel"] = True
logger.debug("SemanticKernel callback handler registered")

if LLMFrameworkEnum.AGNO in frameworks and not _library_instrumented["agno"]:
-from nat.profiler.callbacks.agno_callback_handler import AgnoProfilerHandler
+from nat.profiler.callbacks.agno_callback_handler import \
+    AgnoProfilerHandler

handler = AgnoProfilerHandler()
handler.instrument()
@@ -119,11 +124,12 @@

if LLMFrameworkEnum.ADK in frameworks and not _library_instrumented["adk"]:
try:
-from nat.plugins.adk.adk_callback_handler import ADKProfilerHandler
+from nat.plugins.adk.adk_callback_handler import \
+    ADKProfilerHandler
except ImportError as e:
logger.warning(
"ADK profiler not available. " +
"Install NAT with ADK extras: pip install 'nvidia-nat[adk]'. Error: %s",
"Install NAT with ADK extras: pip install \"nvidia-nat[adk]\". Error: %s",
e)
else:
handler = ADKProfilerHandler()
8 changes: 5 additions & 3 deletions src/nat/profiler/forecasting/models/linear_model.py
@@ -17,8 +17,10 @@

import numpy as np

-from nat.profiler.forecasting.models.forecasting_base_model import ForecastingBaseModel
-from nat.profiler.intermediate_property_adapter import IntermediatePropertyAdaptor
+from nat.profiler.forecasting.models.forecasting_base_model import \
+    ForecastingBaseModel
+from nat.profiler.intermediate_property_adapter import \
+    IntermediatePropertyAdaptor

logger = logging.getLogger(__name__)

@@ -36,7 +38,7 @@ def __init__(self):
except ImportError:
logger.error(
"scikit-learn is not installed. Please install scikit-learn to use the LinearModel "
"profiling model or install `nvidia-nat[profiler]` to install all necessary profiling packages.")
"profiling model or install \"nvidia-nat[profiler]\" to install all necessary profiling packages.")

raise
