diff --git a/README.md b/README.md
index 89d39ee45..4a60d0e24 100644
--- a/README.md
+++ b/README.md
@@ -81,13 +81,13 @@ pip install nvidia-nat
 NeMo Agent Toolkit has many optional dependencies which can be installed with the core package. Optional dependencies are grouped by framework and can be installed with the core package.

 For example, to install the LangChain/LangGraph plugin, run the following:
 ```bash
-pip install nvidia-nat[langchain]
+pip install "nvidia-nat[langchain]"
 ```

 Or for all optional dependencies:
 ```bash
-pip install nvidia-nat[all]
+pip install "nvidia-nat[all]"
 ```

 The full list of optional dependencies can be found [here](./docs/source/quick-start/installing.md#framework-integrations).
diff --git a/docs/source/extend/telemetry-exporters.md b/docs/source/extend/telemetry-exporters.md
index b99d2c6d2..f9ba45598 100644
--- a/docs/source/extend/telemetry-exporters.md
+++ b/docs/source/extend/telemetry-exporters.md
@@ -266,14 +266,14 @@ Before creating a custom exporter, check if your observability service is alrea
 | Service | Type | Installation | Configuration |
 |---------|------|-------------|---------------|
 | **File** | `file` | `pip install nvidia-nat` | local file or directory |
-| **Langfuse** | `langfuse` | `pip install nvidia-nat[opentelemetry]` | endpoint + API keys |
-| **LangSmith** | `langsmith` | `pip install nvidia-nat[opentelemetry]` | endpoint + API key |
-| **OpenTelemetry Collector** | `otelcollector` | `pip install nvidia-nat[opentelemetry]` | endpoint + headers |
-| **Patronus** | `patronus` | `pip install nvidia-nat[opentelemetry]` | endpoint + API key |
-| **Galileo** | `galileo` | `pip install nvidia-nat[opentelemetry]` | endpoint + API key |
-| **Phoenix** | `phoenix` | `pip install nvidia-nat[phoenix]` | endpoint |
-| **RagaAI/Catalyst** | `catalyst` | `pip install nvidia-nat[ragaai]` | API key + project |
-| **Weave** | `weave` | `pip install nvidia-nat[weave]` | project name |
+| **Langfuse** | `langfuse` | `pip install "nvidia-nat[opentelemetry]"` | endpoint + API keys |
+| **LangSmith** | `langsmith` | `pip install "nvidia-nat[opentelemetry]"` | endpoint + API key |
+| **OpenTelemetry Collector** | `otelcollector` | `pip install "nvidia-nat[opentelemetry]"` | endpoint + headers |
+| **Patronus** | `patronus` | `pip install "nvidia-nat[opentelemetry]"` | endpoint + API key |
+| **Galileo** | `galileo` | `pip install "nvidia-nat[opentelemetry]"` | endpoint + API key |
+| **Phoenix** | `phoenix` | `pip install "nvidia-nat[phoenix]"` | endpoint |
+| **RagaAI/Catalyst** | `catalyst` | `pip install "nvidia-nat[ragaai]"` | API key + project |
+| **Weave** | `weave` | `pip install "nvidia-nat[weave]"` | project name |

 ### Simple Configuration Example

@@ -412,7 +412,7 @@ class CustomSpanExporter(SpanExporter[Span, dict]):

 > **Note**: OpenTelemetry exporters require the `nvidia-nat-opentelemetry` subpackage. Install it with:
 > ```bash
-> pip install nvidia-nat[opentelemetry]
+> pip install "nvidia-nat[opentelemetry]"
 > ```

 For most OTLP-compatible services, use the pre-built `OTLPSpanAdapterExporter`:
diff --git a/docs/source/quick-start/installing.md b/docs/source/quick-start/installing.md
index db7272282..cec51c328 100644
--- a/docs/source/quick-start/installing.md
+++ b/docs/source/quick-start/installing.md
@@ -92,13 +92,13 @@ pip install nvidia-nat
 NeMo Agent toolkit has many optional dependencies which can be installed with the core package. Optional dependencies are grouped by framework and can be installed with the core package.

 For example, to install the LangChain/LangGraph plugin, run the following:
 ```bash
-pip install nvidia-nat[langchain]
+pip install "nvidia-nat[langchain]"
 ```

 Or for all optional dependencies:
 ```bash
-pip install nvidia-nat[all]
+pip install "nvidia-nat[all]"
 ```

 The full list of optional dependencies can be found [here](../quick-start/installing.md#framework-integrations).
diff --git a/docs/source/reference/api-server-endpoints.md b/docs/source/reference/api-server-endpoints.md
index 5cb7ea108..5084ecfb0 100644
--- a/docs/source/reference/api-server-endpoints.md
+++ b/docs/source/reference/api-server-endpoints.md
@@ -61,7 +61,7 @@ result back to the client. The transaction schema is defined by the workflow.
 ## Asynchronous Generate
 The asynchronous generate endpoint allows clients to submit a workflow to run in the background and return a response immediately with a unique identifier for the workflow. This can be used to query the status and results of the workflow at a later time. This is useful for long-running workflows, which would otherwise cause the client to time out.
-This endpoint is only available when the `async_endpoints` optional dependency extra is installed. For users installing from source, this can be done by running `uv pip install -e '.[async_endpoints]'` from the root directory of the NeMo Agent toolkit library. Similarly, for users installing from PyPI, this can be done by running `pip install 'nvidia-nat[async_endpoints]'`.
+This endpoint is only available when the `async_endpoints` optional dependency extra is installed. For users installing from source, this can be done by running `uv pip install -e '.[async_endpoints]'` from the root directory of the NeMo Agent toolkit library. Similarly, for users installing from PyPI, this can be done by running `pip install "nvidia-nat[async_endpoints]"`.

 Asynchronous jobs are managed using [Dask](https://docs.dask.org/en/stable/). By default, a local Dask cluster is created at start time, however you can also configure the server to connect to an existing Dask scheduler by setting the `scheduler_address` configuration parameter. The Dask scheduler is used to manage the execution of asynchronous jobs, and can be configured to run on a single machine or across a cluster of machines.

 Job history and metadata is stored in a SQL database using [SQLAlchemy](https://www.sqlalchemy.org/). By default, a temporary SQLite database is created at start time, however you can also configure the server to use a persistent database by setting the `db_url` configuration parameter. Refer to the [SQLAlchemy documentation](https://docs.sqlalchemy.org/en/20/core/engines.html#database-urls) for the format of the `db_url` parameter. Any database supported by [SQLAlchemy's Asynchronous I/O extension](https://docs.sqlalchemy.org/en/20/orm/extensions/asyncio.html) can be used. Refer to [SQLAlchemy's Dialects](https://docs.sqlalchemy.org/en/20/dialects/index.html) for a complete list (many but not all of these support Asynchronous I/O).
diff --git a/docs/source/reference/evaluate-api.md b/docs/source/reference/evaluate-api.md
index 22852d298..2b419401a 100644
--- a/docs/source/reference/evaluate-api.md
+++ b/docs/source/reference/evaluate-api.md
@@ -20,7 +20,7 @@ limitations under the License.
 It is recommended that the [Evaluating NeMo Agent toolkit Workflows](./evaluate.md) guide be read before proceeding with this detailed documentation.
 :::

-The evaluation endpoint can be used to start evaluation jobs on a remote NeMo Agent toolkit server. This endpoint is only available when the `async_endpoints` optional dependency extra is installed. For users installing from source, this can be done by running `uv pip install -e '.[async_endpoints]'` from the root directory of the NeMo Agent toolkit library. Similarly, for users installing from PyPI, this can be done by running `pip install 'nvidia-nat[async_endpoints]'`.
+The evaluation endpoint can be used to start evaluation jobs on a remote NeMo Agent toolkit server. This endpoint is only available when the `async_endpoints` optional dependency extra is installed. For users installing from source, this can be done by running `uv pip install -e '.[async_endpoints]'` from the root directory of the NeMo Agent toolkit library. Similarly, for users installing from PyPI, this can be done by running `pip install "nvidia-nat[async_endpoints]"`.

 ## Evaluation Endpoint Overview
 ```{mermaid}
diff --git a/docs/source/workflows/evaluate.md b/docs/source/workflows/evaluate.md
index d5d24ddc2..ca989f9bd 100644
--- a/docs/source/workflows/evaluate.md
+++ b/docs/source/workflows/evaluate.md
@@ -34,7 +34,7 @@ uv pip install -e '.[profiling]'

 If you are installing from a package, you can install the sub-package by running the following command:
 ```bash
-uv pip install nvidia-nat[profiling]
+uv pip install "nvidia-nat[profiling]"
 ```

 ## Evaluating a Workflow
diff --git a/docs/source/workflows/mcp/index.md b/docs/source/workflows/mcp/index.md
index 95cebc8c5..dcc759dd8 100644
--- a/docs/source/workflows/mcp/index.md
+++ b/docs/source/workflows/mcp/index.md
@@ -21,7 +21,7 @@ NeMo Agent toolkit [Model Context Protocol (MCP)](https://modelcontextprotocol.i
 * An [MCP client](./mcp-client.md) to connect to and use tools served by remote MCP servers.
 * An [MCP server](./mcp-server.md) to publish tools using MCP to be used by any MCP client.

-**Note:** MCP client functionality requires the `nvidia-nat-mcp` package. Install it with `uv pip install nvidia-nat[mcp]`.
+**Note:** MCP client functionality requires the `nvidia-nat-mcp` package. Install it with `uv pip install "nvidia-nat[mcp]"`.


 ```{toctree}
diff --git a/docs/source/workflows/mcp/mcp-client.md b/docs/source/workflows/mcp/mcp-client.md
index acda2fc84..ea2d06ce1 100644
--- a/docs/source/workflows/mcp/mcp-client.md
+++ b/docs/source/workflows/mcp/mcp-client.md
@@ -28,7 +28,7 @@ This guide will cover how to use a NeMo Agent toolkit workflow as a MCP host wit
 MCP client functionality requires the `nvidia-nat-mcp` package. Install it with:
 ```bash
-uv pip install nvidia-nat[mcp]
+uv pip install "nvidia-nat[mcp]"
 ```

 ## Accessing Protected MCP Servers
 NeMo Agent toolkit can access protected MCP servers via the MCP client auth provider. For more information, see the [MCP Authentication](./mcp-auth.md) documentation.
diff --git a/docs/source/workflows/mcp/mcp-server.md b/docs/source/workflows/mcp/mcp-server.md
index 92e84b113..6f2c6a63b 100644
--- a/docs/source/workflows/mcp/mcp-server.md
+++ b/docs/source/workflows/mcp/mcp-server.md
@@ -62,7 +62,7 @@ nat mcp serve --config_file examples/getting_started/simple_calculator/configs/c
 To list the tools published by the MCP server you can use the `nat mcp client tool list` command. This command acts as an MCP client and connects to the MCP server running on the specified URL (defaults to `http://localhost:9901/mcp` for streamable-http, with backwards compatibility for `http://localhost:9901/sse`).

-**Note:** The `nat mcp client` commands require the `nvidia-nat-mcp` package. If you encounter an error about missing MCP client functionality, install it with `uv pip install nvidia-nat[mcp]`.
+**Note:** The `nat mcp client` commands require the `nvidia-nat-mcp` package. If you encounter an error about missing MCP client functionality, install it with `uv pip install "nvidia-nat[mcp]"`.

 ```bash
 nat mcp client tool list
 ```
diff --git a/docs/source/workflows/profiler.md b/docs/source/workflows/profiler.md
index 810fddd39..c03344df3 100644
--- a/docs/source/workflows/profiler.md
+++ b/docs/source/workflows/profiler.md
@@ -41,7 +41,7 @@ uv pip install -e ".[profiling]"

 If you are installing from a package, you need to install the `nvidia-nat[profiling]` package by running the following command:
 ```bash
-uv pip install nvidia-nat[profiling]
+uv pip install "nvidia-nat[profiling]"
 ```

 ## Current Profiler Architecture
diff --git a/examples/MCP/simple_auth_mcp/README.md b/examples/MCP/simple_auth_mcp/README.md
index 58a5d9e1a..1baf223d6 100644
--- a/examples/MCP/simple_auth_mcp/README.md
+++ b/examples/MCP/simple_auth_mcp/README.md
@@ -26,7 +26,7 @@ It is recommended to read the [MCP Authentication](../../../docs/source/workflow
 1. **Agent toolkit**: Ensure you have the Agent toolkit installed. If you have not already done so, follow the instructions in the [Install Guide](../../../docs/source/quick-start/installing.md#install-from-source) to create the development environment and install NeMo Agent Toolkit.
 2. **MCP Server**: Access to an MCP server that requires authentication (e.g., corporate Jira system)

-**Note**: If you installed NeMo Agent toolkit from source, MCP client functionality is already included. If you installed from PyPI, you may need to install the MCP client package separately with `uv pip install nvidia-nat[mcp]`.
+**Note**: If you installed NeMo Agent toolkit from source, MCP client functionality is already included. If you installed from PyPI, you may need to install the MCP client package separately with `uv pip install "nvidia-nat[mcp]"`.

 ## Install this Workflow
diff --git a/examples/RAG/simple_rag/configs/milvus_memory_rag_tools_config.yml b/examples/RAG/simple_rag/configs/milvus_memory_rag_tools_config.yml
index c4d102b8d..d61c09dc1 100644
--- a/examples/RAG/simple_rag/configs/milvus_memory_rag_tools_config.yml
+++ b/examples/RAG/simple_rag/configs/milvus_memory_rag_tools_config.yml
@@ -13,7 +13,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-
 memory:
   saas_memory:
     _type: mem0_memory
@@ -56,7 +55,7 @@ functions:
       The question should be about user preferences which will help you format your response.
       For example: "How does the user like responses formatted?"

-  # To use these tools you will need to install the nvidia-nat[langchain] package
+  # To use these tools you will need to install the "nvidia-nat[langchain]" package
   web_search_tool:
     _type: tavily_internet_search
     max_results: 5
@@ -85,11 +84,11 @@ embedders:
 workflow:
   _type: react_agent
   tool_names:
-  - cuda_retriever_tool
-  - mcp_retriever_tool
-  - add_memory
-  - get_memory
-  - web_search_tool
-  - code_generation_tool
+    - cuda_retriever_tool
+    - mcp_retriever_tool
+    - add_memory
+    - get_memory
+    - web_search_tool
+    - code_generation_tool
   verbose: true
   llm_name: nim_llm
diff --git a/examples/RAG/simple_rag/configs/milvus_rag_tools_config.yml b/examples/RAG/simple_rag/configs/milvus_rag_tools_config.yml
index b823d9cf5..dbb446797 100644
--- a/examples/RAG/simple_rag/configs/milvus_rag_tools_config.yml
+++ b/examples/RAG/simple_rag/configs/milvus_rag_tools_config.yml
@@ -13,7 +13,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-
 retrievers:
   cuda_retriever:
     _type: milvus_retriever
@@ -38,7 +37,7 @@ functions:
     retriever: mcp_retriever
     topic: Retrieve information about Model Context Protocol (MCP)

-  # To use these tools you will need to install the nvidia-nat[langchain] package
+  # To use these tools you will need to install the "nvidia-nat[langchain]" package
   web_search_tool:
     _type: tavily_internet_search
     max_results: 5
@@ -67,10 +66,10 @@ embedders:
 workflow:
   _type: react_agent
   tool_names:
-  - cuda_retriever_tool
-  - mcp_retriever_tool
-  - web_search_tool
-  - code_generation_tool
+    - cuda_retriever_tool
+    - mcp_retriever_tool
+    - web_search_tool
+    - code_generation_tool
   verbose: true
   llm_name: nim_llm
   additional_instructions: "If a tool call results in code or other artifacts being returned, you MUST include that in your thoughts and response."
diff --git a/pyproject.toml b/pyproject.toml
index 88c363db1..d1e025b14 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -66,7 +66,7 @@ maintainers = [{ name = "NVIDIA Corporation" }]

 [project.optional-dependencies]
-# Optional dependencies are things that users would want to install with NAT. i.e. `uv pip install nvidia-nat[langchain]`
+# Optional dependencies are things that users would want to install with NAT. e.g. `uv pip install "nvidia-nat[langchain]"`
 # Keep sorted!!!
 all = ["nvidia-nat-all"] # meta-package
 adk = ["nvidia-nat-adk"]
diff --git a/src/nat/agent/prompt_optimizer/register.py b/src/nat/agent/prompt_optimizer/register.py
index 83a7e2458..ed3d1533e 100644
--- a/src/nat/agent/prompt_optimizer/register.py
+++ b/src/nat/agent/prompt_optimizer/register.py
@@ -51,7 +51,7 @@ async def prompt_optimizer_function(config: PromptOptimizerConfig, builder: Buil
         from .prompt import mutator_prompt
     except ImportError as exc:
         raise ImportError("langchain-core is not installed. Please install it to use MultiLLMPlanner.\n"
-                          "This error can be resolve by installing nvidia-nat[langchain]") from exc
+                          "This error can be resolved by installing \"nvidia-nat[langchain]\".") from exc

     llm = await builder.get_llm(config.optimizer_llm, wrapper_type=LLMFrameworkEnum.LANGCHAIN)

@@ -111,7 +111,7 @@ async def prompt_recombiner_function(config: PromptRecombinerConfig, builder: Bu
         from langchain_core.prompts import PromptTemplate
     except ImportError as exc:
         raise ImportError("langchain-core is not installed. Please install it to use MultiLLMPlanner.\n"
-                          "This error can be resolve by installing nvidia-nat[langchain].") from exc
+                          "This error can be resolved by installing \"nvidia-nat[langchain]\".") from exc

     llm = await builder.get_llm(config.optimizer_llm, wrapper_type=LLMFrameworkEnum.LANGCHAIN)
diff --git a/src/nat/profiler/decorators/framework_wrapper.py b/src/nat/profiler/decorators/framework_wrapper.py
index 59f62febc..7a89ed0a4 100644
--- a/src/nat/profiler/decorators/framework_wrapper.py
+++ b/src/nat/profiler/decorators/framework_wrapper.py
@@ -123,7 +123,7 @@ async def wrapper(workflow_config, builder):
             except ImportError as e:
                 logger.warning(
                     "ADK profiler not available. " +
-                    "Install NAT with ADK extras: pip install 'nvidia-nat[adk]'. Error: %s",
+                    "Install NAT with ADK extras: pip install \"nvidia-nat[adk]\". Error: %s",
                     e)
             else:
                 handler = ADKProfilerHandler()
diff --git a/src/nat/profiler/forecasting/models/linear_model.py b/src/nat/profiler/forecasting/models/linear_model.py
index be6c9d19b..6c3589bd1 100644
--- a/src/nat/profiler/forecasting/models/linear_model.py
+++ b/src/nat/profiler/forecasting/models/linear_model.py
@@ -36,7 +36,7 @@ def __init__(self):
         except ImportError:
             logger.error(
                 "scikit-learn is not installed. Please install scikit-learn to use the LinearModel "
-                "profiling model or install `nvidia-nat[profiler]` to install all necessary profiling packages.")
+                "profiling model or install \"nvidia-nat[profiler]\" to install all necessary profiling packages.")
             raise
diff --git a/src/nat/profiler/forecasting/models/random_forest_regressor.py b/src/nat/profiler/forecasting/models/random_forest_regressor.py
index 51a3c40d1..3fa36310a 100644
--- a/src/nat/profiler/forecasting/models/random_forest_regressor.py
+++ b/src/nat/profiler/forecasting/models/random_forest_regressor.py
@@ -36,7 +36,7 @@ def __init__(self):
         except ImportError:
             logger.error(
                 "scikit-learn is not installed. Please install scikit-learn to use the RandomForest "
-                "profiling model or install `nvidia-nat[profiler]` to install all necessary profiling packages.")
+                "profiling model or install \"nvidia-nat[profiler]\" to install all necessary profiling packages.")
             raise
diff --git a/src/nat/profiler/inference_optimization/bottleneck_analysis/nested_stack_analysis.py b/src/nat/profiler/inference_optimization/bottleneck_analysis/nested_stack_analysis.py
index 3ca6b0347..bd02b9872 100644
--- a/src/nat/profiler/inference_optimization/bottleneck_analysis/nested_stack_analysis.py
+++ b/src/nat/profiler/inference_optimization/bottleneck_analysis/nested_stack_analysis.py
@@ -304,7 +304,7 @@ def save_gantt_chart(all_nodes: list[CallNode], output_path: str) -> None:
         import matplotlib.pyplot as plt
     except ImportError:
         logger.error("matplotlib is not installed. Please install matplotlib to use generate plots for the profiler "
-                     "or install `nvidia-nat[profiler]` to install all necessary profiling packages.")
+                     "or install \"nvidia-nat[profiler]\" to install all necessary profiling packages.")
         raise
diff --git a/src/nat/profiler/inference_optimization/experimental/prefix_span_analysis.py b/src/nat/profiler/inference_optimization/experimental/prefix_span_analysis.py
index 475740500..8da76bd1e 100644
--- a/src/nat/profiler/inference_optimization/experimental/prefix_span_analysis.py
+++ b/src/nat/profiler/inference_optimization/experimental/prefix_span_analysis.py
@@ -212,7 +212,7 @@ def run_prefixspan(sequences_map: dict[int, list[PrefixCallNode]],
         from prefixspan import PrefixSpan
     except ImportError:
         logger.error("prefixspan is not installed. Please install prefixspan to run the prefix analysis in the "
-                     "profiler or install `nvidia-nat[profiler]` to install all necessary profiling packages.")
+                     "profiler or install \"nvidia-nat[profiler]\" to install all necessary profiling packages.")
         raise
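
A note on why this patch quotes the extras specifiers throughout: in several shells, most notably zsh, square brackets are glob metacharacters, so an unquoted specifier such as `nvidia-nat[langchain]` can be rejected or rewritten by the shell before `pip` ever runs. The snippet below illustrates the failure mode; it is not part of the patch and assumes default glob settings in zsh and bash.

```bash
# zsh treats [...] as a glob pattern and aborts when nothing matches,
# so pip never runs:
#   $ pip install nvidia-nat[langchain]
#   zsh: no matches found: nvidia-nat[langchain]

# bash passes an unmatched glob through unchanged, but a file in the
# current directory whose name matches the pattern (for example a file
# named "nvidia-natl") would silently replace the argument.

# Quoting the specifier sidesteps both behaviors in either shell:
pip install "nvidia-nat[langchain]"

# Single quotes work just as well for a static specifier, which is why
# the editable installs in the docs are left as-is:
uv pip install -e '.[async_endpoints]'
```

Double quotes are used for the PyPI commands purely for consistency across the docs; there is nothing in these specifiers for the shell to expand either way.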