Add ability to disable SSL verification #1006
Walkthrough

Adds a configurable `verify_ssl` option to the OpenAI LLM and embedder model configurations and propagates it through each framework plugin adapter.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant Config as OpenAI*ModelConfig
    participant Adapter as Plugin Adapter
    participant HTTP as httpx / Env
    participant Client as LLM Client
    User->>Config: set verify_ssl (True/False)
    User->>Adapter: initialize with Config
    alt verify_ssl == False
        Note over Adapter,HTTP: Disable SSL verification
        opt LiteLLM-based
            Adapter->>HTTP: set env LITELLM_SSL_VERIFY="false"
        end
        opt httpx-based
            Adapter->>HTTP: create httpx.Client/AsyncClient(verify=False)
            Adapter->>Client: instantiate with http_client/http_async_client
        end
    else verify_ssl == True
        Adapter->>Client: instantiate without custom http clients / env
    end
    Adapter-->>User: yield configured client
```
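The flow above can be sketched in plain Python. This is an illustrative sketch only: `build_client_kwargs` and its parameters are invented names, not the toolkit's actual API.

```python
import os
from typing import Any


def build_client_kwargs(verify_ssl: bool, use_litellm_env: bool = False) -> dict[str, Any]:
    """Hypothetical sketch of the adapter branch shown in the diagram."""
    kwargs: dict[str, Any] = {}
    if not verify_ssl:
        if use_litellm_env:
            # LiteLLM-based adapters: SSL behavior is controlled via a
            # process-wide environment variable.
            os.environ["LITELLM_SSL_VERIFY"] = "false"
        else:
            # httpx-based adapters: pass explicit clients created with verify=False.
            import httpx

            kwargs["http_client"] = httpx.Client(verify=False)
            kwargs["http_async_client"] = httpx.AsyncClient(verify=False)
    # verify_ssl == True: no custom clients and no env changes (the safe default)
    return kwargs
```

Note the asymmetry the diagram captures: the httpx path is scoped to the client instance, while the LiteLLM path mutates process-wide state.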
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches

- ❌ Failed checks (1 warning)
- ✅ Passed checks (4 passed)
Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 Ruff (0.14.2) — ruff failed on:

- packages/nvidia_nat_langchain/src/nat/plugins/langchain/embedder.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py
- packages/nvidia_nat_langchain/tests/test_embedder_ssl_verification.py
Actionable comments posted: 2
🧹 Nitpick comments (9)
packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py (1)

108-111: Document global SSL environment-variable change for LiteLLM

LiteLLM only supports SSL configuration via `LITELLM_SSL_VERIFY`, so setting `os.environ` is required and applies process-wide. Add a code comment explaining this side effect and consider saving/restoring the original value, or using a context manager, to limit its scope.

packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py (2)
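One way to limit the scope of the process-wide override, as suggested above, is a small save/restore context manager. This is a generic sketch, not existing toolkit code; `temp_env` is an invented helper name.

```python
import os
from collections.abc import Iterator
from contextlib import contextmanager


@contextmanager
def temp_env(name: str, value: str) -> Iterator[None]:
    """Set an environment variable for the duration of the block, then restore it."""
    original = os.environ.get(name)
    os.environ[name] = value
    try:
        yield
    finally:
        if original is None:
            # Variable did not exist before: remove it again.
            os.environ.pop(name, None)
        else:
            # Variable existed: restore the prior value.
            os.environ[name] = original


# Usage sketch: confine the LiteLLM override to the client's lifetime.
# with temp_env("LITELLM_SSL_VERIFY", "false"):
#     ...  # build and use the LiteLLM-backed client
```

This does not make the override thread-safe (the variable is still visible to other threads while the block runs), but it does prevent the setting from leaking past the client's lifetime.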
147-147: Remove unused import.

The `logging` import is not used anywhere in this function. Apply this diff:

```diff
-    import logging
     from langchain_openai import ChatOpenAI
```
163-164: Document or extract timeout values as constants.

The hardcoded timeout values (60.0 seconds total, 10.0 seconds connect) lack explanation. Consider extracting them as named constants or adding a comment explaining why these specific values were chosen. Example:

```python
# Configure SSL verification if needed
if not llm_config.verify_ssl:
    import httpx

    # Use reasonable timeouts: 60s total, 10s for connection establishment
    timeout = httpx.Timeout(60.0, connect=10.0)
    sync_client = httpx.Client(verify=False, timeout=timeout)
    async_client = httpx.AsyncClient(verify=False, timeout=timeout)
    config_dict["http_client"] = sync_client
    config_dict["http_async_client"] = async_client
```

packages/nvidia_nat_langchain/tests/test_embedder_ssl_verification.py (1)
103-116: Remove unused fixtures from test.

The `test_openai_embedder_config_verify_ssl_field_exists` test doesn't use the `mock_openai_embeddings` or `mock_builder` fixtures since it only validates the config field existence. Apply this diff:

```diff
 @patch("langchain_openai.OpenAIEmbeddings")
-async def test_openai_embedder_config_verify_ssl_field_exists(self, mock_openai_embeddings, mock_builder):
+async def test_openai_embedder_config_verify_ssl_field_exists(self):
     """Test that OpenAIEmbedderModelConfig has the verify_ssl field."""
     # Test with explicit True
     config = OpenAIEmbedderModelConfig(model_name="text-embedding-ada-002", api_key="test", verify_ssl=True)
```

Also, the `@patch` decorator on line 102 is unnecessary for this test:

```diff
-    @patch("langchain_openai.OpenAIEmbeddings")
     async def test_openai_embedder_config_verify_ssl_field_exists(self):
```

packages/nvidia_nat_langchain/tests/test_llm_ssl_verification.py (5)
41-51: Remove unused `client` variable.

The `client` variable is assigned but never used. Either remove it or use `_` to indicate it's intentionally unused. Apply this diff:

```diff
-    async with openai_langchain(openai_config, mock_builder) as client:
+    async with openai_langchain(openai_config, mock_builder):
         # Verify ChatOpenAI was called
         mock_chat_openai.assert_called_once()
```
53-77: Refactor brittle SSL verification check.

Lines 75-77 access private httpx implementation details (`_transport._pool._ssl_context`), which is fragile and could break with httpx version changes. Consider mocking the `httpx.Client` and `httpx.AsyncClient` constructors to verify they're called with `verify=False` instead. Apply this approach:

```diff
+@patch("nat.plugins.langchain.llm.httpx.Client")
+@patch("nat.plugins.langchain.llm.httpx.AsyncClient")
 @patch("langchain_openai.ChatOpenAI")
-async def test_ssl_verification_disabled(self, mock_chat_openai, openai_config, mock_builder):
+async def test_ssl_verification_disabled(self, mock_chat_openai, mock_async_client, mock_sync_client, openai_config, mock_builder):
     """Test that SSL verification can be disabled."""
     # Disable SSL verification
     openai_config.verify_ssl = False

-    async with openai_langchain(openai_config, mock_builder) as client:
+    async with openai_langchain(openai_config, mock_builder):
         # Verify ChatOpenAI was called
         mock_chat_openai.assert_called_once()
         call_kwargs = mock_chat_openai.call_args[1]

         # When verify_ssl is False, custom http clients should be provided
         assert "http_client" in call_kwargs
         assert "http_async_client" in call_kwargs

-        # Verify the clients are httpx clients with verify=False
-        sync_client = call_kwargs["http_client"]
-        async_client = call_kwargs["http_async_client"]
-
-        assert isinstance(sync_client, httpx.Client)
-        assert isinstance(async_client, httpx.AsyncClient)
-
-        # Check that verify is False (httpx clients don't expose this directly,
-        # but we can verify they were created with the right config)
-        assert sync_client._transport._pool._ssl_context is None or not sync_client._transport._pool._ssl_context.check_hostname
+        # Verify httpx clients were created with verify=False
+        mock_sync_client.assert_called_once()
+        sync_call_kwargs = mock_sync_client.call_args[1]
+        assert sync_call_kwargs["verify"] is False
+
+        mock_async_client.assert_called_once()
+        async_call_kwargs = mock_async_client.call_args[1]
+        assert async_call_kwargs["verify"] is False
```
79-91: Remove unused `client` variable. Apply this diff:

```diff
-    async with openai_langchain(openai_config, mock_builder) as client:
+    async with openai_langchain(openai_config, mock_builder):
         # Verify ChatOpenAI was called
         mock_chat_openai.assert_called_once()
```
93-106: Remove unused `client` variable. Apply this diff:

```diff
-    async with openai_langchain(openai_config, mock_builder) as client:
+    async with openai_langchain(openai_config, mock_builder):
         call_kwargs = mock_chat_openai.call_args[1]
```
108-131: Remove unused `client` variable. Apply this diff:

```diff
-    async with openai_langchain(openai_config, mock_builder) as client:
+    async with openai_langchain(openai_config, mock_builder):
         call_kwargs = mock_chat_openai.call_args[1]
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (20)
- examples/configs/openai_ssl_disabled_example.yml (1 hunks)
- packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py (1 hunks)
- packages/nvidia_nat_adk/tests/test_llm_ssl_verification.py (1 hunks)
- packages/nvidia_nat_agno/src/nat/plugins/agno/llm.py (1 hunks)
- packages/nvidia_nat_agno/tests/test_llm_ssl_verification.py (1 hunks)
- packages/nvidia_nat_crewai/src/nat/plugins/crewai/llm.py (1 hunks)
- packages/nvidia_nat_crewai/tests/test_llm_ssl_verification.py (1 hunks)
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/embedder.py (1 hunks)
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py (1 hunks)
- packages/nvidia_nat_langchain/tests/test_embedder_ssl_verification.py (1 hunks)
- packages/nvidia_nat_langchain/tests/test_llm_ssl_verification.py (1 hunks)
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/embedder.py (1 hunks)
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/llm.py (1 hunks)
- packages/nvidia_nat_llama_index/tests/test_embedder_ssl_verification.py (1 hunks)
- packages/nvidia_nat_llama_index/tests/test_llm_ssl_verification.py (1 hunks)
- packages/nvidia_nat_semantic_kernel/src/nat/plugins/semantic_kernel/llm.py (1 hunks)
- packages/nvidia_nat_semantic_kernel/tests/test_llm_ssl_verification.py (1 hunks)
- src/nat/embedder/openai_embedder.py (1 hunks)
- src/nat/llm/openai_llm.py (1 hunks)
- tests/nat/llm/test_openai_ssl_verification.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (14)
**/*.{py,yaml,yml}
📄 CodeRabbit inference engine (.cursor/rules/nat-test-llm.mdc)
**/*.{py,yaml,yml}: Configure response_seq as a list of strings; values cycle per call, and [] yields an empty string.
Configure delay_ms to inject per-call artificial latency in milliseconds for nat_test_llm.
Files:
- packages/nvidia_nat_llama_index/tests/test_embedder_ssl_verification.py
- packages/nvidia_nat_semantic_kernel/src/nat/plugins/semantic_kernel/llm.py
- examples/configs/openai_ssl_disabled_example.yml
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/embedder.py
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/llm.py
- packages/nvidia_nat_agno/tests/test_llm_ssl_verification.py
- src/nat/llm/openai_llm.py
- packages/nvidia_nat_crewai/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_adk/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/embedder.py
- packages/nvidia_nat_semantic_kernel/tests/test_llm_ssl_verification.py
- tests/nat/llm/test_openai_ssl_verification.py
- src/nat/embedder/openai_embedder.py
- packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py
- packages/nvidia_nat_llama_index/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_agno/src/nat/plugins/agno/llm.py
- packages/nvidia_nat_langchain/tests/test_embedder_ssl_verification.py
- packages/nvidia_nat_crewai/src/nat/plugins/crewai/llm.py
- packages/nvidia_nat_langchain/tests/test_llm_ssl_verification.py
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/nat-test-llm.mdc)
**/*.py: Programmatic use: create TestLLMConfig(response_seq=[...], delay_ms=...), add with builder.add_llm("", cfg).
When retrieving the test LLM wrapper, use builder.get_llm(name, wrapper_type=LLMFrameworkEnum.) and call the framework’s method (e.g., ainvoke, achat, call).
**/*.py: In code comments/identifiers use NAT abbreviations as specified: nat for API namespace/CLI, nvidia-nat for package name, NAT for env var prefixes; do not use these abbreviations in documentation
Follow PEP 20 and PEP 8; run yapf with column_limit=120; use 4-space indentation; end files with a single trailing newline
Run ruff check --fix as linter (not formatter) using pyproject.toml config; fix warnings unless explicitly ignored
Respect naming: snake_case for functions/variables, PascalCase for classes, UPPER_CASE for constants
Treat pyright warnings as errors during development
Exception handling: use bare raise to re-raise; log with logger.error() when re-raising to avoid duplicate stack traces; use logger.exception() when catching without re-raising
Provide Google-style docstrings for every public module, class, function, and CLI command; first line concise and ending with a period; surround code entities with backticks
Validate and sanitize all user input, especially in web or CLI interfaces
Prefer httpx with SSL verification enabled by default and follow OWASP Top-10 recommendations
Use async/await for I/O-bound work; profile CPU-heavy paths with cProfile or mprof before optimizing; cache expensive computations with functools.lru_cache or external cache; leverage NumPy vectorized operations when beneficial
Files:
- packages/nvidia_nat_llama_index/tests/test_embedder_ssl_verification.py
- packages/nvidia_nat_semantic_kernel/src/nat/plugins/semantic_kernel/llm.py
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/embedder.py
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/llm.py
- packages/nvidia_nat_agno/tests/test_llm_ssl_verification.py
- src/nat/llm/openai_llm.py
- packages/nvidia_nat_crewai/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_adk/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/embedder.py
- packages/nvidia_nat_semantic_kernel/tests/test_llm_ssl_verification.py
- tests/nat/llm/test_openai_ssl_verification.py
- src/nat/embedder/openai_embedder.py
- packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py
- packages/nvidia_nat_llama_index/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_agno/src/nat/plugins/agno/llm.py
- packages/nvidia_nat_langchain/tests/test_embedder_ssl_verification.py
- packages/nvidia_nat_crewai/src/nat/plugins/crewai/llm.py
- packages/nvidia_nat_langchain/tests/test_llm_ssl_verification.py
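The guideline above to prefer httpx with SSL verification enabled by default mirrors Python's own `ssl` defaults. A minimal stdlib-only illustration (no httpx dependency assumed here):

```python
import ssl

# ssl.create_default_context() yields the secure baseline: certificate
# verification is required and hostname checking is on. Disabling
# verification (as verify=False does in httpx) drops both protections,
# so it should be a deliberate, documented opt-out.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname
```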
packages/*/tests/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
If a package contains Python code, include tests in a tests/ directory at the same level as pyproject.toml
Files:
- packages/nvidia_nat_llama_index/tests/test_embedder_ssl_verification.py
- packages/nvidia_nat_agno/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_crewai/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_adk/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_semantic_kernel/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_llama_index/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_langchain/tests/test_embedder_ssl_verification.py
- packages/nvidia_nat_langchain/tests/test_llm_ssl_verification.py
**/*
⚙️ CodeRabbit configuration file
**/*: # Code Review Instructions

- Ensure the code follows best practices and coding standards.
- For Python code, follow PEP 20 and PEP 8 for style guidelines.
- Check for security vulnerabilities and potential issues.
- Python methods should use type hints for all parameters and return values. Example: `def my_function(param1: int, param2: str) -> bool: pass`
- For Python exception handling, ensure proper stack trace preservation:
  - When re-raising exceptions: use bare `raise` statements to maintain the original stack trace, and use `logger.error()` (not `logger.exception()`) to avoid duplicate stack trace output.
  - When catching and logging exceptions without re-raising: always use `logger.exception()` to capture the full stack trace information.

# Documentation Review Instructions

- Verify that documentation and comments are clear and comprehensive.
- Verify that the documentation doesn't contain any TODOs, FIXMEs or placeholder text like "lorem ipsum".
- Verify that the documentation doesn't contain any offensive or outdated terms.
- Verify that documentation and comments are free of spelling mistakes; ensure the documentation doesn't contain any words listed in the `ci/vale/styles/config/vocabularies/nat/reject.txt` file. Words that might appear to be spelling mistakes but are listed in the `ci/vale/styles/config/vocabularies/nat/accept.txt` file are OK.

# Misc.

- All code (except .mdc files that contain Cursor rules) should be licensed under the Apache License 2.0, and should contain an Apache License 2.0 header comment at the top of each file.
- Confirm that copyright years are up-to-date whenever a file is changed.
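The two exception-handling patterns described above can be illustrated with a short sketch (the functions are invented for the example):

```python
import logging

logger = logging.getLogger(__name__)


def parse_port(raw: str) -> int:
    """Re-raise pattern: log with logger.error(), then a bare raise."""
    try:
        return int(raw)
    except ValueError:
        logger.error("Invalid port value: %r", raw)
        raise  # bare raise keeps the original traceback; no duplicate trace in logs


def parse_port_or_default(raw: str, default: int = 8080) -> int:
    """Catch-and-continue pattern: logger.exception() records the full stack trace."""
    try:
        return int(raw)
    except ValueError:
        logger.exception("Falling back to default port")
        return default
```

The split exists because a re-raised exception will be formatted with its traceback by whatever ultimately handles it; logging the trace again at the intermediate level would duplicate it.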
Files:
- packages/nvidia_nat_llama_index/tests/test_embedder_ssl_verification.py
- packages/nvidia_nat_semantic_kernel/src/nat/plugins/semantic_kernel/llm.py
- examples/configs/openai_ssl_disabled_example.yml
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/embedder.py
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/llm.py
- packages/nvidia_nat_agno/tests/test_llm_ssl_verification.py
- src/nat/llm/openai_llm.py
- packages/nvidia_nat_crewai/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_adk/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/embedder.py
- packages/nvidia_nat_semantic_kernel/tests/test_llm_ssl_verification.py
- tests/nat/llm/test_openai_ssl_verification.py
- src/nat/embedder/openai_embedder.py
- packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py
- packages/nvidia_nat_llama_index/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_agno/src/nat/plugins/agno/llm.py
- packages/nvidia_nat_langchain/tests/test_embedder_ssl_verification.py
- packages/nvidia_nat_crewai/src/nat/plugins/crewai/llm.py
- packages/nvidia_nat_langchain/tests/test_llm_ssl_verification.py
packages/**/*
⚙️ CodeRabbit configuration file
packages/**/*:

- This directory contains optional plugin packages for the toolkit; each should contain a `pyproject.toml` file.
- The `pyproject.toml` file should declare a dependency on `nvidia-nat` or another package with a name starting with `nvidia-nat-`. This dependency should be declared using `~=<version>`, and the version should be a two digit version (ex: `~=1.0`).
- Not all packages contain Python code; if they do, they should also contain their own set of tests, in a `tests/` directory at the same level as the `pyproject.toml` file.
Files:
- packages/nvidia_nat_llama_index/tests/test_embedder_ssl_verification.py
- packages/nvidia_nat_semantic_kernel/src/nat/plugins/semantic_kernel/llm.py
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/embedder.py
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/llm.py
- packages/nvidia_nat_agno/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_crewai/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_adk/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/embedder.py
- packages/nvidia_nat_semantic_kernel/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py
- packages/nvidia_nat_llama_index/tests/test_llm_ssl_verification.py
- packages/nvidia_nat_agno/src/nat/plugins/agno/llm.py
- packages/nvidia_nat_langchain/tests/test_embedder_ssl_verification.py
- packages/nvidia_nat_crewai/src/nat/plugins/crewai/llm.py
- packages/nvidia_nat_langchain/tests/test_llm_ssl_verification.py
packages/*/src/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
Importable Python code inside packages must live under packages//src/
Files:
- packages/nvidia_nat_semantic_kernel/src/nat/plugins/semantic_kernel/llm.py
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/embedder.py
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/llm.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/embedder.py
- packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py
- packages/nvidia_nat_agno/src/nat/plugins/agno/llm.py
- packages/nvidia_nat_crewai/src/nat/plugins/crewai/llm.py
{src/**/*.py,packages/*/src/**/*.py}
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
All public APIs must have Python 3.11+ type hints on parameters and return values; prefer typing/collections.abc abstractions; use typing.Annotated when useful
Files:
- packages/nvidia_nat_semantic_kernel/src/nat/plugins/semantic_kernel/llm.py
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/embedder.py
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/llm.py
- src/nat/llm/openai_llm.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/embedder.py
- src/nat/embedder/openai_embedder.py
- packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py
- packages/nvidia_nat_agno/src/nat/plugins/agno/llm.py
- packages/nvidia_nat_crewai/src/nat/plugins/crewai/llm.py
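A minimal illustration of the Python 3.11+ hinting style this guideline asks for, including `typing.Annotated` and a `collections.abc` abstraction (the function is invented for the example):

```python
from collections.abc import Sequence
from typing import Annotated


def mean(values: Annotated[Sequence[float], "must be non-empty"]) -> float:
    """Compute the arithmetic mean of `values`."""
    # Sequence[float] accepts lists, tuples, etc.; Annotated attaches
    # human-readable metadata without changing runtime behavior.
    return sum(values) / len(values)
```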
**/*.{yaml,yml}
📄 CodeRabbit inference engine (.cursor/rules/nat-test-llm.mdc)
In workflow/config YAML, set llms.._type: nat_test_llm to stub responses.
Files:
examples/configs/openai_ssl_disabled_example.yml
**/configs/**
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
Configuration files consumed by code must be stored next to that code in a configs/ folder
Files:
examples/configs/openai_ssl_disabled_example.yml
examples/**/*
⚙️ CodeRabbit configuration file
examples/**/*:

- This directory contains example code and usage scenarios for the toolkit; at a minimum an example should contain a README.md or README.ipynb file.
- If an example contains Python code, it should be placed in a subdirectory named `src/` and should contain a `pyproject.toml` file. Optionally, it might also contain scripts in a `scripts/` directory.
- If an example contains YAML files, they should be placed in a subdirectory named `configs/`.
- If an example contains sample data files, they should be placed in a subdirectory named `data/`, and should be checked into git-lfs.
Files:
examples/configs/openai_ssl_disabled_example.yml
src/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
All importable Python code must live under src/ (or packages//src/)
Files:
- src/nat/llm/openai_llm.py
- src/nat/embedder/openai_embedder.py
src/nat/**/*
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
Changes in src/nat should prioritize backward compatibility
Files:
- src/nat/llm/openai_llm.py
- src/nat/embedder/openai_embedder.py
⚙️ CodeRabbit configuration file
This directory contains the core functionality of the toolkit. Changes should prioritize backward compatibility.
Files:
- src/nat/llm/openai_llm.py
- src/nat/embedder/openai_embedder.py
tests/**/*.py
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
Unit tests reside under tests/ and should use markers defined in pyproject.toml (e.g., integration)
Files:
tests/nat/llm/test_openai_ssl_verification.py
⚙️ CodeRabbit configuration file
tests/**/*.py:

- Ensure that tests are comprehensive, cover edge cases, and validate the functionality of the code.
- Test functions should be named using the `test_` prefix, using snake_case.
- Any frequently repeated code should be extracted into pytest fixtures.
- Pytest fixtures should define the name argument when applying the `pytest.fixture` decorator. The fixture function being decorated should be named using the `fixture_` prefix, using snake_case. Example:

```python
@pytest.fixture(name="my_fixture")
def fixture_my_fixture():
    pass
```
Files:
tests/nat/llm/test_openai_ssl_verification.py
{tests/**/*.py,examples/*/tests/**/*.py}
📄 CodeRabbit inference engine (.cursor/rules/general.mdc)
{tests/**/*.py,examples/*/tests/**/*.py}: Use pytest (with pytest-asyncio for async); name test files test_*.py; test functions start with test_; extract repeated code into fixtures; fixtures must set name in decorator and be named with fixture_ prefix
Mock external services with pytest_httpserver or unittest.mock; do not hit live endpoints
Mark expensive tests with @pytest.mark.slow or @pytest.mark.integration
Files:
tests/nat/llm/test_openai_ssl_verification.py
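The "mock external services, do not hit live endpoints" rule above can be sketched with `unittest.mock` and an injected client. The names here are invented for the example, not toolkit code:

```python
from unittest import mock


def fetch_status(client) -> str:
    """Code under test: depends on an injected HTTP-like client."""
    response = client.get("/health")
    return response["status"]


def test_fetch_status_uses_mock():
    # MagicMock stands in for the real service; no network traffic occurs.
    fake = mock.MagicMock()
    fake.get.return_value = {"status": "ok"}

    assert fetch_status(fake) == "ok"
    # Verify the interaction, not private implementation details.
    fake.get.assert_called_once_with("/health")


test_fetch_status_uses_mock()
```

Asserting on constructor/method call arguments (as in the review's suggested `httpx.Client` mocking) keeps tests stable across library upgrades, unlike reaching into private attributes.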
🧬 Code graph analysis (9)
packages/nvidia_nat_llama_index/tests/test_embedder_ssl_verification.py (3)

- src/nat/builder/builder.py (1): Builder (68-290)
- src/nat/embedder/openai_embedder.py (1): OpenAIEmbedderModelConfig (27-39)
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/embedder.py (1): openai_llama_index (62-82)

packages/nvidia_nat_agno/tests/test_llm_ssl_verification.py (3)

- src/nat/builder/builder.py (1): Builder (68-290)
- src/nat/llm/openai_llm.py (2): openai_llm (55-57), OpenAIModelConfig (31-51)
- packages/nvidia_nat_agno/src/nat/plugins/agno/llm.py (1): openai_agno (88-107)

packages/nvidia_nat_crewai/tests/test_llm_ssl_verification.py (2)

- src/nat/llm/openai_llm.py (2): openai_llm (55-57), OpenAIModelConfig (31-51)
- packages/nvidia_nat_crewai/src/nat/plugins/crewai/llm.py (1): openai_crewai (123-137)

packages/nvidia_nat_adk/tests/test_llm_ssl_verification.py (3)

- src/nat/builder/builder.py (1): Builder (68-290)
- src/nat/llm/openai_llm.py (2): openai_llm (55-57), OpenAIModelConfig (31-51)
- packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py (1): openai_adk (91-113)

packages/nvidia_nat_semantic_kernel/tests/test_llm_ssl_verification.py (3)

- src/nat/builder/builder.py (1): Builder (68-290)
- src/nat/llm/openai_llm.py (2): openai_llm (55-57), OpenAIModelConfig (31-51)
- packages/nvidia_nat_semantic_kernel/src/nat/plugins/semantic_kernel/llm.py (1): openai_semantic_kernel (105-125)

tests/nat/llm/test_openai_ssl_verification.py (2)

- src/nat/embedder/openai_embedder.py (1): OpenAIEmbedderModelConfig (27-39)
- src/nat/llm/openai_llm.py (2): openai_llm (55-57), OpenAIModelConfig (31-51)

packages/nvidia_nat_llama_index/tests/test_llm_ssl_verification.py (3)

- src/nat/builder/builder.py (1): Builder (68-290)
- src/nat/llm/openai_llm.py (2): openai_llm (55-57), OpenAIModelConfig (31-51)
- packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/llm.py (1): openai_llama_index (104-118)

packages/nvidia_nat_langchain/tests/test_embedder_ssl_verification.py (3)

- src/nat/builder/builder.py (1): Builder (68-290)
- src/nat/embedder/openai_embedder.py (1): OpenAIEmbedderModelConfig (27-39)
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/embedder.py (1): openai_langchain (59-79)

packages/nvidia_nat_langchain/tests/test_llm_ssl_verification.py (3)

- src/nat/builder/builder.py (1): Builder (68-290)
- src/nat/llm/openai_llm.py (2): openai_llm (55-57), OpenAIModelConfig (31-51)
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py (1): openai_langchain (145-172)
🪛 Ruff (0.14.0)
packages/nvidia_nat_llama_index/tests/test_embedder_ssl_verification.py
44-44: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
58-58: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
75-75: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
packages/nvidia_nat_semantic_kernel/src/nat/plugins/semantic_kernel/llm.py
121-121: Probable use of httpx call with verify=False disabling SSL certificate checks
(S501)
packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/embedder.py
72-72: Probable use of httpx call with verify=False disabling SSL certificate checks
(S501)
packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/llm.py
114-114: Probable use of httpx call with verify=False disabling SSL certificate checks
(S501)
packages/nvidia_nat_agno/tests/test_llm_ssl_verification.py
44-44: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
58-58: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
75-75: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
91-91: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
packages/nvidia_nat_crewai/tests/test_llm_ssl_verification.py
45-45: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
59-59: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
71-71: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
88-88: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
packages/nvidia_nat_adk/tests/test_llm_ssl_verification.py
45-45: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
59-59: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
71-71: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
90-90: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
107-107: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
packages/nvidia_nat_langchain/src/nat/plugins/langchain/embedder.py
69-69: Probable use of httpx call with verify=False disabling SSL certificate checks
(S501)
packages/nvidia_nat_semantic_kernel/tests/test_llm_ssl_verification.py
44-44: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
58-58: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
75-75: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
92-92: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py
163-163: Probable use of httpx call with verify=False disabling SSL certificate checks
(S501)
164-164: Probable use of httpx call with verify=False disabling SSL certificate checks
(S501)
packages/nvidia_nat_llama_index/tests/test_llm_ssl_verification.py
44-44: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
58-58: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
75-75: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
92-92: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
packages/nvidia_nat_agno/src/nat/plugins/agno/llm.py
103-103: Probable use of httpx call with verify=False disabling SSL certificate checks
(S501)
packages/nvidia_nat_langchain/tests/test_embedder_ssl_verification.py
44-44: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
58-58: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
75-75: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
90-90: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
103-103: Unused method argument: mock_openai_embeddings
(ARG002)
103-103: Unused method argument: mock_builder
(ARG002)
packages/nvidia_nat_langchain/tests/test_llm_ssl_verification.py
44-44: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
59-59: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
84-84: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
98-98: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
117-117: Local variable client is assigned to but never used
Remove assignment to unused variable client
(F841)
134-134: Unused method argument: mock_chat_openai
(ARG002)
134-134: Unused method argument: mock_builder
(ARG002)
🔇 Additional comments (15)
packages/nvidia_nat_crewai/src/nat/plugins/crewai/llm.py (2)
127-128: Config dict preparation looks correct. Properly excludes `verify_ssl` from the config passed to the LLM constructor. However, see the next comment about the overall SSL verification approach.

130-135: Verify per-client SSL configuration support. Setting `LITELLM_SSL_VERIFY` via `os.environ` produces a process-wide side effect. Confirm whether `crewai.LLM` accepts a per-instance SSL-verify or custom HTTP client parameter; otherwise, isolate the environment change (e.g. a context manager) or document the global limitation.

src/nat/embedder/openai_embedder.py (1)

37-39: LGTM! Secure default with clear documentation. The `verify_ssl` field addition follows best practices:
- Defaults to `True` (SSL verification enabled)
- Clear description indicating when to disable
- Proper type annotation

packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/embedder.py (1)

66-72: LGTM! Proper SSL bypass implementation. The SSL verification handling is implemented correctly:
- `verify_ssl` is excluded from the config dict (not passed to `OpenAIEmbedding`)
- Lazy import of `httpx` only when needed
- Conditional injection of `http_client` with `verify=False`

Note: The Ruff S501 warning is expected and intentional for this feature. The security implications are documented in the PR and example config.

src/nat/llm/openai_llm.py (1)

49-51: LGTM! Consistent with embedder config. The `verify_ssl` field addition mirrors the embedder config pattern with secure defaults.

packages/nvidia_nat_langchain/src/nat/plugins/langchain/embedder.py (1)

63-69: LGTM! Consistent SSL bypass implementation. The implementation follows the same correct pattern as the LlamaIndex integration. The Ruff S501 warning is expected and intentional.

examples/configs/openai_ssl_disabled_example.yml (1)

1-43: LGTM! Comprehensive example with proper security warnings. The example configuration demonstrates:
- Clear security warnings about SSL verification bypass (lines 5-6)
- Complete usage for both LLM and embedder
- Documentation of supported frameworks
- Alternative framework examples

The warnings appropriately emphasize that disabling SSL should only be used in development/testing environments.

packages/nvidia_nat_agno/src/nat/plugins/agno/llm.py (1)

94-103: LGTM! Consistent implementation pattern. The SSL verification handling follows the same pattern as the LangChain and LlamaIndex integrations. The Ruff S501 warning is expected.

packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/llm.py (1)

108-114: LGTM! Consistent with embedder implementation. The LLM SSL handling mirrors the embedder pattern correctly. The Ruff S501 warning is expected and intentional.

packages/nvidia_nat_langchain/tests/test_llm_ssl_verification.py (2)

1-26: LGTM! The license header, imports, and module docstring are correct.

28-39: LGTM! The fixtures are correctly defined following pytest conventions with the `fixture_` prefix and `name` argument.

tests/nat/llm/test_openai_ssl_verification.py (4)

1-21: LGTM! The license header, imports, and module docstring are correct.

23-68: LGTM! The test class comprehensively validates the `verify_ssl` field in `OpenAIModelConfig`, covering default values, explicit settings, interaction with other parameters, and serialization behavior.

70-113: LGTM! The test class mirrors the coverage for `OpenAIEmbedderModelConfig`, ensuring consistent SSL verification behavior across LLM and embedder configurations.

115-146: LGTM! The YAML configuration parsing tests validate backward compatibility and configuration dictionary parsing, which is essential for real-world usage scenarios.
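The process-wide `LITELLM_SSL_VERIFY` side effect flagged for the CrewAI plugin could be contained with a scoped environment override. A minimal sketch under that assumption (the `scoped_env` helper is illustrative, not part of the toolkit, and is not safe across concurrently running workflows since `os.environ` is shared):

```python
import os
from contextlib import contextmanager


@contextmanager
def scoped_env(name: str, value: str):
    """Temporarily set an environment variable, restoring the prior state on exit."""
    previous = os.environ.get(name)
    os.environ[name] = value
    try:
        yield
    finally:
        if previous is None:
            os.environ.pop(name, None)
        else:
            os.environ[name] = previous


# Hypothetical usage: disable LiteLLM SSL verification only inside this block.
had_var = "LITELLM_SSL_VERIFY" in os.environ
with scoped_env("LITELLM_SSL_VERIFY", "false"):
    assert os.environ["LITELLM_SSL_VERIFY"] == "false"
# Outside the block the previous state is restored.
assert ("LITELLM_SSL_VERIFY" in os.environ) == had_var
```

This contains the mutation to the construction scope, but because environment variables are global, any LiteLLM call made elsewhere while the block is open still sees the override.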
```python
@patch("langchain_openai.ChatOpenAI")
async def test_openai_model_config_verify_ssl_field_exists(self, mock_chat_openai, mock_builder):
    """Test that OpenAIModelConfig has the verify_ssl field."""
    # Test with explicit True
    config = OpenAIModelConfig(model_name="gpt-4", api_key="test", verify_ssl=True)
    assert hasattr(config, "verify_ssl")
    assert config.verify_ssl is True

    # Test with explicit False
    config = OpenAIModelConfig(model_name="gpt-4", api_key="test", verify_ssl=False)
    assert config.verify_ssl is False

    # Test default value (should be True)
    config = OpenAIModelConfig(model_name="gpt-4", api_key="test")
    assert config.verify_ssl is True
```
🛠️ Refactor suggestion | 🟠 Major
Remove unused method arguments.
The test doesn't call openai_langchain, so mock_chat_openai and mock_builder parameters are unnecessary.
Apply this diff:
```diff
-@patch("langchain_openai.ChatOpenAI")
-async def test_openai_model_config_verify_ssl_field_exists(self, mock_chat_openai, mock_builder):
+def test_openai_model_config_verify_ssl_field_exists(self):
     """Test that OpenAIModelConfig has the verify_ssl field."""
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def test_openai_model_config_verify_ssl_field_exists(self):
    """Test that OpenAIModelConfig has the verify_ssl field."""
    # Test with explicit True
    config = OpenAIModelConfig(model_name="gpt-4", api_key="test", verify_ssl=True)
    assert hasattr(config, "verify_ssl")
    assert config.verify_ssl is True
    # Test with explicit False
    config = OpenAIModelConfig(model_name="gpt-4", api_key="test", verify_ssl=False)
    assert config.verify_ssl is False
    # Test default value (should be True)
    config = OpenAIModelConfig(model_name="gpt-4", api_key="test")
    assert config.verify_ssl is True
```
🧰 Tools
🪛 Ruff (0.14.0)
134-134: Unused method argument: mock_chat_openai
(ARG002)
134-134: Unused method argument: mock_builder
(ARG002)
🤖 Prompt for AI Agents
In packages/nvidia_nat_langchain/tests/test_llm_ssl_verification.py around lines
133 to 147, the test function is declared with unused patched arguments and a
patch decorator even though the test doesn't call openai_langchain; remove the
@patch("langchain_openai.ChatOpenAI") decorator (or any other unnecessary patch
decorators) and change the test function signature to def
test_openai_model_config_verify_ssl_field_exists(self): (or remove self if not a
method) so the function has no unused parameters; ensure imports or other
decorators are adjusted accordingly.
```python
# Prepare kwargs for OpenAIChatCompletion
kwargs = {"ai_model_id": llm_config.model_name}

# Add optional parameters if provided
if llm_config.api_key:
    kwargs["api_key"] = llm_config.api_key
if llm_config.base_url:
    kwargs["base_url"] = llm_config.base_url

# Configure SSL verification if needed
if not llm_config.verify_ssl:
    import httpx
    kwargs["async_client"] = httpx.AsyncClient(verify=False)

llm = OpenAIChatCompletion(**kwargs)
```
🧩 Analysis chain
Verify httpx.AsyncClient lifecycle management to prevent resource leaks.
The implementation correctly supports SSL verification control and follows the coding guideline to prefer SSL verification enabled by default. The static analysis hint (S501) is expected here—disabling SSL verification is intentional per the PR objectives for dev/test environments.
However, the httpx.AsyncClient(verify=False) is created without explicit cleanup. Verify that OpenAIChatCompletion properly manages and closes the provided async_client to prevent resource leaks.
🏁 Script executed:
```bash
#!/bin/bash
# Description: Verify that Semantic Kernel's OpenAIChatCompletion manages the httpx AsyncClient lifecycle.
# Expected: Find evidence that OpenAIChatCompletion closes the async_client or documents client lifecycle ownership.

# Search for AsyncClient lifecycle management in Semantic Kernel documentation or source
rg -nP --type=py -C5 'async_client.*close|AsyncClient.*context|async_client.*lifecycle'

# Search for OpenAIChatCompletion constructor signature and async_client parameter handling
ast-grep --pattern $'class OpenAIChatCompletion {
  $$$
  def __init__($$$, async_client=$_, $$$) {
    $$$
  }
  $$$
}'
```

Length of output: 2547
Ensure AsyncClient is properly closed to prevent resource leaks.
In packages/nvidia_nat_semantic_kernel/src/nat/plugins/semantic_kernel/llm.py (lines 119–121), you instantiate httpx.AsyncClient(verify=False) without cleanup. Implement lifecycle management—e.g., wrap in a context manager or explicitly close the client after use (or handle it inside OpenAIChatCompletion).
🧰 Tools
🪛 Ruff (0.14.0)
121-121: Probable use of httpx call with verify=False disabling SSL certificate checks
(S501)
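One way to address the cleanup concern is to make the caller own the client and close it deterministically. The sketch below uses a stand-in class so the example stays self-contained; in the plugin code the object would be `httpx.AsyncClient(verify=False)`, which exposes the same `aclose()` / async-context-manager protocol:

```python
import asyncio
from contextlib import AsyncExitStack


class FakeAsyncClient:
    """Stand-in for httpx.AsyncClient, used here only to illustrate ownership."""

    def __init__(self) -> None:
        self.closed = False

    async def aclose(self) -> None:
        self.closed = True

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc) -> None:
        await self.aclose()


async def build_and_use_llm() -> FakeAsyncClient:
    # The builder owns the client: registering it on an exit stack guarantees
    # it is closed on shutdown instead of leaking the connection pool.
    async with AsyncExitStack() as stack:
        client = await stack.enter_async_context(FakeAsyncClient())
        # ... pass `client` as async_client to the LLM constructor here ...
    return client  # closed by the time we return


client = asyncio.run(build_and_use_llm())
print(client.closed)  # → True
```

In the toolkit this would mean tying the exit stack to the workflow builder's lifetime rather than a single function call, so the client stays open for the LLM's whole lifespan and is still closed exactly once.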
/ok to test

@mdemoret-nv, there was an error processing your request: See the following link for more information: https://docs.gha-runners.nvidia.com/cpr/e/1/

/ok to test d0e2b48
I don't think this is consistently implemented/tested across frameworks.
I also think the tests are not necessarily testing what should be tested (e.g. relying on exact implementation, testing orthogonal behavior)
```python
# Configure SSL verification if needed
if not llm_config.verify_ssl:
    import httpx
    config_obj["http_client"] = httpx.Client(verify=False)
```
Instead of creating a new client, prefer:

```python
config_obj["client_params"] = {"verify": False}
```

```python
sync_client = httpx.Client(verify=False, timeout=httpx.Timeout(60.0, connect=10.0))
async_client = httpx.AsyncClient(verify=False, timeout=httpx.Timeout(60.0, connect=10.0))
```
Why are you specifying timeouts? Specifically, you only appear to do this for LangChain
```python
# Configure SSL verification if needed
if not embedder_config.verify_ssl:
    import httpx
    config_dict["http_client"] = httpx.Client(verify=False)
```
Why not also set the async client?
```python
# Configure SSL verification if needed
if not embedder_config.verify_ssl:
    import httpx
    config_dict["http_client"] = httpx.Client(verify=False)
```
Don't we also need to specify the async http client?
```python
# Configure SSL verification if needed
if not llm_config.verify_ssl:
    import httpx
    config_dict["http_client"] = httpx.Client(verify=False)
```
Don't we also need to specify the async http client?
```python
# Configure SSL verification if needed
if not llm_config.verify_ssl:
    import httpx
    kwargs["async_client"] = httpx.AsyncClient(verify=False)
```
This is the wrong type. It must be `AsyncOpenAI`.
```yaml
api_key: ${OPENAI_API_KEY}
verify_ssl: false  # Disable SSL certificate verification

# The verify_ssl parameter is supported across all frameworks:
```
It may be handled across all frameworks, but it is not handled with the LiteLLM config type that is part of NAT.
```python
def test_model_dump_exclude_verify_ssl(self):
    """Test that verify_ssl can be excluded from model dump."""
    config = OpenAIModelConfig(model_name="gpt-4", verify_ssl=False)
    dumped = config.model_dump(exclude={"verify_ssl"})
    assert "verify_ssl" not in dumped
```
I don't understand the purpose of this test.
<!--
SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

## ❄️ Code freeze for `release/1.3` and `v1.3.0` release

### What does this mean?
Only critical/hotfix level issues should be merged into `release/1.3` until release (merging of this PR).

### What is the purpose of this PR?
- Update documentation
- Allow testing for the new release
- Enable a means to merge `release/1.3` into `main` for the release
Signed-off-by: David Gardner <[email protected]>
Signed-off-by: David Gardner <[email protected]>
* Fixes an issue where the `develop` branch is incorrectly reporting itself as v1.3

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
- Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.

## Summary by CodeRabbit
* **Bug Fixes**
  * Improved version detection to use first-parent tags, yielding more accurate and consistent version reporting in builds and test logs.
  * Added a safe fallback when no tag is present, preventing failures and showing a clear "no-tag" indicator.
* **Chores**
  * Unified tag retrieval across CI scripts to a single helper, simplifying maintenance and reducing inconsistencies.

Authors:
- David Gardner (https://github.com/dagardner-nv)

Approvers:
- Will Killian (https://github.com/willkill07)

URL: NVIDIA#940
1. Add `intermediate_manager` instance tracking similar to observability
2. Add configurable tracemalloc to track top users
3. Add a debug endpoint for dumping stats

Sample usage:
```
nat mcp serve --config_file examples/getting_started/simple_calculator/configs/config.yml --enable_memory_profiling True --memory_profile_interval 10 --memory_profile_log_level=INFO
```

Start the client and run eval against that endpoint to simulate multiple users:
```
nat serve --config_file examples/MCP/simple_calculator_mcp/configs/config-mcp-client.yml
nat eval --config_file examples/evaluation_and_profiling/simple_calculator_eval/configs/config-tunable-rag-eval.yml --endpoint http://localhost:8000 --reps=2
```

Sample output (an intentional resource leak was used to produce this output; this is not expected with a regular workflow):
```
================================================================================
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:166 - MEMORY PROFILE AFTER 20 REQUESTS:
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:167 - Current Memory: 2.95 MB
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:168 - Peak Memory: 7.35 MB
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:169 -
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:170 - NAT COMPONENT INSTANCES:
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:171 - IntermediateStepManagers: 1 active (0 outstanding steps)
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:174 - BaseExporters: 0 active (0 isolated)
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:175 - Subject (event streams): 1 instances
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:182 - TOP 10 MEMORY GROWTH SINCE BASELINE:
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:189 - #1: /home/devcontainers/.local/share/uv/python/cpython-3.12.8-linux-x86_64-gnu/lib/python3.12/linecache.py:139: size=753 KiB (+753 KiB), count=7950 (+7950), average=97 B
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:189 - #2: <frozen importlib._bootstrap_external>:757: size=704 KiB (+704 KiB), count=5558 (+5558), average=130 B
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:189 - #3: <frozen abc>:123: size=188 KiB (+188 KiB), count=2460 (+2460), average=78 B
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:189 - #4: /home/devcontainers/dev/forks/nat/examples/getting_started/simple_calculator/src/nat_simple_calculator/register.py:118: size=98.1 KiB (+98.1 KiB), count=10 (+10), average=10041 B
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:189 - #5: <frozen abc>:106: size=67.9 KiB (+67.9 KiB), count=238 (+238), average=292 B
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:189 - #6: /home/devcontainers/dev/forks/nat/examples/getting_started/simple_calculator/src/nat_simple_calculator/register.py:128: size=48.9 KiB (+48.9 KiB), count=10 (+10), average=5007 B
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:189 - #7: /home/devcontainers/dev/forks/nat/examples/getting_started/simple_calculator/src/nat_simple_calculator/register.py:112: size=37.7 KiB (+37.7 KiB), count=11 (+11), average=3509 B
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:189 - #8: /home/devcontainers/dev/forks/nat/.venv/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py:40: size=30.3 KiB (+30.3 KiB), count=346 (+346), average=90 B
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:189 - #9: /home/devcontainers/dev/forks/nat/.venv/lib/python3.12/site-packages/pydantic/main.py:253: size=26.0 KiB (+26.0 KiB), count=167 (+167), average=159 B
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:189 - #10: /home/devcontainers/dev/forks/nat/.venv/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py:37: size=24.4 KiB (+24.4 KiB), count=500 (+500), average=50 B
2025-10-09 19:53:34 - INFO - nat.front_ends.mcp.memory_profiler:191 - ===============================================================================
```

You can also get aggregate stats via the `debug/memory/stats` endpoint on the MCP server:
```
curl -s http://localhost:9901/debug/memory/stats | jq
{
  "enabled": true,
  "request_count": 16,
  "current_memory_mb": 3.41,
  "peak_memory_mb": 7.75,
  "active_intermediate_managers": 1,
  "outstanding_steps": 0,
  "active_exporters": 0,
  "isolated_exporters": 0,
  "subject_instances": 0
}
```

- I am familiar with the [Contributing Guidelines](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/resources/contributing.md).
- We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
- Any contribution which contains commits that are not Signed-Off will not be accepted.
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.

* **New Features**
  * Optional memory profiling for the MCP front end with an enable flag, configurable interval/top-N, and a new debug endpoint exposing current memory stats.
  * Per-call profiling hooks integrated into function registration and invocation flows.
* **Improvements**
  * Runtime visibility now includes active manager and outstanding-step counts plus exporter/subject counts.
  * Safer baseline management and defensive handling when tracing is unavailable; configurable per-request logging.
* **Tests**
  * Comprehensive tests for profiler behavior, metrics, and error handling.

Authors:
- Anuradha Karuppiah (https://github.com/AnuradhaKaruppiah)

Approvers:
- Will Killian (https://github.com/willkill07)
- Yuchen Zhang (https://github.com/yczhang-nv)

URL: NVIDIA#961
… when using OpenAI-compatible endpoints with self-signed certificates.

This is particularly useful when:
- Running OpenAI-compatible endpoints with self-signed certificates in development environments
- Working behind corporate proxies that use custom SSL certificates
- Testing with local OpenAI-compatible servers

When connecting to OpenAI-compatible endpoints that use self-signed certificates, you would encounter errors like:
```
httpcore.ConnectError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain
```

A new `verify_ssl` parameter has been added to `OpenAIModelConfig` and `OpenAIEmbedderModelConfig` that allows you to disable SSL certificate verification.

Add `verify_ssl: false` to your OpenAI model or embedder configuration:
```yaml
llm:
  type: openai
  model_name: gpt-4
  base_url: https://your-custom-endpoint.example.com/v1
  api_key: ${OPENAI_API_KEY}
  verify_ssl: false  # Disable SSL certificate verification
```

- **Default value**: `true` (SSL verification is enabled by default)
- **Security**: SSL verification should only be disabled in trusted development/testing environments

Closes: NVIDIA#1005

Signed-off-by: Elliott Davis <[email protected]>
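The wiring pattern the plugins share can be sketched framework-agnostically: drop `verify_ssl` from the kwargs forwarded to the downstream constructor, and only when it is false inject an insecure HTTP client. The `prepare_client_kwargs` helper and its `client_factory` seam below are hypothetical, introduced only for illustration; the real plugins call `httpx.Client(verify=False)` or `httpx.AsyncClient(verify=False)` directly:

```python
from typing import Any, Callable


def prepare_client_kwargs(config: dict[str, Any],
                          client_factory: Callable[[], Any]) -> dict[str, Any]:
    """Split verify_ssl handling out of an OpenAI-style config dict."""
    kwargs = dict(config)
    # verify_ssl is a NAT-level option; the downstream constructor must not see it.
    verify_ssl = kwargs.pop("verify_ssl", True)
    if not verify_ssl:
        # Only build an insecure client when verification is explicitly disabled.
        kwargs["http_client"] = client_factory()
    return kwargs


insecure = prepare_client_kwargs(
    {"model_name": "gpt-4", "verify_ssl": False},
    client_factory=lambda: "client(verify=False)",
)
secure = prepare_client_kwargs({"model_name": "gpt-4"}, client_factory=lambda: "unused")

print(insecure)  # → {'model_name': 'gpt-4', 'http_client': 'client(verify=False)'}
print("http_client" in secure)  # → False
```

Keeping the default path free of any `http_client` injection is what preserves backward compatibility: existing configs without `verify_ssl` behave exactly as before.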
d0e2b48 to c489fdf

I'll get this cleaned up and reopened. Apologies for the noise
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (22)
packages/compat/aiqtoolkit_semantic_kernel/pyproject.toml (1)
1-1: Missing SPDX Apache-2.0 license header. Per coding guidelines, all files including `.toml` files must start with the standard SPDX Apache-2.0 header. This file is missing it entirely.

Add the Apache-2.0 SPDX header at the beginning of the file:

```diff
+# SPDX-License-Identifier: Apache-2.0
+# Copyright (c) 2025 NVIDIA Corporation
+
 [build-system]
```

(Verify the copyright year is current; adjust as needed based on the actual file creation/modification date.)
packages/nvidia_nat_data_flywheel/pyproject.toml (1)
1-1: Missing SPDX Apache-2.0 header. Per coding guidelines, every file must start with the standard SPDX Apache-2.0 header. This file is missing it.

Add the SPDX header at the very beginning of the file:

```diff
+# SPDX-License-Identifier: Apache-2.0
+#
+# Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
 [build-system]
 build-backend = "setuptools.build_meta"
```

examples/control_flow/router_agent/pyproject.toml (1)
1-3: Add required SPDX Apache-2.0 license header. Per the coding guidelines, all `.toml` files must start with the SPDX Apache-2.0 header. The file currently starts directly with `[build-system]` without the license header.

Add the SPDX header at the very beginning of the file:

```diff
+# SPDX-License-Identifier: Apache-2.0
+# Copyright 2024-2025 NVIDIA Corporation.
+# All Rights Reserved.
+
 [build-system]
```

As per coding guidelines.
examples/observability/simple_calculator_observability/pyproject.toml (1)
1-3: Missing Apache-2.0 SPDX header. Per coding guidelines, all files must start with the standard SPDX Apache-2.0 header. Please add this header at the top of the file.

```diff
+# SPDX-License-Identifier: Apache-2.0
+# SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
 [build-system]
```

As per coding guidelines, all source files must include the SPDX Apache-2.0 header template. You can copy the template from an existing `pyproject.toml` file in the repository.

examples/getting_started/simple_web_query/pyproject.toml (1)
pyproject.tomlfile in the repository.examples/getting_started/simple_web_query/pyproject.toml (1)
1-7: Add required Apache-2.0 SPDX header.Per coding guidelines, all files must start with the standard SPDX Apache-2.0 header. This file is missing it.
Apply this diff to add the header:
+# SPDX-License-Identifier: Apache-2.0 +# SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved. + [build-system] build-backend = "setuptools.build_meta" requires = ["setuptools >= 64", "setuptools-scm>=8"]examples/frameworks/semantic_kernel_demo/pyproject.toml (1)
1-1: Add SPDX Apache-2.0 header to the file.Per coding guidelines, all files matching
{**/*.toml}must start with the standard SPDX Apache-2.0 header. This file is missing the required license header.Apply this diff to add the header at the top of the file:
+# SPDX-License-Identifier: Apache-2.0 +# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# [build-system]packages/nvidia_nat_ragaai/pyproject.toml (1)
1-55: Add missing SPDX Apache-2.0 header.The file is missing the required Apache-2.0 license header. Per coding guidelines, all source files must start with the standard SPDX Apache-2.0 header.
Add the SPDX header at the very beginning of the file:
+# SPDX-License-Identifier: Apache-2.0 +# +# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + [build-system]examples/object_store/user_report/pyproject.toml (1)
1-3: Add missing SPDX Apache-2.0 header.Per the coding guidelines, all source files (including
.tomlfiles) must start with the standard SPDX Apache-2.0 header. Please add the header at the top of this file.Apply this diff to add the header:
+# SPDX-License-Identifier: Apache-2.0 +# Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + [build-system] build-backend = "setuptools.build_meta" requires = ["setuptools >= 64", "setuptools-scm>=8"]examples/control_flow/sequential_executor/pyproject.toml (1)
1-1: Add missing SPDX Apache-2.0 header. Per coding guidelines, all `.toml` files must start with the standard SPDX Apache-2.0 header. This file is missing the required header comment.

Apply this diff to add the SPDX header at the top of the file:

```diff
+# SPDX-License-Identifier: Apache-2.0
+#
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 [build-system]
```

examples/getting_started/simple_calculator/pyproject.toml (1)
1-1: Add missing SPDX Apache-2.0 header. Per coding guidelines, all files must start with the standard SPDX Apache-2.0 header. This file is missing it entirely.
Apply this diff to add the required header:
```diff
+# SPDX-License-Identifier: Apache-2.0
+# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES
+# All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 [build-system]
```

packages/nvidia_nat_redis/pyproject.toml (1)
1-3: Add missing SPDX Apache-2.0 header. Every file, including `pyproject.toml`, must begin with the standard SPDX Apache-2.0 header comment per coding guidelines. The file currently starts with `[build-system]` without any license header. Add the SPDX header at the very top of the file:
```diff
+# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES
+# All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 [build-system]
 build-backend = "setuptools.build_meta"
 requires = ["setuptools >= 64", "setuptools-scm>=8"]
```

examples/front_ends/simple_calculator_custom_routes/pyproject.toml (1)
1-1: Add SPDX Apache-2.0 header to the file. Per coding guidelines, all `.toml` files must include the standard SPDX Apache-2.0 header at the top. The file is currently missing this header. Add the following header at the very beginning of the file:
```diff
+# SPDX-License-Identifier: Apache-2.0
+# SPDX-FileCopyrightText: 2025 NVIDIA CORPORATION & AFFILIATES
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 [build-system]
```

packages/nvidia_nat_test/pyproject.toml (1)
1-3: Add missing SPDX Apache-2.0 license header. Per coding guidelines, all TOML files must start with the standard SPDX Apache-2.0 header. This file is currently missing the required header comment.
Add the SPDX header at the very beginning of the file:
```diff
+# SPDX-License-Identifier: Apache-2.0
+# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 [build-system]
 build-backend = "setuptools.build_meta"
 requires = ["setuptools >= 64", "setuptools-scm>=8"]
```

packages/nvidia_nat_mem0ai/pyproject.toml (1)
1-1: Missing SPDX Apache-2.0 license header. Every file must start with the standard SPDX Apache-2.0 header. Add the license header template at the very beginning of the file (before line 1).
```
# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

packages/nvidia_nat_weave/pyproject.toml (1)
1-3: Add missing SPDX Apache-2.0 header. The file is missing the required SPDX Apache-2.0 header at the top. Per coding guidelines, all files must include the standard Apache License 2.0 header comment with the current copyright year.
Add the following header before line 1:
```diff
+# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 [build-system]
```

packages/compat/aiqtoolkit_weave/pyproject.toml (1)
1-3: Add missing SPDX Apache-2.0 header. The file is missing the required SPDX Apache-2.0 header at the top. Per coding guidelines, all files must include the standard Apache License 2.0 header comment with the current copyright year.
Add the following header before line 1:
```diff
+# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 [build-system]
```

examples/evaluation_and_profiling/email_phishing_analyzer/pyproject.toml (1)
1-32: Add missing SPDX Apache-2.0 header. Per coding guidelines, all source files must include the standard SPDX Apache-2.0 header at the top. Add the header before the `[build-system]` section. Apply this diff to add the missing header:
```diff
+# SPDX-License-Identifier: Apache-2.0
+# SPDX-FileCopyrightText: Copyright (c) 2024-2025 NVIDIA CORPORATION & AFFILIATES
+# All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 [build-system]
```

packages/nvidia_nat_mysql/pyproject.toml (1)
1-3: Add required Apache-2.0 SPDX header to pyproject.toml. The file is missing the required Apache-2.0 header. Add the following lines at the very beginning of the file:
```
# SPDX-FileCopyrightText: Copyright (c) 2024-2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
```

Then add a blank line before `[build-system]`. Reference: `packages/nvidia_nat_adk/pyproject.toml` already follows this format correctly.

examples/advanced_agents/alert_triage_agent/pyproject.toml (1)
1-3: Add missing Apache-2.0 SPDX header. All `.toml` files must include the standard SPDX Apache-2.0 header at the start of the file, as per coding guidelines. Add this header at the very beginning of the file (before line 1):
```diff
+# SPDX-License-Identifier: Apache-2.0
+#
+# SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES.
+# All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 [build-system]
```

packages/nvidia_nat_profiling/pyproject.toml (1)
1-1: Missing required SPDX Apache-2.0 header. Per coding guidelines, all files (including `.toml`) must start with the standard SPDX Apache-2.0 header. This file is missing it.
Add the SPDX Apache-2.0 header at the top of the file. You can copy the template from an existing `pyproject.toml` in the repository. The header should include:
- SPDX license identifier
- Copyright year (update to current: 2025)
- Copyright holder (NVIDIA Corporation)
Example header format:
```
# SPDX-License-Identifier: Apache-2.0
# Copyright (c) 2025, NVIDIA Corporation & Affiliates. All rights reserved.
```

packages/compat/aiqtoolkit_mem0ai/pyproject.toml (1)
1-3: Add SPDX Apache-2.0 license header. Per coding guidelines, all files must start with the standard SPDX Apache-2.0 header. This file is missing the header comment.
Apply this diff to add the required header:
```diff
+# SPDX-License-Identifier: Apache-2.0
+# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA Corporation & Affiliates
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 [build-system]
```

As per coding guidelines.
ci/scripts/gitutils.py (1)
76-84: Add return type hint and document the first-parent tag selection strategy. The method signature is missing a return type hint, and the docstring should explain that `--first-parent` restricts tag selection to the first-parent history (i.e., it follows only the first parent of merge commits). This helps users understand which tag will be returned in repos with complex branching. As per coding guidelines.
Apply this diff to add the type hint and enhance the docstring:
```diff
 @functools.lru_cache
 @staticmethod
-def get_closest_tag():
+def get_closest_tag() -> str:
     """
-    Determines the version of the repo by using `git describe`
+    Determines the version of the repo by using `git describe`.
+
+    Uses --first-parent to restrict tag search to the first-parent history,
+    ignoring tags reachable only through second or later parents of merges.

     Returns
     -------
```
♻️ Duplicate comments (3)
packages/nvidia_nat_semantic_kernel/src/nat/plugins/semantic_kernel/llm.py (1)
117-127: AsyncClient resource leak: no explicit cleanup mechanism. The `httpx.AsyncClient(verify=False)` is instantiated without lifecycle management. Since this is a generator function that yields the client, there is no cleanup of the AsyncClient when the LLM client is no longer needed, potentially causing resource leaks. Implement proper lifecycle management, such as:
- Using an async context manager pattern if Semantic Kernel supports it
- Documenting that cleanup is handled by `OpenAIChatCompletion` (if verified)
- Or explicitly closing the client when the builder context ends
```shell
#!/bin/bash
# Description: Verify AsyncClient lifecycle handling in Semantic Kernel's OpenAIChatCompletion

# Check if OpenAIChatCompletion manages async_client cleanup
rg -nP -C5 'async_client.*close|AsyncClient.*__aexit__|async_client.*cleanup' --type=py

# Look for OpenAIChatCompletion's handling of the async_client parameter
ast-grep --pattern $'class OpenAIChatCompletion {
  $$$
  async_client = $_
  $$$
}'
```

packages/nvidia_nat_agno/src/nat/plugins/agno/llm.py (1)
109-112: Disable SSL verification via client_params instead of creating unmanaged httpx.Client. AGNO's OpenAIChat does not manage user-supplied httpx.Client instances, leaving sockets unclosed. Replace the direct `httpx.Client(verify=False)` with AGNO's `client_params` parameter, which lets AGNO construct and manage the client. Additionally, this avoids disabling SSL verification by default; pass `{"verify": True}` to maintain security per project learnings.

```python
# Configure SSL verification if needed
if not llm_config.verify_ssl:
    config_obj["client_params"] = {"verify": False}

client = OpenAIChat(**config_obj, id=llm_config.model_name)
```

packages/nvidia_nat_langchain/tests/test_llm_ssl_verification.py (1)
134-148: Drop unused ChatOpenAI patch in field existence test. This test never exercises `openai_langchain`, so the `@patch("langchain_openai.ChatOpenAI")` decorator just injects an unused argument and obscures coverage. Please remove the decorator and the unused parameters.
🧹 Nitpick comments (5)
packages/compat/aiqtoolkit/pyproject.toml (1)
1-12: Add SPDX Apache-2.0 header to file. The file is missing the standard SPDX Apache-2.0 header comment. Since the file is being modified, add the header at the top per coding guidelines.
```diff
+# SPDX-License-Identifier: Apache-2.0
+# Copyright (c) 2025 NVIDIA Corporation
+
 [build-system]
 build-backend = "setuptools.build_meta"
 requires = ["setuptools >= 64", "setuptools-scm>=8"]
```

packages/compat/aiqtoolkit_crewai/pyproject.toml (1)
1-12: Add SPDX Apache-2.0 header to file. The file is missing the standard SPDX Apache-2.0 header comment. Since the file is being modified, add the header at the top per coding guidelines.
```diff
+# SPDX-License-Identifier: Apache-2.0
+# Copyright (c) 2025 NVIDIA Corporation
+
 [build-system]
 build-backend = "setuptools.build_meta"
 requires = ["setuptools >= 64", "setuptools-scm>=8"]
```

packages/nvidia_nat_langchain/tests/test_embedder_ssl_verification.py (1)
102-116: Remove unused OpenAIEmbeddings patch in config field test. `test_openai_embedder_config_verify_ssl_field_exists` only instantiates `OpenAIEmbedderModelConfig`; it never touches `openai_langchain` or `OpenAIEmbeddings`. Drop the patch decorator and the unused mock argument to keep the test focused and avoid stray fixtures.

src/nat/front_ends/mcp/memory_profiler.py (2)
82-109: Refine exception handling for tracemalloc restart failure. The current pattern logs an error and returns False, but per coding guidelines, when catching without re-raising you should use `logger.exception()` to capture the full stack trace.
Apply this diff:
```diff
 except RuntimeError as e:
-    logger.error("Failed to restart tracemalloc: %s", e)
+    logger.exception("Failed to restart tracemalloc")
     return False
```
210-219: Document the broad exception handling rationale. The bare `except Exception: pass` is flagged by static analysis. While this is intentional (iterating gc.get_objects() can encounter objects in invalid states), consider adding a brief comment explaining why the exception is silenced. Apply this diff:
```diff
 def _count_instances_of_type(self, type_name: str) -> int:
     """Count instances of a specific type in memory."""
     count = 0
     for obj in gc.get_objects():
         try:
             if type(obj).__name__ == type_name:
                 count += 1
-        except Exception:
+        except Exception:  # Ignore objects in invalid/transitional states during GC iteration
             pass
     return count
```
```
<<<<<<< HEAD
    "nvidia-nat[langchain,phoenix]~=1.3",
=======
    "nvidia-nat[langchain]~=1.4",
    "arize-phoenix~=11.28",
>>>>>>> 77247e7a (Updating versions for v1.4.0)
```
Unresolved merge conflict markers must be resolved.
The file contains unresolved merge-conflict markers that will prevent the PR from merging. You must resolve the conflict by choosing between the HEAD version and the feature branch version (commit 77247e7).
The conflict involves:
- HEAD: `"nvidia-nat[langchain,phoenix]~=1.3"`
- Feature branch: `"nvidia-nat[langchain]~=1.4"` with `"arize-phoenix~=11.28"` as a separate dependency
Decide which version and dependency structure is correct and remove the conflict markers.
```python
# Configure SSL verification if needed via LiteLLM environment variable
# ADK uses LiteLLM under the hood which respects this environment variable
if not config.verify_ssl:
    os.environ["LITELLM_SSL_VERIFY"] = "false"
```
🧩 Analysis chain
Environment variable persists globally without cleanup.
Setting LITELLM_SSL_VERIFY in os.environ modifies global state that persists beyond this function's scope and could affect unrelated LiteLLM calls elsewhere in the process. In concurrent scenarios, this could cause race conditions where one request's SSL preference affects another.
Consider using LiteLLM's client-level configuration if available, or document this side effect clearly. If environment variables are the only option, consider whether this should be set at application startup based on configuration rather than per-request.
🏁 Script executed:
```shell
#!/bin/bash
# Description: Check if LiteLLM supports per-client SSL configuration instead of environment variables

# Search for LiteLLM client initialization and SSL-related parameters
rg -nP -C3 'class LiteLlm|def __init__.*LiteLlm' --type=py

# Check if there are other uses of LITELLM_SSL_VERIFY in the codebase
rg -n 'LITELLM_SSL_VERIFY' --type=py
```

Length of output: 2366
I need to verify whether LiteLLM provides per-client SSL configuration options and examine the test coverage for this pattern.
I need to check if LiteLLM's completion() function accepts an ssl_verify parameter that can be passed per-call.
Now I need to examine the actual code context and test behavior to understand the full scope of the issue.
Based on my verification through documentation and known issues in LiteLLM, I can now provide my final response.
The review comment is valid: environment variable persists globally and affects concurrent requests.
LiteLLM documentation states that environment variables take precedence in SSL configuration, and ssl_verify can be set via the ssl_verify setting. However, there is a known issue where ssl_verify parameter passed to the completion() function doesn't get passed down properly, making the environment variable approach the only practical workaround currently.
The concern is legitimate: setting os.environ["LITELLM_SSL_VERIFY"] modifies process-wide state that affects all subsequent LiteLLM calls, creating potential issues in concurrent scenarios where one request's SSL preference could impact another's.
Recommended actions:
- Document the side effect: Add a comment explaining that this sets a global process state that persists across calls
- Set at application initialization: If possible, configure this once at startup based on global configuration rather than per-function call
- Clean up if per-request setting is necessary: If this must be set per-request, implement cleanup/restoration (e.g., using a context manager) to ensure the environment variable is restored after use
🤖 Prompt for AI Agents
In packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py around lines 108-111, the
code sets os.environ["LITELLM_SSL_VERIFY"] = "false" which mutates
process-global state and can leak SSL settings across concurrent requests;
verify if LiteLLM's completion() accepts a per-call ssl_verify or per-client
config and prefer that; if per-call/per-client SSL isn't available, add a clear
comment documenting the global side-effect, move this environment setting to
application initialization (not per-request), or wrap per-request changes in a
context manager that saves, sets, and restores the original environment value to
avoid cross-request contamination and update/add tests to cover concurrent
behavior and restoration.
```
<<<<<<< HEAD
    "nvidia-nat~=1.3",
    "langchain~=0.3.27",  # ensure a pin to latest version <1.0.0 (note: langchain packages update separately)
=======
    "nvidia-nat~=1.4",
>>>>>>> 77247e7a (Updating versions for v1.4.0)
```
Unresolved merge conflict must be fixed.
The file contains unresolved Git merge conflict markers (<<<<<<<, =======, >>>>>>>). This will break the package installation and must be resolved before merging.
Resolve the conflict by choosing the appropriate version (likely nvidia-nat~=1.4 to align with the v1.4.0 release mentioned in the PR) and removing all conflict markers.
Apply this diff to resolve the conflict:
```diff
-<<<<<<< HEAD
-    "nvidia-nat~=1.3",
     "langchain~=0.3.27",  # ensure a pin to latest version <1.0.0 (note: langchain packages update separately)
-=======
     "nvidia-nat~=1.4",
->>>>>>> 77247e7a (Updating versions for v1.4.0)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-<<<<<<< HEAD
-    "nvidia-nat~=1.3",
-    "langchain~=0.3.27",  # ensure a pin to latest version <1.0.0 (note: langchain packages update separately)
-=======
-    "nvidia-nat~=1.4",
->>>>>>> 77247e7a (Updating versions for v1.4.0)
+    "langchain~=0.3.27",  # ensure a pin to latest version <1.0.0 (note: langchain packages update separately)
+    "nvidia-nat~=1.4",
```
🤖 Prompt for AI Agents
In packages/nvidia_nat_langchain/pyproject.toml around lines 23 to 28, remove
the Git conflict markers (<<<<<<<, =======, >>>>>>>) and resolve the conflict by
keeping the updated dependency line nvidia-nat~=1.4; also retain the
langchain~=0.3.27 line if it is still required, resulting in clean dependency
entries and no conflict markers.
```python
# Configure SSL verification if needed
if not embedder_config.verify_ssl:
    import httpx
    config_dict["http_client"] = httpx.Client(verify=False)
    config_dict["http_async_client"] = httpx.AsyncClient(verify=False)
```
🧩 Analysis chain
httpx clients created without explicit cleanup mechanism.
Both httpx.Client(verify=False) and httpx.AsyncClient(verify=False) are instantiated and passed to OpenAIEmbeddings without lifecycle management. If OpenAIEmbeddings doesn't take ownership and close these clients, this will leak resources (connections, file descriptors).
Verify that langchain_openai.OpenAIEmbeddings properly manages and closes the provided http_client and http_async_client, or implement explicit cleanup when the embedder is no longer needed.
🏁 Script executed:
```shell
#!/bin/bash
# Description: Verify if LangChain's OpenAIEmbeddings manages httpx client lifecycle

# Search for OpenAIEmbeddings client lifecycle management
rg -nP -C5 'http_client.*close|http_async_client.*close|HttpClient.*cleanup' --type=py

# Check OpenAIEmbeddings constructor signature and client handling
ast-grep --pattern $'class OpenAIEmbeddings {
  $$$
  def __init__($$$, http_client=$_, http_async_client=$_, $$$) {
    $$$
  }
  $$$
}'
```

Length of output: 0
🌐 Web query:
langchain_openai OpenAIEmbeddings http_client lifecycle management
💡 Result:
Short answer — manage a shared httpx client yourself, pass it into OpenAIEmbeddings for reuse, and close it on shutdown (sync: .close(), async: await .aclose()). Key points and minimal example below.
Why: OpenAIEmbeddings accepts http_client and http_async_client so LangChain will reuse whatever you supply instead of creating per-call clients. If you supply a custom client you must ensure it’s the correct type (httpx.Client / httpx.AsyncClient) because openai-python type checks can reject wrong instances. Also AsyncClient must be explicitly closed to avoid warnings/leaks. [1][2][3][4]
Minimal patterns
- Create one shared client at app startup (or module-global).
- Pass both http_client and http_async_client to OpenAIEmbeddings.
- Close on shutdown: sync .close(), async await .aclose() (or use context managers).
Minimal examples
Async (FastAPI-like)
from langchain_openai import OpenAIEmbeddings
import httpx
async_client = httpx.AsyncClient()
emb = OpenAIEmbeddings(http_async_client=async_client, http_client=None) # also set http_client if you need sync
On shutdown:
await async_client.aclose()
Sync
from langchain_openai import OpenAIEmbeddings
import httpx
client = httpx.Client()
emb = OpenAIEmbeddings(http_client=client, http_async_client=None)
On exit:
client.close()
References
- OpenAIEmbeddings parameters (http_client / http_async_client usage). [1]
- LangChain implementation showing how custom httpx clients are passed to openai client. [2]
- Known openai-python / type-check issues when passing wrong client types. [3]
- httpx guidance: reuse a single AsyncClient and call .aclose() or use async context manager. [4]
If you want, I can produce a complete startup/shutdown snippet for FastAPI, or an example that supports rotating API keys (per-request client creation vs pooled reuse).
🌐 Web query:
langchain_openai.embeddings.OpenAIEmbeddings http_client httpx close
💡 Result:
Short answer: create and pass your own httpx.Client / httpx.AsyncClient to OpenAIEmbeddings, then call .close() / .aclose() on those clients when done. LangChain will only auto-create clients if you don't provide them (so you must manage lifecycle if you pass them).
Examples:
- Sync:

```python
from langchain_openai import OpenAIEmbeddings
import httpx

client = httpx.Client()
emb = OpenAIEmbeddings(http_client=client)
embeddings = emb.embed_query("text")
client.close()  # explicitly close
```

- Async:

```python
from langchain_openai import OpenAIEmbeddings
import httpx

async_client = httpx.AsyncClient()
emb = OpenAIEmbeddings(http_async_client=async_client, http_client=None)
embeddings = await emb.aembed_query("text")
await async_client.aclose()  # explicitly close
```
Notes:
- The OpenAIEmbeddings init accepts http_client (httpx.Client) and http_async_client (httpx.AsyncClient); if you don’t supply them LangChain may create an httpx.AsyncClient (e.g., when using a proxy) so manage lifecycle accordingly. (python.langchain.com)
If you want, tell me whether you need a full example that handles context managers or integration with async lifecycles.
🏁 Script executed:
```shell
#!/bin/bash
# Check for cleanup methods in the embedder.py file and surrounding context
cat "packages/nvidia_nat_langchain/src/nat/plugins/langchain/embedder.py"
```

Length of output: 3709
httpx clients created without cleanup mechanism—resource leak confirmed.
The concern is valid. OpenAIEmbeddings does not automatically manage httpx client lifecycle; when you supply custom http_client and http_async_client, you must explicitly call .close() (sync) or .aclose() (async).
The current code creates clients at lines 68–69 but never closes them. Since this is a generator function that yields, the clients are never tracked or cleaned up, resulting in leaked connections and file descriptors.
Required fix: Wrap client creation and usage in a try/finally block to ensure httpx clients are closed when the embedder is no longer needed:
```python
async def openai_langchain(embedder_config: OpenAIEmbedderModelConfig, builder: Builder):
    from langchain_openai import OpenAIEmbeddings

    config_dict = embedder_config.model_dump(exclude={"type", "verify_ssl"}, by_alias=True, exclude_none=True)

    http_client = None
    http_async_client = None
    try:
        if not embedder_config.verify_ssl:
            import httpx
            http_client = httpx.Client(verify=False)
            http_async_client = httpx.AsyncClient(verify=False)
            config_dict["http_client"] = http_client
            config_dict["http_async_client"] = http_async_client

        client = OpenAIEmbeddings(**config_dict)

        if isinstance(embedder_config, RetryMixin):
            client = patch_with_retry(client, ...)

        yield client
    finally:
        if http_async_client:
            await http_async_client.aclose()
        if http_client:
            http_client.close()
```

🤖 Prompt for AI Agents
In packages/nvidia_nat_langchain/src/nat/plugins/langchain/embedder.py around
lines 66 to 70, the httpx.Client and httpx.AsyncClient are created when
verify_ssl is False but never closed, leaking connections; change the generator
to create http_client and http_async_client variables (initialized to None),
assign the clients to those variables and to config_dict only when verify_ssl is
False, wrap the client creation and yield in a try/finally so that in finally
you await http_async_client.aclose() if set and call http_client.close() if set
(ensuring the async close is awaited before returning); keep all other behavior
(patching with retry, yielding the OpenAIEmbeddings instance) identical.
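The try/finally guarantee described above can also be expressed as an async context manager. The sketch below uses a stub client so it runs without httpx; all names here are illustrative stand-ins, not the plugin's real API:

```python
import asyncio
import contextlib


class StubAsyncClient:
    """Stand-in for httpx.AsyncClient so this sketch has no third-party dependency."""

    def __init__(self, verify: bool = True):
        self.verify = verify
        self.closed = False

    async def aclose(self):
        self.closed = True


@contextlib.asynccontextmanager
async def embedder_client(verify_ssl: bool):
    client = StubAsyncClient(verify=verify_ssl)
    try:
        yield client
    finally:
        await client.aclose()  # runs even if the consumer raises


async def main() -> bool:
    async with embedder_client(verify_ssl=False) as client:
        assert client.verify is False
    return client.closed


print(asyncio.run(main()))  # prints True
```

The same shape applies to a sync `httpx.Client` with `contextlib.contextmanager` and `client.close()`.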
```python
assert isinstance(sync_client, httpx.Client)
assert isinstance(async_client, httpx.AsyncClient)

# Check that verify is False (httpx clients don't expose this directly,
# but we can verify they were created with the right config)
ssl_context = sync_client._transport._pool._ssl_context
assert ssl_context is None or not ssl_context.check_hostname
```
Fix httpx verify assertion to avoid runtime failure
sync_client._transport is still None immediately after constructing an httpx.Client, so dereferencing _transport._pool raises AttributeError, causing this test to fail. Please change the assertion to inspect the constructor arguments instead (e.g., patch httpx.Client/httpx.AsyncClient and assert they were called with verify=False), or otherwise trigger transport creation before peeking at private internals.
🤖 Prompt for AI Agents
In packages/nvidia_nat_langchain/tests/test_llm_ssl_verification.py around lines
72 to 79, the test dereferences sync_client._transport._pool which is None
immediately after constructing an httpx.Client causing AttributeError; instead,
change the assertion to inspect how the clients were constructed (patch/mock
httpx.Client and httpx.AsyncClient and assert they were called with
verify=False) or explicitly force transport creation before inspecting internals
(e.g., perform a no-op request or call client._init_pool_and_backend()) and then
assert the ssl_context behavior. Ensure tests use mocking for constructor args
when possible to avoid relying on private internals.
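A dependency-free sketch of that approach: inject the client constructor and assert on its call arguments instead of reading private transport state. The `make_sync_client` factory below is hypothetical, standing in for the plugin's construction path:

```python
from unittest import mock


def make_sync_client(client_cls, verify_ssl: bool):
    """Hypothetical factory mirroring how the plugin builds its httpx client."""
    return client_cls(verify=verify_ssl)


def test_client_created_with_verify_false():
    client_cls = mock.Mock(name="httpx.Client")
    make_sync_client(client_cls, verify_ssl=False)
    # Assert on constructor arguments rather than private internals.
    client_cls.assert_called_once_with(verify=False)


test_client_created_with_verify_false()
print("ok")  # prints ok
```

In the real test this would use `@patch("httpx.Client")` so the production code path, rather than an injected constructor, is exercised.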
```python
if hasattr(llm_config, "verify_ssl") and not llm_config.verify_ssl:
    os.environ["LITELLM_SSL_VERIFY"] = "false"
```
Avoid globally disabling LiteLLM SSL verification
Setting `os.environ["LITELLM_SSL_VERIFY"] = "false"` here is process-wide and stays in effect after this generator finishes. Any later LiteLLM invocation, even with `verify_ssl` still at its default `True`, will unexpectedly run with SSL verification disabled, which is a security regression. Please preserve and restore the prior value (or remove the key) in a try/finally around the `yield`, so other configs continue to honor their own `verify_ssl` settings.
```diff
-    if hasattr(llm_config, "verify_ssl") and not llm_config.verify_ssl:
-        os.environ["LITELLM_SSL_VERIFY"] = "false"
-
-    llm = LiteLLM(**llm_config.model_dump(exclude={"type", "thinking"}, by_alias=True, exclude_none=True))
-
-    yield _patch_llm_based_on_config(llm, llm_config)
+    prev_ssl_env = os.environ.get("LITELLM_SSL_VERIFY")
+    try:
+        if hasattr(llm_config, "verify_ssl") and not llm_config.verify_ssl:
+            os.environ["LITELLM_SSL_VERIFY"] = "false"
+
+        llm = LiteLLM(**llm_config.model_dump(exclude={"type", "thinking"}, by_alias=True, exclude_none=True))
+
+        yield _patch_llm_based_on_config(llm, llm_config)
+    finally:
+        if prev_ssl_env is None:
+            os.environ.pop("LITELLM_SSL_VERIFY", None)
+        else:
+            os.environ["LITELLM_SSL_VERIFY"] = prev_ssl_env
```

🤖 Prompt for AI Agents
In packages/nvidia_nat_llama_index/src/nat/plugins/llama_index/llm.py around
lines 138 to 140, the code unconditionally sets os.environ["LITELLM_SSL_VERIFY"]
= "false" which is process-wide; change this to save the prior environment value
(e.g. original = os.environ.get("LITELLM_SSL_VERIFY")), set the key to "false"
only when llm_config.verify_ssl is False, and wrap the generator logic in a
try/finally where the finally restores the original value (reassign if original
is not None, or delete the key if original is None) so SSL verification state is
not leaked after the generator yields.
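The save/restore pattern described above can also be packaged as a small context manager. This is an illustrative, self-contained sketch: the environment variable name matches the review, but the `scoped_env` helper is a hypothetical example, not the PR's actual code.

```python
# Set an environment variable for the duration of a block, then restore
# whatever state existed before (including "unset").
import contextlib
import os


@contextlib.contextmanager
def scoped_env(key: str, value: str):
    """Temporarily set `key` to `value`, restoring the prior state on exit."""
    prev = os.environ.get(key)
    os.environ[key] = value
    try:
        yield
    finally:
        if prev is None:
            os.environ.pop(key, None)  # key was unset before: remove it again
        else:
            os.environ[key] = prev      # key had a value before: put it back


os.environ.pop("LITELLM_SSL_VERIFY", None)  # start from a known-unset state
with scoped_env("LITELLM_SSL_VERIFY", "false"):
    assert os.environ["LITELLM_SSL_VERIFY"] == "false"
assert "LITELLM_SSL_VERIFY" not in os.environ  # restored after the block
```

Wrapping the generator body in such a scope keeps the SSL override local to the one configuration that asked for it, rather than leaking into every later LiteLLM call in the process.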
```
<<<<<<< HEAD
    "nvidia-nat~=1.3",
    "aiorwlock~=1.5",
=======
    "nvidia-nat~=1.4",
>>>>>>> 77247e7a (Updating versions for v1.4.0)
```
Resolve unresolved merge conflict in dependencies.
The file contains unresolved git merge conflict markers that prevent the TOML from being parsed. You must resolve the conflict by choosing between the HEAD and incoming versions, and ensure dependencies remain alphabetically sorted per the guidance on line 22.
The conflict appears between:
- HEAD: `nvidia-nat~=1.3` and `aiorwlock~=1.5`
- Incoming (77247e7): `nvidia-nat~=1.4`
Apply this diff to resolve the conflict with both dependencies present and properly sorted:
```diff
-<<<<<<< HEAD
-    "nvidia-nat~=1.3",
-    "aiorwlock~=1.5",
-=======
-    "nvidia-nat~=1.4",
->>>>>>> 77247e7a (Updating versions for v1.4.0)
+    "aiorwlock~=1.5",
+    "nvidia-nat~=1.4",
```

🤖 Prompt for AI Agents
In packages/nvidia_nat_mcp/pyproject.toml around lines 23 to 28, remove the git
conflict markers and resolve the dependency list so both dependencies exist and
are alphabetically sorted: keep aiorwlock~=1.5 and use the updated
nvidia-nat~=1.4 (aiorwlock should come before nvidia-nat), ensuring no leftover
<<<<<<<, =======, >>>>>>> markers and the TOML syntax remains valid.
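A post-resolution sanity check like the one the prompt asks for can be sketched in a few lines. This is an illustrative helper, not part of the PR; it only encodes the rule that no `<<<<<<<`, `=======`, or `>>>>>>>` markers may remain.

```python
# Detect leftover git conflict markers in a file's text.
import re

MARKER = re.compile(r"^(<<<<<<<|=======|>>>>>>>)")


def find_conflict_markers(text: str) -> list[int]:
    """Return 1-based line numbers that still begin with a conflict marker."""
    return [i for i, line in enumerate(text.splitlines(), 1) if MARKER.match(line)]


# A resolved dependency list passes the check...
resolved = 'dependencies = [\n  "aiorwlock~=1.5",\n  "nvidia-nat~=1.4",\n]\n'
assert find_conflict_markers(resolved) == []

# ...while an unresolved conflict is flagged with the offending line numbers.
unresolved = "<<<<<<< HEAD\nfoo\n=======\nbar\n>>>>>>> abc\n"
assert find_conflict_markers(unresolved) == [1, 3, 5]
```

Running such a check over the resolved `pyproject.toml` (e.g. in CI) catches a half-finished merge before the invalid TOML reaches a build.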
```diff
 # version when adding a new package. If unsure, default to using `~=` instead of `==`. Does not apply to nvidia-nat packages.
 # Keep sorted!!!
-"nvidia-nat~=1.3",
+"nvidia-nat~=1.4",
```
🧩 Analysis chain
Verify the nvidia-nat version bump and uv.lock synchronization.
The nvidia-nat dependency was bumped from ~=1.3 to ~=1.4. Confirm:
- This version bump is intentional and coordinated with the SSL verification feature release
- The `uv.lock` file was updated via the standard process (`uv pip install nvidia-nat==1.4 --sync`) as required by coding guidelines
Run the following to verify the uv.lock synchronization:
🏁 Script executed:

```shell
#!/bin/bash
# Verify nvidia-nat version in uv.lock matches pyproject.toml constraint
rg -A 5 'name = "nvidia-nat"' uv.lock | head -20
```

Length of output: 432
I need to search for the actual package version entry in uv.lock to verify the version.
---

🏁 Script executed:

```shell
#!/bin/bash
# Find the [[package]] entry for nvidia-nat with its version
rg -A 2 '^\[\[package\]\]$' uv.lock | grep -A 2 'name = "nvidia-nat"' | head -10
```

Length of output: 50

🏁 Script executed:

```shell
#!/bin/bash
# Alternative: search for nvidia-nat package definition with version
rg -B 1 -A 3 'name = "nvidia-nat"$' uv.lock | head -15
```

Length of output: 103

🏁 Script executed:

```shell
#!/bin/bash
# Also check what version constraint is in the pyproject.toml
cat packages/nvidia_nat_profiling/pyproject.toml | grep -A 5 -B 5 'nvidia-nat'
```

Length of output: 665

🏁 Script executed:

```shell
#!/bin/bash
# Get exact version in uv.lock for nvidia-nat
rg 'version = ".*nvidia-nat' uv.lock -A 1 -B 1 || rg '^name = "nvidia-nat"' uv.lock -A 1
```

Length of output: 47
I need to verify the nvidia-nat package location and whether this is actually a workspace package:
---

🏁 Script executed:

```shell
#!/bin/bash
# Find nvidia-nat package in the workspace
fd -t f 'pyproject.toml' packages/ | xargs grep -l 'name = "nvidia-nat"' 2>/dev/null
```

Length of output: 0

🏁 Script executed:

```shell
#!/bin/bash
# Check the structure of packages directory
ls -la packages/ | head -20
```

Length of output: 1254

🏁 Script executed:

```shell
#!/bin/bash
# Look at git to see what changed in this pyproject.toml
git log -p --all -- packages/nvidia_nat_profiling/pyproject.toml | grep -A 10 -B 10 'nvidia-nat' | head -40
```

Length of output: 59
---

🏁 Script executed:

```shell
#!/bin/bash
# Find the main nvidia-nat package definition
find . -maxdepth 2 -name 'pyproject.toml' -type f | grep -v '/packages/' | xargs grep -l 'name = "nvidia-nat"' 2>/dev/null
```

Length of output: 16

🏁 Script executed:

```shell
#!/bin/bash
# Check root directory structure
ls -la | grep -E '(pyproject|setup)'
```

Length of output: 67

🏁 Script executed:

```shell
#!/bin/bash
# Use git diff to see the actual changes made to this file
git diff HEAD -- packages/nvidia_nat_profiling/pyproject.toml | head -50
```

Length of output: 0

🏁 Script executed:

```shell
#!/bin/bash
# Check git show for recent changes
git show HEAD:packages/nvidia_nat_profiling/pyproject.toml | grep -A 2 -B 2 'nvidia-nat'
```

Length of output: 468
---

🏁 Script executed:

```shell
#!/bin/bash
# Check git log history for nvidia-nat version changes
git log -p --all -- packages/nvidia_nat_profiling/pyproject.toml | grep -B 5 -A 5 'nvidia-nat~=' | head -50
```

Length of output: 59

🏁 Script executed:

```shell
#!/bin/bash
# Check if there's a parent commit showing previous version
git log --oneline -- packages/nvidia_nat_profiling/pyproject.toml | head -10
```

Length of output: 623

🏁 Script executed:

```shell
#!/bin/bash
# Check the actual PR changes (if on a branch)
git branch -a
```

Length of output: 92

🏁 Script executed:

```shell
#!/bin/bash
# Show the root pyproject.toml nvidia-nat version
cat pyproject.toml | grep -A 10 'name = "nvidia-nat"'
```

Length of output: 513
---

🏁 Script executed:

```shell
#!/bin/bash
# Show what changed in the version bump commit
git show b3e0eac -- packages/nvidia_nat_profiling/pyproject.toml | head -80
```

Length of output: 884

🏁 Script executed:

```shell
#!/bin/bash
# Check if uv.lock was also changed in that commit
git show b3e0eac -- uv.lock | grep -A 10 -B 10 'nvidia-nat' | head -40
```

Length of output: 0

🏁 Script executed:

```shell
#!/bin/bash
# Show all files changed in the version bump commit
git show --name-status b3e0eac | head -30
```

Length of output: 1497

🏁 Script executed:

```shell
#!/bin/bash
# Check the parent commit version for comparison
git show b3e0eac^:packages/nvidia_nat_profiling/pyproject.toml 2>/dev/null | grep 'nvidia-nat'
```

Length of output: 212
```python
_instance_count: ClassVar[int] = 0
_active_instances: ClassVar[set[weakref.ref]] = set()

def __init__(self, context_state: "ContextState"):  # noqa: F821
```
Remove stale # noqa: F821
Now that ContextState is imported in the typing.TYPE_CHECKING block, the forward reference string is sufficient and the # noqa: F821 suppression is unused. Ruff flags this as RUF100, so please delete the directive to keep lint clean. As per coding guidelines.
🧰 Tools
🪛 Ruff (0.14.2)
55-55: Unused noqa directive (unused: F821)
Remove unused noqa directive
(RUF100)
🤖 Prompt for AI Agents
In src/nat/builder/intermediate_step_manager.py around line 55, remove the stale
"# noqa: F821" from the __init__ signature (def __init__(self, context_state:
"ContextState"):), leaving the forward-reference string annotation as-is since
ContextState is provided via typing.TYPE_CHECKING; this deletes the unused
suppression and resolves the RUF100 lint warning.
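The arrangement the review describes can be sketched as follows. The import path and class body here are illustrative, not the toolkit's actual code; only the `TYPE_CHECKING` pattern itself is the point.

```python
# With the import guarded by TYPE_CHECKING, the string annotation resolves for
# type checkers, costs nothing at runtime, and needs no "# noqa: F821".
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from nat.builder.context import ContextState  # hypothetical import path


class IntermediateStepManager:
    def __init__(self, context_state: "ContextState"):  # no noqa needed now
        self._context_state = context_state
```

At runtime the guarded import never executes, so the forward reference stays a plain string; static analyzers evaluate the `TYPE_CHECKING` branch and resolve it, which is why Ruff reports the leftover suppression as unused.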
Description
This feature adds the ability to disable SSL certificate verification when using OpenAI-compatible endpoints with self-signed certificates. Without this option, connecting to such endpoints produces SSL certificate verification errors.
A new `verify_ssl` parameter has been added to `OpenAIModelConfig` and `OpenAIEmbedderModelConfig` that allows you to disable SSL certificate verification. Add `verify_ssl: false` to your OpenAI model or embedder configuration; the default is `true` (SSL verification is enabled by default).

Closes: #1005
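As a sketch of how this might look in a workflow configuration (field names other than `verify_ssl`, including the section layout, model name, and URL, are hypothetical examples, not taken from this PR):

```yaml
llms:
  my_openai_llm:
    _type: openai                                   # hypothetical entry
    model_name: my-model                            # illustrative
    base_url: https://self-hosted-endpoint:8000/v1  # illustrative self-signed endpoint
    verify_ssl: false                               # disables SSL certificate verification (default: true)
```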
Summary by CodeRabbit