tests: unit and integration tests for the CLI #17
Conversation
⚠️ Rate limit exceeded. CodeRabbit enforces hourly rate limits per developer per organization; reviews resume after a brief timeout (see the FAQ). Paid plans have higher limits than the trial, open-source and free plans.

ℹ️ Review info: Configuration used: defaults | Review profile: CHILL | Plan: Pro
📒 Files selected for processing (26)
📝 Walkthrough

Added CI/test tooling and many new unit/integration tests with shared pytest fixtures; made DATA_DIR overridable via JARVIS_DATA_DIR; introduced a new proxy builder (`build_proxy`).
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Config as MCPConfig
    participant Builder as build_proxy
    participant FastMCP as FastMCP
    participant Server as MCPServer (stdio/http)
    participant ClientFactory as ProxyClient / StatefulProxyClient
    Builder->>Config: iterate mcpServers
    loop per server
        Builder->>Server: to_transport()
        alt stdio backend
            Builder->>ClientFactory: create StatefulProxyClient(transport)
            ClientFactory-->>Builder: new_stateful factory
        else other backend
            Builder->>ClientFactory: create ProxyClient(transport)
            ClientFactory-->>Builder: new factory
        end
        Builder->>FastMCP: add_provider(namespace=server_name, client_factory)
    end
    Builder-->>FastMCP: return assembled FastMCP
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 6
🧹 Nitpick comments (2)
.github/workflows/ci.yml (1)
13-19: Make the CI interpreter explicit.
The package advertises Python 3.11+, but this job leaves the interpreter choice implicit. Pinning 3.11 as the baseline, or using a small 3.11/3.12 matrix, would make the new test gate line up with the supported range instead of whatever `uv` happens to pick on the runner.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yml around lines 13 - 19, CI currently leaves the Python interpreter implicit; update the workflow so the runner uses an explicit supported interpreter (e.g., Python 3.11 or a 3.11/3.12 matrix). Edit the job steps that include "uses: astral-sh/setup-uv@v5" and either (A) add a matrix/strategy with "python-version: [3.11]" and run the job across that matrix, or (B) add a "with: python-version: 3.11" (or desired list) to the "uses: astral-sh/setup-uv@v5" step so "uv run --group dev pytest tests" executes under the pinned interpreter; ensure the command "uv run --group dev pytest tests" remains unchanged.

tests/integration/test_tui_mcp_manager.py (1)
418-420: Tighten parse-error assertion to avoid false positives.
Line 420 allows any status containing "no", which can pass for unrelated messages. Prefer asserting an explicit parse-error signal.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/test_tui_mcp_manager.py` around lines 418 - 420, The assertion is too loose because '"no" in str(status).lower()' can match unrelated messages; update the check on the status rendered from app.query_one("#status").render() to require an explicit parse-error indicator. Locate the test where status = app.query_one("#status").render() (in tests/integration/test_tui_mcp_manager.py) and replace the current assertion with one that asserts "parse error" is present in str(status).lower() (or, if you need to allow the other expected state, check for the exact phrase "no enabled servers" instead of "no"). Ensure you remove the broad '"no" in ...' branch so the test only passes for an explicit parse-error or the exact "no enabled servers" message.
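A minimal sketch of the tightened check, assuming the status widget renders to something string-convertible; the helper name is illustrative, not part of the test suite:

```python
def assert_parse_error(status) -> None:
    """Fail unless the rendered status explicitly signals a parse error.

    Accepts the parse-error message or the exact "no enabled servers"
    state; a bare substring like "no" no longer matches.
    """
    text = str(status).lower()
    assert "parse error" in text or "no enabled servers" in text, (
        f"unexpected status: {text!r}"
    )
```

With this helper, a status like "now probing..." fails loudly even though it contains the substring "no".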
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/conftest.py`:
- Around line 20-31: The fixture currently rebinding DATA_DIR/PRESETS_PATH
doesn't isolate the pre-built token cache: recreate or rebind
jarvis.config.token_storage to point at fake_dir (e.g., instantiate a new token
storage using the same class/factory used at import and assign it to
config_mod.token_storage) so the cache is stored under fake_dir, and also
monkeypatch any modules that imported token_storage by name (e.g., set
api_mod.token_storage and probe_mod.token_storage to the new instance) to ensure
tests don't touch the real ~/.jarvis store.
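The rebinding step described above can be sketched as follows. `store_factory` is a placeholder for the real storage class (e.g. DiskStore), and the module objects stand in for jarvis.config, jarvis.api, and jarvis.probe:

```python
def rebind_token_storage(fake_dir, modules, store_factory):
    """Recreate the token store under fake_dir and rebind it on every
    module that imported token_storage by name, so no test touches the
    real ~/.jarvis store."""
    fresh_store = store_factory(directory=str(fake_dir))
    for mod in modules:
        mod.token_storage = fresh_store
    return fresh_store
```

In the actual fixture, each assignment would go through `monkeypatch.setattr` so pytest restores the originals after the test.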
In `@tests/integration/test_api_errors.py`:
- Around line 64-67: The test currently asserts that a missing JSON body causes
a 500 in test_500_when_body_missing when posting to "/api/servers/alpha/toggle";
change the assertion to expect a 400 (client error) instead, and update the test
name to reflect the new expectation (e.g., test_400_when_body_missing). Do the
same change for the other similar test referenced (lines 71-74): update its
asserted status_code from 500 to 400 and rename it to indicate a 400 outcome so
the tests assert client-side validation errors rather than server errors.
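The 400-vs-500 policy behind the rename can be expressed as a small validator. This is an illustration of the expected status mapping, not the API's actual handler:

```python
def validate_toggle_body(body):
    """Return (status, error) for a toggle request body.

    A missing body or missing "server"/"tool" keys is a 400-class
    client error, never a 500 server fault.
    """
    if not isinstance(body, dict):
        return 400, "missing JSON body"
    missing = [key for key in ("server", "tool") if key not in body]
    if missing:
        return 400, "missing keys: " + ", ".join(missing)
    return 200, None
```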
In `@tests/integration/test_tui_mcp_manager.py`:
- Around line 18-35: The helper _await_probe currently returns silently when it
times out; change it to raise an explicit error so tests fail fast: inside
_await_probe (the async function using app.query_one("Tree"), iterating
node.data and awaiting pilot.pause) after the attempts loop completes without
early return, raise an AssertionError or RuntimeError with a clear message that
includes context (e.g., which server nodes still have "probing" labels or the
attempts value) so failing tests show why the probe never cleared.
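A sketch of the fail-fast helper under the assumptions above. The `app`/`pilot` objects follow the Textual test-harness shape; the `"status"` key in `node.data` is an assumption for this sketch:

```python
async def await_probe(app, pilot, attempts: int = 50) -> None:
    """Wait for every server node to leave the "probing" state.

    Raises AssertionError with context instead of returning silently on
    timeout, so a hung probe fails the test fast.
    """
    tree = app.query_one("Tree")
    pending = []
    for _ in range(attempts):
        pending = [
            node for node in tree.root.children
            if node.data and node.data.get("status") == "probing"
        ]
        if not pending:
            return
        await pilot.pause(0.05)
    labels = [str(node.label) for node in pending]
    raise AssertionError(
        f"probe never cleared after {attempts} attempts; still probing: {labels}"
    )
```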
In `@tests/unit/test_api_helpers.py`:
- Around line 67-95: The test for start_api_thread doesn't verify uvicorn.run is
invoked from a background daemon thread; update the fake_run used in the test to
capture threading.current_thread() and its .daemon flag (e.g., record the thread
object in the captured dict inside fake_run), then add assertions that the
thread is not threading.main_thread() and that thread.daemon is True to ensure
start_api_thread spawned a daemon background thread rather than calling
uvicorn.run inline.
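The thread-capture idea can be sketched like this; `make_capturing_run` is an illustrative helper, and in the real test the returned fake would be monkeypatched over `uvicorn.run` before calling `start_api_thread`:

```python
import threading

def make_capturing_run(captured):
    """Build a fake uvicorn.run replacement that records which thread
    invoked it, so the test can assert start_api_thread spawned a
    daemon background thread rather than calling uvicorn.run inline."""
    def fake_run(*args, **kwargs):
        captured["thread"] = threading.current_thread()
    return fake_run
```

The test then asserts `captured["thread"] is not threading.main_thread()` and `captured["thread"].daemon is True`.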
In `@tests/unit/test_config.py`:
- Line 76: The test unpacks two values from load_raw_config(path) but never uses
the second variable named raw, triggering a lint warning; change the unpack to
use a throwaway name (e.g., mcp_dict, _raw = load_raw_config(path) or mcp_dict,
_ = load_raw_config(path)) so the unused value is intentionally ignored while
keeping load_raw_config usage intact.
- Around line 120-124: The test_save_and_load_roundtrip test uses a hard-coded
"/tmp/a.json" in the payload which triggers security lint S108; update the
payload to use the fixture-backed temporary path by constructing the filePath
from the test's data_dir (e.g., join data_dir with "a.json" and convert to
string) so filePath references the temp directory provided by the data_dir
fixture instead of /tmp.
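A sketch of the fixture-backed payload; the payload shape here is assumed for illustration and is not the exact structure used in test_config.py:

```python
from pathlib import Path

def roundtrip_payload(data_dir):
    """Build the roundtrip payload with a fixture-backed path.

    Deriving filePath from the data_dir fixture instead of hard-coding
    /tmp/a.json avoids the S108 security lint."""
    return {"filePath": str(Path(data_dir) / "a.json")}
```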
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 81eccfdd-2f61-4560-a881-22809d555a28
📒 Files selected for processing (15)
- .github/workflows/ci.yml
- .gitignore
- pyproject.toml
- tests/__init__.py
- tests/conftest.py
- tests/integration/__init__.py
- tests/integration/test_api_endpoints.py
- tests/integration/test_api_errors.py
- tests/integration/test_tui_auth_manager.py
- tests/integration/test_tui_mcp_manager.py
- tests/unit/__init__.py
- tests/unit/test_api_helpers.py
- tests/unit/test_config.py
- tests/unit/test_probe.py
- tests/unit/test_tui.py
Force-pushed: dc07671 → c0798f0
Actionable comments posted: 3
♻️ Duplicate comments (2)
tests/integration/test_api_errors.py (1)
64-74: ⚠️ Potential issue | 🟠 Major
Don't codify request-validation bugs as 500s.
A missing JSON body and missing "server"/"tool" keys are client input errors, not server faults. Asserting 500 here locks the API into the current catch-all behavior and will make the eventual handler fix look like a regression.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/test_api_errors.py` around lines 64 - 74, The tests test_500_when_body_missing and test_500_when_body_missing_keys assert a 500 for missing/invalid client input; change them to assert a client error (e.g. 400 Bad Request) instead so we don't lock in catch-all server behavior. Update the two assertions in TestToggleToolErrors and the earlier test_500_when_body_missing to expect 400 (or 422 if your API uses that) for POSTs to "/api/servers/alpha/toggle" and "/api/tools/toggle", and keep the request payloads the same so the tests reflect validation errors rather than a server fault.

tests/conftest.py (1)
64-70: ⚠️ Potential issue | 🟠 Major
Rebind the TUI's cached `token_storage` too.
This fixture swaps `config_mod.token_storage` and `probe_mod.token_storage`, but `src/jarvis/tui.py:342-344` still holds its own `token_storage` reference. Any TUI test that touches auth/cache state will therefore share the session store instead of `tmp_path`, which breaks the per-test isolation promised here.

Suggested fix
```diff
 # Recreate the token store against the fresh dir and rebind it on every
 # module that holds a reference. This keeps OAuth tests from leaking
 # across each other *and* makes it impossible to hit the session-scoped
 # scratch dir either.
 fresh_store = DiskStore(directory=str(fake_dir))
+from jarvis import tui as tui_mod
+
 monkeypatch.setattr(config_mod, "token_storage", fresh_store)
 monkeypatch.setattr(probe_mod, "token_storage", fresh_store)
+monkeypatch.setattr(tui_mod, "token_storage", fresh_store)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/conftest.py` around lines 64 - 70, The fixture currently rebinds config_mod.token_storage and probe_mod.token_storage but misses the TUI module's cached token_storage; update the fixture to also rebind the TUI module's token_storage to the same fresh_store used for DiskStore. In practice, import or reference the TUI module (the module backing src/jarvis/tui.py where token_storage is defined) and use monkeypatch.setattr to replace its token_storage with fresh_store so tests touching TUI auth/cache use the per-test tmp_path store.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/jarvis/config.py`:
- Around line 15-17: DATA_DIR currently can remain relative when JARVIS_DATA_DIR
is set, causing mismatches against resolved paths in src/jarvis/api.py (the
compare against Path(...).resolve().parent). Update the DATA_DIR assignment in
src/jarvis/config.py so the env-provided path is normalized: construct
Path(os.environ.get("JARVIS_DATA_DIR") or Path.home() / ".jarvis"), call
expanduser() and then resolve(strict=False) (or resolve()) so DATA_DIR is an
absolute/resolved Path; this keeps the rest of the code (including the checks in
api.py) working correctly.
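A sketch of the suggested normalization. The injectable `env` parameter is for illustration only; the real code would read `os.environ` directly:

```python
import os
from pathlib import Path

def resolve_data_dir(env=None):
    """Normalize DATA_DIR: honor JARVIS_DATA_DIR when set, expand ~,
    and resolve to an absolute path so comparisons against resolved
    paths elsewhere (e.g. in api.py) stay consistent."""
    env = os.environ if env is None else env
    raw = env.get("JARVIS_DATA_DIR") or (Path.home() / ".jarvis")
    return Path(raw).expanduser().resolve(strict=False)
```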
In `@tests/integration/test_tui_mcp_manager.py`:
- Around line 378-385: The test assumes deleting
app.raw_config["mcpServers"]["alpha"] will prevent _save_config() from writing
back "alpha", but _save_config() rebuilds entries from the UI tree using
servers.setdefault(server_name, {}), so "alpha" is re-created; update the test
to either remove the "alpha" node from whatever tree/structure _save_config()
iterates (so the save truly omits "alpha") or change the expected assertion to
reflect that "alpha" will be present after save; reference _save_config(),
servers.setdefault, and the test's use of app.raw_config and
app.action_quit_save() when making the fix.
- Around line 245-253: Add a precondition assertion so the test fails when there
are no tool nodes: before the existing guarded block, assert that
beta_node.children is truthy (e.g., assert beta_node.children) so the test does
not vacuously pass; then proceed to set tool_node = beta_node.children[0],
capture before = tool_node.data["enabled"], call tree.select_node(tool_node),
await pilot.pause(...), invoke app.action_toggle_item(), pause again and assert
the state unchanged. This ensures the branch does not turn the test into a no-op
and locates the check around beta_node and tool_node.
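The non-vacuous version of that branch can be sketched as a helper; the `app`/`tree`/`pilot` shapes mirror the Textual harness used by the test, and the names are illustrative:

```python
async def assert_toggle_is_noop(app, pilot, tree, beta_node):
    """Select beta's first tool, toggle, and assert nothing changed.

    The leading assert makes the test fail loudly when beta has no tool
    children, instead of silently skipping the whole branch."""
    assert beta_node.children, "beta has no tool nodes to exercise"
    tool_node = beta_node.children[0]
    before = tool_node.data["enabled"]
    tree.select_node(tool_node)
    await pilot.pause(0.05)
    app.action_toggle_item()
    await pilot.pause(0.05)
    assert tool_node.data["enabled"] == before
```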
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0e0e2057-c4a5-402e-8c0a-30e0db90ca95
📒 Files selected for processing (16)
- .github/workflows/ci.yml
- .gitignore
- pyproject.toml
- src/jarvis/config.py
- tests/__init__.py
- tests/conftest.py
- tests/integration/__init__.py
- tests/integration/test_api_endpoints.py
- tests/integration/test_api_errors.py
- tests/integration/test_tui_auth_manager.py
- tests/integration/test_tui_mcp_manager.py
- tests/unit/__init__.py
- tests/unit/test_api_helpers.py
- tests/unit/test_config.py
- tests/unit/test_probe.py
- tests/unit/test_tui.py
✅ Files skipped from review due to trivial changes (4)
- .gitignore
- pyproject.toml
- .github/workflows/ci.yml
- tests/unit/test_tui.py
🚧 Files skipped from review as they are similar to previous changes (4)
- tests/unit/test_api_helpers.py
- tests/integration/test_tui_auth_manager.py
- tests/integration/test_api_endpoints.py
- tests/unit/test_probe.py
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@CLAUDE.md`:
- Around line 45-52: The header in CLAUDE.md is incorrect: it says "Three
runtime modes" but lists four modes handled by src/jarvis/__main__.py (the
mcp/auth subcommand, --auth [SERVER], --http PORT, and no flags). Fix the title
to match the content by changing the heading text to "Four runtime modes, one
entrypoint" (or alternatively remove one of the listed modes if that was
intended), ensuring the description still references src/jarvis/__main__.py and
the four enumerated execution paths.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
Force-pushed: 887f382 → 8ec30f3
@CodeRabbit review
✅ Actions performed: review triggered.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/jarvis/__main__.py (1)
90-123: 🛠️ Refactor suggestion | 🟠 Major
Honor `--code-mode` in the `--auth` path too.
`--auth` always adds BM25 today, so `--code-mode --auth` does not follow the same single-transform policy used in the other non-TUI paths. As per coding guidelines (`src/jarvis/__main__.py`): "Attach exactly one transform to the FastMCP proxy: CodeMode() if --code-mode is set, otherwise BM25SearchTransform(max_results=5)".

Proposed fix
```diff
-mcp.add_transform(BM25SearchTransform(max_results=5))
+mcp.add_transform(CodeMode() if code_mode else BM25SearchTransform(max_results=5))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/jarvis/__main__.py` around lines 90 - 123, In the --auth branch (where target is resolved and a proxy is created via create_proxy with auth_config/narrow_dict), respect the --code-mode flag the same way as the non-TUI path: detect whether "--code-mode" is present in filtered_argv and attach exactly one transform to mcp — use CodeMode() when code mode is set, otherwise use BM25SearchTransform(max_results=5); replace the unconditional mcp.add_transform(BM25SearchTransform(...)) with conditional logic around mcp.add_transform so only the appropriate transform (CodeMode or BM25SearchTransform) is attached after creating the proxy.
♻️ Duplicate comments (1)
CLAUDE.md (1)
45-51: ⚠️ Potential issue | 🟡 Minor
Fix heading count: it lists four runtime modes, not three.
The title and numbered list are inconsistent.
Proposed fix
```diff
-### Three runtime modes, one entrypoint
+### Four runtime modes, one entrypoint
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CLAUDE.md` around lines 45 - 51, Update the CLAUDE.md heading that currently reads "Three runtime modes, one entrypoint" to reflect the actual four modes described; change it to "Four runtime modes, one entrypoint" (or otherwise remove the numeric claim) so it matches the numbered list describing the behaviors in src/jarvis/__main__.py (which references MCPConfig, tui.py, list_tools(), and the --http/--auth modes).
🧹 Nitpick comments (1)
tests/integration/test_tui_auth_manager.py (1)
79-80: Replace fixed sleeps with condition-based waits to reduce test flakiness.
The repeated `pilot.pause(...)` delays can intermittently fail on slower CI runners. Prefer polling for observable state changes (row count/status text/probe call) with a bounded timeout.

Proposed pattern
```diff
+async def wait_until(predicate, pilot, timeout=2.0, step=0.02):
+    elapsed = 0.0
+    while elapsed < timeout:
+        if predicate():
+            return
+        await pilot.pause(step)
+        elapsed += step
+    raise AssertionError("Timed out waiting for condition")
 ...
-await pilot.pause(0.05)
+await wait_until(lambda: bool(app._server_names), pilot)
```

Also applies to: 89-90, 132-135, 152-157, 166-177, 197-213, 233-249, 262-265, 273-276
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/test_tui_auth_manager.py` around lines 79 - 80, Replace brittle fixed sleeps (pilot.pause(...)) with condition-based polling that retries until the expected state is observed or a bounded timeout is reached; specifically, where the test asserts set(app._server_names) == {"atlassian","github","local"} (and the other occurrences noted), replace the sleep+assert with a small loop or helper like wait_until(timeout=..., interval=...) that repeatedly checks the observable (e.g., set(app._server_names), element text via page.locator(...).inner_text(), or a probe call) and only asserts after the condition is satisfied or the timeout elapses; update all pilot.pause usages referenced (including around app._server_names and the ranges 89-90, 132-135, 152-157, 166-177, 197-213, 233-249, 262-265, 273-276) to use this polling helper so tests are deterministic on slower CI.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@CLAUDE.md`:
- Line 53: The docs statement that "all non-TUI paths use create_proxy" is
outdated; update the runtime description to reflect that current proxy
construction uses build_proxy in the --http and default paths while create_proxy
is only used for --auth. Edit the sentence referencing create_proxy to mention
both build_proxy and create_proxy (or replace create_proxy with build_proxy
where appropriate) and call out the conditional usage tied to the flags
(--http/default → build_proxy; --auth → create_proxy) so the doc matches the
actual behavior implemented in __main__.py.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 298287e8-31c1-4d08-a997-48b3944022ca
📒 Files selected for processing (23)
- .github/workflows/ci.yml
- .gitignore
- CLAUDE.md
- macOs/Jarvis/Jarvis/Services/LogViewerView.swift
- pyproject.toml
- scripts/build_jarvis_binary.sh
- scripts/build_jarvis_binary_linux.sh
- src/jarvis/__main__.py
- src/jarvis/config.py
- src/jarvis/proxy.py
- tests/__init__.py
- tests/conftest.py
- tests/integration/__init__.py
- tests/integration/test_api_endpoints.py
- tests/integration/test_api_errors.py
- tests/integration/test_tui_auth_manager.py
- tests/integration/test_tui_mcp_manager.py
- tests/unit/__init__.py
- tests/unit/test_api_helpers.py
- tests/unit/test_config.py
- tests/unit/test_probe.py
- tests/unit/test_proxy.py
- tests/unit/test_tui.py
✅ Files skipped from review due to trivial changes (8)
- macOs/Jarvis/Jarvis/Services/LogViewerView.swift
- .gitignore
- pyproject.toml
- .github/workflows/ci.yml
- tests/unit/test_tui.py
- tests/integration/test_api_endpoints.py
- tests/integration/test_tui_mcp_manager.py
- tests/integration/test_api_errors.py
🚧 Files skipped from review as they are similar to previous changes (3)
- src/jarvis/config.py
- tests/conftest.py
- tests/unit/test_probe.py
```python
print(f"Unknown server '{target}'. Available: {', '.join(config.mcpServers)}")
sys.exit(1)
```
Send unknown-server errors to stderr.
At Line 106 this prints to stdout, which makes CLI error handling/scripting less clean.
Proposed fix
```diff
-print(f"Unknown server '{target}'. Available: {', '.join(config.mcpServers)}")
+print(
+    f"Unknown server '{target}'. Available: {', '.join(config.mcpServers)}",
+    file=sys.stderr,
+)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
print(
    f"Unknown server '{target}'. Available: {', '.join(config.mcpServers)}",
    file=sys.stderr,
)
sys.exit(1)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/jarvis/__main__.py` around lines 106 - 107, The unknown-server error is
currently printed to stdout by the print call in __main__.py before sys.exit(1);
change the error emission to stderr so CLI scripts can detect errors
properly—locate the print(f"Unknown server '{target}'...") statement and write
the message to stderr (using the existing sys module) and then call sys.exit(1)
as before.
Force-pushed: 8ec30f3 → c9bc0b0
Force-pushed: c9bc0b0 → e9d3d7e
Summary by CodeRabbit
- New Features
- Tests
- Chores
- Changes
- Documentation