Amucaj ft gzip fixes #25
Conversation
⚠️ Warning: Rate limit exceeded

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 58 minutes and 46 seconds.

⌛ How to resolve this issue?
After the wait time has elapsed, a review can be triggered using the … We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?
CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:

⛔ Files ignored due to path filters (1)
📒 Files selected for processing (9)
📝 Walkthrough

This PR implements transparent gzip response decompression in Jarvis's proxy layer. It introduces a new …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant DecompressingProxyClient
    participant WrappedHTTPClient
    participant Response
    Client->>DecompressingProxyClient: Call .new()
    DecompressingProxyClient->>DecompressingHTTPClient: Create wrapper
    DecompressingHTTPClient->>DecompressingProxyClient: Store reference
    Client->>DecompressingHTTPClient: Enter async context (__aenter__)
    DecompressingHTTPClient->>WrappedHTTPClient: Enter wrapped context
    WrappedHTTPClient->>Response: Receive response (possibly gzipped)
    WrappedHTTPClient-->>DecompressingHTTPClient: Return response object
    DecompressingHTTPClient-->>Client: Return response
    Note over Client,DecompressingHTTPClient: Client reads response within context
    Client->>DecompressingHTTPClient: Exit async context (__aexit__)
    DecompressingHTTPClient->>DecompressingHTTPClient: Check for gzip (magic bytes or header)
    alt Gzip Detected
        DecompressingHTTPClient->>DecompressingHTTPClient: Decompress response.content
        DecompressingHTTPClient->>DecompressingHTTPClient: Remove content-encoding header
    else Non-gzipped
        Note over DecompressingHTTPClient: Pass through unchanged
    end
    DecompressingHTTPClient->>WrappedHTTPClient: Exit wrapped context
    WrappedHTTPClient-->>DecompressingHTTPClient: Cleanup
    DecompressingHTTPClient-->>Client: Context exit complete
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 12
🧹 Nitpick comments (3)
src/jarvis/__main__.py (1)
53-70: Consider disabling propagation on the `jarvis` logger to avoid duplicate records.

`log = logging.getLogger("jarvis")` with `propagate` left at the default `True` means anything installed on the root logger later (e.g., `uvicorn.Config(log_level="info")` triggers `logging.basicConfig`-style setup in some paths, or third-party libs calling `basicConfig`) could yield duplicated lines via both the `RichHandler` here and the root handler. Safer:

```diff
 log = logging.getLogger("jarvis")
 log.setLevel(logging.INFO)
+log.propagate = False
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/jarvis/__main__.py` around lines 53 - 70, The "jarvis" logger created as log = logging.getLogger("jarvis") currently leaves propagate=True which can cause duplicate log records when a root handler is added later; disable propagation on that logger (set log.propagate = False) after creating the logger and installing the handler (the block where handler is created/added for RichHandler or StreamHandler) so the "jarvis" logger does not forward records to parent/root loggers.

src/jarvis/decompressing_client.py (2)
44-55: `_is_gzipped` is duplicated across both classes.

Identical implementation on `DecompressingProxyClient` and `DecompressingHTTPClient`. Extract to a module-level helper (or `@staticmethod` on one class, referenced from the other) to avoid drift.

♻️ Proposed refactor

```diff
+def _is_gzipped(data: bytes) -> bool:
+    """Detect gzip by magic number (0x1f 0x8b)."""
+    return len(data) >= 2 and data[0] == 0x1F and data[1] == 0x8B
+
+
 class DecompressingProxyClient:
     ...
-    def _is_gzipped(self, data: bytes) -> bool:
-        ...
-        return len(data) >= 2 and data[0] == 0x1F and data[1] == 0x8B
+    # use module-level _is_gzipped
```

Also applies to: 190-199
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/jarvis/decompressing_client.py` around lines 44 - 55, The duplicate _is_gzipped implementation in DecompressingProxyClient and DecompressingHTTPClient should be consolidated: extract a single helper (either a module-level function named _is_gzipped or make it a `@staticmethod` on one class) and have both classes call that single implementation; update both DecompressingProxyClient and DecompressingHTTPClient to remove their duplicate methods and reference the shared helper to avoid duplication and future drift.
69-70: Silent no-op when `response` lacks a `content` attribute may hide misuse.

If a caller passes an object without `content`, the method returns silently. That's fine for graceful degradation, but combined with the `__aexit__` timing issue above it makes it hard to detect that decompression isn't actually happening. Consider a `logger.debug` line here so the no-decompression case is observable during troubleshooting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/jarvis/decompressing_client.py` around lines 69 - 70, Add an observability log when the method detects a response without a content attribute: replace the silent early return in the block that checks if not hasattr(response, "content") with a logger.debug call (or self.logger.debug if a class logger exists) that notes "no content attribute on response" and includes the response type or repr(response) for debugging, then return as before; reference the existing response variable and the hasattr(response, "content") check so the log is colocated with that condition.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/plans/2026-04-17-gzip-decompression.md`:
- Around line 135-175: The plan's DecompressingHTTPClient block diverges from
the shipped implementation: it shows a __call__ method and a constructor
DecompressingHTTPClient(wrapped_client) while the real code uses __aexit__ for
decompression and a constructor signature wrapped_client, decompressor=None;
update the plan to match the actual implementation by replacing the
__call__-based decompression with the __aexit__ implementation and the correct
constructor signature (DecompressingHTTPClient(wrapped_client,
decompressor=None)), or explicitly mark this doc as historical and note the
differences (mentioning DecompressingHTTPClient, __call__, __aexit__, and the
constructor signatures) so readers aren't misled.
- Around line 62-65: Replace all hardcoded absolute user paths like
"/Users/artemismucaj/Netatmo/github/jarvis" in the document examples (e.g., the
bash snippet invoking pytest) with a project-relative instruction such as "from
the repo root" or "./" so examples read "from the repo root, run: uv run pytest
tests/..." and update all occurrences listed in the comment accordingly.
In `@src/jarvis/decompressing_client.py`:
- Around line 160-170: The fallback branch for decompression is inconsistent:
instead of directly manipulating self._response.content, route both cases
through the same decompression logic by either (A) constructing a minimal
decompressor instance when self._decompressor is None and calling its
_decompress_response(self._response), or (B) factoring the core logic into a
module-level helper (e.g., _decompress_response_helper(response)) and calling it
from both the existing path and the fallback; ensure the helper/constructed
decompressor checks hasattr(response, "content") before touching it, catches
gzip.BadGzipFile/OSError and logs a warning (matching the primary path behavior)
instead of raising, and removes or updates the "content-encoding" header after
successful decompression so downstream consumers don't treat it as compressed;
update the code around _decompressor, _decompress_response, _is_gzipped, and
_response accordingly.
- Around line 85-92: The debug statement logs the same size twice because
response.content is overwritten before measuring the original compressed length;
before calling gzip.decompress, capture the original size (e.g., compressed_len
= len(response.content)), then call gzip.decompress into decompressed, assign
response.content = decompressed, and call logger.debug with compressed_len and
len(decompressed) so the message shows "compressed -> decompressed" sizes;
reference the gzip.decompress call, the response.content assignment, the
decompressed variable, and the logger.debug call in the decompression logic.
- Around line 138-170: The decompression currently happens in __aexit__ so
callers inside "async with" still see compressed content and any gzip.decompress
errors in __aexit__ can mask the original exception; move the decompression step
into __aenter__ so the returned response is already decompressed (call
self._decompressor._decompress_response(self._response) if self._decompressor
else check self._is_gzipped(self._response.content) and use gzip.decompress),
and ensure __aexit__ never raises new exceptions from decompression (catch and
swallow or attach/log errors) so the original exc_type/exc_val/exc_tb passed to
self._wrapped_client.__aexit__ is preserved; use the existing symbols
__aenter__, __aexit__, _wrapped_client, _response, _decompressor,
_decompress_response, _is_gzipped and gzip.decompress to locate and implement
the changes.
In `@tests/integration/test_gzip_handling.py`:
- Around line 32-62: The test only asserts proxy construction and never
exercises gzip decompression; patching jarvis.decompressing_client.ProxyClient
has no effect because build_proxy imports ProxyClient from
fastmcp.server.providers.proxy (see build_proxy in src/jarvis/proxy.py). Fix by
updating the test to actually call a method that triggers an HTTP request
through the proxy client (e.g., list tools or invoke a tool) against the mocked
client factory so the patched DecompressingProxyClient/ProxyClient is used and
the gzip-compressed mock_response (with headers {"content-encoding": "gzip"}) is
fetched and parsed; alternatively move this scenario into a focused unit test
that directly constructs DecompressingProxyClient and asserts it returns the
decompressed JSON payload. Ensure you patch the exact symbol used by build_proxy
(the ProxyClient imported in src/jarvis/proxy.py) or import and test
DecompressingProxyClient directly.
- Around line 14-15: The test function test_proxy_decompresses_gzipped_tool_list
is an async test and currently decorated with `@pytest.mark.asyncio`; remove that
decorator (the `@pytest.mark.asyncio` marker) from the test in
tests/integration/test_gzip_handling.py so the test relies on pytest-asyncio's
"auto" asyncio_mode and remains an async def
test_proxy_decompresses_gzipped_tool_list(): without the marker.
In `@tests/unit/test_decompressing_client.py`:
- Around line 15-39: The test
test_decompressing_client_decompresses_gzip_response currently only constructs
DecompressingProxyClient and never exercises the decompression path; either
update the test to actually invoke the proxied request and assert decompression
or rename it to a constructor smoke test. Specifically, use the mocked
ProxyClient (mock_wrapped_client) via DecompressingProxyClient and call the
outward method that triggers the wrapped client (e.g.,
DecompressingProxyClient.request or send depending on implementation) so the
mock_wrapped_client is awaited and its MagicMock response (with headers
{"content-encoding": "gzip"} and content=compressed) is returned, then assert
the returned response content equals original_json (decompressed) and headers no
longer indicate gzip; otherwise simplify the test to only check construction and
rename test_decompressing_client_constructs_proxy_client.
- Around line 72-73: Remove the redundant `@pytest.mark.asyncio` decorators from
async tests in this file: specifically delete the decorator above
test_decompressing_http_client_decompresses_on_exit and the other two async test
functions referenced in the review (they are already run via pytest-asyncio auto
mode); simply remove the decorator lines so the async test coroutines remain as
plain async def functions (no code changes inside the test functions).
- Around line 141-142: The test currently catches a bare Exception in the with
pytest.raises block for decompressor._decompress_response which is too broad;
restrict the assertion to the actual exceptions the implementation re-raises
(gzip.BadGzipFile and OSError) by changing the pytest.raises call to expect
those specific exception types and ensure gzip is imported in the test so the
tuple (gzip.BadGzipFile, OSError) can be used with pytest.raises for the
decompressor._decompress_response(mock_response) call.
In `@tests/unit/test_proxy.py`:
- Around line 110-117: The test test_zstandard_decoder_available currently
swallows ImportError and can silently pass; change it to explicitly require
zstandard by calling pytest.importorskip("zstandard") at the top of the test,
then unconditionally import httpx._decoders and assert "zstd" in
SUPPORTED_DECODERS (or better: exercise httpx public API by decoding a real
zstd-encoded response) so the assertion fails if zstd support is missing;
reference the existing symbols test_zstandard_decoder_available and
SUPPORTED_DECODERS and use pytest.importorskip("zstandard") to enforce the
dependency.
---
Nitpick comments:
In `@src/jarvis/__main__.py`:
- Around line 53-70: The "jarvis" logger created as log =
logging.getLogger("jarvis") currently leaves propagate=True which can cause
duplicate log records when a root handler is added later; disable propagation on
that logger (set log.propagate = False) after creating the logger and installing
the handler (the block where handler is created/added for RichHandler or
StreamHandler) so the "jarvis" logger does not forward records to parent/root
loggers.
In `@src/jarvis/decompressing_client.py`:
- Around line 44-55: The duplicate _is_gzipped implementation in
DecompressingProxyClient and DecompressingHTTPClient should be consolidated:
extract a single helper (either a module-level function named _is_gzipped or
make it a `@staticmethod` on one class) and have both classes call that single
implementation; update both DecompressingProxyClient and DecompressingHTTPClient
to remove their duplicate methods and reference the shared helper to avoid
duplication and future drift.
- Around line 69-70: Add an observability log when the method detects a response
without a content attribute: replace the silent early return in the block that
checks if not hasattr(response, "content") with a logger.debug call (or
self.logger.debug if a class logger exists) that notes "no content attribute on
response" and includes the response type or repr(response) for debugging, then
return as before; reference the existing response variable and the
hasattr(response, "content") check so the log is colocated with that condition.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 03da4cc8-8867-4396-9012-7183c0f0de98
📒 Files selected for processing (11)
README.md
docs/plans/2026-04-17-gzip-decompression.md
macOs/Jarvis/Jarvis/Services/ProcessManager.swift
pyproject.toml
src/jarvis/__main__.py
src/jarvis/decompressing_client.py
src/jarvis/middleware.py
src/jarvis/proxy.py
tests/integration/test_gzip_handling.py
tests/unit/test_decompressing_client.py
tests/unit/test_proxy.py
```bash
cd /Users/artemismucaj/Netatmo/github/jarvis
uv run pytest tests/unit/test_decompressing_client.py::test_decompressing_client_decompresses_gzip_response -v
```
Remove hardcoded absolute user path from the plan.
/Users/artemismucaj/Netatmo/github/jarvis appears throughout the doc (lines 63, 180, 189, 295, 431, 440, 521, 530, 583, 592, 601, 610). It leaks a contributor's local environment and is meaningless to everyone else. Replace with a project-relative instruction like “from the repo root”.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/plans/2026-04-17-gzip-decompression.md` around lines 62 - 65, Replace
all hardcoded absolute user paths like
"/Users/artemismucaj/Netatmo/github/jarvis" in the document examples (e.g., the
bash snippet invoking pytest) with a project-relative instruction such as "from
the repo root" or "./" so examples read "from the repo root, run: uv run pytest
tests/..." and update all occurrences listed in the comment accordingly.
```python
async def __call__(self, *args, **kwargs):
    """Call the wrapped client and decompress any gzip response."""
    response = await self._wrapped(*args, **kwargs)

    # Check if response needs decompression
    if self._is_gzipped(response):
        try:
            response = self._decompress_response(response)
            logger.debug("Decompressed gzip response")
        except Exception as e:
            logger.warning(f"Failed to decompress response: {e}")
            # Return original response if decompression fails

    return response

@staticmethod
def _is_gzipped(response: Any) -> bool:
    """Check if response appears to be gzip-compressed."""
    # Check for gzip magic number (1f 8b) or explicit header
    if hasattr(response, "content") and isinstance(response.content, bytes):
        return response.content.startswith(b"\x1f\x8b")
    return False

@staticmethod
def _decompress_response(response: Any) -> Any:
    """Decompress a gzip-encoded response."""
    if not hasattr(response, "content"):
        return response

    try:
        decompressed = gzip.decompress(response.content)
        # Update response content with decompressed data
        response.content = decompressed
        # Remove or update Content-Encoding header
        if hasattr(response, "headers"):
            response.headers.pop("content-encoding", None)
        return response
    except Exception as e:
        logger.error(f"Gzip decompression failed: {e}")
        raise
```
Plan diverges from the shipped implementation.
This section shows decompression happening in a __call__ method on DecompressingHTTPClient, but the merged code in src/jarvis/decompressing_client.py performs decompression in __aexit__ and has no __call__. Similarly, the constructor signature in the plan (DecompressingHTTPClient(wrapped_client)) differs from the actual (wrapped_client, decompressor=None). Either update the plan so it reflects what was built or mark the doc as historical so future readers aren't misled.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/plans/2026-04-17-gzip-decompression.md` around lines 135 - 175, The
plan's DecompressingHTTPClient block diverges from the shipped implementation:
it shows a __call__ method and a constructor
DecompressingHTTPClient(wrapped_client) while the real code uses __aexit__ for
decompression and a constructor signature wrapped_client, decompressor=None;
update the plan to match the actual implementation by replacing the
__call__-based decompression with the __aexit__ implementation and the correct
constructor signature (DecompressingHTTPClient(wrapped_client,
decompressor=None)), or explicitly mark this doc as historical and note the
differences (mentioning DecompressingHTTPClient, __call__, __aexit__, and the
constructor signatures) so readers aren't misled.
```swift
private func logHandle(for url: URL) -> FileHandle? {
    try? FileHandle(forWritingTo: url)
    guard let handle = try? FileHandle(forWritingTo: url) else { return nil }
    handle.truncateFile(atOffset: 0)
    return handle
```

Reuse a single log handle for both streams to prevent log corruption.

`logHandle(for:)` truncates on every call, and `startBundled()` calls it twice (Line 69 and Line 70). That creates two independent file handles to the same file, which can overwrite/interleave incorrectly under concurrent stdout/stderr writes.

Suggested fix

```diff
-        proc.standardOutput = logHandle(for: logURL)
-        proc.standardError = logHandle(for: logURL)
+        let handle = logHandle(for: logURL)
+        proc.standardOutput = handle
+        proc.standardError = handle
```

```python
try:
    decompressed = gzip.decompress(response.content)
    response.content = decompressed
    logger.debug(
        "Decompressed gzip response: %d bytes -> %d bytes",
        len(response.content),
        len(decompressed),
    )
```
Debug log swaps the two sizes — both will print the decompressed length.
response.content = decompressed is executed before the logger.debug call, so by the time len(response.content) is evaluated it's already the decompressed size. The message "%d bytes -> %d bytes" will log the same number twice instead of compressed -> decompressed.
🛠️ Proposed fix
```diff
 try:
-    decompressed = gzip.decompress(response.content)
-    response.content = decompressed
-    logger.debug(
-        "Decompressed gzip response: %d bytes -> %d bytes",
-        len(response.content),
-        len(decompressed),
-    )
+    original_len = len(response.content)
+    decompressed = gzip.decompress(response.content)
+    response.content = decompressed
+    logger.debug(
+        "Decompressed gzip response: %d bytes -> %d bytes",
+        original_len,
+        len(decompressed),
+    )
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
try:
    original_len = len(response.content)
    decompressed = gzip.decompress(response.content)
    response.content = decompressed
    logger.debug(
        "Decompressed gzip response: %d bytes -> %d bytes",
        original_len,
        len(decompressed),
    )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/jarvis/decompressing_client.py` around lines 85 - 92, The debug statement
logs the same size twice because response.content is overwritten before
measuring the original compressed length; before calling gzip.decompress,
capture the original size (e.g., compressed_len = len(response.content)), then
call gzip.decompress into decompressed, assign response.content = decompressed,
and call logger.debug with compressed_len and len(decompressed) so the message
shows "compressed -> decompressed" sizes; reference the gzip.decompress call,
the response.content assignment, the decompressed variable, and the logger.debug
call in the decompression logic.
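Under the assumption that capturing the size first is the only change needed, the corrected logging order can be demonstrated standalone (the function name here is illustrative, not from the PR):

```python
import gzip
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("demo")


def decompress_logging_sizes(content: bytes) -> bytes:
    # Capture the compressed size before the buffer is replaced.
    compressed_len = len(content)
    decompressed = gzip.decompress(content)
    logger.debug(
        "Decompressed gzip response: %d bytes -> %d bytes",
        compressed_len,
        len(decompressed),
    )
    return decompressed
```

With this ordering the two `%d` placeholders receive genuinely different values whenever the payload actually compressed.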
```python
async def __aenter__(self) -> Any:
    """Enter the async context manager.

    Returns:
        The response object from the wrapped client.
    """
    self._response = await self._wrapped_client.__aenter__()
    return self._response

async def __aexit__(
    self,
    exc_type: type[BaseException] | None,
    exc_val: BaseException | None,
    exc_tb: TracebackType | None,
) -> None:
    """Exit the async context manager and decompress the response if needed.

    Args:
        exc_type: Exception type if an error occurred.
        exc_val: Exception value if an error occurred.
        exc_tb: Exception traceback if an error occurred.
    """
    try:
        # Decompress if we have a decompressor
        if self._response and self._decompressor:
            self._decompressor._decompress_response(self._response)
        elif self._response:
            # Fallback: self-decompress if no parent decompressor
            if self._is_gzipped(self._response.content):
                self._response.content = gzip.decompress(self._response.content)
    finally:
        # Always exit the wrapped client
        await self._wrapped_client.__aexit__(exc_type, exc_val, exc_tb)
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Find how ProxyClient.new()'s context manager is consumed and when response.content is read.
rg -nP -C5 '\bProxyClient\b' --type=py
rg -nP -C5 '\.new\s*\(\s*\)' --type=py
rg -nP -C5 'async with\s+.*proxy' --type=py -i
fd -t f 'proxy' -e py --exec rg -nP -C3 'aenter|aexit|content' {}
# Look at how the integration test exercises this path
fd -t f 'test_gzip_handling|test_decompressing_client' -e py --exec cat {}
```

Repository: ArtemisMucaj/jarvis
Length of output: 24782
🌐 Web query:
fastmcp ProxyClient new() async context manager response usage
💡 Result:
ProxyClient is a subclass of fastmcp.Client from fastmcp.server.providers.proxy (import as `from fastmcp.server.providers.proxy import ProxyClient`), designed for proxying MCP servers. It forwards advanced MCP features like roots, sampling, elicitation, logging, and progress between backend servers and proxy clients.

Creation (no explicit new or async new; uses the standard Client constructor):

```python
proxy_client = ProxyClient("backend_server.py")  # or URL, FastMCP instance, etc.
# Can configure handlers, e.g., ProxyClient("backend", sampling_handler=None)
```

Async context manager usage (standard for all fastmcp.Client):

```python
async with proxy_client:
    tools = await proxy_client.list_tools
    result = await proxy_client.call_tool("echo", {"message": "test"})
    # Access response via result (CallToolResult or similar)
```

Primary use: create proxy servers.

```python
from fastmcp import FastMCP
from fastmcp.server.providers.proxy import ProxyClient

backend = ProxyClient("http://example.com/mcp")
proxy_server = FastMCP.as_proxy(backend, name="MyProxy")  # or create_proxy(backend)
proxy_server.run
```

For session reuse in proxies: base = ProxyClient(target); shared = base.new; use lambda: shared in client_factory. Responses from call_tool etc. follow standard MCP types (e.g., ToolResult, content in result.data or result.content). Official docs: gofastmcp.com/v2/servers/proxy, gofastmcp.com/python-sdk/fastmcp-server-providers-proxy. GitHub: PrefectHQ/fastmcp src/fastmcp/server/providers/proxy.py.
Citations:
- 1: https://gofastmcp.com/v2/servers/proxy
- 2: Replace FastMCP.as_proxy() with create_proxy() function PrefectHQ/fastmcp#2829
- 3: https://github.com/jlowin/fastmcp/blob/e87ede07/src/fastmcp/server/providers/proxy.py
- 4: https://gofastmcp.com/python-sdk/fastmcp-server-providers-proxy
- 5: https://gofastmcp.com/v2/clients/client
- 6: https://gofastmcp.com/python-sdk/fastmcp-client-client
- 7: https://github.com/jlowin/fastmcp/blob/e87ede07/examples/in_memory_proxy_example.py
- 8: https://fastmcp.wiki/en/servers/proxy
🏁 Script executed:

```bash
# Find how response.content is actually accessed when using the decompressing client
rg -n 'response\.content' --type=py -B 3 -A 3
rg -n 'call_tool|list_tools' --type=py -B 2 -A 2
fd -t f '\.py$' --exec rg -l 'DecompressingHTTPClient|DecompressingProxyClient' {} \;
```

Repository: ArtemisMucaj/jarvis
Length of output: 16548
🏁 Script executed:

```bash
# Look at FastMCP ProxyClient source to understand how new() returns and uses the response
rg -n 'class ProxyClient' -A 30 --type=py
rg -n 'def new\(' -A 15 --type=py
# Check how ProxyProvider uses the client
rg -n 'class ProxyProvider' -A 20 --type=py
```

Repository: ArtemisMucaj/jarvis
Length of output: 1276
🏁 Script executed:

```bash
# Check the integration test more carefully to see what it actually tests
cat -n tests/integration/test_gzip_handling.py
# Look for where response content is actually parsed/consumed
rg -n 'json\.loads|json\.load|\.json\(' --type=py -B 2 -A 2
```

Repository: ArtemisMucaj/jarvis
Length of output: 24744
🏁 Script executed:

```bash
# Look at the actual ProxyProvider implementation to see when response.content is accessed
python3 << 'EOF'
import subprocess
result = subprocess.run(
    ['rg', '-n', 'class ProxyProvider', '-A', '50', '--type=py'],
    capture_output=True,
    text=True,
    cwd='/tmp/repo'
)
print(result.stdout)
EOF
# Also search for where list_tools and call_tool parse response.content
rg -n 'list_tools|call_tool' -B 5 -A 10 --type=py | head -100
```

Repository: ArtemisMucaj/jarvis
Length of output: 7356
🏁 Script executed:

```bash
# Check how build_proxy integrates the decompressing client
rg -n 'build_proxy\|DecompressingProxyClient' --type=py -B 3 -A 3
# Look at the actual line in proxy.py where client is created
sed -n '60,85p' src/jarvis/proxy.py
```

Repository: ArtemisMucaj/jarvis
Length of output: 851
🏁 Script executed:

```bash
# Find where DecompressingProxyClient is actually instantiated in production code
rg -n 'DecompressingProxyClient' --type=py
# Look at the complete decompressing_client.py to understand the full API
cat -n src/jarvis/decompressing_client.py | head -120
```

Repository: ArtemisMucaj/jarvis
Length of output: 6992
Decompression in __aexit__ runs after the response body has already been consumed—the wrapper does not fix gzip decoding for the caller.
Inside the async with block, response.content is still compressed when the consumer reads it (e.g., during JSON parsing). Decompression happens in __aexit__, which executes after the context exits, so any response parsing inside the block operates on compressed bytes. To fix this, decompression must occur during __aenter__ (before returning the response) or the wrapper must intercept content reads directly.
Additionally, if gzip.decompress() raises in the try block of __aexit__, the exception propagates out via the finally clause. This new exception replaces the original exception from the async with body that was passed to __aexit__, effectively masking it from the outer caller—even though the wrapped client's __aexit__ is still called with the original triple.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/jarvis/decompressing_client.py` around lines 138 - 170, The decompression
currently happens in __aexit__ so callers inside "async with" still see
compressed content and any gzip.decompress errors in __aexit__ can mask the
original exception; move the decompression step into __aenter__ so the returned
response is already decompressed (call
self._decompressor._decompress_response(self._response) if self._decompressor
else check self._is_gzipped(self._response.content) and use gzip.decompress),
and ensure __aexit__ never raises new exceptions from decompression (catch and
swallow or attach/log errors) so the original exc_type/exc_val/exc_tb passed to
self._wrapped_client.__aexit__ is preserved; use the existing symbols
__aenter__, __aexit__, _wrapped_client, _response, _decompressor,
_decompress_response, _is_gzipped and gzip.decompress to locate and implement
the changes.
```python
with patch("jarvis.decompressing_client.ProxyClient") as mock_proxy_class:
    # Mock the HTTP client to return gzip-compressed response
    original_response = {
        "jsonrpc": "2.0",
        "id": 1,
        "result": {
            "tools": [
                {
                    "name": "test_tool",
                    "description": "A test tool",
                    "inputSchema": {"type": "object"},
                }
            ]
        },
    }

    compressed_json = gzip.compress(json.dumps(original_response).encode())

    mock_response = MagicMock()
    mock_response.content = compressed_json
    mock_response.headers = {"content-encoding": "gzip"}

    # Set up the mock to return our compressed response
    mock_client = AsyncMock(return_value=mock_response)
    mock_proxy_instance = MagicMock()
    mock_proxy_instance.new.return_value = mock_client
    mock_proxy_class.return_value = mock_proxy_instance

    proxy = build_proxy(config, name="test_gzip")
    assert proxy is not None
    # Proxy should be built successfully without errors
```
Test name promises integration coverage but only asserts proxy is not None.
The compressed payload, mock response, and content-encoding header are set up but never exercised — build_proxy doesn't trigger an HTTP call, and nothing verifies that DecompressingProxyClient actually decompresses. Also the test patches jarvis.decompressing_client.ProxyClient to return mock_proxy_instance, but build_proxy imports ProxyClient from fastmcp.server.providers.proxy (see src/jarvis/proxy.py), so that patch has no effect on construction at all.
To actually exercise gzip handling, either:
- invoke a tool/list through the built proxy against the mocked client factory, and assert the JSON payload is parsed successfully; or
- move this assertion into the unit suite where `DecompressingProxyClient` is tested directly end-to-end.
As-is, this test would still pass if gzip handling were removed entirely.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/integration/test_gzip_handling.py` around lines 32 - 62, The test only
asserts proxy construction and never exercises gzip decompression; patching
jarvis.decompressing_client.ProxyClient has no effect because build_proxy
imports ProxyClient from fastmcp.server.providers.proxy (see build_proxy in
src/jarvis/proxy.py). Fix by updating the test to actually call a method that
triggers an HTTP request through the proxy client (e.g., list tools or invoke a
tool) against the mocked client factory so the patched
DecompressingProxyClient/ProxyClient is used and the gzip-compressed
mock_response (with headers {"content-encoding": "gzip"}) is fetched and parsed;
alternatively move this scenario into a focused unit test that directly
constructs DecompressingProxyClient and asserts it returns the decompressed JSON
payload. Ensure you patch the exact symbol used by build_proxy (the ProxyClient
imported in src/jarvis/proxy.py) or import and test DecompressingProxyClient
directly.
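The patching pitfall here generalizes: `unittest.mock.patch` must target the name where it is looked up, not where it is defined. A self-contained illustration (the module names `provider` and `consumer` are made up for this demo):

```python
import sys
import types
from unittest.mock import patch

# Hypothetical layout mirroring the review: "consumer" does
# `from provider import make_client` at import time, so it holds its
# own reference to the function, independent of the provider module.
provider = types.ModuleType("provider")
provider.make_client = lambda: "real"
sys.modules["provider"] = provider

consumer = types.ModuleType("consumer")
exec(
    "from provider import make_client\n"
    "def build():\n"
    "    return make_client()\n",
    consumer.__dict__,
)
sys.modules["consumer"] = consumer

# Wrong target: consumer's own binding is untouched by this patch.
with patch("provider.make_client", return_value="mock"):
    wrong = consumer.build()   # still "real"

# Right target: patch the name in the consumer's namespace.
with patch("consumer.make_client", return_value="mock"):
    right = consumer.build()   # now "mock"
```

Applied to this PR, that means patching `jarvis.proxy.ProxyClient` (the name `build_proxy` actually calls), not `jarvis.decompressing_client.ProxyClient`.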
```python
def test_decompressing_client_decompresses_gzip_response(self) -> None:
    """DecompressingProxyClient should decompress gzip-encoded responses."""
    from jarvis.decompressing_client import DecompressingProxyClient

    # Create mock transport
    mock_transport = MagicMock()

    # Create a gzip-compressed JSON response
    original_json = b'{"jsonrpc": "2.0", "id": 1, "result": {"tools": []}}'
    compressed = gzip.compress(original_json)

    # Mock the wrapped client to return compressed data
    mock_wrapped_client = AsyncMock()
    mock_response = MagicMock()
    mock_response.content = compressed
    mock_response.headers = {"content-encoding": "gzip"}
    mock_wrapped_client.return_value = mock_response

    # Patch ProxyClient to return our mock
    with patch(
        "jarvis.decompressing_client.ProxyClient",
        return_value=mock_wrapped_client,
    ):
        client = DecompressingProxyClient(mock_transport)
        assert client is not None
```
Test name claims decompression but only asserts construction.
`test_decompressing_client_decompresses_gzip_response` sets up a gzip payload, a mock wrapped client, and a mocked response, none of which are actually invoked. The only assertion is `assert client is not None`, which just checks that the constructor returns something. Either invoke the wrapper's request method and assert the response content equals `original_json`, or drop the unused setup and rename the test to reflect what it actually covers (a construction smoke test). The decompression-on-exit/call behavior is already covered by the later async tests.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/test_decompressing_client.py` around lines 15 - 39, The test
test_decompressing_client_decompresses_gzip_response currently only constructs
DecompressingProxyClient and never exercises the decompression path; either
update the test to actually invoke the proxied request and assert decompression
or rename it to a constructor smoke test. Specifically, use the mocked
ProxyClient (mock_wrapped_client) via DecompressingProxyClient and call the
outward method that triggers the wrapped client (e.g.,
DecompressingProxyClient.request or send depending on implementation) so the
mock_wrapped_client is awaited and its MagicMock response (with headers
{"content-encoding": "gzip"} and content=compressed) is returned, then assert
the returned response content equals original_json (decompressed) and headers no
longer indicate gzip; otherwise simplify the test to only check construction and
rename test_decompressing_client_constructs_proxy_client.
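A sketch of the end-to-end variant the prompt describes, with the wrapper's decompression step stood in by a plain helper, since the real method names on `DecompressingProxyClient` are not confirmed here:

```python
import asyncio
import gzip


class FakeResponse:
    """Stand-in for the wrapped client's response object."""

    def __init__(self, content: bytes, headers: dict):
        self.content = content
        self.headers = headers


async def fake_wrapped_call() -> FakeResponse:
    original_json = b'{"jsonrpc": "2.0", "id": 1, "result": {"tools": []}}'
    return FakeResponse(
        gzip.compress(original_json),
        {"content-encoding": "gzip"},
    )


def decompress_if_gzipped(response: FakeResponse) -> FakeResponse:
    """Stand-in for the wrapper's decompression step."""
    if response.headers.get("content-encoding") == "gzip":
        response.content = gzip.decompress(response.content)
        del response.headers["content-encoding"]
    return response


async def exercise() -> FakeResponse:
    # What the strengthened test should do: await the call, then assert
    # on the decompressed body and the stripped header.
    return decompress_if_gzipped(await fake_wrapped_call())


response = asyncio.run(exercise())
assert response.content == b'{"jsonrpc": "2.0", "id": 1, "result": {"tools": []}}'
assert "content-encoding" not in response.headers
```

The point is that the mock is awaited and its payload flows through the decompression path, so removing gzip handling would make the test fail.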
```python
@pytest.mark.asyncio
async def test_decompressing_http_client_decompresses_on_exit(self) -> None:
```
Remove all @pytest.mark.asyncio markers — auto mode is enabled.
Three async tests in this file carry redundant markers:
- `@pytest.mark.asyncio` on `test_decompressing_http_client_decompresses_on_exit`
- `@pytest.mark.asyncio` on `test_decompressing_http_client_context_manager`
- `@pytest.mark.asyncio` on `test_decompressing_http_client_decompresses_on_call`

As per coding guidelines: "Do not add @pytest.mark.asyncio to async tests because pytest-asyncio auto mode is enabled in pyproject.toml".
Also applies to: 144-145, 159-160
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/test_decompressing_client.py` around lines 72 - 73, Remove the
redundant `@pytest.mark.asyncio` decorators from async tests in this file:
specifically delete the decorator above
test_decompressing_http_client_decompresses_on_exit and the other two async test
functions referenced in the review (they are already run via pytest-asyncio auto
mode); simply remove the decorator lines so the async test coroutines remain as
plain async def functions (no code changes inside the test functions).
```python
with pytest.raises(Exception):  # gzip.decompress will raise
    decompressor._decompress_response(mock_response)
```
Narrow the blind Exception assertion.
Ruff B017: asserting a bare Exception masks regressions where an unrelated error (e.g. AttributeError) starts being raised and the test still passes. The implementation re-raises gzip.BadGzipFile/OSError:
```diff
-with pytest.raises(Exception):  # gzip.decompress will raise
-    decompressor._decompress_response(mock_response)
+with pytest.raises((gzip.BadGzipFile, OSError)):
+    decompressor._decompress_response(mock_response)
```

🧰 Tools
🪛 Ruff (0.15.10)
[warning] 141-141: Do not assert blind exception: `Exception` (B017)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/test_decompressing_client.py` around lines 141 - 142, The test
currently catches a bare Exception in the with pytest.raises block for
decompressor._decompress_response which is too broad; restrict the assertion to
the actual exceptions the implementation re-raises (gzip.BadGzipFile and
OSError) by changing the pytest.raises call to expect those specific exception
types and ensure gzip is imported in the test so the tuple (gzip.BadGzipFile,
OSError) can be used with pytest.raises for the
decompressor._decompress_response(mock_response) call.
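For reference, `gzip.BadGzipFile` is itself a subclass of `OSError`, so the suggested tuple is safe (if slightly redundant), and either type alone would catch the failure:

```python
import gzip

# gzip.decompress on non-gzip bytes raises BadGzipFile, an OSError subclass.
try:
    gzip.decompress(b"definitely not gzip data")
except gzip.BadGzipFile as exc:
    caught = exc

assert isinstance(caught, OSError)
assert issubclass(gzip.BadGzipFile, OSError)
```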
```python
def test_zstandard_decoder_available():
    """Verify that httpx can decode zstd responses (zstandard is installed)."""
    try:
        from httpx._decoders import SUPPORTED_DECODERS

        assert "zstd" in SUPPORTED_DECODERS
    except ImportError:
        pass
```
🧩 Analysis chain
🌐 Web query:
httpx SUPPORTED_DECODERS public API zstd
💡 Result:
In the httpx library, SUPPORTED_DECODERS is an internal dictionary defined in httpx/_decoders.py that maps Content-Encoding values to decoder classes. It includes "identity", "gzip", "deflate", and conditionally "br" (brotli) and "zstd" (Zstandard). The "zstd" entry is added by default but removed if the 'zstandard' package is not importable. This is not part of the public API. The public API automatically handles Content-Encoding decompression for supported formats when accessing response.content or response.text, without exposing SUPPORTED_DECODERS directly. Official docs do not list SUPPORTED_DECODERS as public. zstd support was added in httpx 0.27.1 (August 2024) via merged PR #3139. Install with pip install 'httpx[zstd]' to include the zstandard dependency. Without it, servers using zstd Content-Encoding will fail decoding (gracefully skipped, but content remains compressed). Example from source: SUPPORTED_DECODERS = { "identity": IdentityDecoder, "gzip": GZipDecoder, "deflate": DeflateDecoder, "br": BrotliDecoder, "zstd": ZStandardDecoder, } if zstandard is None: SUPPORTED_DECODERS.pop("zstd")
Citations:
- 1: https://github.com/encode/httpx/blob/def4778d/httpx/_decoders.py
- 2: https://github.com/encode/httpx/blob/master/httpx/_decoders.py
- 3: Add support for zstd decoding encode/httpx#3139
- 4: https://github.com/encode/httpx
- 5: https://www.encode.io/httpx/
- 6: https://www.python-httpx.org/
- 7: https://github.com/encode/httpx/blob/master/CHANGELOG.md
- 8: https://github.com/encode/httpx/blob/def4778d/CHANGELOG.md
🏁 Script executed:

```shell
# First, locate and examine the test file
fd -t f "test_proxy.py" tests/
```

Repository: ArtemisMucaj/jarvis

🏁 Script executed:

```shell
# Check the actual content at lines 110-117
cat -n tests/unit/test_proxy.py | sed -n '100,125p'
```

🏁 Script executed:

```shell
# Check the project configuration for zstandard as a dependency
fd -E "*.egg-info" "(setup.py|setup.cfg|pyproject.toml|requirements.*)" | head -20
```

🏁 Script executed:

```shell
# Read the pyproject.toml to check zstandard dependency status
cat pyproject.toml
```

🏁 Script executed:

```shell
# Also check if there's any special handling or setup for zstandard in conftest or elsewhere
rg "zstandard|zstd" tests/ -i
```

🏁 Script executed:

```shell
# Check if httpx is in the dependencies and what versions are specified
rg "httpx|fastmcp" pyproject.toml
```

🏁 Script executed:

```shell
# Let's also verify: does pytest.importorskip exist and work with packages like zstandard?
rg "importorskip" tests/
```
Test silently passes when import fails — defeats its purpose.
Swallowing ImportError means that if httpx relocates SUPPORTED_DECODERS (private _decoders module), the test passes without asserting zstd availability. Since zstandard>=0.23 is a hard runtime dependency, the check should be unconditional and explicit. Use pytest.importorskip("zstandard") to make the requirement explicit, then assert unconditionally.
Additionally, this test taps httpx's private _decoders module—consider asserting behavior via httpx's public API (e.g., decoding an actual zstd-encoded response) for better stability.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/test_proxy.py` around lines 110 - 117, The test
test_zstandard_decoder_available currently swallows ImportError and can silently
pass; change it to explicitly require zstandard by calling
pytest.importorskip("zstandard") at the top of the test, then unconditionally
import httpx._decoders and assert "zstd" in SUPPORTED_DECODERS (or better:
exercise httpx public API by decoding a real zstd-encoded response) so the
assertion fails if zstd support is missing; reference the existing symbols
test_zstandard_decoder_available and SUPPORTED_DECODERS and use
pytest.importorskip("zstandard") to enforce the dependency.
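A sketch of the rewritten test per the suggestion above; note it still reaches into httpx's private `_decoders` module, which may move between releases:

```python
def test_zstandard_decoder_available() -> None:
    """zstandard is a hard runtime dependency; assert unconditionally."""
    import pytest

    pytest.importorskip("zstandard")

    from httpx._decoders import SUPPORTED_DECODERS  # private API

    assert "zstd" in SUPPORTED_DECODERS
```

With `importorskip`, a missing `zstandard` surfaces as an explicit skip in the report instead of a silent pass, and a relocated `SUPPORTED_DECODERS` fails the test outright.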
Force-pushed 3c427fa to 2158933 (Compare)
Force-pushed 2158933 to 4527265 (Compare)
Summary by CodeRabbit

Release Notes
- New Features
- Bug Fixes
- Documentation
- Tests
- Chores