Relay task elicitation through standard MCP protocol #3136
Conversation
When a background task calls `ctx.elicit()`, the notification subscriber now detects the `input_required` notification and sends a standard `elicitation/create` request to the client via `session.elicit()`. The client's `elicitation_handler` fires, and the relay pushes the response to Redis for the blocked worker. This means clients can respond to background task elicitation using the same `elicitation_handler` they'd use for any other elicitation — no need to interact with Redis or call `handle_task_input()` directly. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
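The flow described above can be sketched end to end with in-memory stand-ins — an `asyncio.Queue` in place of the Redis response list and a stub session in place of `ServerSession`. All names here (`StubSession`, `relay`, `worker`) are illustrative, not fastmcp's actual implementation:

```python
# Minimal sketch of the relay loop: the subscriber forwards the elicitation
# to the client via session.elicit(), then pushes the answer where the
# blocked worker can read it (a queue stands in for Redis LPUSH/BLPOP).
import asyncio
import json


class StubSession:
    async def elicit(self, message: str, requested_schema: dict) -> dict:
        # A real ServerSession would round-trip to the client's
        # elicitation_handler; this stub answers immediately.
        return {"action": "accept", "content": {"name": "Ada"}}


async def relay(session: StubSession, elicitation: dict, responses: asyncio.Queue) -> None:
    # Forward as a standard elicitation/create, then deliver the response.
    result = await session.elicit(
        elicitation["message"], elicitation["requestedSchema"]
    )
    await responses.put(json.dumps(result))  # stands in for redis.lpush


async def worker(responses: asyncio.Queue) -> dict:
    # The background worker blocks here, the way the real worker blocks on BLPOP.
    return json.loads(await responses.get())


async def main() -> dict:
    responses: asyncio.Queue = asyncio.Queue()
    elicitation = {
        "message": "What is your name?",
        "requestedSchema": {"type": "object"},
    }
    await relay(StubSession(), elicitation, responses)
    return await worker(responses)


result = asyncio.run(main())
print(result["action"])  # accept
```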
@gfortaine I was doing some testing here and was able to connect the full loop! How does this look to you? Does it match what you were thinking?
Walkthrough: Adds background-task elicitation support (new example).
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 361eb08f42
```python
async with docket.redis() as redis:
    await redis.lpush(  # type: ignore[invalid-await]
        response_key, json.dumps(cancel)
    )
    await redis.expire(response_key, ELICIT_TTL_SECONDS)
```
Set status when pushing cancel fallback response
When `_relay_elicitation()` falls back after `session.elicit()` errors (for example, a client without an elicitation handler), it LPUSHes a cancel response but never updates `ELICIT_STATUS_KEY` to `responded`. `handle_task_input()` still treats the task as waiting, so a concurrent/manual input write can also LPUSH onto the same response list, making the worker's BLPOP outcome race-dependent between cancel and accept paths; this is a behavioral regression from the single-writer contract in `handle_task_input()`/`elicit_for_task`.
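One way to restore the single-writer contract the comment describes is to mark the status key as answered before pushing the cancel fallback, so a concurrent `handle_task_input()` sees `responded` and declines to write. A sketch with an in-memory stub in place of the real Redis client; the key names and `FakeRedis` helper are hypothetical:

```python
# Sketch of the suggested fix: set the status key to "responded" in the
# same step that pushes the cancel fallback, closing the double-writer race.
import asyncio
import json

ELICIT_TTL_SECONDS = 300  # illustrative TTL, not fastmcp's actual value


class FakeRedis:
    """Tiny in-memory stand-in for the async redis client."""

    def __init__(self):
        self.store: dict = {}

    async def set(self, key, value, ex=None):
        self.store[key] = value

    async def get(self, key):
        return self.store.get(key)

    async def lpush(self, key, value):
        self.store.setdefault(key, []).insert(0, value)

    async def expire(self, key, ttl):
        pass  # TTL bookkeeping elided in this stub


async def push_cancel_fallback(redis, response_key: str, status_key: str) -> None:
    cancel = {"action": "cancel"}
    # Claim the status *before* pushing, so a concurrent handle_task_input()
    # sees "responded" and does not LPUSH a second response onto the list.
    await redis.set(status_key, "responded", ex=ELICIT_TTL_SECONDS)
    await redis.lpush(response_key, json.dumps(cancel))
    await redis.expire(response_key, ELICIT_TTL_SECONDS)


async def main() -> FakeRedis:
    redis = FakeRedis()
    await push_cancel_fallback(redis, "resp:task1", "status:task1")
    return redis


redis = asyncio.run(main())
print(redis.store["status:task1"])  # responded
```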
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/fastmcp/server/tasks/notifications.py (1)
271-294: Broad `except Exception` is acceptable here, but consider logging the traceback.

The Ruff BLE001 warning flags the broad `except Exception` at lines 271 and 288. In this context, catching broadly is intentional: the cancel fallback must fire regardless of the failure mode to prevent the worker from blocking indefinitely on BLPOP. This is a valid resiliency pattern. However, the warning at line 272 only logs `%s` of the exception, losing the traceback. Adding `exc_info=True` (or using `logger.exception`) would aid debugging relay failures in production without changing behavior.

Proposed improvement

```diff
 except Exception as e:
-    logger.warning("Failed to relay elicitation for task %s: %s", task_id, e)
+    logger.warning(
+        "Failed to relay elicitation for task %s: %s", task_id, e, exc_info=True
+    )
```
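The difference the nitpick describes is easy to see in isolation: `%s` formatting captures only `str(e)`, while `exc_info=True` appends the full traceback. A small standalone demo (the logger name and captured stream are just for illustration):

```python
# Log the same exception twice: once with plain %s formatting, once with
# exc_info=True, and show that only the latter includes the traceback.
import io
import logging

stream = io.StringIO()
logger = logging.getLogger("relay-demo")
logger.addHandler(logging.StreamHandler(stream))
logger.setLevel(logging.WARNING)

try:
    raise KeyError("requestedSchema")
except Exception as e:
    # Current form: message only, traceback lost.
    logger.warning("Failed to relay elicitation for task %s: %s", "abc123", e)
    # Suggested form: same message, plus the full traceback.
    logger.warning(
        "Failed to relay elicitation for task %s: %s", "abc123", e, exc_info=True
    )

output = stream.getvalue()
print("Traceback" in output)  # True
```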
```python
# If this is an input_required notification with elicitation metadata,
# relay the elicitation to the client via standard elicitation/create
params = notification_dict.get("params", {})
if params.get("status") == "input_required":
    meta = notification_dict.get("_meta", {})
    related_task = meta.get("modelcontextprotocol.io/related-task", {})
    elicitation = related_task.get("elicitation")
    if elicitation:
        task_id = params.get("taskId")
        if not task_id:
            logger.warning(
                "input_required notification missing taskId, skipping relay"
            )
            return
        task = asyncio.create_task(
            _relay_elicitation(session, session_id, task_id, elicitation, docket),
            name=f"elicitation-relay-{task_id[:8]}",
        )
        _background_tasks.add(task)
        task.add_done_callback(_background_tasks.discard)
```
Missing `KeyError` guard on `elicitation["message"]` / `elicitation["requestedSchema"]` access in the spawned task.

If a malformed notification arrives without `message` or `requestedSchema` in the elicitation dict, `_relay_elicitation` will raise a `KeyError` at lines 243-244. That exception is caught by the broad `except Exception` block (line 271), which triggers the cancel fallback — so the worker won't deadlock. However, the root cause (a schema key missing from the notification payload) would be masked as a generic "Failed to relay elicitation" warning. Consider validating presence of required keys before spawning the relay task so the log message is diagnostic.
Proposed fix

```diff
 elicitation = related_task.get("elicitation")
 if elicitation:
+    if "message" not in elicitation or "requestedSchema" not in elicitation:
+        logger.warning(
+            "Elicitation metadata missing required keys (message/requestedSchema), "
+            "skipping relay for task %s",
+            params.get("taskId"),
+        )
+        return
     task_id = params.get("taskId")
```

Moves `relay_elicitation()` into `elicitation.py` so it can reuse `handle_task_input()` for the Redis push instead of duplicating that logic. `notifications.py` just detects the trigger and calls it. Also fixes the related-task metadata key from `modelcontextprotocol.io/` to `io.modelcontextprotocol/` to match the current spec: https://modelcontextprotocol.io/specification/2025-11-25/basic/utilities/tasks Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
docs/python-sdk/fastmcp-server-tasks-elicitation.mdx (1)
1-96: ⚠️ Potential issue | 🟡 Minor — This file should not be manually modified; it is auto-generated.

As per coding guidelines, `docs/python-sdk/**` files are auto-generated and should never be modified directly. These changes will likely be overwritten by the next documentation generation run. The API surface updates (new `relay_elicitation` function, updated `elicit_for_task` signature) should be picked up automatically by the doc generator from the source code docstrings.

As per coding guidelines: `docs/python-sdk/**`: "Never modify `docs/python-sdk/**` (auto-generated)".
🧹 Nitpick comments (2)
src/fastmcp/server/tasks/notifications.py (1)
165-221: Unused `docket` parameter in `_send_mcp_notification`.

The `docket` parameter (line 169) is accepted but never used within `_send_mcp_notification`. Ruff confirms this (ARG001). Since `relay_elicitation` (the only downstream consumer) doesn't need it either, remove it from the signature and the call site at line 125.

Proposed fix

```diff
 async def _send_mcp_notification(
     session: ServerSession,
     notification_dict: dict[str, Any],
     session_id: str,
-    docket: Docket,
     fastmcp: FastMCP,
 ) -> None:
```

And at the call site (lines 124-126):

```diff
 await _send_mcp_notification(
-    session, notification_dict, session_id, docket, fastmcp
+    session, notification_dict, session_id, fastmcp
 )
```

src/fastmcp/server/tasks/elicitation.py (1)
324-328: Minor TOCTOU between status check and LPUSH — acceptable given TTL expiry.

The `redis.get(status_key)` check at line 326 and the subsequent `redis.lpush` at line 332 are not atomic. A concurrent caller could push a duplicate response between these operations. However, since `elicit_for_task`'s BLPOP only consumes the first item and extras expire via TTL, this is safe in practice. Worth noting if this code is ever adapted for stricter exactly-once semantics.
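If exactly-once semantics were ever required, the window between GET and LPUSH could be closed by claiming the status key atomically with `SET ... NX` before pushing — real Redis's `set(key, value, nx=True)` succeeds only if the key does not already exist. A sketch under that assumption, with an in-memory stub (`FakeRedis`, `try_respond`, and the key names are all hypothetical):

```python
# Close the TOCTOU race by making the status claim atomic: whoever wins the
# SET NX is the single writer; everyone else backs off without pushing.
import asyncio
import json


class FakeRedis:
    def __init__(self):
        self.store: dict = {}

    async def set(self, key, value, nx=False):
        if nx and key in self.store:
            return None  # claim failed: another writer got there first
        self.store[key] = value
        return True

    async def lpush(self, key, value):
        self.store.setdefault(key, []).insert(0, value)


async def try_respond(redis, status_key: str, response_key: str, response: dict) -> bool:
    # Atomically claim the right to answer; only the winner pushes.
    if not await redis.set(status_key, "responded", nx=True):
        return False
    await redis.lpush(response_key, json.dumps(response))
    return True


async def main():
    redis = FakeRedis()
    first = await try_respond(redis, "status:1", "resp:1", {"action": "accept"})
    second = await try_respond(redis, "status:1", "resp:1", {"action": "cancel"})
    return redis, first, second


redis, first, second = asyncio.run(main())
print(first, second)  # True False
```

Only one response ever lands on the list, regardless of interleaving, which is the stricter contract the comment alludes to.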
yes — this is the piece that was missing. having the client use the same `elicitation_handler` they'd use for any other elicitation.
Stacked on #2906.
When a background task calls `ctx.elicit()`, the notification subscriber detects the `input_required` notification and sends a standard `elicitation/create` request to the client via `session.elicit()`. The client's `elicitation_handler` fires, and the relay pushes the response to Redis for the blocked worker.

This means clients can respond to background task elicitation using the same `elicitation_handler` they'd use for any other elicitation — no need to interact with Redis or call `handle_task_input()` directly.

Also fixes `self.session` → `self._session` in `_elicit_for_task()` (the `.session` property raises in external workers where there's no request context), and corrects the `session` type hint in `elicit_for_task()` to `ServerSession | None`.

Fixes the `_meta` related-task key from `modelcontextprotocol.io/related-task` to `io.modelcontextprotocol/related-task` to match the reverse-DNS style in the current spec.
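From the client's point of view, the handler itself is just an async callable; because the relay re-sends background-task elicitations as ordinary `elicitation/create` requests, the same function serves both cases. A hedged sketch — the `(message, response_type, params, context)` signature and the `Client(..., elicitation_handler=...)` wiring are assumptions about the fastmcp client API, so check the docs before relying on them:

```python
# A single elicitation handler that answers both foreground and
# background-task elicitations. The handler auto-fills here; a real one
# would prompt the user.
import asyncio
from dataclasses import dataclass


@dataclass
class Answer:
    name: str


async def elicitation_handler(message, response_type, params, context):
    # response_type is the dataclass the server's requestedSchema maps to
    # (assumed behavior); construct and return an instance of it.
    return response_type(name="Ada")


# In real use the handler would be wired into the client, roughly:
#   client = Client(server_url, elicitation_handler=elicitation_handler)
# (constructor shape is an assumption, not confirmed by this PR).
answer = asyncio.run(elicitation_handler("What is your name?", Answer, None, None))
print(answer.name)  # Ada
```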
🤖 Generated with Claude Code