fix: show real platform adapter status on Settings page (#1031) #1032
CryptixSamurai wants to merge 25 commits into coleam00:dev from
Conversation
The Platform Connections section showed all adapters as "Not configured" because statuses were hardcoded to false. The /api/health endpoint also lacked adapter state information.

- Add `adapters` field to health endpoint with real adapter status
- Pass adapter state from index.ts via lazy callback (Telegram initializes after server.listen(), so an eager read would miss it)
- Update frontend HealthResponse type and PlatformConnectionsSection to consume real adapter data with optional fallback

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
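The lazy-callback wiring described above can be sketched roughly as follows; all names here (`markAdapterActive`, `getActiveAdapters`, `healthHandler`) are illustrative stand-ins, not the actual implementation:

```typescript
// Sketch only: a getter is passed to the health route instead of a snapshot,
// so adapters that register after server.listen() (e.g. Telegram) still show up.

type AdapterStatus = {
  slack: boolean;
  telegram: boolean;
  discord: boolean;
  github: boolean;
  gitea: boolean;
  gitlab: boolean;
};

const active = new Set<keyof AdapterStatus>();

// Called as each adapter finishes initializing — possibly after listen().
function markAdapterActive(name: keyof AdapterStatus): void {
  active.add(name);
}

// Lazy callback: evaluated per request, so late registrations are visible.
const getActiveAdapters = (): AdapterStatus => ({
  slack: active.has('slack'),
  telegram: active.has('telegram'),
  discord: active.has('discord'),
  github: active.has('github'),
  gitea: active.has('gitea'),
  gitlab: active.has('gitlab'),
});

// The health handler reads through the callback at request time.
function healthHandler(): { status: string; adapters: AdapterStatus } {
  return { status: 'ok', adapters: getActiveAdapters() };
}
```

Had the handler captured `getActiveAdapters()` eagerly at route-registration time, any adapter initialized afterwards would report `false` forever — which is exactly the symptom the PR fixes.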
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. Use the following commands to manage reviews:
📝 Walkthrough

Adds runtime adapter availability reporting: the health API now returns an `adapters` field with real adapter status.

Changes
Sequence Diagram(s)

sequenceDiagram
participant Client
participant Server
participant AdapterState as AdapterManager
Client->>Server: GET /api/health
Server->>AdapterState: call getActiveAdapters()
AdapterState-->>Server: { slack: bool, telegram: bool, discord: bool, github: bool, gitea: bool, gitlab: bool }
Server-->>Client: 200 OK { status: "...", adapters: { ... } }
Client->>Client: render PlatformConnectionsSection using adapters
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 | ❌ 2

❌ Failed checks (2 warnings)
✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/server/src/routes/api.ts`:
- Around line 808-817: adaptersSchema is defined inline and a separate
ActiveAdapters type duplicates its shape; move the Zod schema into the route
schemas module (the central schemas directory) as a shared export (e.g., export
const adaptersSchema) and replace the duplicated handcrafted type with type
ActiveAdapters = z.infer<typeof adaptersSchema>; update imports in the route
file to import adaptersSchema and remove the redundant definition and any other
duplicated schemas (the similar duplicate at the other occurrence) so all routes
derive types from the single source-of-truth schema.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ee24ca51-b5f4-4b0d-bdd1-e146d8c3cd66
📒 Files selected for processing (4)
- packages/server/src/index.ts
- packages/server/src/routes/api.ts
- packages/web/src/lib/api.ts
- packages/web/src/routes/SettingsPage.tsx
const adaptersSchema = z
  .object({
    slack: z.boolean(),
    telegram: z.boolean(),
    discord: z.boolean(),
    github: z.boolean(),
    gitea: z.boolean(),
    gitlab: z.boolean(),
  })
  .openapi('Adapters');
🛠️ Refactor suggestion | 🟠 Major
Move adaptersSchema to route schemas and derive ActiveAdapters from it.
The schema is currently defined inline in the route module, and ActiveAdapters duplicates the same shape. This is drift-prone and breaks repo conventions.
♻️ Proposed refactor
-const adaptersSchema = z
- .object({
- slack: z.boolean(),
- telegram: z.boolean(),
- discord: z.boolean(),
- github: z.boolean(),
- gitea: z.boolean(),
- gitlab: z.boolean(),
- })
- .openapi('Adapters');
+import { adaptersSchema } from './schemas/config.schemas';
-export interface ActiveAdapters {
- slack: boolean;
- telegram: boolean;
- discord: boolean;
- github: boolean;
- gitea: boolean;
- gitlab: boolean;
-}
+export type ActiveAdapters = z.infer<typeof adaptersSchema>;

As per coding guidelines: "Route schemas live in packages/server/src/routes/schemas/ — one file per domain" and "Always use z.infer<typeof schema> to derive types from Zod schemas; never write parallel hand-crafted interfaces".
Also applies to: 849-857
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/routes/api.ts` around lines 808 - 817, adaptersSchema is
defined inline and a separate ActiveAdapters type duplicates its shape; move the
Zod schema into the route schemas module (the central schemas directory) as a
shared export (e.g., export const adaptersSchema) and replace the duplicated
handcrafted type with type ActiveAdapters = z.infer<typeof adaptersSchema>;
update imports in the route file to import adaptersSchema and remove the
redundant definition and any other duplicated schemas (the similar duplicate at
the other occurrence) so all routes derive types from the single source-of-truth
schema.
…leam00#1032)

* fix(codex): emit paired tool_result chunks so UI tool cards close

Tool cards in the web UI sometimes spin forever for Codex workflows. The Codex client only yielded { type: 'tool', ... } on item.completed events, never the paired tool_result chunk. The web adapter's running tool entry then had nothing to close it, leaving the UI relying on the emitLockEvent fallback at lock release — which never fires inside a multi-node DAG, on cancel, or when SSE is briefly disconnected.

The Codex SDK only emits item.completed once a command_execution, web_search, or mcp_tool_call is fully done (it carries aggregated_output, exit_code, status, etc.), so we can emit the start and the result back-to-back in the same handler.

Changes:
- command_execution: emit tool_result with aggregated_output; append [exit code: N] when non-zero so failures are visible.
- web_search: emit an empty tool_result so the searching card closes.
- mcp_tool_call: always emit tool + tool_result, including for the status === 'completed' branch, which previously emitted nothing at all (so completed MCP calls were invisible), and for status === 'failed', where we previously emitted only a system message — leaving no card to close and inconsistent with command_execution failures.
- Update codex.test.ts assertions to cover paired chunks and exit codes.

Note: tool_result is paired to its tool by the web adapter's name-based reverse scan in web.ts. Since these chunks are yielded back-to-back with no other tools in between, the match is unambiguous. PR coleam00#1031 will add stable tool_use_id pairing for Claude; a follow-up can plumb Codex's item.id through once that lands.
* fix(codex): log silent drops and assert paired web_search tool_result

- command_execution: warn when item.command is falsy (was silently dropped)
- mcp_tool_call: warn when result.content has an unexpected shape (was a silent empty result)
- Simplify the exit_code guard to != null; drop the redundant String() cast
- Test: assert the paired tool_result chunk for web_search

Addresses review feedback on coleam00#1032.
… tool_results (coleam00#1037)

* fix(sse): extend buffer TTL beyond reconnect grace to prevent dropped tool_results

The SSE event buffer held events for only 3s, but the conversation reconnect grace period is 5s — meaning events emitted during a reconnect window could expire *before* the client even had a chance to reconnect. When a tool_result happened to land in that gap, the UI would show a perpetually spinning tool card with no recovery path.

This is one of the remaining causes from the "tool cards stuck running" investigation. The two biggest causes (Claude hook coverage and Codex tool_result emission) were already fixed in coleam00#1031 and coleam00#1032. This closes the last high-impact backend gap.

Changes:
- EVENT_BUFFER_TTL_MS: 3_000 → 60_000. Covers typical EventSource auto-reconnect delays on flaky networks (mobile, VPN, laptop sleep).
- EVENT_BUFFER_MAX: 50 → 500. Events are small JSON strings; 500 bounds worst-case memory while giving real headroom for bursts.
- Warn when the buffer cap evicts the oldest event (previously silent).
- Warn when events expire on TTL at replay time (previously silent).

Both warnings give us observability if the new bounds ever prove insufficient.

Note: a full Last-Event-ID resume protocol would be more principled but requires monotonic event IDs and client-side offset tracking — a larger change with its own risks. The TTL bump alone closes the vast majority of the window at near-zero cost.

* fix(sse): throttle eviction warns, reset cleanup timer, enforce TTL invariant

Address review feedback on the SSE buffer TTL bump:
- Reset the buffer cleanup timer on each new event so the buffer is held for TTL past the most recent event, not the first one. With the 20x TTL bump this gap became meaningful — a fresh event could be wiped by a cleanup timer scheduled when the first (now-stale) event was buffered.
- Throttle 'transport.buffer_evicted_oldest' warns to one per conversation per 5s. A runaway producer overflowing the cap by hundreds would otherwise flood logs.
- Fail fast at module load if EVENT_BUFFER_TTL_MS < RECONNECT_GRACE_MS, locking in the invariant the comment already documents.
- Add a test covering the eviction-warn throttle.
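A minimal sketch of the buffering rules these commits describe (the constants match the commit message; the structure and names are illustrative, not the real transport module):

```typescript
// Sketch of the SSE event buffer semantics: TTL must cover the reconnect
// grace window, cap eviction is warned but throttled, and replay drops
// TTL-expired events. Names and shapes are illustrative.

const RECONNECT_GRACE_MS = 5_000;
const EVENT_BUFFER_TTL_MS = 60_000;
const EVENT_BUFFER_MAX = 500;
const EVICTION_WARN_THROTTLE_MS = 5_000;

// Fail fast at module load if the invariant is violated.
if (EVENT_BUFFER_TTL_MS < RECONNECT_GRACE_MS) {
  throw new Error('EVENT_BUFFER_TTL_MS must be >= RECONNECT_GRACE_MS');
}

interface BufferedEvent { at: number; payload: string; }

const buffer: BufferedEvent[] = [];
let lastEvictionWarnAt = -Infinity;

function bufferEvent(payload: string, now = Date.now()): void {
  buffer.push({ at: now, payload });
  if (buffer.length > EVENT_BUFFER_MAX) {
    buffer.shift(); // evict oldest (previously silent)
    // Throttle the warning so a runaway producer can't flood logs.
    if (now - lastEvictionWarnAt >= EVICTION_WARN_THROTTLE_MS) {
      lastEvictionWarnAt = now;
      console.warn('transport.buffer_evicted_oldest');
    }
  }
}

// Replay for a reconnecting client: TTL-expired events are dropped
// (the real code also warns when this happens).
function replay(now = Date.now()): string[] {
  return buffer
    .filter((e) => now - e.at <= EVENT_BUFFER_TTL_MS)
    .map((e) => e.payload);
}
```

With the old 3s TTL, an event buffered at t=0 was already gone when a client reconnected at t=4s, even though the 5s grace window still considered the conversation alive; the 60s TTL removes that gap.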
beads `bd remember` was pushing to upstream fork (403), causing memory loss. MEMORY.md files were prohibited but beads couldn't replace them. Now: Claude Code memory (MEMORY.md) for persistent knowledge, beads only for issue tracking. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Each forum topic gets a unique conversation ID (chatId:threadId), enabling per-topic conversation isolation and project binding. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
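The composite ID scheme described above can be sketched as follows (the helper name is hypothetical; only the "chatId" / "chatId:threadId" format comes from the commit):

```typescript
// Sketch: plain chats use "chatId"; forum topics use "chatId:threadId",
// giving each topic its own conversation. Helper name is illustrative.
function buildConversationId(chatId: number, threadId?: number): string {
  return threadId === undefined ? String(chatId) : `${chatId}:${threadId}`;
}
```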
Resolves coleam00#1044. Adds deterministic /setproject <name> command that updates conversation.codebase_id and cwd, enabling per-topic project binding in Telegram Forum Topics. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 2
🧹 Nitpick comments (1)
.claude/settings.json (1)
1-83: Consider splitting this config-only hook reshuffle from the platform-status fix PR.

These .claude lifecycle hook changes look unrelated to issue #1031's runtime adapter status objective. Splitting them into a separate PR would keep scope tight and reduce review/rollback risk.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/settings.json around lines 1 - 83, The .claude/settings.json lifecycle hook changes (modify Notification, PreCompact, SessionStart, Stop, SubagentStop, UserPromptSubmit hook entries) are unrelated to the runtime adapter status fix and should be split out: revert or remove the hook edits from this PR and create a separate branch/PR that contains only the .claude/settings.json changes (or vice versa—keep only runtime-adapter changes here and move the hook edits to a new PR) so the platform-status fix stays narrowly scoped and the hook reshuffle can be reviewed/rolled back independently.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.beads/hooks/h:
- Around line 4-6: The hook shim script h currently calls exit (terminating the
parent hook) when a target script is missing, so BEADS integration in the actual
hook shims (pre-commit, post-merge, post-checkout) never runs; fix by either
ensuring target scripts are installed at .beads/pre-commit, .beads/post-merge,
.beads/post-checkout, etc., or change the logic in h to avoid exiting the
parent: detect the intended .beads/<hook> target and if it doesn't exist, do not
call exit — instead return (or skip sourcing) so the calling hook shim can
continue to run its BEADS integration code; update h and the hook shims
(pre-commit, post-merge, post-checkout) to use this conditional load pattern.
In @.beads/hooks/pre-push:
- Around line 2-26: The beads integration block is never reached because the
Husky helper is sourced unconditionally (". \"$(dirname \"$0\")/h\"") and that
helper calls "exit $c", terminating the hook; move the entire beads block (the
lines between "# --- BEGIN BEADS INTEGRATION v1.0.0 ---" and "# --- END BEADS
INTEGRATION v1.0.0 ---") above the sourcing of the Husky helper in each hook
(pre-push, post-merge, prepare-commit-msg, pre-commit, post-checkout), or
alternatively avoid sourcing the helper into the current shell by executing it
in a subshell/isolated context instead of ". \"$(dirname \"$0\")/h\"" so that
the helper's "exit $c" cannot abort the rest of the hook; adjust all five hook
files consistently.
---
Nitpick comments:
In @.claude/settings.json:
- Around line 1-83: The .claude/settings.json lifecycle hook changes (modify
Notification, PreCompact, SessionStart, Stop, SubagentStop, UserPromptSubmit
hook entries) are unrelated to the runtime adapter status fix and should be
split out: revert or remove the hook edits from this PR and create a separate
branch/PR that contains only the .claude/settings.json changes (or vice
versa—keep only runtime-adapter changes here and move the hook edits to a new
PR) so the platform-status fix stays narrowly scoped and the hook reshuffle can
be reviewed/rolled back independently.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c4771a8c-5466-4991-ad79-9b9caff16d6a
📒 Files selected for processing (24)
- .beads/.gitignore
- .beads/README.md
- .beads/config.yaml
- .beads/hooks/applypatch-msg
- .beads/hooks/commit-msg
- .beads/hooks/h
- .beads/hooks/husky.sh
- .beads/hooks/post-applypatch
- .beads/hooks/post-checkout
- .beads/hooks/post-commit
- .beads/hooks/post-merge
- .beads/hooks/post-rewrite
- .beads/hooks/pre-applypatch
- .beads/hooks/pre-auto-gc
- .beads/hooks/pre-commit
- .beads/hooks/pre-merge-commit
- .beads/hooks/pre-push
- .beads/hooks/pre-rebase
- .beads/hooks/prepare-commit-msg
- .beads/metadata.json
- .claude/settings.json
- .gitignore
- AGENTS.md
- CLAUDE.md
✅ Files skipped from review due to trivial changes (17)
- .gitignore
- .beads/hooks/applypatch-msg
- .beads/hooks/pre-rebase
- .beads/hooks/post-applypatch
- .beads/hooks/commit-msg
- .beads/hooks/pre-merge-commit
- .beads/metadata.json
- .beads/hooks/pre-applypatch
- .beads/hooks/pre-auto-gc
- .beads/hooks/post-commit
- .beads/hooks/post-rewrite
- .beads/hooks/husky.sh
- .beads/README.md
- .beads/config.yaml
- AGENTS.md
- .beads/.gitignore
- CLAUDE.md
s=$(dirname "$(dirname "$0")")/$n

[ ! -f "$s" ] && exit 0
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
missing=0
while IFS= read -r hook_path; do
hook_name="$(basename "$hook_path")"
case "$hook_name" in
h|husky.sh) continue ;;
esac
target="$(dirname "$(dirname "$hook_path")")/$hook_name"
if [ -f "$target" ]; then
printf 'OK %s -> %s\n' "$hook_path" "$target"
else
printf 'MISSING %s -> %s\n' "$hook_path" "$target"
missing=1
fi
done < <(fd -t f . .beads/hooks)
exit "$missing"

Repository: coleam00/Archon
Length of output: 116
🏁 Script executed:
# Examine the hook helper script
cat -n .beads/hooks/h
# List all hook files
ls -la .beads/hooks/ 2>/dev/null || echo "Directory not found"
# Check if parent-level scripts exist
ls -la .beads/ 2>/dev/null || echo "Parent directory check"

Repository: coleam00/Archon
Length of output: 2255
🏁 Script executed:
# Check what target scripts exist in parent .beads directory
echo "=== Checking for target scripts in .beads/ ==="
ls -la .beads/ | grep -v '^d' | grep -v '^total'
echo ""
echo "=== Expected target scripts for each hook ==="
cd .beads/hooks
for hook_file in *; do
case "$hook_file" in
h|husky.sh) continue ;;
esac
target=".beads/$hook_file"
if [ -f "../$hook_file" ]; then
echo "✓ $hook_file has target: ../$hook_file"
else
echo "✗ $hook_file MISSING target: ../$hook_file"
fi
done

Repository: coleam00/Archon
Length of output: 1065
🏁 Script executed:
# Search for the actual target scripts that the hooks expect to call
echo "=== Searching for hook target scripts in repository root ==="
find . -maxdepth 1 -type f \( -name "pre-commit" -o -name "post-commit" -o -name "post-merge" \) 2>/dev/null
echo ""
echo "=== All shell scripts in root ==="
find . -maxdepth 1 -type f -name "*.sh" 2>/dev/null
echo ""
echo "=== Checking if hooks are git hooks installed in .git/hooks ==="
if [ -d ".git/hooks" ]; then
ls -la .git/hooks/ 2>/dev/null | head -20
else
echo ".git/hooks not found in sandbox"
fi
echo ""
echo "=== Examining one of the small hook files to understand the shimming pattern ==="
cat .beads/hooks/pre-commit

Repository: coleam00/Archon
Length of output: 1420
🏁 Script executed:
# Let me check if the beads integration code is actually reached
# by understanding the sourcing behavior
echo "=== Checking the flow in detail ==="
echo "When a git hook runs .beads/hooks/pre-commit:"
echo "1. .beads/hooks/pre-commit sources h via: . \$(dirname \"\$0\")/h"
echo "2. h tries to execute target at .beads/pre-commit"
echo "3. If target doesn't exist, h calls 'exit 0'"
echo ""
echo "Question: Does 'exit 0' in a sourced script terminate the parent?"
echo ""
echo "=== Checking if there are beads hook handlers elsewhere ==="
fd -t f "hooks" . --exclude node_modules 2>/dev/null | grep -E "(run|handler)" | head -20
echo ""
echo "=== Check .bd directory or beads config ==="
if [ -d ".bd" ]; then
find .bd -type f | head -10
else
echo ".bd directory not found"
fi
echo ""
echo "=== Examine beads config ==="
cat .beads/config.yaml

Repository: coleam00/Archon
Length of output: 2612
🏁 Script executed:
# Verify critical behavior: what happens when exit 0 is called in a sourced script?
# This is the key to understanding if hooks are truly bypassed
echo "=== Test: Does 'exit' in sourced script terminate parent? ==="
# Create test files to verify
mkdir -p /tmp/test_sourcing
cd /tmp/test_sourcing
cat > helper.sh << 'EOF'
#!/bin/sh
echo "In helper before exit"
exit 0
echo "In helper after exit (should not print)"
EOF
cat > main.sh << 'EOF'
#!/bin/sh
echo "Main: before sourcing"
. ./helper.sh
echo "Main: after sourcing (should not print if exit terminates parent)"
EOF
chmod +x helper.sh main.sh
sh main.sh
echo "Exit code: $?"
# Now go back and verify the actual hook behavior in the repository
cd - > /dev/null
echo ""
echo "=== Simulating actual git hook execution ==="
echo "When git runs: .git/hooks/pre-commit (symlink to .beads/hooks/pre-commit)"
echo "The script does: . \$(dirname \"\$0\")/h"
echo ""
echo "Inside h (line 6): [ ! -f \"\$s\" ] && exit 0"
echo "Since .beads/pre-commit doesn't exist, this exits with 0"
echo ""
echo "This means the BEADS INTEGRATION code in .beads/hooks/pre-commit is NEVER reached"
echo "because exit 0 in the sourced h script terminates the parent script immediately."

Repository: coleam00/Archon
Length of output: 599
Critical: Git hooks are non-functional due to missing target scripts causing premature exit.
Line 6 exits with status 0 when the target script is absent. Since no target scripts exist in .beads/ (only .beads/hooks/), every hook immediately exits before reaching the BEADS integration code. This silently disables hook functionality.
The .beads/hooks/pre-commit and other hook shims contain BEADS integration logic (lines 7–21+) that will never execute because the sourced h script terminates the parent before reaching that code.
Ensure target scripts are installed at .beads/pre-commit, .beads/post-merge, .beads/post-checkout, etc., or restructure the hook architecture to conditionally load BEADS integration only when targets exist.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.beads/hooks/h around lines 4 - 6, The hook shim script h currently calls
exit (terminating the parent hook) when a target script is missing, so BEADS
integration in the actual hook shims (pre-commit, post-merge, post-checkout)
never runs; fix by either ensuring target scripts are installed at
.beads/pre-commit, .beads/post-merge, .beads/post-checkout, etc., or change the
logic in h to avoid exiting the parent: detect the intended .beads/<hook> target
and if it doesn't exist, do not call exit — instead return (or skip sourcing) so
the calling hook shim can continue to run its BEADS integration code; update h
and the hook shims (pre-commit, post-merge, post-checkout) to use this
conditional load pattern.
. "$(dirname "$0")/h"

# --- BEGIN BEADS INTEGRATION v1.0.0 ---
# This section is managed by beads. Do not remove these markers.
if command -v bd >/dev/null 2>&1; then
  export BD_GIT_HOOK=1
  _bd_timeout=${BEADS_HOOK_TIMEOUT:-300}
  if command -v timeout >/dev/null 2>&1; then
    timeout "$_bd_timeout" bd hooks run pre-push "$@"
    _bd_exit=$?
    if [ $_bd_exit -eq 124 ]; then
      echo >&2 "beads: hook 'pre-push' timed out after ${_bd_timeout}s — continuing without beads"
      _bd_exit=0
    fi
  else
    bd hooks run pre-push "$@"
    _bd_exit=$?
  fi
  if [ $_bd_exit -eq 3 ]; then
    echo >&2 "beads: database not initialized — skipping hook 'pre-push'"
    _bd_exit=0
  fi
  if [ $_bd_exit -ne 0 ]; then exit $_bd_exit; fi
fi
# --- END BEADS INTEGRATION v1.0.0 ---
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify that hook wrapper code is unreachable due to `exit` in sourced helper.
set -euo pipefail
echo "== .beads/hooks/h (look for exit statements) =="
sed -n '1,80p' .beads/hooks/h
echo
echo "== .beads/hooks/pre-push (source first, integration after) =="
sed -n '1,80p' .beads/hooks/pre-push
echo
echo "== Cross-check all added hook wrappers with same pattern =="
for f in pre-push post-merge prepare-commit-msg pre-commit post-checkout; do
echo "--- .beads/hooks/$f ---"
sed -n '1,40p' ".beads/hooks/$f"
done

Repository: coleam00/Archon
Length of output: 5826
Beads integration block is unreachable due to unconditional exit in sourced helper.
On line 2, . "$(dirname "$0")/h" sources a Husky helper that always exits (line 19: exit $c). Since sourcing executes the script in the current shell context, this unconditional exit terminates the hook before lines 4–26 run. The Beads integration block never executes.
This pattern affects all five hook files: pre-push, post-merge, prepare-commit-msg, pre-commit, and post-checkout. Either the Beads integration must be inserted before sourcing h, or h must be sourced only after the Beads integration block completes.
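To make the failure mode concrete, here is a self-contained sketch (throwaway temp-dir scripts, not the real `.beads` files) showing that `exit` in a *sourced* helper terminates the calling hook, while executing the helper in a separate `sh` process does not:

```shell
#!/bin/sh
# Illustrative reproduction of the sourced-exit problem described above.

tmp=$(mktemp -d)

# Helper that always exits — a stand-in for the Husky `h` shim.
cat > "$tmp/h" << 'EOF'
echo "helper ran"
exit 0
EOF

# Hook that sources the helper: the integration line is never reached,
# because `exit` in a sourced script terminates the current shell.
cat > "$tmp/hook_sourced" << EOF
. "$tmp/h"
echo "integration block"
EOF

# Hook that runs the helper in a subshell: its exit only ends the subshell,
# so the integration line still runs.
cat > "$tmp/hook_subshell" << EOF
sh "$tmp/h"
echo "integration block"
EOF

sourced_out=$(sh "$tmp/hook_sourced")
subshell_out=$(sh "$tmp/hook_subshell")
echo "sourced hook printed:  $sourced_out"
echo "subshell hook printed: $subshell_out"
rm -rf "$tmp"
```

This is the same distinction the review suggests: either run the helper in an isolated context, or move the beads block above the sourcing line so it executes before the helper can `exit`.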
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.beads/hooks/pre-push around lines 2 - 26, The beads integration block is
never reached because the Husky helper is sourced unconditionally (".
\"$(dirname \"$0\")/h\"") and that helper calls "exit $c", terminating the hook;
move the entire beads block (the lines between "# --- BEGIN BEADS INTEGRATION
v1.0.0 ---" and "# --- END BEADS INTEGRATION v1.0.0 ---") above the sourcing of
the Husky helper in each hook (pre-push, post-merge, prepare-commit-msg,
pre-commit, post-checkout), or alternatively avoid sourcing the helper into the
current shell by executing it in a subshell/isolated context instead of ".
\"$(dirname \"$0\")/h\"" so that the helper's "exit $c" cannot abort the rest of
the hook; adjust all five hook files consistently.
updateConversation() expects the internal UUID (conversation.id), not the platform_conversation_id string. Fixes "Conversation not found" error in Telegram forum topics. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
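The fix described in this commit — resolving the platform conversation ID to the internal row id before updating — can be sketched like this (all types and helper names are illustrative, not the repository's actual API):

```typescript
// Sketch: updateConversation expects the internal UUID (conversation.id),
// not the platform_conversation_id string like "-100123:7". Passing the
// platform ID produced the "Conversation not found" error.

interface Conversation {
  id: string;                       // internal UUID — what updateConversation expects
  platform_conversation_id: string; // e.g. "-100123:7" for a Telegram forum topic
  codebase_id?: string;
}

const conversations: Conversation[] = [
  { id: 'uuid-1', platform_conversation_id: '-100123:7' },
];

function findByPlatformId(platformId: string): Conversation | undefined {
  return conversations.find((c) => c.platform_conversation_id === platformId);
}

function updateConversation(id: string, patch: Partial<Conversation>): void {
  const row = conversations.find((c) => c.id === id);
  if (!row) throw new Error('Conversation not found'); // the bug's symptom
  Object.assign(row, patch);
}

// Correct flow: resolve the DB row first, then update by its internal id.
function bindProject(platformId: string, codebaseId: string): void {
  const row = findByPlatformId(platformId);
  if (!row) throw new Error(`No conversation for ${platformId}`);
  updateConversation(row.id, { codebase_id: codebaseId });
}
```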
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.claude/rules/orchestrator.md (1)
32-48: ⚠️ Potential issue | 🟡 Minor

Fix deterministic-command count consistency across the doc.
This section correctly says 11 commands, but Line 122 still says “only the 10 listed above bypass the AI router.” Please update that line too, otherwise the rule doc contradicts itself.
Suggested doc fix
-- Never assume a slash command is deterministic — only the 10 listed above bypass the AI router
+- Never assume a slash command is deterministic — only the 11 listed above bypass the AI router

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/rules/orchestrator.md around lines 32 - 48, Update the inconsistent count in the orchestrator rules doc: change the phrase "only the 10 listed above bypass the AI router." to reflect 11 deterministic commands so it matches the header that lists 11 commands; ensure the wording references the same set (including `/help`, `/status`, `/reset`, `/workflow`, `/register-project`, `/update-project`, `/remove-project`, `/setproject`, `/commands`, `/init`, `/worktree`) so the document is consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/adapters/src/chat/telegram/adapter.ts`:
- Around line 241-244: The receipt log is emitting raw chat identifiers
(ctx.chat?.id and conversationId) at info level in the getLog().info call for
'telegram.message_received'; change this to mask those identifiers the same way
userId is masked earlier (or omit the full values) before logging—e.g., replace
ctx.chat?.id and conversationId with the masked versions used elsewhere (or a
maskId() helper) and keep threadId as-is or masked if sensitive so the
info-level log never contains raw platform IDs.
- Around line 83-89: The parseChatId function currently permissively uses
parseInt and can accept malformed strings; update parseChatId to validate the
input shape before parsing by enforcing the exact "chatId" or "chatId:threadId"
format (no extra segments, only digits), e.g. check the string matches a strict
pattern and that parts.length is 1 or 2 and each part is /^\d+$/; if validation
fails, throw a clear error describing the expected format; then convert the
validated parts to numbers for numericChatId and optional threadId to avoid
misrouting or downstream Telegraf errors.
In `@packages/core/src/orchestrator/orchestrator-agent.ts`:
- Around line 686-691: The /setproject flow is passing the platform
conversationId into handleSetProject but db.updateConversation expects the DB
conversation row id, causing the project bind to fail; fix by resolving the DB
row id before updating or by changing handleSetProject to accept/lookup the DB
id: either call a helper like db.getConversationByPlatformId(conversationId) (or
getConversationByPlatformId) in orchestrator-agent and pass that
dbConversation.id into handleSetProject (or return it) so that inside
handleSetProject / db.updateConversation uses the DB row id; ensure
platform.sendMessage still uses the platform conversationId while all
db.updateConversation calls use the resolved dbConversation.id.
---
Outside diff comments:
In @.claude/rules/orchestrator.md:
- Around line 32-48: Update the inconsistent count in the orchestrator rules
doc: change the phrase "only the 10 listed above bypass the AI router." to
reflect 11 deterministic commands so it matches the header that lists 11
commands; ensure the wording references the same set (including `/help`,
`/status`, `/reset`, `/workflow`, `/register-project`, `/update-project`,
`/remove-project`, `/setproject`, `/commands`, `/init`, `/worktree`) so the
document is consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 14e15e3b-3360-4bc0-bcad-fbd3f820df0e
📒 Files selected for processing (4)
- .claude/rules/orchestrator.md
- packages/adapters/src/chat/telegram/adapter.ts
- packages/core/src/handlers/command-handler.ts
- packages/core/src/orchestrator/orchestrator-agent.ts
✅ Files skipped from review due to trivial changes (1)
- packages/core/src/handlers/command-handler.ts
private parseChatId(chatId: string): { numericChatId: number; threadId: number | undefined } {
  const parts = chatId.split(':');
  return {
    numericChatId: parseInt(parts[0]),
    threadId: parts[1] ? parseInt(parts[1]) : undefined,
  };
}
Validate the composite Telegram conversation ID before parsing.
parseInt() is permissive here, so malformed inputs like 123abc:7 or 1:2:3 can be partially accepted and either misroute a reply or fail later inside Telegraf. This should reject anything outside the exact "chatId" / "chatId:threadId" shape up front.
Suggested fix
private parseChatId(chatId: string): { numericChatId: number; threadId: number | undefined } {
- const parts = chatId.split(':');
- return {
- numericChatId: parseInt(parts[0]),
- threadId: parts[1] ? parseInt(parts[1]) : undefined,
- };
+ const match = /^(-?\d+)(?::(\d+))?$/.exec(chatId);
+ if (!match) {
+ throw new Error(`Invalid Telegram conversation ID: ${chatId}`);
+ }
+
+ return {
+ numericChatId: Number(match[1]),
+ threadId: match[2] ? Number(match[2]) : undefined,
+ };
}

As per coding guidelines, "Fail fast with clear errors for unsupported or unsafe states; never silently swallow errors or broaden permissions".
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

private parseChatId(chatId: string): { numericChatId: number; threadId: number | undefined } {
  const parts = chatId.split(':');
  return {
    numericChatId: parseInt(parts[0]),
    threadId: parts[1] ? parseInt(parts[1]) : undefined,
  };
}

Suggested:

private parseChatId(chatId: string): { numericChatId: number; threadId: number | undefined } {
  const match = /^(-?\d+)(?::(\d+))?$/.exec(chatId);
  if (!match) {
    throw new Error(`Invalid Telegram conversation ID: ${chatId}`);
  }

  return {
    numericChatId: Number(match[1]),
    threadId: match[2] ? Number(match[2]) : undefined,
  };
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/adapters/src/chat/telegram/adapter.ts` around lines 83 - 89, The
parseChatId function currently permissively uses parseInt and can accept
malformed strings; update parseChatId to validate the input shape before parsing
by enforcing the exact "chatId" or "chatId:threadId" format (no extra segments,
only digits), e.g. check the string matches a strict pattern and that
parts.length is 1 or 2 and each part is /^\d+$/; if validation fails, throw a
clear error describing the expected format; then convert the validated parts to
numbers for numericChatId and optional threadId to avoid misrouting or
downstream Telegraf errors.
// Debug: log forum topic detection
const msg = ctx.message;
const threadId = 'message_thread_id' in msg ? (msg as { message_thread_id?: number }).message_thread_id : undefined;
getLog().info({ chatId: ctx.chat?.id, threadId, conversationId, chatType: ctx.chat?.type }, 'telegram.message_received');
Mask chat identifiers in the receipt log.
This emits raw chatId and conversationId at info level for every incoming message. Those are stable platform identifiers, so this creates routine identifier leakage in logs. Mask them the same way userId is masked above, or keep full values out of info-level logs.
Suggested fix
- const threadId = 'message_thread_id' in msg ? (msg as { message_thread_id?: number }).message_thread_id : undefined;
- getLog().info({ chatId: ctx.chat?.id, threadId, conversationId, chatType: ctx.chat?.type }, 'telegram.message_received');
+ const threadId = 'message_thread_id' in msg ? (msg as { message_thread_id?: number }).message_thread_id : undefined;
+ const maskedChatId = ctx.chat ? `${String(ctx.chat.id).slice(0, 4)}***` : undefined;
+ const maskedConversationId = `${conversationId.slice(0, 4)}***`;
+ getLog().info(
+ { chatId: maskedChatId, threadId, conversationId: maskedConversationId, chatType: ctx.chat?.type },
+ 'telegram.message_received'
+ );
As per coding guidelines, "Never log API keys, tokens (mask sensitive values), user message content, or PII in logs."
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- // Debug: log forum topic detection
- const msg = ctx.message;
- const threadId = 'message_thread_id' in msg ? (msg as { message_thread_id?: number }).message_thread_id : undefined;
- getLog().info({ chatId: ctx.chat?.id, threadId, conversationId, chatType: ctx.chat?.type }, 'telegram.message_received');
+ // Debug: log forum topic detection
+ const msg = ctx.message;
+ const threadId = 'message_thread_id' in msg ? (msg as { message_thread_id?: number }).message_thread_id : undefined;
+ const maskedChatId = ctx.chat ? `${String(ctx.chat.id).slice(0, 4)}***` : undefined;
+ const maskedConversationId = `${conversationId.slice(0, 4)}***`;
+ getLog().info(
+   { chatId: maskedChatId, threadId, conversationId: maskedConversationId, chatType: ctx.chat?.type },
+   'telegram.message_received'
+ );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/adapters/src/chat/telegram/adapter.ts` around lines 241 - 244, The
receipt log is emitting raw chat identifiers (ctx.chat?.id and conversationId)
at info level in the getLog().info call for 'telegram.message_received'; change
this to mask those identifiers the same way userId is masked earlier (or omit
the full values) before logging—e.g., replace ctx.chat?.id and conversationId
with the masked versions used elsewhere (or a maskId() helper) and keep threadId
as-is or masked if sensitive so the info-level log never contains raw platform
IDs.
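The masking pattern the review repeats inline could be factored into a small helper. This is a hypothetical `maskId` sketch, not code from the repo, following the same prefix-plus-`***` convention:

```typescript
// Hypothetical helper matching the review's masking convention:
// keep a short stable prefix for log correlation, redact the rest.
function maskId(id: string | number | undefined): string | undefined {
  if (id === undefined) return undefined;
  const s = String(id);
  return `${s.slice(0, 4)}***`;
}
```

The call sites then shrink to `getLog().info({ chatId: maskId(ctx.chat?.id), conversationId: maskId(conversationId), ... })`, and the masking rule lives in one place.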
Actionable comments posted: 1
🧹 Nitpick comments (1)
packages/core/src/orchestrator/orchestrator-agent.ts (1)
1282-1307: Rename conversationId to conversationDbId for clarity.
This function now expects a DB row ID, not a platform conversation ID. Renaming the parameter reduces ambiguity and helps prevent regressions.
Suggested refactor
-async function handleSetProject(message: string, conversationId: string): Promise<string> {
+async function handleSetProject(message: string, conversationDbId: string): Promise<string> {
 @@
-  await db.updateConversation(conversationId, {
+  await db.updateConversation(conversationDbId, {
     codebase_id: codebase.id,
     cwd: codebase.default_cwd,
   });
 @@
-    { conversationId, projectName: codebase.name, codebaseId: codebase.id },
+    { conversationId: conversationDbId, projectName: codebase.name, codebaseId: codebase.id },
     'project.setproject_completed'
   );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/orchestrator/orchestrator-agent.ts` around lines 1282 - 1307, Rename the parameter conversationId to conversationDbId in the handleSetProject function signature and all its usages within the function (including the call to db.updateConversation and the getLog().info call) to reflect that this is a database row ID rather than a platform conversation ID; update the parameter name in the signature async function handleSetProject(message: string, conversationDbId: string): Promise<string> and replace conversationId with conversationDbId in commandHandler.parseCommand result handling, the db.updateConversation payload, and the logging call (getLog().info) to keep names consistent and avoid ambiguity.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/core/src/orchestrator/orchestrator-agent.ts`:
- Around line 1290-1297: The current lookup uses findCodebaseByName(projects)
which allows suffix/partial matches and returns the first hit, risking binding
the wrong project; update the logic that calls codebaseDb.listCodebases() and
findCodebaseByName(projectName) to detect ambiguous matches: after collecting
matches (e.g., from findCodebaseByName or by filtering codebases where name
endsWith projectName), if more than one match exists return a clear error asking
the user to disambiguate (listing the matching codebase names) instead of
proceeding to bind conversation state; only bind to conversation state when
there is exactly one unambiguous match (use the codebase.id/name from that
single match).
---
Nitpick comments:
In `@packages/core/src/orchestrator/orchestrator-agent.ts`:
- Around line 1282-1307: Rename the parameter conversationId to conversationDbId
in the handleSetProject function signature and all its usages within the
function (including the call to db.updateConversation and the getLog().info
call) to reflect that this is a database row ID rather than a platform
conversation ID; update the parameter name in the signature async function
handleSetProject(message: string, conversationDbId: string): Promise<string> and
replace conversationId with conversationDbId in commandHandler.parseCommand
result handling, the db.updateConversation payload, and the logging call
(getLog().info) to keep names consistent and avoid ambiguity.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 53dfacb6-9fb5-44d7-a6de-e60c6f4242d5
📒 Files selected for processing (1)
packages/core/src/orchestrator/orchestrator-agent.ts
/compact asks the AI to summarize the conversation, saves the summary to conversation.context_summary, and resets the session. The summary is automatically injected into subsequent prompts for continuity. /resume shows the stored summary. Useful for long Telegram forum topic conversations where 200k context may fill up. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Enable user-level MCP servers (Obsidian, Apify, etc.) for all AI agent sessions by adding settingSources: [project, user] to config. Removes hardcoded vault path — agent can now use Obsidian MCP tools directly, same as Claude Code CLI. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
♻️ Duplicate comments (1)
packages/core/src/orchestrator/orchestrator-agent.ts (1)
1391-1404: ⚠️ Potential issue | 🟠 Major
Reject ambiguous /setproject matches before updating conversation state.
This still binds the first suffix match from findCodebaseByName(). If both team-a/repo and team-b/repo exist, /setproject repo can silently attach the wrong project.
Suggested fix
 // Find codebase (case-insensitive, partial path match)
 const codebases = await codebaseDb.listCodebases();
-const codebase = findCodebaseByName(codebases, projectName);
+const projectLower = projectName.toLowerCase();
+const exactMatch = codebases.find(c => c.name.toLowerCase() === projectLower);
+const tailMatches = codebases.filter(c =>
+  c.name.toLowerCase().endsWith(`/${projectLower}`)
+);
+
+if (!exactMatch && tailMatches.length > 1) {
+  return (
+    `Project "${projectName}" is ambiguous.\n` +
+    `Matches:\n${tailMatches.map(c => `- ${c.name}`).join('\n')}\n\n` +
+    'Use the full project name.'
+  );
+}
+
+const codebase = exactMatch ?? tailMatches[0];
Based on learnings: "Prefer throwing early with a clear error for unsupported or unsafe states; never silently swallow errors or silently broaden permissions (Fail Fast + Explicit Errors)"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/orchestrator/orchestrator-agent.ts` around lines 1391 - 1404, The current logic uses findCodebaseByName (called after codebaseDb.listCodebases()) and silently picks the first partial match, which can attach the wrong project; instead, after listing codebases run the same matching logic but detect ambiguous results: if zero matches return the existing "not found" message; if multiple matches return an explicit error asking the user to be more specific and include the matching codebase names; only when there is exactly one match proceed to call db.updateConversation(conversationId, { codebase_id: codebase.id, cwd: codebase.default_cwd }); ensure this check is implemented in the orchestrator-agent flow (around the code that calls findCodebaseByName / processes codebases) so ambiguous suffix matches are rejected rather than silently accepted.
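The disambiguation rule in that suggested fix can be isolated into a pure function, which also makes it easy to unit-test. This is an illustrative sketch, not the repo's code; the `Codebase` shape is assumed from the fields the diff uses (`id`, `name`, `default_cwd`):

```typescript
// Assumed shape, based on the fields used in the suggested fix.
interface Codebase { id: string; name: string; default_cwd: string; }

type MatchResult =
  | { kind: 'one'; codebase: Codebase }
  | { kind: 'none' }
  | { kind: 'ambiguous'; names: string[] };

// Exact name wins; otherwise accept a unique "/suffix" match; reject ambiguity.
function resolveProject(codebases: Codebase[], projectName: string): MatchResult {
  const q = projectName.toLowerCase();
  const exact = codebases.find(c => c.name.toLowerCase() === q);
  if (exact) return { kind: 'one', codebase: exact };
  const tails = codebases.filter(c => c.name.toLowerCase().endsWith(`/${q}`));
  if (tails.length === 1) return { kind: 'one', codebase: tails[0] };
  if (tails.length > 1) return { kind: 'ambiguous', names: tails.map(c => c.name) };
  return { kind: 'none' };
}
```

The caller only updates conversation state on `kind === 'one'`; `ambiguous` and `none` map to user-facing error messages, so a bare `/setproject repo` can never silently bind the wrong project.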
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@migrations/000_combined.sql`:
- Line 77: The new column context_summary is added to the fresh-install schema
but missing from the idempotent upgrade block; add an ALTER TABLE statement to
the upgrade section to ALTER TABLE remote_agent_conversations ADD COLUMN IF NOT
EXISTS context_summary TEXT (matching the fresh schema's nullability/defaults)
so existing Postgres installs gain the column and
packages/core/src/db/conversations.ts (uses context_summary) won't fail at
runtime.
---
Duplicate comments:
In `@packages/core/src/orchestrator/orchestrator-agent.ts`:
- Around line 1391-1404: The current logic uses findCodebaseByName (called after
codebaseDb.listCodebases()) and silently picks the first partial match, which
can attach the wrong project; instead, after listing codebases run the same
matching logic but detect ambiguous results: if zero matches return the existing
"not found" message; if multiple matches return an explicit error asking the
user to be more specific and include the matching codebase names; only when
there is exactly one match proceed to call db.updateConversation(conversationId,
{ codebase_id: codebase.id, cwd: codebase.default_cwd }); ensure this check is
implemented in the orchestrator-agent flow (around the code that calls
findCodebaseByName / processes codebases) so ambiguous suffix matches are
rejected rather than silently accepted.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d4d50889-891b-4a11-ae2b-ed8115a1f520
📒 Files selected for processing (8)
- .claude/rules/orchestrator.md
- migrations/000_combined.sql
- migrations/022_add_context_summary.sql
- packages/core/src/db/adapters/sqlite.ts
- packages/core/src/db/conversations.ts
- packages/core/src/handlers/command-handler.ts
- packages/core/src/orchestrator/orchestrator-agent.ts
- packages/core/src/types/index.ts
✅ Files skipped from review due to trivial changes (4)
- packages/core/src/types/index.ts
- migrations/022_add_context_summary.sql
- packages/core/src/handlers/command-handler.ts
- packages/core/src/db/adapters/sqlite.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- .claude/rules/orchestrator.md
  title VARCHAR(255),
  deleted_at TIMESTAMP WITH TIME ZONE,
  hidden BOOLEAN DEFAULT FALSE,
  context_summary TEXT,
Add the matching upgrade-path ALTER TABLE for context_summary.
Line 77 updates the fresh-install schema, but the idempotent upgrade block never adds this column for existing PostgreSQL databases. Re-running 000_combined.sql against an older install will still leave remote_agent_conversations.context_summary missing, and packages/core/src/db/conversations.ts Line 256 will then fail at runtime.
Suggested fix
-- From migration 021: allow_env_keys on codebases
ALTER TABLE remote_agent_codebases
ADD COLUMN IF NOT EXISTS allow_env_keys BOOLEAN NOT NULL DEFAULT FALSE;
+
+-- From migration 022: context_summary on conversations
+ALTER TABLE remote_agent_conversations
+ ADD COLUMN IF NOT EXISTS context_summary TEXT;🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@migrations/000_combined.sql` at line 77, The new column context_summary is
added to the fresh-install schema but missing from the idempotent upgrade block;
add an ALTER TABLE statement to the upgrade section to ALTER TABLE
remote_agent_conversations ADD COLUMN IF NOT EXISTS context_summary TEXT
(matching the fresh schema's nullability/defaults) so existing Postgres installs
gain the column and packages/core/src/db/conversations.ts (uses context_summary)
won't fail at runtime.
Claude SDK sessions expire on server restart. /compact now catches the resume failure and falls back to building a summary from saved messages in the database. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When Claude SDK session expires (server restart, TTL), the orchestrator now automatically: saves a summary from message history, resets the session, and retries the user's message with context injected. Users see a brief notice and get their response without manual /reset. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Match additional error patterns: invalid UUID, session not found variants. Ensures auto-compact triggers regardless of how the Claude SDK reports an expired session. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Messages are now saved to DB from the orchestrator level (stream + batch modes), enabling auto-compact summaries for Telegram, Slack, GitHub — not just Web/CLI. Also hints agent about Obsidian session logs for deeper history access. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
/compact now saves summaries to both DB (cache) and Obsidian vault
(Claude/Session-Logs/{project}/). New sessions without context_summary
automatically load up to 3 latest session logs from Obsidian — whether
written by CLI (/compress) or Archon (/compact). This creates a
unified memory timeline across both interfaces.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
4-layer memory model: CLAUDE.md (instructions), MEMORY.md (knowledge), Obsidian (session timeline), Beads (issue tracking). Both CLI and Archon share the same files — no more fragmentation. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
6 tasks: add computeMemoryPath, inject MEMORY.md into prompt, update /resume, stop writing context_summary, update docs, verify. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Computes the Claude Code CLI memory directory path from a project's CWD and loads the MEMORY.md index for prompt injection. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- buildFullPrompt() now reads MEMORY.md (shared with CLI) instead of DB context_summary or Obsidian auto-load - /resume shows MEMORY.md content and path - Stop writing context_summary to DB (column kept for compat)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
computeMemoryPath now replaces both '/' and spaces with '-' to match Claude Code CLI's actual encoding. Removed unused summary generation from auto-compact (MEMORY.md provides context, no DB write needed). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Extracts saveSessionToObsidian() helper that generates a summary
from message history and writes to Claude/Session-Logs/{project}/.
Called by /reset (before clearing session) and auto-compact (on
expired session detection). Ensures no session context is lost.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…filter
1. computeMemoryPath: encode dots too (fixes .archon paths)
2. Auto-compact: recursion depth guard (max 1 retry)
3. persistConversationMessages: regex match anywhere, not startsWith
4. isSessionExpired: add parentheses for operator precedence
5. handleSetProject: rename param to conversationDbId for clarity
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
CLAUDE.md exceeded 40k char performance threshold (41,747 chars). Moved 7 duplicated sections to context-dependent .claude/rules/ files that auto-load when relevant file paths are touched. Created .claude/rules/logging.md for Pino conventions. Result: 28,036 chars (~33% reduction, well under 40k limit). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Major upstream changes: - Extracted AI clients into new @archon/providers package - Removed all .claude/rules/ files (content back in CLAUDE.md) - Added serve command, update-check, health endpoint - New e2e workflow tests and script discovery Our additions preserved: - Beads issue tracker integration - Session completion protocol Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
getAssistantClient → getAgentProvider (moved to @archon/providers)
{ tools: [] } → { nodeConfig: { allowed_tools: [] } } (new SendQueryOptions shape)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ads/ /reset is now intercepted by handleResetWithSessionLog before reaching handleCommand. Updated test to match new behavior. Added .beads/ to .prettierignore. Fixed prettier formatting in 3 files. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Comprehensive PR Review — 5 Agents
PR: fix: show real platform adapter status on Settings page (#1031)
Summary
The stated fix (exposing real adapter status on the Settings page via the health endpoint) is correct and clean — 32 lines across 3 files, idiomatic callback pattern, good safe fallback. However, the PR bundles ~1,667 lines of unrelated work: a 450-line session management overhaul, three new slash commands, Telegram forum topic support, a dead DB migration, and an entire
Verdict:
🔴 Critical Issue
C1:
Fix Implementation Report
Branch:
CRITICAL (1/1 fixed)
HIGH (8/9 applicable fixed, 1 skipped by design)
Tests Added
Validation
|
Hey @CryptixSamurai — first off, thank you for tracking this down and putting up a fix! The diagnosis in #1031 is spot-on and the Unfortunately I'm going to close this PR as-is, because it bundles a lot of unrelated work alongside the bug fix:
Each of these is interesting on its own, but bundling them makes the PR hard to review, hard to revert, and it's also picked up merge conflicts. A smart-PR-review run also surfaced ~27 findings (1 critical, 10 high) — most of them in the bundled features, not in the actual bug fix. Could you split this up?
Happy to help shepherd the focused fix PR through review once it's up. Thanks again for the contribution! |
…leam00#1032) * fix(codex): emit paired tool_result chunks so UI tool cards close Tool cards in the web UI sometimes spin forever for Codex workflows. The Codex client only yielded { type: 'tool', ... } on item.completed events, never the paired tool_result chunk. The web adapter's running tool entry then had nothing to close it, leaving the UI relying on the emitLockEvent fallback at lock release — which never fires inside a multi-node DAG, on cancel, or when SSE is briefly disconnected. The Codex SDK only emits item.completed once a command_execution, web_search, or mcp_tool_call is fully done (it carries aggregated_output, exit_code, status, etc). So we can emit the start and the result back-to-back in the same handler. Changes: - command_execution: emit tool_result with aggregated_output, append [exit code: N] when non-zero so failures are visible. - web_search: emit empty tool_result so the searching card closes. - mcp_tool_call: always emit tool + tool_result, including for the status === 'completed' branch which previously emitted nothing at all (so completed MCP calls were invisible) and for status === 'failed' where we previously emitted only a system message (leaving no card to close, but inconsistent with command_execution failures). - Update codex.test.ts assertions to cover paired chunks and exit codes. Note: tool_result is paired to its tool by the web adapter's name-based reverse-scan in web.ts. Since these chunks are yielded back-to-back with no other tools in between, the match is unambiguous. PR coleam00#1031 will add stable tool_use_id pairing for Claude; a follow-up can plumb Codex's item.id through once that lands. 
* fix(codex): log silent drops and assert paired web_search tool_result - command_execution: warn when item.command is falsy (was silently dropped) - mcp_tool_call: warn when result.content has unexpected shape (was silent empty) - Simplify exit_code guard to != null, drop redundant String() cast - Test: assert paired tool_result chunk for web_search Addresses review feedback on coleam00#1032.
… tool_results (coleam00#1037) * fix(sse): extend buffer TTL beyond reconnect grace to prevent dropped tool_results The SSE event buffer held events for only 3s, but the conversation reconnect grace period is 5s — meaning events emitted during a reconnect window could expire *before* the client even had a chance to reconnect. When a tool_result happened to land in that gap, the UI would show a perpetually spinning tool card with no recovery path. This is one of the remaining causes from the 'tool cards stuck running' investigation. The two biggest causes (Claude hook coverage and Codex tool_result emission) were already fixed in coleam00#1031 and coleam00#1032. This closes the last high-impact backend gap. Changes: - EVENT_BUFFER_TTL_MS: 3_000 → 60_000. Covers typical EventSource auto-reconnect delays on flaky networks (mobile, VPN, laptop sleep). - EVENT_BUFFER_MAX: 50 → 500. Events are small JSON strings; 500 bounds worst-case memory while giving real headroom for bursts. - Warn when buffer cap evicts oldest (previously silent). - Warn when events expire on TTL at replay time (previously silent). Both warnings give us observability if the new bounds are still ever insufficient. Note: a full Last-Event-ID resume protocol would be more principled but requires monotonic event IDs and client-side offset tracking — a larger change with its own risks. The TTL bump alone closes the vast majority of the window at near-zero cost. * fix(sse): throttle eviction warns, reset cleanup timer, enforce TTL invariant Address review feedback on the SSE buffer TTL bump: - Reset the buffer cleanup timer on each new event so the buffer is held for TTL past the most recent event, not the first one. With the 20x TTL bump this gap became meaningful — a fresh event could be wiped by a cleanup timer scheduled when the first (now-stale) event was buffered. - Throttle 'transport.buffer_evicted_oldest' warns to one per conversation per 5s. 
A runaway producer overflowing the cap by hundreds would otherwise flood logs. - Fail-fast at module load if EVENT_BUFFER_TTL_MS < RECONNECT_GRACE_MS. Locks in the invariant the comment already documents. - Add test covering the eviction-warn throttle.
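The two buffer behaviors this commit describes (a size cap that evicts oldest, and a TTL measured from each event's own timestamp so fresh events survive past a stale first entry) can be sketched in isolation. This is an illustrative model, not the repo's actual transport code; the constant names follow the commit message, everything else is assumed:

```typescript
// Constants named per the commit message; values from the same commit.
const EVENT_BUFFER_TTL_MS = 60_000;
const EVENT_BUFFER_MAX = 500;
const RECONNECT_GRACE_MS = 5_000;

// Fail-fast invariant from the commit: the buffer must outlive the reconnect grace.
if (EVENT_BUFFER_TTL_MS < RECONNECT_GRACE_MS) {
  throw new Error('EVENT_BUFFER_TTL_MS must be >= RECONNECT_GRACE_MS');
}

class EventBuffer {
  private events: { at: number; data: string }[] = [];

  // Returns true when the cap forced an eviction (the commit logs a throttled warn).
  push(data: string, now: number): boolean {
    let evicted = false;
    if (this.events.length >= EVENT_BUFFER_MAX) {
      this.events.shift();
      evicted = true;
    }
    this.events.push({ at: now, data });
    return evicted;
  }

  // Replay drops entries older than TTL relative to `now`. Because each entry
  // carries its own timestamp, a fresh event is never wiped by a timer armed
  // when the first (now-stale) event was buffered.
  replay(now: number): string[] {
    this.events = this.events.filter(e => now - e.at <= EVENT_BUFFER_TTL_MS);
    return this.events.map(e => e.data);
  }
}
```

A client that reconnects within the grace window then replays everything buffered in the last minute, instead of finding an empty buffer after 3 seconds.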
Summary
Fixes #1031 — The Settings page showed all platform adapters (Slack, Telegram, Discord, GitHub) as "Not configured" even when they were actively running.
Root cause: Statuses were hardcoded to
false in PlatformConnectionsSection, and the /api/health endpoint didn't expose adapter state.
Changes:
- Add ActiveAdapters interface and adapters field to the /api/health response
- Pass adapter state from index.ts via lazy callback (needed because Telegram initializes after server.listen())
- Update HealthResponse type with optional adapters field (backward-compatible)
- Update PlatformConnectionsSection to display real adapter status with graceful fallback
Test plan
- bun run type-check — all packages pass
- bun run lint — zero warnings
- bun run test — all tests pass
- curl /api/health returns adapters: { slack: true, telegram: true, github: true, ... }
- Frontend handles a missing adapters field gracefully (older server versions)
🤖 Generated with Claude Code
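The lazy-callback point in the test plan is the heart of the fix, and is easy to see in a reduced sketch. The `ActiveAdapters` name and the health-endpoint shape come from the PR description; the handler wiring here is an assumption for illustration:

```typescript
// Adapter status shape, per the PR description.
interface ActiveAdapters {
  slack: boolean;
  telegram: boolean;
  discord: boolean;
  github: boolean;
}

// The health route receives a *function*, not a snapshot. Telegram initializes
// after server.listen(), so an eager read at wiring time would freeze `telegram`
// at false forever; the callback re-reads live state on every request.
function createHealthHandler(getActiveAdapters: () => ActiveAdapters) {
  return () => ({ status: 'ok', adapters: getActiveAdapters() });
}
```

With this pattern, index.ts can register the route before starting adapters, and the Settings page still sees `telegram: true` once the adapter comes up.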
Summary by CodeRabbit
Bug Fixes
New Features
Documentation
Chores