feat(providers): add GitHub Copilot community provider (@github/copilot-sdk) #1
Conversation
New community provider wired through @github/copilot-sdk. Registered as builtIn: false alongside Pi. Covers session resume, effort/thinking controls, envInjection, and a config-aware binary resolver (env > config > vendor > PATH, with vendor winning over PATH in binary mode). Advanced workflow features (MCP, skills, tool restrictions, structured output, fallback model, sandbox) are intentionally flagged false — they are not wired to Archon's workflow surface yet. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
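The resolver precedence described above (env > config > vendor > PATH, vendor beating PATH in binary mode) can be sketched as follows. The `ResolverInput` shape and field names are hypothetical, not Archon's actual API; the real resolver also verifies each candidate is an executable file before using it.

````typescript
// Hypothetical sketch of the resolution order: env > config > vendor > PATH.
// In binary mode the vendored copy deliberately wins over PATH, which this
// ordering already encodes.
interface ResolverInput {
  envPath?: string;    // explicit override from an environment variable
  configPath?: string; // path set in Archon's config file
  vendorPath?: string; // binary vendored with the Archon distribution
  pathLookup?: string; // what a PATH search for the CLI would find
}

function resolveBinary(input: ResolverInput): string | undefined {
  return input.envPath ?? input.configPath ?? input.vendorPath ?? input.pathLookup;
}
````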
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior is configurable; use the review commands or the checkboxes below for quick actions.
📝 Walkthrough

This release introduces GitHub Copilot as a new community provider with full SDK integration, refactors the Pi provider for lazy loading, adds shared structured-output and skills utilities, extends DAG workflows with script/approval/cancel nodes, implements approval auto-resume on web, and includes extensive documentation updates and release infrastructure improvements. Version bumped to 0.3.9.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Web as Web UI
    participant API as API Server
    participant Orch as Orchestrator
    participant Exec as Workflow<br/>Executor
    Web->>API: PUT /approve (with reason?)
    activate API
    API->>API: Record approval event
    API->>API: Update workflow state
    alt Has parent_conversation_id
        API->>API: Fetch parent conversation
        alt Parent is web platform
            API->>Orch: Dispatch resume message<br/>(/workflow run ...)
            Note over API,Orch: tryAutoResumeAfterGate()
            Orch->>Exec: Execute workflow<br/>(with parent context)
            Exec-->>Orch: Results
            Orch-->>API: Dispatch complete
            API-->>Web: "Resuming workflow."
        else Non-web parent
            API-->>Web: "Send a message to continue."
        end
    else No parent context
        API-->>Web: "Send a message to continue."
    end
    deactivate API
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

This PR introduces substantial new functionality (GitHub Copilot provider with 600+ lines of implementation and tests), refactors existing critical provider code (Pi lazy-loading), extends the workflow schema with new node types and capabilities, modifies the approval/orchestration flow for auto-resume, updates Web UI components for node management and the rejection workflow, and includes extensive documentation and infrastructure changes. The changes are heterogeneous, affecting multiple subsystems with varying complexity and density (provider internals, workflow engine, server API, Web UI, documentation). The variety of file types and reasoning patterns (provider SDK integration, workflow DSL schema, server orchestration, React components, Bash scripts, documentation, YAML workflows) compounds review effort significantly.
Translates nodeConfig.allowed_tools/denied_tools to Copilot SessionConfig availableTools/excludedTools. SDK enforces availableTools precedence when both are set. Also refactors sendQuery to route all SessionConfig construction through buildSessionConfig() with a ProviderWarning collector, so subsequent parity phases (MCP, skills, structured output, agents, fallbackModel) can plug in as single applyX() calls. Flips COPILOT_CAPABILITIES.toolRestrictions = true. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
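The translation above can be sketched as a small pass-through helper. The `SessionConfigSketch` interface is a simplified stand-in for the SDK's SessionConfig, and `applyToolRestrictions` is an illustrative name, not the actual `buildSessionConfig` internals:

````typescript
// Sketch of mapping Archon tool restrictions onto a Copilot-style session
// config. When a tool appears in both lists, the conflict is passed through
// unchanged: the SDK gives availableTools precedence, so nothing is resolved here.
interface NodeToolConfig {
  allowed_tools?: string[];
  denied_tools?: string[];
}
interface SessionConfigSketch {
  availableTools?: string[];
  excludedTools?: string[];
}

function applyToolRestrictions(node: NodeToolConfig, cfg: SessionConfigSketch): void {
  if (node.allowed_tools?.length) cfg.availableTools = node.allowed_tools;
  if (node.denied_tools?.length) cfg.excludedTools = node.denied_tools;
}
````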
Reuses Claude's loadMcpConfig() to parse the MCP JSON file referenced by nodeConfig.mcp, expand $ENV_VAR references, and assign the result to Copilot's SessionConfig.mcpServers. Missing env vars surface as a system warning chunk; IO/JSON errors propagate. Flips COPILOT_CAPABILITIES.mcp = true. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
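The `$ENV_VAR` expansion step can be illustrated with a minimal sketch. This is not the real `loadMcpConfig` (which lives in the Claude provider and processes whole JSON files); it only shows the expand-and-collect-missing pattern, with the missing names later surfaced as a warning chunk:

````typescript
// Minimal sketch: expand $ENV_VAR references in a config value, collecting
// names of unset variables instead of throwing. Unresolved references are
// left in place so the caller can surface them as a system warning.
function expandEnvVars(
  value: string,
  env: Record<string, string | undefined>,
  missing: string[]
): string {
  return value.replace(/\$([A-Z_][A-Z0-9_]*)/g, (whole, name: string) => {
    const v = env[name];
    if (v === undefined) {
      missing.push(name);
      return whole; // keep the literal $NAME for the warning path
    }
    return v;
  });
}
````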
Extracts Pi's resolvePiSkills into a provider-agnostic shared utility at packages/providers/src/shared/skills.ts as resolveSkillDirectories. Pi re-exports it under the historical name for back-compat. Copilot's applySkills() maps nodeConfig.skills (names) → absolute skill directory paths → SessionConfig.skillDirectories. Missing names surface as a single system warning chunk. Flips COPILOT_CAPABILITIES.skills = true. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
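The name-to-directory resolution can be sketched as below. This is an assumption-laden simplification of `resolveSkillDirectories`: the real utility checks for a SKILL.md marker on disk, while this sketch injects an `exists` predicate so it stays self-contained:

````typescript
// Sketch of skill-name resolution: for each name, probe the candidate
// directory under each root and record the first hit; names with no hit
// are collected so the caller can emit a single warning chunk.
function resolveSkillDirs(
  names: string[],
  roots: string[],
  exists: (dir: string) => boolean
): { dirs: string[]; missing: string[] } {
  const dirs: string[] = [];
  const missing: string[] = [];
  for (const name of names) {
    const hit = roots.map((r) => `${r}/${name}`).find(exists);
    if (hit) dirs.push(hit);
    else missing.push(name);
  }
  return { dirs, missing };
}
````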
Copilot has no native JSON-mode equivalent to Claude's outputFormat or Codex's outputSchema, so this mirrors Pi's best-effort approach: augment the user prompt with a "respond with JSON matching this schema" instruction, accumulate the assistant transcript, and parse it on the terminal result chunk. Parse failure leaves structuredOutput unset — the dag-executor's existing dag.structured_output_missing warning handles downstream degradation. Also extracts augmentPromptForJsonSchema and tryParseStructuredOutput into packages/providers/src/shared/structured-output.ts. Pi re-exports them under the original paths for back-compat. Flips COPILOT_CAPABILITIES.structuredOutput = true. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
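The terminal parse step can be sketched as follows; this mirrors the best-effort shape described above (strip an optional code fence, attempt `JSON.parse`, return `undefined` on failure) but is a simplified stand-in for the shared `tryParseStructuredOutput`:

````typescript
// Best-effort structured-output parse: trim the accumulated transcript,
// strip a leading ``` or ```json fence and a trailing fence, then try
// JSON.parse. Failure yields undefined rather than throwing, so the
// downstream structured_output_missing warning handles degradation.
function tryParseJson(transcript: string): unknown | undefined {
  const cleaned = transcript
    .trim()
    .replace(/^```(?:json)?\s*\n?/i, '')
    .replace(/\n?```\s*$/, '');
  try {
    return JSON.parse(cleaned);
  } catch {
    return undefined;
  }
}
````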
- Abort-guard: check abortSignal.aborted BEFORE awaiting sendAndWait,
since addEventListener('abort', ...) is a no-op on already-aborted
signals and would otherwise enter the 24h timeout path after cancel.
- Cleanup: wrap session.disconnect() and client.stop() in independent
try/catch-log blocks in the finally so a cleanup throw can't replace
a successful result or the friendly auth/model error.
- Fallback-content flag: flip sawAssistantContent = true when emitting
the final-message fallback, so a session.error received on the same
turn doesn't double-emit as a spurious system warning (Devin finding).
- Model trim: strip leading/trailing whitespace on requestOptions.model
and copilotConfig.model before assigning to SessionConfig.model.
- Binary resolver: replace existsSync() with isExecutableFile() (isFile
+ exec bit on posix, isFile on win32) so env/config/vendor overrides
fail early instead of passing a directory or non-executable to the
SDK.
- Binary resolver test: per-test tmpdir + chmod 0o755, no more shared
/tmp/.archon state.
- Docs: trust-boundary note that enableConfigDiscovery = true bypasses
Archon's workflow validation surface for MCP/skills.
- Registry test: drop stale "Pi is currently the only community
provider" comment now that Copilot is bundled too.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
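The abort-guard ordering in the first bullet can be illustrated with a minimal sketch; the function name and shape are hypothetical, not the provider's actual code:

````typescript
// Illustration of the abort-guard fix: check `aborted` synchronously
// before awaiting, because an 'abort' listener added to an already-aborted
// signal never fires, which would leave the call stuck on its timeout path.
async function sendWithAbortGuard<T>(
  signal: AbortSignal,
  sendAndWait: () => Promise<T>
): Promise<T> {
  if (signal.aborted) {
    const err = new Error('Aborted before send');
    err.name = 'AbortError';
    throw err;
  }
  return await sendAndWait();
}
````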
Maps nodeConfig.agents (Record<name, AgentDef>) to Copilot SessionConfig.customAgents. Direct pass-through for the fields Copilot's CustomAgentConfig supports: name (from the map key), description, prompt, and tools (allowlist — Copilot has no per-agent denylist). Archon agent fields Copilot cannot represent (model, disallowedTools, skills, maxTurns) surface as one consolidated system-warning chunk per agent. We deliberately do NOT set SessionConfig.agent: Archon's workflow model invokes sub-agents via the Task tool, not by switching the active agent at session start. Flips COPILOT_CAPABILITIES.agents = true. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
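The agent mapping above can be sketched like this. `AgentDef` and `CustomAgentSketch` are simplified stand-ins for Archon's agent definition and Copilot's CustomAgentConfig, and the warning text is illustrative:

````typescript
// Sketch of Archon agent -> Copilot custom-agent translation: pass through
// the representable fields and collect one warning per agent listing the
// fields Copilot cannot express.
interface AgentDef {
  description?: string;
  prompt?: string;
  tools?: string[];
  model?: string;
  maxTurns?: number;
}
interface CustomAgentSketch {
  name: string; // taken from the map key
  description?: string;
  prompt?: string;
  tools?: string[]; // allowlist only; no per-agent denylist in Copilot
}

function applyAgents(agents: Record<string, AgentDef>): {
  customAgents: CustomAgentSketch[];
  warnings: string[];
} {
  const customAgents: CustomAgentSketch[] = [];
  const warnings: string[] = [];
  for (const [name, def] of Object.entries(agents)) {
    customAgents.push({ name, description: def.description, prompt: def.prompt, tools: def.tools });
    const dropped = (['model', 'maxTurns'] as const).filter((f) => def[f] !== undefined);
    if (dropped.length) {
      warnings.push(`agent "${name}": unsupported fields dropped: ${dropped.join(', ')}`);
    }
  }
  return { customAgents, warnings };
}
````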
…eam00#1126) (coleam00#1184)

* fix: detect completion signal in any XML tag, not just <promise> (coleam00#1126)

Loop nodes with `until:` reported max_iterations_reached when the AI wrapped the completion signal in XML tags other than `<promise>` (e.g., `<COMPLETE>ALL_CLEAN</COMPLETE>`). The three existing regex patterns all missed this format, causing the loop to exhaust iterations and fail.

Changes:
- Add generic XML-wrapped signal pattern to `detectCompletionSignal`
- Extend `stripCompletionTags` to strip matched XML-wrapped signals from output
- Pass `loop.until` to the `stripCompletionTags` call site in dag-executor
- Add unit tests for detection and stripping of XML-wrapped signals
- Add integration test for a loop completing on its final iteration with XML tags

Fixes coleam00#1126

* fix: address review findings for completion signal detection

- Update detectCompletionSignal JSDoc to document all three detection formats
- Update stripCompletionTags JSDoc to mention the `until` parameter
- Remove superfluous `m` flag from xmlWrappedPattern (no anchors, no effect)
- Document that XML tag names are matched independently (intentional permissiveness)
- Add test: detects signal in mismatched XML tags (permissive behavior)
- Add test: strips both <promise> and XML-tagged signal in the same chunk
- Add assertion in DAG integration test that raw XML tags don't appear in sent messages

* simplify: reduce complexity in changed files

* fix: require matching XML tag names in completion-signal detection

Follow-up to the initial broadening in this PR. The first version of the regex accepted mismatched open/close tags (e.g. `<COMPLETE>X</done>`), which was a small false-positive surface when the AI interleaves tags in prose. Tightens both detectCompletionSignal and stripCompletionTags to capture the tag name and enforce it on the close via a \1 backreference. Case-insensitivity on the tag name is preserved.

Test updates:
- Flip the "permissive mismatch" case to assert strict rejection, with a comment explaining the guard.
- Add a case-insensitive matching case to lock that behavior in.

No behavior change for workflows that use matching tags (the overwhelmingly common case) or for <promise>...</promise>. The behavior change is limited to the narrow "open tag and close tag disagree" case, which only happens when the AI is confused — in which case we'd rather report max_iterations_reached and let the author inspect than silently call the loop complete.
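The tightened detection can be sketched with a capture-and-backreference regex. The pattern and function name below are illustrative, not the exact shipped code, and the sketch assumes the signal text contains no regex metacharacters:

````typescript
// Sketch of strict XML-wrapped signal detection: the opening tag name is
// captured and the closing tag must match it via the \1 backreference.
// The `i` flag makes both the tag-name match and the backreference
// case-insensitive, so <Done>…</done> still counts.
function detectXmlWrappedSignal(output: string, signal: string): boolean {
  const pattern = new RegExp(`<([A-Za-z][\\w-]*)>\\s*${signal}\\s*</\\1>`, 'i');
  return pattern.test(output);
}
````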
…oleam00#1113)

* fix(web): allow deleting nodes from Workflow Builder (coleam00#971)

Three independent gaps prevented users from deleting nodes added to the Workflow Builder canvas: dropped nodes were never auto-selected so keyboard shortcuts silently no-oped, no right-click context menu existed, and the Delete Node button was buried in the Advanced tab (hidden below the viewport for Prompt/Command, completely absent for Bash since bash nodes have no Advanced tab). Fixes coleam00#971.

* fix(web): push undo snapshot before adding nodes on canvas

Call onPushSnapshot() before setNodes() in both onDrop and quick-add handlers so that node additions are captured by undo/redo history.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(web): address PR coleam00#1113 review feedback

- Hold nodes/edges in refs so handleNodeDeleteById and onPushSnapshot can't capture stale pre-drop state (fixes undo-stack correctness).
- Clamp context-menu x/y to the viewport so right-click near edges stays fully on-screen.
- Drop non-conformant role=menu/menuitem from the single-item context menu; rely on the native button for accessibility.
- Extend isInputTarget() to cover ARIA combobox/textbox/searchbox so Backspace in Radix/shadcn widgets never nukes a node.
- Extract handleBuilderKeydown as a pure function and add tests covering the Delete/Backspace + isInputTarget invariant.
- Remove issue-number references from code comments per CLAUDE.md.
- Document the new delete affordances in the Workflow Builder docs.
- Inline context-menu dismissal, rename the pointer handler, and drop unused deps in the keyboardActions useMemo.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
coleam00#1155)

* fix(workflows): make adversarial init sed portable on macOS

* chore: regenerate bundled-defaults after adversarial-dev sed fix

Sync the generated bundle with the new temp-file sed pattern in archon-adversarial-dev.yaml so check:bundled passes and binary distributions ship the macOS-safe version.

---------

Co-authored-by: laplace young <yangqk12@whu.edu.cn>
Co-authored-by: Rasmus Widing <rasmus.widing@gmail.com>
…coleam00#1327)

* fix(workflows): filter user-plugin MCP noise out of workflow warnings

Before this change, the dag-executor surfaced every entry in the Claude SDK's "MCP server connection failed: …" system message to the user. That message includes user-level plugin MCPs inherited from ~/.claude/ (e.g. `telegram`) that fail to connect in the headless workflow subprocess — they're non-actionable noise for the workflow author.

Fix:
- Pre-compute the set of workflow-configured MCP server names per node by parsing the `mcp:` config file once at the start of executeNodeInternal. No caller-facing API change; no duplication of the provider's env-var expansion logic (we only need the keys).
- Split the system-message handler: the `MCP server connection failed:` path now surfaces only the subset of failing names that match the node's configured set; user-plugin failures are debug-logged as `dag.mcp_plugin_connection_suppressed`. The `⚠️ ` branch is unchanged.

Supersedes coleam00#1134 (closed as stale — the Windows HOME fix in that PR was already shipped via coleam00#1302, and the claude.ts enabledPlugins change targeted a file that has since moved into @archon/providers).

Credits @MrFadiAi for identifying and reporting the underlying issue.

Co-authored-by: Fadi Ai <MrFadiAi@users.noreply.github.com>

* fix(workflows): preserve MCP failure status in filtered message + observability

Address review feedback on PR coleam00#1327:
- parseMcpFailureServerNames now returns {name, segment} entries so the forwarded "MCP server connection failed: ..." message preserves the per-server status detail (e.g. "(timeout)", "(disconnected)") that the bare-name reconstruction was dropping.
- loadConfiguredMcpServerNames now debug-logs read failures as dag.mcp_filter_config_read_failed instead of swallowing them silently. A transient EMFILE/EBUSY at filter time would otherwise silently reclassify a real workflow-MCP failure as plugin noise.
- Add 4 integration tests through executeDagWorkflow covering the mixed workflow/plugin split, all-plugin suppression, no-mcp:-config nodes, and the unchanged `⚠️ ` warning path.
- Drop a WHAT comment above configuredMcpNames and a temporal phrase ("before this filter landed") that would rot on merge.
- Document the filtering boundary in guides/mcp-servers.md and add a troubleshooting row for users debugging silently-suppressed plugin MCPs.

---------

Co-authored-by: Fadi Ai <MrFadiAi@users.noreply.github.com>
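The split described above reduces to a set-membership filter that preserves each failing server's status segment. The helper below is a hedged sketch with illustrative names, not the actual dag-executor code:

````typescript
// Sketch: partition MCP connection failures into those the workflow itself
// configured (surfaced to the author, with status detail preserved) and
// user-plugin failures (suppressed, to be debug-logged only).
function filterMcpFailures(
  failures: { name: string; segment: string }[],
  configured: Set<string>
): { surfaced: string[]; suppressed: string[] } {
  const surfaced: string[] = [];
  const suppressed: string[] = [];
  for (const f of failures) {
    if (configured.has(f.name)) surfaced.push(f.segment);
    else suppressed.push(f.name);
  }
  return { surfaced, suppressed };
}
````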
Replace the real-filesystem vendor binary with spyOn(isExecutableFile) + path-fragment assertion, matching the sibling Codex resolver test. The previous test hardcoded the posix binary name 'copilot' while the resolver looks for 'copilot.exe' on win32, so the vendor branch never matched on Windows and the test then leaked to the system-installed Copilot CLI via PATH. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
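The platform split the fixed test now respects is simply a name lookup; a minimal illustration (function name assumed for the sketch):

````typescript
// The resolver searches for 'copilot.exe' on win32 and 'copilot' elsewhere,
// which is why a test hardcoding the posix name never matched on Windows.
function copilotBinaryName(platform: string): string {
  return platform === 'win32' ? 'copilot.exe' : 'copilot';
}
````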
…oleam00#1330)

axios <1.15.0 can be coerced to bypass NO_PROXY rules via hostname normalization, enabling SSRF in the right network shape. Archon pulls axios transitively through @slack/bolt (^1.12.0) and @slack/web-api (^1.13.5); before this change bun.lock resolved axios@1.13.6 — within the vulnerable range.

Adding "axios": "^1.15.0" to the root package.json overrides bumps the transitive resolution to axios@1.15.1 (latest compatible 1.x). Both Slack range specs accept it without API surface changes — no downstream code touches axios directly.

Supersedes coleam00#1153. Credits @stefans71 for identifying and reporting the vulnerability; their PR was stale on the lockfile (0.3.5 → 0.3.6 drift on dev), so this is a fresh one-line re-do on current dev. Closes coleam00#1053.

Co-authored-by: Stefans71 <stefans71@users.noreply.github.com>
Actionable comments posted: 3
🧹 Nitpick comments (5)
packages/providers/package.json (1)
22-22: Test script is getting unwieldy — consider consolidating.

The `test` script now chains ~25 individual `bun test <file>` invocations with `&&`, which fails fast on the first failing suite (so later suites won't run and give feedback) and keeps growing with each new test file. Bun supports passing multiple file arguments or a glob in a single invocation.

♻️ Suggested simplification

```diff
-    "test": "bun test src/claude/provider.test.ts && bun test src/codex/provider.test.ts && bun test src/registry.test.ts && bun test src/codex/binary-guard.test.ts && bun test src/codex/binary-resolver.test.ts && bun test src/codex/binary-resolver-dev.test.ts && bun test src/claude/binary-resolver.test.ts && bun test src/claude/binary-resolver-dev.test.ts && bun test src/community/pi/model-ref.test.ts && bun test src/community/pi/config.test.ts && bun test src/community/pi/event-bridge.test.ts && bun test src/community/pi/options-translator.test.ts && bun test src/community/pi/session-resolver.test.ts && bun test src/community/pi/provider.test.ts && bun test src/community/copilot/config.test.ts && bun test src/community/copilot/binary-resolver.test.ts && bun test src/community/copilot/provider.test.ts && bun test src/community/copilot/tool-restrictions.test.ts && bun test src/community/copilot/mcp-translation.test.ts && bun test src/community/copilot/skills-translation.test.ts && bun test src/community/copilot/structured-output.test.ts && bun test src/community/copilot/provider-hardening.test.ts && bun test src/community/copilot/agents-translation.test.ts",
+    "test": "bun test src/**/*.test.ts",
```

If the original sequencing matters (e.g., registry tests relying on other files' mocks having run first), the per-file flow can be preserved while still running the whole set via a single bun process.
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/providers/package.json` at line 22: the test script currently chains many individual "bun test <file>" commands with &&; replace this with a single bun invocation that accepts multiple files or a glob to run all tests in one process (e.g., use bun test "src/**/*.test.ts" or pass multiple file args to bun test) to simplify maintenance and avoid the long && chain; update the "test" npm script entry to use that single bun test command while preserving any required test ordering if necessary.

packages/providers/src/community/pi/event-bridge.ts (1)
154-158: Hoist the import to the top of the file.

Same note as in packages/providers/src/community/pi/provider.ts: the mid-file `import { tryParseStructuredOutput } from '../../shared/structured-output';` is purely stylistic — ES imports are hoisted — but it tends to trip lint rules and makes the module's dependency graph harder to scan. Prefer a top-of-file import plus either a separate `export { tryParseStructuredOutput }` there or a single `export { tryParseStructuredOutput } from '../../shared/structured-output';` re-export to preserve the back-compat surface.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/providers/src/community/pi/event-bridge.ts` around lines 154-158: move the inline import of tryParseStructuredOutput to the top of the module and replace the mid-file import+export with a top-level re-export; specifically, remove the mid-file "import { tryParseStructuredOutput } ..." and instead add either a top-of-file "import { tryParseStructuredOutput }" followed by an "export { tryParseStructuredOutput }" or a single top-level "export { tryParseStructuredOutput } from '...';" so the symbol is declared/re-exported at the module root for linters and clearer dependency scanning.

packages/providers/src/community/pi/provider.ts (1)
68-72: Move the import to the top of the file.

Placing `import { augmentPromptForJsonSchema } from '../../shared/structured-output';` in the middle of the module is non-idiomatic — ES module imports are hoisted anyway, so there's no semantic benefit from the placement, and most lint configs (`import/first`, `@typescript-eslint` conventions) will flag it. Move the import up with the other imports and keep the `export { augmentPromptForJsonSchema };` re-export alongside the comment at the current location if you want the "shared-but-re-exported" signal, or convert to a single top-level `export { augmentPromptForJsonSchema } from '../../shared/structured-output';`.

♻️ Proposed refactor

At the top of the file, alongside the existing imports:

```diff
 import { createArchonUIBridge, createArchonUIContext } from './ui-context-stub';
+import { augmentPromptForJsonSchema } from '../../shared/structured-output';
```

And replace lines 68-72 with a single re-export:

```diff
-// Structured-output prompt augmentation is shared across providers. Import
-// once for local use and re-export so existing callers and tests keep their
-// import path stable; new providers should import from `../../shared/structured-output`.
-import { augmentPromptForJsonSchema } from '../../shared/structured-output';
-export { augmentPromptForJsonSchema };
+// Structured-output prompt augmentation is shared across providers. Re-exported
+// here so existing Pi callers/tests keep their import path stable; new providers
+// should import from `../../shared/structured-output`.
+export { augmentPromptForJsonSchema } from '../../shared/structured-output';
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/providers/src/community/pi/provider.ts` around lines 68-72: move the import of augmentPromptForJsonSchema to the top with the other imports and replace the mid-file import with a re-export; specifically, remove the inline import from the middle of the module, add the import or single-line re-export at the top alongside other imports, and keep or add "export { augmentPromptForJsonSchema }" (or use the re-export form) to preserve the public API; update references to augmentPromptForJsonSchema accordingly.

packages/providers/src/shared/structured-output.ts (1)
51-54: Fence stripping accepts only `json` or unlabeled fences.

The opening-fence regex ``/^```(?:json)?\s*\n?/i`` strips plain and `json`-labeled fences but leaves other labels (e.g. `javascript`, `ts`) in place, which will cause `JSON.parse` to fail and return `undefined`. That's a safe degradation, but if instruction-following drift produces labels like `javascript` or `typescript` it will silently miss the structured output. Consider widening the match to any identifier, since the fence context already implies the content is supposed to be the JSON:

♻️ Suggested tweak

````diff
-    .replace(/^```(?:json)?\s*\n?/i, '')
+    .replace(/^```[\w-]*\s*\n?/i, '')
````

This accepts any single-word language tag (including empty) without compromising the boundary anchoring.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/providers/src/shared/structured-output.ts` around lines 51-54: the opening-fence strip in the cleaned variable only matches unlabeled or "json" fences; update the first .replace used to build cleaned so it accepts any single-word language tag (letters/digits/underscore/hyphen) or no tag instead of only "json", while preserving the start anchor and optional trailing newline/whitespace; keep the existing closing-fence strip unchanged so the overall trimming still removes fenced blocks before JSON.parse.

packages/providers/src/community/copilot/tool-restrictions.test.ts (1)
110-123: Add a test case for overlapping allow/deny lists to verify documented SDK precedence.

The current test uses disjoint lists and doesn't validate the documented behavior where `availableTools` takes precedence when a tool appears in both lists. Add a case with the same tool in both `allowed_tools` and `denied_tools` to ensure the provider correctly propagates this conflict to the SDK, which will handle it per its documented contract.

Suggested test case

```diff
   test('passes both through when both present (SDK enforces availableTools precedence)', async () => {
     await drain(
       new CopilotProvider().sendQuery('hi', '/repo', undefined, {
         model: 'gpt-5',
         nodeConfig: {
           allowed_tools: ['read_file'],
           denied_tools: ['shell'],
         },
       })
     );

     const cfg = capturedSessionConfigs[0] ?? {};
     expect(cfg.availableTools).toEqual(['read_file']);
     expect(cfg.excludedTools).toEqual(['shell']);
   });
+
+  test('defines behavior when the same tool is both allowed and denied', async () => {
+    await drain(
+      new CopilotProvider().sendQuery('hi', '/repo', undefined, {
+        model: 'gpt-5',
+        nodeConfig: {
+          allowed_tools: ['shell', 'read_file'],
+          denied_tools: ['shell'],
+        },
+      })
+    );
+
+    const cfg = capturedSessionConfigs[0] ?? {};
+    expect(cfg.availableTools).toEqual(['shell', 'read_file']);
+    expect(cfg.excludedTools).toEqual(['shell']);
+  });
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/providers/src/community/copilot/tool-restrictions.test.ts` around lines 110-123: add a new test that sends a query via CopilotProvider.sendQuery with the same tool in both nodeConfig.allowed_tools and nodeConfig.denied_tools and then assert that the captured session config still lists that tool in both availableTools and excludedTools; this verifies the provider propagates overlapping allow/deny lists to the SDK, which enforces the documented precedence.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/providers/src/community/copilot/provider-hardening.test.ts`:
- Around line 108-123: Test currently only ensures sendAndWait is not called but
doesn't ensure the provider avoids creating an external Copilot session; update
CopilotProvider.sendQuery to check abortSignal?.aborted at the very start and
throw an AbortError before any session setup or external calls, and add/adjust
the test to assert that any session-creation mock (e.g., mockCreateSession or
whatever function initializes a Copilot session) is not called in addition to
mockSendAndWait remaining at 0 calls.
In `@packages/providers/src/community/copilot/skills-translation.test.ts`:
- Around line 123-127: The assertions currently use toContain which allows
partial matches; tighten them to assert exact absolute paths by replacing the
two toContain checks with exact equality checks against resolved absolute paths
(e.g., compute expectedAlpha = path.resolve(join('.agents','skills','alpha'))
and expectedBeta = path.resolve(join('.claude','skills','beta')) and then assert
dirs?.[0] === expectedAlpha and dirs?.[1] === expectedBeta using toBe or
toEqual). Update the test variables capturedSessionConfigs, cfg.skillDirectories
and the two expectations accordingly so the test fails on relative-path
regressions.
In `@packages/providers/src/shared/skills.ts`:
- Around line 62-74: The loop that builds candidate = join(root, rawName) in the
skill resolution (inside the for...of skillNames loop, using seen, roots,
candidate, existsSync(join(candidate, 'SKILL.md'))) is vulnerable to
path-traversal via rawName; validate/sanitize rawName before joining: reject or
skip names that are absolute or contain ".." or path separators that would
escape the root, or safer — compute candidate = resolve(root, rawName) and then
ensure path.relative(root, candidate) does not start with ".." and candidate is
not absolute; only then call existsSync(join(candidate, 'SKILL.md')). Apply this
check where candidate is constructed so malicious rawName values cannot probe
outside the intended roots.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 147adf7f-4bce-4022-b6a6-1e57d6268099
📒 Files selected for processing (18)
- packages/docs-web/src/content/docs/getting-started/ai-assistants.md
- packages/providers/package.json
- packages/providers/src/community/copilot/agents-translation.test.ts
- packages/providers/src/community/copilot/binary-resolver.test.ts
- packages/providers/src/community/copilot/binary-resolver.ts
- packages/providers/src/community/copilot/capabilities.ts
- packages/providers/src/community/copilot/mcp-translation.test.ts
- packages/providers/src/community/copilot/provider-hardening.test.ts
- packages/providers/src/community/copilot/provider.ts
- packages/providers/src/community/copilot/skills-translation.test.ts
- packages/providers/src/community/copilot/structured-output.test.ts
- packages/providers/src/community/copilot/tool-restrictions.test.ts
- packages/providers/src/community/pi/event-bridge.ts
- packages/providers/src/community/pi/options-translator.ts
- packages/providers/src/community/pi/provider.ts
- packages/providers/src/registry.test.ts
- packages/providers/src/shared/skills.ts
- packages/providers/src/shared/structured-output.ts
✅ Files skipped from review due to trivial changes (1)
- packages/providers/src/community/copilot/capabilities.ts
🚧 Files skipped from review as they are similar to previous changes (4)
- packages/docs-web/src/content/docs/getting-started/ai-assistants.md
- packages/providers/src/community/copilot/binary-resolver.test.ts
- packages/providers/src/community/copilot/binary-resolver.ts
- packages/providers/src/community/copilot/provider.ts
```ts
test('rejects early when abortSignal is already aborted', async () => {
  const controller = new AbortController();
  controller.abort();

  const { error } = await collect(
    new CopilotProvider().sendQuery('hi', '/repo', undefined, {
      model: 'gpt-5',
      abortSignal: controller.signal,
    })
  );

  expect(error).toBeDefined();
  expect(error?.name).toBe('AbortError');
  // sendAndWait must NOT have been entered
  expect(mockSendAndWait).toHaveBeenCalledTimes(0);
});
```
Assert already-aborted requests do not create a Copilot session.
This currently only proves sendAndWait is skipped. For a true early-abort guard, the provider should avoid the external session setup path too.
Proposed test assertion:

```diff
 expect(error).toBeDefined();
 expect(error?.name).toBe('AbortError');
+expect(mockCreateSession).toHaveBeenCalledTimes(0);
 // sendAndWait must NOT have been entered
 expect(mockSendAndWait).toHaveBeenCalledTimes(0);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
test('rejects early when abortSignal is already aborted', async () => {
  const controller = new AbortController();
  controller.abort();
  const { error } = await collect(
    new CopilotProvider().sendQuery('hi', '/repo', undefined, {
      model: 'gpt-5',
      abortSignal: controller.signal,
    })
  );
  expect(error).toBeDefined();
  expect(error?.name).toBe('AbortError');
  expect(mockCreateSession).toHaveBeenCalledTimes(0);
  // sendAndWait must NOT have been entered
  expect(mockSendAndWait).toHaveBeenCalledTimes(0);
});
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/providers/src/community/copilot/provider-hardening.test.ts` around
lines 108 - 123, Test currently only ensures sendAndWait is not called but
doesn't ensure the provider avoids creating an external Copilot session; update
CopilotProvider.sendQuery to check abortSignal?.aborted at the very start and
throw an AbortError before any session setup or external calls, and add/adjust
the test to assert that any session-creation mock (e.g., mockCreateSession or
whatever function initializes a Copilot session) is not called in addition to
mockSendAndWait remaining at 0 calls.
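A minimal sketch of that early-abort guard, under the assumption that the provider's query method is an async generator (the name `sendQuerySketch` and the string-yielding body are illustrative, not the provider's real shape):

```typescript
// Sketch: check the signal before any session setup. When the caller
// hands us an already-aborted signal, throw an AbortError immediately —
// neither session creation nor sendAndWait is ever reached.
async function* sendQuerySketch(
  prompt: string,
  abortSignal?: AbortSignal
): AsyncGenerator<string, void, undefined> {
  if (abortSignal?.aborted) {
    const err = new Error('Request aborted before session setup');
    err.name = 'AbortError';
    throw err;
  }
  // ...session creation and streaming would follow here...
  yield prompt;
}
```

Because the check runs before any external call, a test can assert both the session-creation mock and the send mock stay at zero calls.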
```ts
const cfg = capturedSessionConfigs[0] ?? {};
const dirs = cfg.skillDirectories as string[] | undefined;
expect(dirs).toHaveLength(2);
expect(dirs?.[0]).toContain(join('.agents', 'skills', 'alpha'));
expect(dirs?.[1]).toContain(join('.claude', 'skills', 'beta'));
```
Assert exact absolute skill directories.
This test says it verifies absolute paths, but toContain(...) would also pass for relative suffixes. Pin the full expected paths so SDK-breaking relative-path regressions fail.
Proposed assertion tightening
```diff
 const cfg = capturedSessionConfigs[0] ?? {};
 const dirs = cfg.skillDirectories as string[] | undefined;
 expect(dirs).toHaveLength(2);
-expect(dirs?.[0]).toContain(join('.agents', 'skills', 'alpha'));
-expect(dirs?.[1]).toContain(join('.claude', 'skills', 'beta'));
+expect(dirs).toEqual([
+  join(workDir, '.agents', 'skills', 'alpha'),
+  join(tmpRoot, 'home', '.claude', 'skills', 'beta'),
+]);
```

📝 Committable suggestion
```ts
const cfg = capturedSessionConfigs[0] ?? {};
const dirs = cfg.skillDirectories as string[] | undefined;
expect(dirs).toHaveLength(2);
expect(dirs).toEqual([
  join(workDir, '.agents', 'skills', 'alpha'),
  join(tmpRoot, 'home', '.claude', 'skills', 'beta'),
]);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/providers/src/community/copilot/skills-translation.test.ts` around
lines 123 - 127, The assertions currently use toContain which allows partial
matches; tighten them to assert exact absolute paths by replacing the two
toContain checks with exact equality checks against resolved absolute paths
(e.g., compute expectedAlpha = path.resolve(join('.agents','skills','alpha'))
and expectedBeta = path.resolve(join('.claude','skills','beta')) and then assert
dirs?.[0] === expectedAlpha and dirs?.[1] === expectedBeta using toBe or
toEqual). Update the test variables capturedSessionConfigs, cfg.skillDirectories
and the two expectations accordingly so the test fails on relative-path
regressions.
```ts
for (const rawName of skillNames) {
  if (typeof rawName !== 'string' || rawName.length === 0) continue;
  if (seen.has(rawName)) continue;
  seen.add(rawName);

  let found: string | undefined;
  for (const root of roots) {
    const candidate = join(root, rawName);
    if (existsSync(join(candidate, 'SKILL.md'))) {
      found = candidate;
      break;
    }
  }
```
Skill name is joined without path-traversal sanitization.
join(root, rawName) will happily resolve `..` segments (e.g. rawName = "../../etc" → candidate outside any intended root). Because the existence check is on join(candidate, 'SKILL.md'), the practical impact is bounded — an attacker would need a SKILL.md to already exist at the traversed path — but:

- It still lets a crafted `skills:` entry probe the filesystem outside the four documented roots, which leaks information via the warning string (`missing` list names).
- If a future change ever starts reading files from the resolved paths beyond `SKILL.md` (or passes them to a model/shell), this becomes a real traversal vector.

Skill names come from workflow YAML (usually trusted), so this is minor, but an early reject is cheap insurance:
🛡️ Suggested validation
```diff
 for (const rawName of skillNames) {
   if (typeof rawName !== 'string' || rawName.length === 0) continue;
+  // Reject path separators and traversal segments — skill names must be
+  // simple directory names within the configured roots.
+  if (rawName.includes('/') || rawName.includes('\\') || rawName === '..' || rawName === '.') {
+    missing.push(rawName);
+    continue;
+  }
   if (seen.has(rawName)) continue;
```

📝 Committable suggestion
```ts
for (const rawName of skillNames) {
  if (typeof rawName !== 'string' || rawName.length === 0) continue;
  // Reject path separators and traversal segments — skill names must be
  // simple directory names within the configured roots.
  if (rawName.includes('/') || rawName.includes('\\') || rawName === '..' || rawName === '.') {
    missing.push(rawName);
    continue;
  }
  if (seen.has(rawName)) continue;
  seen.add(rawName);

  let found: string | undefined;
  for (const root of roots) {
    const candidate = join(root, rawName);
    if (existsSync(join(candidate, 'SKILL.md'))) {
      found = candidate;
      break;
    }
  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/providers/src/shared/skills.ts` around lines 62 - 74, The loop that
builds candidate = join(root, rawName) in the skill resolution (inside the
for...of skillNames loop, using seen, roots, candidate,
existsSync(join(candidate, 'SKILL.md'))) is vulnerable to path-traversal via
rawName; validate/sanitize rawName before joining: reject or skip names that are
absolute or contain ".." or path separators that would escape the root, or safer
— compute candidate = resolve(root, rawName) and then ensure path.relative(root,
candidate) does not start with ".." and candidate is not absolute; only then
call existsSync(join(candidate, 'SKILL.md')). Apply this check where candidate
is constructed so malicious rawName values cannot probe outside the intended
roots.
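The resolve/relative containment check the prompt sketches can be reduced to a small predicate; this is an illustrative standalone version, not the actual `skills.ts` code:

```typescript
import { isAbsolute, relative, resolve } from 'node:path';

// Sketch: accept a skill name only when it resolves to a location
// strictly inside `root`. Rejects absolute names, `..` traversal,
// and the empty/self cases ('' and '.').
function isInsideRoot(root: string, name: string): boolean {
  if (isAbsolute(name)) return false;
  const base = resolve(root);
  const candidate = resolve(base, name);
  const rel = relative(base, candidate);
  return rel !== '' && !rel.startsWith('..') && !isAbsolute(rel);
}
```

Note this predicate tolerates nested names like `alpha/beta`; the suggested validation above is stricter and rejects any separator outright. A caller would run whichever check fits before `join(root, rawName)` and push rejected names onto `missing`.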
…"not a git repo" (coleam00#1332)

* fix(cli): surface stale-workspace registration error instead of fake "not a git repo"

When workflowRunCommand auto-registers an unregistered repo, a stale ~/.archon/workspaces/<owner>/<repo>/source symlink (pointing to an old checkout) causes createProjectSourceSymlink() in @archon/paths to throw:

Source symlink at <linkPath> already points to <existing>, expected <target>

The CLI caught that in a try/catch, logged it at warn level, continued with `codebase = null`, and then the isolation / resume branches hit their "codebase missing" fallback and threw the generic:

Cannot create worktree: not in a git repository.

That message is false — the repo is valid; the Archon workspace entry is stale. It sends users down the wrong diagnostic path (checking git config, permissions, etc.) instead of pointing at the workspace dir.

Fix: preserve the registration error on a new `codebaseRegistrationError` local, and at both fallback sites (resume + worktree-creation) check it before the generic "not a git repo" branch. When set, throw a truthful:

Cannot {create worktree,resume}: repository registration failed. Error: <original message> Hint: Remove the stale workspace entry at <dir> and retry, or use --no-worktree to skip isolation.

The hint's exact path comes from a small parser that extracts the workspace directory from the known "Source symlink at …" format; when the message shape doesn't match (future error text changes), the parser returns null and we fall back to a generic "check registration under <archon-home>/workspaces" hint — safe degradation.

Regression test in workflow.test.ts asserts the new error message and negatively asserts the old "not in a git repository" string is gone.

Supersedes coleam00#1157 — that PR was draft + CONFLICTING against current dev, and also mentioned Windows test-compat changes that weren't in the diff (pruned scope). This is a fresh re-do focused strictly on coleam00#1146. Closes coleam00#1146.

Co-authored-by: Bortlesboat <Bortlesboat@users.noreply.github.com>

* review: add resume-path test, null-fallback test, update troubleshooting docs

Addresses multi-agent review feedback on this PR:

- Add regression test for the --resume fallback site (the worktree-create site was already covered; the resume site had identical wiring but zero test coverage).
- Add test for the unrecognized-error-shape branch of buildRegistrationFailureError so the generic workspace hint is pinned (prevents accidental inversion of the stale-entry vs generic-hint ternary).
- Update the troubleshooting page to key on the new "Cannot create worktree: repository registration failed." message. Users hitting the new error won't find the page under the old heading, and the "In the future..." note is obsolete now that the error itself contains the cleanup path.
- Trim both new docblocks: keep the load-bearing cross-package error string contract in extractStaleWorkspaceEntry, drop narration of what the code already shows. Drop the "Before this helper existed..." paragraph from buildRegistrationFailureError — that's CHANGELOG material. Drop PR-reference suffix from the test section divider.

* review: guard getArchonHome in hint + export parser for direct tests

Two follow-up fixes to the multi-agent review commit (f32f002):

CodeRabbit finding — unguarded getArchonHome() in the fallback hint. If getArchonHome() ever throws (misconfigured env vars, permission issues on the resolution path), the registration-failure Error would never get constructed: we'd throw a secondary home-resolution error that masks the root cause. Wrap the fallback branch in try/catch — prefer losing the exact path in the hint over replacing the actionable registration error. A safe generic hint ("Check your Archon workspace registration and retry") takes over when getArchonHome() throws. The original error.message is always embedded verbatim in the re-thrown Error.

S2 — export extractStaleWorkspaceEntry for direct table tests. The parser is where the cross-package string contract with @archon/paths actually lives; direct tests against it are cheaper than end-to-end CLI tests and pin the edge cases:

- POSIX path with forward slashes (typical unix user)
- Windows path with backslashes (verifies Math.max(lastIndexOf / , lastIndexOf \))
- Unrelated error message (no prefix) → null
- Prefix matches but delimiter missing → null
- Source path without any separator → null (guards against returning empty string, which would produce a nonsense "Remove the stale workspace entry at " hint)
- Empty string → null

Six new cases in the test file. The claim of Windows support in the PR description is now actually verified.

* fix(test): make generic-hint assertion path-separator agnostic

Windows test runner (CI) hit:

Expected to contain: "Check your Archon workspace registration under /home/test/.archon/workspaces"
Received: "... under \home\test\.archon\workspaces and retry, ..."

path.join normalizes to `\` on Windows and `/` on POSIX. The test hardcoded forward slashes in the expected substring. Split into two separator-agnostic asserts: the prefix up to "under", then `/workspaces\b/` regex for the final path segment. Behavior doesn't change — the hint still gets the full path.join'd workspaces dir on either platform.

---------

Co-authored-by: Bortlesboat <Bortlesboat@users.noreply.github.com>
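The parser contract described above can be illustrated with a standalone sketch (the real `extractStaleWorkspaceEntry` lives in the CLI package; this body is a plausible reconstruction from the documented message format, not the shipped code):

```typescript
// Sketch: pull the workspace directory out of
//   "Source symlink at <linkPath> already points to <existing>, expected <target>"
// Returns null on any shape mismatch so callers fall back to a generic
// hint. The workspace entry is the parent dir of the `source` symlink.
function extractStaleWorkspaceEntry(message: string): string | null {
  const prefix = 'Source symlink at ';
  if (!message.startsWith(prefix)) return null;
  const rest = message.slice(prefix.length);
  const end = rest.indexOf(' already points to ');
  if (end <= 0) return null;
  const linkPath = rest.slice(0, end);
  // Handle both POSIX and Windows separators.
  const cut = Math.max(linkPath.lastIndexOf('/'), linkPath.lastIndexOf('\\'));
  if (cut <= 0) return null;
  return linkPath.slice(0, cut);
}
```

Returning null on every mismatch is the safe-degradation property the commit relies on: a future change to the error text downgrades the hint rather than breaking the re-thrown error.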
…th-reason dialog (coleam00#1329)

* fix(server,web,workflows): web approval gates auto-resume + reject-with-reason dialog

Fixes three tightly-coupled bugs that made web approval gates unusable:

1. orchestrator-agent did not pass parentConversationId to executeWorkflow for any web-dispatched foreground / interactive / resumable run. Without that field, findResumableRunByParentConversation (the machinery the CLI relies on for resume) couldn't find the paused run from the same conversation on a follow-up message, and the approve/reject API handlers had no conversation to dispatch back to.

2. POST /api/workflows/runs/:runId/{approve,reject} recorded the decision and returned "Send a message to continue the workflow." — the workflow never actually resumed. Added tryAutoResumeAfterGate() that mirrors what workflowApproveCommand / workflowRejectCommand already do on the CLI: look up the parent conversation, dispatch `/workflow run <name> <userMessage>` back through dispatchToOrchestrator. Failures are non-fatal — the user can still send a manual message as a fallback.

3. The during-streaming cancel-check in dag-executor aborted any streaming node whenever the run status left 'running', including the legitimate transition to 'paused' that an approval node performs. A concurrent AI node in the same DAG layer now tolerates 'paused' and finishes its own stream; only truly terminal / unknown states (null, cancelled, failed, completed) abort the in-flight stream.

Web UI: ConfirmRunActionDialog gains an optional reasonInput prop (label + placeholder) that renders a textarea and passes the trimmed value to onConfirm. WorkflowRunCard (dashboard) and WorkflowProgressCard (chat) both use it for Reject now — the chat card was still on window.confirm, which was both inconsistent with the dashboard and couldn't collect a reason. The trimmed reason threads through to $REJECTION_REASON in the workflow's on_reject prompt.

Supersedes coleam00#1147. @jonasvanderhaegen surfaced the root cause and shape of the fix; that PR was 87 commits stale and pre-dated the reject-UX upgrade (coleam00#1261 area), so this is a fresh re-do on current dev.

Tests:
- packages/server/src/routes/api.workflow-runs.test.ts — 5 new cases: approve with parent dispatches; approve without parent returns "Send a message"; approve with deleted parent conversation skips safely; reject dispatches on-reject flows; reject that cancels (no on_reject) does NOT dispatch.
- packages/core/src/orchestrator/orchestrator.test.ts — updated the two synthesizedPrompt-dispatch tests for the new executeWorkflow arity.

Closes coleam00#1131.

Co-authored-by: Jonas Vanderhaegen <7755555+jonasvanderhaegen@users.noreply.github.com>

* fix: address multi-agent review findings for web approval auto-resume

C1 (critical) — cross-adapter misrouting guard: tryAutoResumeAfterGate now checks parentConv.platform_type === 'web' before dispatching. Non-web parents (Slack/Telegram/GitHub/Discord) being approved from the dashboard skip auto-resume rather than dispatching a Slack thread_ts or Telegram chat_id through the web adapter's lock manager.

C2 (critical) — fire-and-forget dispatch replaced with await: void dispatchToOrchestrator() meant the "Resuming workflow." response fired before async work completed, and the outer try/catch couldn't observe dispatch failures. Changed to await; response now accurately reflects dispatch outcome.

I1 — replaced logPrefix string-template (which produced 3-segment api.workflow_*.dispatched event names violating {domain}.{action}_{state}) with literal event names per action, branched inside the helper. Accepts action: 'approve' | 'reject' instead.

I2 — corrected misleading "foreground/interactive" qualifier in the approve-endpoint comment; background web dispatches also set parent_conversation_id via the pre-created run, so they auto-resume too.

I3 — extracted shouldContinueStreamingForStatus() as a small exported policy and added 7 unit tests covering running/paused/null/cancelled/failed/completed/unknown. Full-integration coverage of the paused-tolerance invariant would require manipulating the 10s CANCEL_CHECK_INTERVAL_MS, which is flaky-prone; unit test of the policy function captures the same invariant deterministically.

I4 — updated approval-nodes.md and authoring-workflows.md to reflect that Web UI approve/reject now auto-resumes (no "send a follow-up message" copy), documented the reject-with-reason dialog and $REJECTION_REASON flow, and called out the cross-platform caveat.

S1 — rewrote streaming status check as positive shouldContinue safe-list via the extracted policy function, matching the inline comment.

S2 — inlined handleReject on the dashboard rather than squeezing rejectWorkflowRun through runAction with a closure; keeps runAction narrow for the single-arg lifecycle actions.

S5 — new regression test covering the non-web-parent skip path (slack-platform parent → dispatch skipped → response falls back to "Send a message to continue").

S6 — removed stale reference to runAction in ConfirmRunActionDialog's onConfirm JSDoc (no longer accurate now that WorkflowProgressCard calls the dialog without runAction).

S7 — fixed misleading "user can resume manually by sending any message" docstring (resume is triggered by re-running the workflow command, not by an arbitrary message).

Skipped as out-of-scope:
- S3 — cancelWorkflowRun rowCount check (pre-existing defect; separate PR)
- S4 — tightening expect.anything() to UUID regex (deferred)
- S8 — 12-positional-arg executeWorkflow → options-bag refactor (tracked follow-up)

bun run validate green locally; 68 tests in api.workflow-runs.test.ts (up from 67), 173 in dag-executor.test.ts (up from 166).
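The extracted policy boils down to a two-state safe-list; a sketch consistent with the commit's description (the real function's status type and export shape may differ):

```typescript
// Sketch: a streaming node keeps going only while the run is 'running'
// or 'paused' — a sibling approval node pausing the run must not abort
// concurrent in-flight streams. Anything else (null, 'cancelled',
// 'failed', 'completed', or an unknown value) aborts.
type RunStatus = string | null;

function shouldContinueStreamingForStatus(status: RunStatus): boolean {
  return status === 'running' || status === 'paused';
}
```

Writing it as a positive safe-list (rather than `!== 'running'`) is what makes the paused-tolerance invariant testable as a pure function.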
* review: close I1/I2/I3/I4/I6 — paused tolerance in loop + emitter, resume test, useId

I1 (loop inter-iteration check) — dag-executor.ts:1715
Used `!== 'running'` in the loop node's between-iteration status check. A sibling approval node pausing the run in the same topological layer would abort the loop mid-iteration with "Loop node '<id>' stopped at iteration N (paused)". Switched to the shared shouldContinueStreamingForStatus helper so paused is tolerated — same semantics the streaming check got. Extended inline comment explains the sibling-layer concurrency reason.

I2 (skipIfStatusChanged emitter unregister) — dag-executor.ts:2886
At DAG-finalization writes the helper correctly skipped writing on any non-running state (paused included — don't mark a paused run complete), but it *also* called getWorkflowEventEmitter().unregisterRun() which broke SSE observability for a run that's still live (waiting for user approval). Split the two responsibilities: skip the write for all non-running states, but only unregister the emitter for terminal states (cancelled / deleted / completed / failed). `paused` keeps the emitter registered so resume stays visible on the dashboard.

I3 (foreground_resume_detected branch untested) — orchestrator-agent.test.ts
That branch was modified as part of the original fix (added parentConversationId as 11th positional arg) but no existing test configured mockFindResumableRunByParentConversation to return non-null. A positional mistake (e.g. accidentally swapping issueContext and parentConversationId) would silently break auto-resume with no failing test. New regression test configures the mock, asserts both the cwd comes from the resumable run's working_path AND parentConversationId is passed correctly at position 10.

I4 (null-parent log level) — api.ts tryAutoResumeAfterGate
`getConversationById` returning null is a data-integrity signal (the parent conversation was deleted while the run was paused) — worth surfacing at info level so operators notice, not hiding at debug. Missing platform_conversation_id on an existing row would be an unusual DB state and stays at debug. Added `parentDeleted: boolean` to the log context so the two cases are distinguishable in observability.

I6 (hardcoded DOM id) — ConfirmRunActionDialog.tsx
`id="confirm-run-action-reason"` collided when multiple dialog instances share the same page (Radix portals mitigate in practice but the code was fragile). Switched to React.useId() so each instance gets a unique id — htmlFor/id wiring preserved.

S11 (arity-only assertion) — orchestrator-agent.test.ts:1092 area
The interactive-workflow-on-web test asserted mockExecuteWorkflow was called, but nothing about the args. Added a specific assertion that position 10 (parentConversationId) equals 'conv-1' (the caller conversation id) — pins the wiring that I1/I2 depend on being correct.

Deferred (from review S1-S10, I5, I7):
- S1 (ExecuteWorkflowOptions bag) — tracked as standalone follow-up; 12 positional args with 2 adjacent optionals is a real maintenance hazard but the refactor deserves its own PR.
- S7 (WHY comment on non-web else branch) — review text says the branch "correctly omits" parentConversationId but the code passes it; the combination with the web-parent guard in tryAutoResumeAfterGate is intentional. Not adding a justify-what-we-don't-do comment.
- S2/S3/S4/S5/S8/S9/S10 — pure polish (event-map ternary, platformConvId inlining, shared constant for REJECTION_REASON_INPUT, onChange arrow shorthand, discriminated union, docblock trim, suffix comment drop)
- I5 (soften "Resuming workflow." to "— check the dashboard for progress") — users clicking from the dashboard are already on the dashboard; the current text is accurate (enqueue completed) and concise.
- I7 (test dispatch-throws path) — covered implicitly by the try/catch branch of tryAutoResumeAfterGate returning false; a direct test would require mocking handleMessage to throw and would couple to dispatchToOrchestrator internals.

bun run validate green; 189 dag-executor tests, 98 orchestrator-agent tests, 68 api.workflow-runs tests — all the new cases pass.

---------

Co-authored-by: Jonas Vanderhaegen <7755555+jonasvanderhaegen@users.noreply.github.com>
Release 0.3.7
Bun 1.3.11's bytecode generation produces broken output for our
module graph ("TypeError: Expected CommonJS module to have a function
wrapper" at runtime, reproduces natively on darwin-arm64 too). The
compile already fails with "Failed to generate bytecode for ./cli.js"
and proceeds without it, but the resulting binary still crashes at
module instantiation.
Only --minify remains. Binary size is unchanged in practice
(bytecode was being skipped after the error anyway), but now the
build succeeds cleanly and the binary runs.
Surfaced while building the v0.3.7 release.
…failure recovery
Two gaps caused the v0.3.7 release to ship an empty GitHub Release and break
install.sh for all users for ~30 min. Both are now handled in the skill:
1. Step 1.5 — mandatory pre-flight `bun build --compile` smoke test before
any user-visible step (version bump, PR, tag). Compiles the native target
only (~15-30s), runs `archon version`, greps the output for runtime-error
markers, aborts if anything fails. This would have caught both v0.3.7
bugs (the --bytecode flag producing broken bytecode, and the Pi SDK's
readFileSync of a path only present in node_modules) before the tag
was ever pushed. Escape hatch: fix the underlying bug on a feature
branch and re-run /release — no workaround to "just skip".
2. Step 10 now detects deterministic CI failure (two reruns with same
error) and jumps to a new "Recovery: deterministic release CI failure"
section instead of looping until timeout. Recovery documents:
- Delete the empty GitHub Release (keep the tag — immutability)
so releases/latest falls back to the prior working version and
install.sh stops 404-ing within seconds.
- Diagnose via `gh run view --log-failed`, with a cheat-sheet of
common failure modes (bundler bytecode, module-init crashes,
Windows timeouts).
- Cut the NEXT patch version with the fix — never reuse the broken
tag, never force-push, never upload binaries built from a
different SHA onto the original release.
- CHANGELOG note template that references the broken prior version
so the audit trail is clear for anyone inspecting tags later.
Also added two Important Rules:
- Never skip Step 1.5. The ~30s cost keeps failure modes local.
- If Step 1.5 fails, abort the release. No "skip and hope CI passes."
Skill grows from 389 → 548 lines. No changes to Steps 2–9, 11, 12.
…coleam00#1355)

* fix(providers/pi): lazy-load Pi SDK to unbreak compiled archon binary

Pi's @mariozechner/pi-coding-agent/dist/config.js runs `readFileSync(getPackageJsonPath(), 'utf-8')` at module top level. Inside a compiled archon binary `getPackageJsonPath()` resolves to `dirname(process.execPath) + '/package.json'`, which doesn't exist next to `/usr/local/bin/archon` — so archon crashes with ENOENT at startup before any command runs. v0.3.7's release binary build appeared to compile clean (CI fell over first on an unrelated --bytecode issue) but even the fixed-bytecode binary fails the same way locally.

Convert all Pi SDK value imports and Pi-dependent helper imports to dynamic imports inside `PiProvider.sendQuery()`. Type-only imports stay static (erased by TS). Effect: registering the Pi provider and creating an instance no longer loads Pi's SDK — load happens only when a Pi workflow actually runs. Claude and Codex providers keep their static import style (their SDKs have no module-init side effects that fail in a binary); see the file header comment in provider.ts for why Pi is the deliberate outlier. Class constructors (AuthStorage, ModelRegistry, SettingsManager) are accessed via `piCodingAgent.X` rather than destructured to keep eslint's naming-convention rule happy without a disable.

Add a regression test (provider-lazy-load.test.ts) that mocks @mariozechner/pi-coding-agent and @mariozechner/pi-ai, walks the same registerCommunityProviders() → getAgentProvider('pi') path the CLI and server take, and asserts neither SDK module was loaded. Runs in its own `bun test` invocation (mock.module is process-wide).

Verified locally: `bun build --compile --minify --target=bun-darwin-arm64` produces a binary whose `archon version` runs cleanly and reports Build: binary, where previously every command crashed at boot.

* address review: changelog, docstring, type tighten, Theme type-only import

From the multi-agent review on coleam00#1355:

- Add CHANGELOG.md entries under [Unreleased] ### Fixed for this PR's Pi lazy-load fix and the --bytecode removal from coleam00#1354 — both user-visible fixes (compiled binary unusable without them).
- Rewrite the test-header docstring in provider-lazy-load.test.ts to describe counter-based detection instead of "mocks throw" (contradicted the actual code directly below it).
- Tighten `lookupPiModel`'s first parameter from `unknown` to a local `GetModelFn` alias, moving the runtime-string cast to the single call site with a pointer to the docblock.
- Update the class docblock on `PiProvider` — "v1 capabilities are all false" was stale; PI_CAPABILITIES has seven flags set to true.
- `ui-context-stub.ts` imports `Theme` as a non-type value even though every usage is in a type position. Fold it into the existing `import type {…}` block so a future runtime-class `Theme` in Pi can't reintroduce an eager module load via this sibling.

No behavior change. Type-check, lint, format, tests, and a local darwin-arm64 compile + version smoke all clean.
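The lazy-load shape these commits describe — value imports deferred until the first real query, type-only imports left static — can be sketched like this (names and the stubbed loader are illustrative; the real code dynamically imports `@mariozechner/pi-coding-agent` inside `sendQuery`):

```typescript
// Sketch: defer loading a side-effectful SDK until first use, and cache
// the in-flight promise so concurrent callers share a single load.
type Sdk = { run: (prompt: string) => Promise<string> };

let loads = 0; // counts real loads; stays 0 until a query actually runs
let sdkPromise: Promise<Sdk> | undefined;

function loadSdk(): Promise<Sdk> {
  if (!sdkPromise) {
    loads += 1;
    // Real code: sdkPromise = import('@mariozechner/pi-coding-agent')…
    sdkPromise = Promise.resolve({ run: async (p: string) => `echo:${p}` });
  }
  return sdkPromise;
}

async function sendQuery(prompt: string): Promise<string> {
  const sdk = await loadSdk(); // first call pays the cost; later calls reuse
  return sdk.run(prompt);
}
```

Registering the provider touches none of this, so a compiled binary that never runs a Pi workflow never executes the SDK's top-level readFileSync.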
Release 0.3.8
…leam00#1357)

The two "Smoke-test Claude binary-path resolver" steps in .github/workflows/release.yml run `archon workflow run archon-assist "hello"` against a fresh `git init` temp repo with no origin. As of coleam00#1310's worktree-policy changes, default isolation auto-syncs the worktree with origin before creating it, which fails with "neither origin/HEAD nor origin/main exist" — hit before Claude's resolver is even reached, so the test assertions ("Claude Code not found", "CLAUDE_BIN_PATH") never match and the linux-x64 build aborts the whole release matrix.

The tests exercise the Claude resolver path, not worktree setup, so --no-worktree is the correct fix: it short-circuits validateAndResolveIsolation and skips the origin sync entirely. Matches the documented usage in CLAUDE.md (`archon workflow run quick-fix --no-worktree "Fix typo"`).

Surfaced while cutting v0.3.8 — the release CI failed deterministically on both builds. Binaries themselves are fine (the v0.3.7 Pi-lazy-load fix works; local pre-flight passed on --help). v0.3.8's GitHub Release has been deleted so `releases/latest` falls back to v0.3.6; next release will be v0.3.9 with this fix.
Status of @Wirasm's asks on coleam00#1351

His comment (link) asks for four things before merge. Here's what's done locally and what's still on you.

1. Models tested

Live, end-to-end on Linux + Copilot CLI 1.0.36 (
The PR description also lists three earlier scenarios on Windows + CLI 1.0.31 with
I did not run live tests for

2. Configs used

Auth on this Linux box:
No MCP, skills, or extension config staged for the live runs — those are exercised by the unit tests (52 tests in
3. Results — what was tested
Per-node timings (Pino logs from the run):
Workflow run id:

Capability matrix recap (from PR body, validated by where):
4. Smoke workflow with run output captured

Two files now under
To reproduce:

    copilot login   # one-time
    bun run cli workflow run e2e-copilot-smoke
    bun run cli workflow run e2e-copilot-all-features

Conflict status — resolved

Upstream
Resolution in commit
Upstream PR mergeability after push:

What I haven't done yet — your call
Recommendation: post the reply + offer (2)(3)(4) on demand. The current evidence already exceeds what the Pi PRs landed with at the same review stage.
Actionable comments posted: 9
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🟡 Minor comments (12)
.claude/skills/archon/references/authoring-commands.md-9-21 (1)
`9-21`: ⚠️ Potential issue | 🟡 Minor

**Add a language to the fenced block to satisfy markdownlint.**

Line 9 opens a fenced code block without a language (MD040). Please mark it as `text` since this is a directory tree.

Suggested fix
````diff
-```
+```text
 <repoRoot>/.archon/commands/   # 1. Repo-scoped (wins)
 ├── my-command.md              # Custom command for this repo
 ├── archon-assist.md           # Overrides the bundled archon-assist
 └── triage/                    # Subfolders allowed, 1 level deep
     └── review.md              # Resolves as 'review', not 'triage/review'

 ~/.archon/commands/            # 2. Home-scoped (user-level, shared across all repos)
 ├── review-checklist.md        # Personal helper available in every repo
 └── pr-style-guide.md

 <bundled defaults>             # 3. Shipped with Archon (archon-assist, etc.)
````
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.
In @.claude/skills/archon/references/authoring-commands.md around lines 9 - 21,
The fenced code block that begins with "<repoRoot>/.archon/commands/ # 1.
Repo-scoped (wins)" is missing a language tag (MD040); update the opening fence
from ``` to ```text so the directory-tree block is marked as text (i.e., replace
the triple backticks that start the block with ```text).</details> </blockquote></details> <details> <summary>packages/docs-web/src/content/docs/getting-started/ai-assistants.md-240-257 (1)</summary><blockquote> `240-257`: _⚠️ Potential issue_ | _🟡 Minor_ **Document the compiled-mode vendor fallback for Copilot CLI resolution.** The Copilot section currently documents env/config override paths but not the compiled-binary vendor fallback path/behavior described in this PR. That can mislead users into unnecessary global installs. <details> <summary>📝 Proposed docs patch</summary> ```diff For compiled Archon binaries, install the Copilot CLI yourself and point Archon at it if needed: @@ Optional override paths: @@ assistants: copilot: copilotCliPath: /absolute/path/to/copilot ``` ```diff +In compiled binary mode, Archon also checks its vendor Copilot binary location before falling back to PATH. If you ship Archon with a vendored Copilot CLI, no global install is required. ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@packages/docs-web/src/content/docs/getting-started/ai-assistants.md` around lines 240 - 257, The docs omit describing the compiled-mode vendor fallback used to resolve the Copilot CLI, which can cause users to unnecessarily install the global `@github/copilot` package; update the Getting Started / Copilot section to document the compiled-binary vendor fallback behavior and the exact resolution order (env var COPILOT_CLI_PATH, YAML config assistants.copilot.copilotCliPath, then the compiled-mode vendor fallback path), and indicate the typical vendor path used by compiled Archon binaries and when a global install is actually required; reference the Copilot CLI override examples already present so readers understand precedence. 
``` </details> </blockquote></details> <details> <summary>.archon/commands/maintainer-review-code-review.md-115-117 (1)</summary><blockquote> `115-117`: _⚠️ Potential issue_ | _🟡 Minor_ **Add language specifier to fenced code block.** The fenced code block is missing a language specifier, which triggers a markdownlint warning. Since this is an example output format (not executable code), specify `text` as the language. <details> <summary>📝 Suggested fix</summary> ```diff -``` +```text Code review complete. <N> CRITICAL, <N> HIGH, <N> MEDIUM, <N> LOW findings. Verdict: <ready|fixes-needed|blocking>. ``` ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary>Verify each finding against the current code and only fix it if needed.
In @.archon/commands/maintainer-review-code-review.md around lines 115 - 117,
The fenced code block in the example output (the triple-backtick block
containing "Code review complete. <N> CRITICAL...") lacks a language specifier;
update that fenced block to include the `text` language (i.e., change ``` to ```text) so the block renders as non-code text.

packages/providers/src/codex/binary-resolver.test.ts-90-104 (1)
`90-104`: ⚠️ Potential issue | 🟡 Minor

**Use `os.homedir()` to stay aligned with production.**

The probe path is built from `process.env.HOME`, but the resolver builds it from `homedir()` (`node:os`). On POSIX they usually agree, but if `HOME` is unset/overridden in CI, `homedir()` falls back to `getpwuid_r`, the mock won't match the real probe path, and the test will fail with a misleading "not found" error rather than a clear assertion failure. Source the expected path the same way the production code does.

♻️ Suggested change
```diff
+import { homedir } from 'node:os';
@@
 test('autodetects npm global install at ~/.npm-global/bin/codex (POSIX)', async () => {
   if (process.platform === 'win32') return; // POSIX-only probe
-  const home = process.env.HOME ?? '/Users/test';
-  const expected = `${home}/.npm-global/bin/codex`;
+  const expected = `${homedir()}/.npm-global/bin/codex`;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/providers/src/codex/binary-resolver.test.ts` around lines 90 - 104, The test "autodetects npm global install at ~/.npm-global/bin/codex (POSIX)" builds the expected path from process.env.HOME which can diverge from production's os.homedir(); update the test to derive the expected path using homedir() (imported from node:os or referenced the same way production does) so the mocked resolver.fileExists check (fileExistsSpy on resolver.fileExists) matches the path used by resolveCodexBinaryPath, ensuring the assertion and mock hit consistently..archon/workflows/defaults/archon-piv-loop.yaml-398-403 (1)
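The divergence the comment describes is easy to see directly: `os.homedir()` keeps resolving when `HOME` is absent, while `process.env.HOME` simply disappears. A minimal sketch, not from the PR:

```typescript
// Demonstrates why homedir() is the safer source for the expected path:
// on POSIX it falls back to the password database when HOME is unset,
// whereas process.env.HOME just becomes undefined.
import { homedir } from 'node:os';

const saved = process.env.HOME;
delete process.env.HOME;

const fromEnv = process.env.HOME; // undefined once HOME is removed
const fromOs = homedir();         // still resolves via the password database

if (saved !== undefined) process.env.HOME = saved;

console.log(fromEnv); // undefined
console.log(typeof fromOs);
```

Building the expected path from `homedir()` therefore matches the production probe regardless of the CI environment's `HOME` handling.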
`398-403`: ⚠️ Potential issue | 🟡 Minor

**`TASK_COUNT` fallback can swallow the friendly "no tasks" error.**

`grep -c` writes its count to stdout before exiting 1 when there are no matches, so on a malformed plan the substitution captures `"0"` from grep and then `"0"` from the `|| echo "0"` fallback. With `set -e` (Line 368) and the resulting multi-line `TASK_COUNT="0\n0"`, the next `[ "$TASK_COUNT" -eq 0 ]` errors with `integer expression required` and the script aborts before the `ERROR: No '### Task N:' sections found …` message is printed, defeating the whole point of this guard.

🛠️ Suggested fix
```diff
-  TASK_COUNT=$(grep -c "^### Task [0-9]" "$PLAN_FILE" 2>/dev/null || echo "0")
-  if [ "$TASK_COUNT" -eq 0 ]; then
+  TASK_COUNT=$(grep -c "^### Task [0-9]" "$PLAN_FILE" 2>/dev/null) || TASK_COUNT=0
+  if [ "$TASK_COUNT" -eq 0 ]; then
     echo "ERROR: No '### Task N:' sections found in $PLAN_FILE. Plan may be malformed."
     exit 1
   fi
```
0when grep fails or finds nothing) while ensuringTASK_COUNTis a single integer so the friendly error path actually runs.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.archon/workflows/defaults/archon-piv-loop.yaml around lines 398 - 403, The TASK_COUNT assignment can produce a multi-line value and break the integer check; change the capture to avoid the fallback echo adding a second "0" and ensure TASK_COUNT is a single integer: replace the line that sets TASK_COUNT (the grep -c ... || echo "0") with a safer sequence such as capturing grep's output but allowing failure (TASK_COUNT=$(grep -c "^### Task [0-9]" "$PLAN_FILE" 2>/dev/null || true)), then normalize it to a single integer (e.g. trim any newline and default to 0: TASK_COUNT=${TASK_COUNT%%$'\n'*}; TASK_COUNT=${TASK_COUNT:-0}) before the [ "$TASK_COUNT" -eq 0 ] check so the friendly error for PLAN_FILE is reliably reached.packages/workflows/src/schemas/workflow.ts-71-71 (1)
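The double-"0" capture can be reproduced in isolation. A standalone sketch (the temp plan file and pipeline mirror the review's example, not the workflow's actual script):

```typescript
// Reproduces the pitfall: grep -c prints its count ("0") to stdout *before*
// exiting 1 on zero matches, so `|| echo "0"` appends a second "0" and the
// command substitution captures a two-line value.
import { execSync } from 'node:child_process';
import { mkdtempSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

const plan = join(mkdtempSync(join(tmpdir(), 'plan-')), 'plan.md');
writeFileSync(plan, 'no task headings here\n');

// Broken form: two lines, "0" from grep plus "0" from echo.
const captured = execSync(
  `TASK_COUNT=$(grep -c "^### Task [0-9]" "${plan}" 2>/dev/null || echo "0"); printf '%s' "$TASK_COUNT"`,
  { encoding: 'utf8' }
);

// Fixed form: let the assignment carry grep's exit status, then default.
const fixed = execSync(
  `TASK_COUNT=$(grep -c "^### Task [0-9]" "${plan}" 2>/dev/null) || TASK_COUNT=0; printf '%s' "$TASK_COUNT"`,
  { encoding: 'utf8' }
);

console.log(JSON.stringify(captured)); // "0\n0" (two lines, not one integer)
console.log(JSON.stringify(fixed));    // "0"
```

With the fixed form, `[ "$TASK_COUNT" -eq 0 ]` sees a single integer and the friendly error path runs as intended.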
`71-71`: ⚠️ Potential issue | 🟡 Minor

**Trim tag strings at schema validation to reject whitespace-only entries.**

`z.string().min(1)` allows values like `" "`. That diverges from loader behavior (trim/drop), so the same workflow can validate in one path and normalize differently in another.

Proposed fix
```diff
-  tags: z.array(z.string().min(1)).optional(),
+  tags: z.array(z.string().trim().min(1)).optional(),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/workflows/src/schemas/workflow.ts` at line 71, The tags schema currently uses z.array(z.string().min(1)).optional(), which accepts whitespace-only strings; update the tags item schema so each tag is trimmed and empty results are rejected (or filtered) at validation. Specifically, modify the tags array element (the symbol "tags" in the workflow schema) to transform each string by trimming (e.g., .transform(s => s.trim())) and then require non-empty (e.g., .min(1) or a .refine check) so whitespace-only entries fail validation (or are removed) and behavior matches the loader normalization.packages/workflows/src/loader.ts-364-381 (1)
`364-381`: ⚠️ Potential issue | 🟡 Minor

**`tags` dedupe is still case-sensitive, so semantic duplicates can leak through.**

`Set` only dedupes exact strings. Inputs like `["GitLab", "gitlab"]` remain two tags despite the block's stated normalization intent.

Proposed fix
```diff
-    if (Array.isArray(raw.tags)) {
-      tags = [
-        ...new Set(
-          raw.tags
-            .filter((t): t is string => typeof t === 'string')
-            .map(t => t.trim())
-            .filter(t => t.length > 0)
-        ),
-      ];
+    if (Array.isArray(raw.tags)) {
+      const byCanonical = new Map<string, string>();
+      for (const rawTag of raw.tags) {
+        if (typeof rawTag !== 'string') continue;
+        const trimmed = rawTag.trim();
+        if (trimmed.length === 0) continue;
+        const canonical = trimmed.toLowerCase();
+        if (!byCanonical.has(canonical)) {
+          byCanonical.set(canonical, trimmed);
+        }
+      }
+      tags = [...byCanonical.values()];
     } else if (raw.tags !== undefined) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/workflows/src/loader.ts` around lines 364 - 381, The dedupe step for tags is currently case-sensitive (Set on the trimmed strings), so semantic duplicates like "GitLab" vs "gitlab" pass through; change the normalization/dedupe pipeline used when computing tags from raw.tags to perform case-insensitive deduplication: after filtering and trimming each string (the existing raw.tags filter/map), compute a casefold key (e.g., t.toLowerCase() or t.toLocaleLowerCase()) and use a Map keyed by that lowercased value to keep the first-seen original-cased trimmed tag, then set tags to the Map.values() array; update the code paths that currently use new Set(...) so they use this case-insensitive dedupe while preserving first-seen casing and still warn via getLog() on invalid raw.tags..archon/workflows/defaults/archon-adversarial-dev.yaml-120-120 (1)
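The Map-based approach in the diff can be checked in isolation. A minimal sketch (the `dedupeTags` helper is hypothetical, extracted for illustration; the loader inlines this logic):

```typescript
// Case-insensitive dedupe that keeps the first-seen casing, mirroring the
// proposed fix: lowercased canonical key, original trimmed value.
function dedupeTags(raw: unknown[]): string[] {
  const byCanonical = new Map<string, string>();
  for (const tag of raw) {
    if (typeof tag !== 'string') continue;  // drop non-strings
    const trimmed = tag.trim();
    if (trimmed.length === 0) continue;     // drop whitespace-only entries
    const canonical = trimmed.toLowerCase();
    if (!byCanonical.has(canonical)) byCanonical.set(canonical, trimmed);
  }
  return [...byCanonical.values()];
}

console.log(dedupeTags(['GitLab', 'gitlab', '  ', 42, 'CI', 'ci '])); // [ 'GitLab', 'CI' ]
```

First-seen casing wins, so `["GitLab", "gitlab"]` collapses to `["GitLab"]` rather than being silently lowercased.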
`120-120`: ⚠️ Potential issue | 🟡 Minor

**Model alias `opus[1m]` will work, but the YAML parsing has a subtle risk.**

Good catch replacing `claude-opus-4-6[1m` (missing closing bracket — would not resolve) with the documented Claude Code alias `opus[1m]`. Per Anthropic's model-config docs, `opus` resolves to the latest Opus and the `[1m]` suffix opts into the 1M context variant on supported plans.

Two confirmed points and one risk to address:
Model resolver handles `[1m]` correctly — The Claude provider's `isModelCompatible()` function accepts any string (no bracket stripping). The model string passes through to the Claude SDK/CLI as-is, bracket suffix intact. ✅

Schema validation doesn't trim model field — Unlike the
`provider` field (which has `.trim()` in the schema), the `model` field is defined as `z.string().optional()` with no trimming. This is safe for bracket suffixes; however, it represents an inconsistency vs. provider handling that may be intentional. ✅

Unquoted YAML scalar `opus[1m]` is valid but risky — Bun's native YAML parser (used in the loader) correctly parses `model: opus[1m]` as the string `"opus[1m]"` in block context, following YAML 1.2 plain-scalar rules. However, the eight workflows already using this syntax show it works in practice. If the codebase ever switches YAML loaders or moves to flow-style mappings, the unquoted brackets become flow indicators and parsing breaks. Quoting (`model: "opus[1m]"`) would be safer for long-term resilience.

Recommendation: Keep the current fix (`opus[1m]` unquoted) since it matches eight existing workflows and is documented in Anthropic's model-config. For robustness across potential future loader changes, consider quoting it as a follow-up refactor, but this is not critical.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.archon/workflows/defaults/archon-adversarial-dev.yaml at line 120, The YAML model scalar "opus[1m]" should be quoted to avoid future YAML parser/flow-style ambiguity: update the workflow YAML so the model value is written as a quoted string (e.g., "opus[1m]") instead of an unquoted plain scalar, and ensure any loader/schema expectations still accept the quoted value (note the schema uses z.string().optional() for model and provider trimming is done elsewhere with provider.trim(), while isModelCompatible() passes the model through unchanged); this change is purely formatting and should not require code changes other than ensuring the YAML loader continues to produce the same string.packages/docs-web/src/content/docs/book/quick-reference.md-145-145 (1)
`145-145`: ⚠️ Potential issue | 🟡 Minor

**Update documentation: `agents` is supported by both Claude and Copilot, not Claude-only.**

The codebase shows Copilot's capabilities include `agents: true` with a dedicated translation layer (`applyAgents()` in `packages/providers/src/community/copilot/provider.ts`). Archon's inline YAML `agents:` definitions map to Copilot's `SessionConfig.customAgents`, though with field constraints (Copilot cannot represent `model`, `disallowedTools`, `skills`, or `maxTurns` and surfaces these as warnings). Update the table to clarify support is provider-specific but present, or add a note explaining the data shape difference.
Verify each finding against the current code and only fix it if needed. In `@packages/docs-web/src/content/docs/book/quick-reference.md` at line 145, Update the table entry so it no longer says "Claude only" — change the provider support note for `agents` to indicate both Claude and Copilot support it (e.g., "Supported by Claude and Copilot (provider-specific)"). Also add a short clarifying note that Copilot maps inline YAML `agents:` into SessionConfig.customAgents via applyAgents() and that Copilot cannot represent `model`, `disallowedTools`, `skills`, or `maxTurns` (these are surfaced as warnings), so the data shape differs by provider.packages/providers/src/community/pi/provider.ts-195-207 (1)
`195-207`: ⚠️ Potential issue | 🟡 Minor

**Catch message conflates AuthStorage and ModelRegistry init failures.**

The `try` covers both `AuthStorage.create()` (line 198) and `ModelRegistry.create(authStorage)` (line 199), but the thrown error hardcodes `Pi auth storage init failed` and points the operator at `~/.pi/agent/auth.json`. Per the doc comment above (lines 192–194), `ModelRegistry.create()` is expected not to throw on `models.json` load errors, but if it ever does (e.g. unexpected I/O failure, future SDK change), the operator gets sent to the wrong file to debug. Either narrow each call to its own `try/catch` with a tailored message, or generalize the wording to "Pi SDK init failed" and mention both files.

🛡️ Proposed fix — split the two init calls
```diff
-  let authStorage: ReturnType<typeof piCodingAgent.AuthStorage.create>;
-  let modelRegistry: ReturnType<typeof piCodingAgent.ModelRegistry.create>;
-  try {
-    authStorage = piCodingAgent.AuthStorage.create();
-    modelRegistry = piCodingAgent.ModelRegistry.create(authStorage);
-  } catch (err) {
-    const e = err as Error;
-    getLog().error({ err: e, piProvider: parsed.provider }, 'pi.auth_storage_init_failed');
-    throw new Error(
-      `Pi auth storage init failed: ${e.message}. Check that ~/.pi/agent/auth.json ` +
-        '(or $PI_CODING_AGENT_DIR/auth.json) is valid JSON and readable.'
-    );
-  }
+  let authStorage: ReturnType<typeof piCodingAgent.AuthStorage.create>;
+  try {
+    authStorage = piCodingAgent.AuthStorage.create();
+  } catch (err) {
+    const e = err as Error;
+    getLog().error({ err: e, piProvider: parsed.provider }, 'pi.auth_storage_init_failed');
+    throw new Error(
+      `Pi auth storage init failed: ${e.message}. Check that ~/.pi/agent/auth.json ` +
+        '(or $PI_CODING_AGENT_DIR/auth.json) is valid JSON and readable.'
+    );
+  }
+  let modelRegistry: ReturnType<typeof piCodingAgent.ModelRegistry.create>;
+  try {
+    modelRegistry = piCodingAgent.ModelRegistry.create(authStorage);
+  } catch (err) {
+    const e = err as Error;
+    getLog().error({ err: e, piProvider: parsed.provider }, 'pi.model_registry_init_failed');
+    throw new Error(
+      `Pi model registry init failed: ${e.message}. Check that ~/.pi/agent/models.json ` +
+        '(or $PI_CODING_AGENT_DIR/models.json) is valid JSON.'
+    );
+  }
```

Note that the existing test at lines 285–308 only exercises the
`AuthStorage.create()` failure path — if you adopt the split, the second branch should get its own test as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/providers/src/community/pi/provider.ts` around lines 195 - 207, The catch currently covers both piCodingAgent.AuthStorage.create() and piCodingAgent.ModelRegistry.create(authStorage) but logs a hardcoded "Pi auth storage init failed" pointing to auth.json; split the calls into two separate try/catch blocks (one around AuthStorage.create() and one around ModelRegistry.create(authStorage)) so each catch logs a specific error and guidance (AuthStorage errors mention ~/.pi/agent/auth.json or $PI_CODING_AGENT_DIR/auth.json and ModelRegistry errors mention models.json / SDK init issues) and use getLog().error with the caught Error and parsed.provider; alternatively, if you prefer a single catch, change the message to a generalized "Pi SDK init failed" that references both auth.json and models.json and includes the actual error details..archon/scripts/maintainer-standup-gh-data.ts-38-44 (1)
`38-44`: ⚠️ Potential issue | 🟡 Minor

**Frontmatter regex captures surrounding quotes verbatim.**

`/^gh_handle:\s*(\S+)\s*$/m` will capture `"popemkt"` (including the quotes) for the common YAML form `gh_handle: "popemkt"`, after which every `gh` invocation that interpolates `ghHandle` (lines 104, 112, 121, 174) will pass a literal quoted string and return zero results. Strip optional surrounding quotes:

🛡️ Proposed fix
```diff
-  const match = profile.match(/^gh_handle:\s*(\S+)\s*$/m);
-  if (match) ghHandle = match[1];
+  const match = profile.match(/^gh_handle:\s*["']?([^"'\s]+)["']?\s*$/m);
+  if (match) ghHandle = match[1];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.archon/scripts/maintainer-standup-gh-data.ts around lines 38 - 44, The frontmatter regex currently captures surrounding quotes into ghHandle causing quoted values like "\"popemkt\"" to be used; update the match and assignment in the block that sets ghHandle (the const profile / match logic) so it either uses a regex that allows optional surrounding single or double quotes and captures the inner handle (e.g. change the capture group to exclude surrounding quotes) or strip surrounding quotes from match[1] before assigning to ghHandle; ensure the variable ghHandle used later in gh invocations (references around lines where ghHandle is interpolated) contains the raw handle without quotes..archon/commands/maintainer-review-report.md-60-60 (1)
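The quote-tolerant pattern from the diff can be exercised standalone (the `extractGhHandle` wrapper is illustrative, not the script's actual API):

```typescript
// Optional surrounding quotes are matched *outside* the capture group, so
// match[1] is always the bare handle, with or without YAML quoting.
const GH_HANDLE = /^gh_handle:\s*["']?([^"'\s]+)["']?\s*$/m;

function extractGhHandle(profile: string): string | undefined {
  return profile.match(GH_HANDLE)?.[1];
}

console.log(extractGhHandle('name: x\ngh_handle: "popemkt"\n')); // popemkt
console.log(extractGhHandle("gh_handle: 'popemkt'"));            // popemkt
console.log(extractGhHandle('gh_handle: popemkt'));              // popemkt
```

All three YAML spellings now yield the same handle, so the downstream `gh` invocations receive an unquoted value.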
`60-60`: ⚠️ Potential issue | 🟡 Minor

**Edge case: `.label-applied` may be absent if `post-decline` aborted before the label step.**

The instruction reads `.label-applied` and falls back to `.label-error`, but if `gh pr comment` itself failed on line 287 of the workflow, the script exits before either file is written. In that case neither file exists, and the LLM has no instruction for what to surface. Consider adding an "if neither file exists, the post-decline node failed before reaching the label step — say so" clause.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.archon/commands/maintainer-review-report.md at line 60, The current logic that reads `$ARTIFACTS_DIR/.label-applied` and falls back to `$ARTIFACTS_DIR/.label-error` must also handle the case where neither file exists; update the label-reporting code (the block that reads `.label-applied` / `.label-error`) to detect missing files and output a clear one-line message like “no, post-decline node failed before reaching the label step — no `.label-applied` or `.label-error` present” so maintainers know the labeling step was never attempted; keep the existing behavior that if `.label-applied` exists and equals `skipped` you produce “no, label add failed: <reason>” using the contents of `.label-error`, and do not report “yes” in skipped/error cases.
🧹 Nitpick comments (23)
packages/workflows/src/dag-executor.test.ts (2)
`6086-6087`: Avoid hardcoding source line numbers in comments.

The reference to `dag-executor.ts ~line 2956` will silently drift on every refactor. A symbol-based hint (e.g. the variable name `anyFailed` or function name) is enough and won't go stale.

♻️ Proposed tweak
```diff
-  // succeeded. This covers the anyFailed branch in executeDagWorkflow
-  // (dag-executor.ts ~line 2956), which had no direct test coverage.
+  // succeeded. This covers the `anyFailed` branch in executeDagWorkflow,
+  // which had no direct test coverage.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/workflows/src/dag-executor.test.ts` around lines 6086 - 6087, The comment hardcodes a source line number ("dag-executor.ts ~line 2956") which will become stale; update the comment to remove the numeric line reference and instead point to the relevant symbol(s) — e.g., mention the function executeDagWorkflow and the anyFailed variable/branch — so the intent stays accurate after refactors.
`5832-5930`: Hoist these dynamic imports to the top of the file.

`parseMcpFailureServerNames` and `loadConfiguredMcpServerNames` are imported via `await import('./dag-executor')` inside every individual `it` block (≈11 call sites across these two `describe`s, plus 7 more in the `shouldContinueStreamingForStatus` describe at lines 6043–6077). Since all other helpers from `./dag-executor` are imported statically at the top (lines 35–40), aligning these for consistency reduces noise and avoids repeated module-cache lookups in tests.

♻️ Proposed refactor
```diff
 import {
   buildTopologicalLayers,
   checkTriggerRule,
   substituteNodeOutputRefs,
   executeDagWorkflow,
+  parseMcpFailureServerNames,
+  loadConfiguredMcpServerNames,
+  shouldContinueStreamingForStatus,
 } from './dag-executor';
```

Then drop the in-test `await import('./dag-executor')` calls and reference the symbols directly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/workflows/src/dag-executor.test.ts` around lines 5832 - 5930, The tests repeatedly call await import('./dag-executor') inside each it block for parseMcpFailureServerNames and loadConfiguredMcpServerNames; hoist those dynamic imports to the top of the test file and import the two symbols statically (e.g., add import { parseMcpFailureServerNames, loadConfiguredMcpServerNames } from './dag-executor' alongside the existing static imports), then remove the in-test await import calls and reference the functions directly in all it blocks (including the other describe for shouldContinueStreamingForStatus).packages/providers/src/claude/binary-resolver.ts (1)
`102-112`: Optional: align autodetect probe with Copilot resolver's `isExecutableFile` check.

The Copilot binary resolver uses a stricter `isExecutableFile` check (validates both file type and executable permissions) across all resolution paths. In contrast, Claude's autodetect at `~/.local/bin/claude` relies on plain `existsSync` via `fileExists()`, which returns true for directories or non-executable files. Because autodetect runs without explicit user opt-in (unlike env/config sources where users get immediate errors), a stale or wrong-typed entry at that path would be silently returned and only fail later at spawn time.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/providers/src/claude/binary-resolver.ts` around lines 102 - 112, Autodetect currently returns nativeInstallerPath when fileExists(nativeInstallerPath) is true, which can be a directory or non-executable file; change this to use the stricter executable check (the same isExecutableFile used in the Copilot resolver) when probing ~/.local/bin/claude so the resolver only returns a path that is a regular file with execute permission; update the check in binary-resolver.ts to call isExecutableFile(nativeInstallerPath) (or perform equivalent stat+mode checks) and only log/return nativeInstallerPath when that check passes, otherwise continue resolution..archon/commands/maintainer-review-code-review.md (2)
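For reference, a fail-fast probe along the lines of the Copilot resolver's check might look like this (the function name and exact behavior are assumptions; the real `isExecutableFile` may differ):

```typescript
// Returns true only for a regular file with execute permission for this
// process. statSync rejects directories; accessSync(X_OK) rejects
// non-executable files. Note: on Windows, X_OK degrades to an existence check.
import { accessSync, constants, statSync } from 'node:fs';

function isExecutableFile(path: string): boolean {
  try {
    if (!statSync(path).isFile()) return false;
    accessSync(path, constants.X_OK);
    return true;
  } catch {
    return false;
  }
}

console.log(isExecutableFile('/tmp')); // false: a directory, not a file
```

Unlike a bare `existsSync`, this rejects the stale or wrong-typed entries the comment describes, so autodetect fails at resolution time rather than at spawn time.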
`113-113`: Consider hyphenating compound adjective for clarity.

The phrase "single line summary" would be more grammatically correct as "single-line summary" when used as a compound adjective modifying "summary."
📝 Suggested fix
```diff
-Return a single line summary as your response:
+Return a single-line summary as your response:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.archon/commands/maintainer-review-code-review.md at line 113, Replace the unhyphenated phrase "single line summary" with the hyphenated compound adjective "single-line summary" wherever it appears (e.g., the line containing "Return a single line summary as your response:") to improve grammatical clarity; update the text to read "Return a single-line summary as your response:".
`22-24`: Clarify fallback behavior if CLAUDE.md is missing.

The instruction assumes `CLAUDE.md` exists in the repo. Consider adding guidance on what to do if it's missing (e.g., skip compliance checks, use defaults, or fail with a clear message).

📝 Suggested clarification
````diff
 ### Read the project's rules
-Read the repo's `CLAUDE.md` (project-level). It's the source of truth for engineering principles, type-safety rules, eslint policy, error-handling conventions, and forbidden patterns.
+Read the repo's `CLAUDE.md` (project-level) if it exists. It's the source of truth for engineering principles, type-safety rules, eslint policy, error-handling conventions, and forbidden patterns.
+
+```bash
+if [[ -f "CLAUDE.md" ]]; then
+  cat CLAUDE.md
+else
+  echo "Note: CLAUDE.md not found. Skipping project-specific compliance checks."
+fi
+```
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.archon/commands/maintainer-review-code-review.md around lines 22 - 24, The "Read the project's rules" step assumes CLAUDE.md exists; update this section to specify fallback behavior when CLAUDE.md is missing by adding a clear instruction (e.g., skip project-specific checks, apply default rules, or abort with a descriptive error) and show the expected handling pattern referencing CLAUDE.md and the "Read the project's rules" step so reviewers know to check for the file and act accordingly; make the preferred default behavior explicit (for example: if CLAUDE.md is absent, echo a note and proceed with default compliance rules or fail with a clear message).packages/providers/src/codex/binary-resolver.test.ts (1)
`124-135`: Nit: assert the resolution source for parity with sibling tests.

The other two autodetect tests (lines 100-103, 118-121) verify that `mockLogger.info` is called with `source: 'autodetect'`. This one only checks the returned path, so a regression that resolves `/usr/local/bin/codex` but mislabels its source would slip through. Mirror the same logger assertion here.

♻️ Suggested addition
```diff
   const result = await resolver.resolveCodexBinaryPath();
   expect(result).toBe('/usr/local/bin/codex');
+  expect(mockLogger.info).toHaveBeenCalledWith(
+    { binaryPath: '/usr/local/bin/codex', source: 'autodetect' },
+    'codex.binary_resolved'
+  );
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/providers/src/codex/binary-resolver.test.ts` around lines 124 - 135, Add the same logger assertion as the sibling autodetect tests: after calling resolver.resolveCodexBinaryPath() in the test that checks '/usr/local/bin/codex', assert that mockLogger.info was called with an object indicating the resolved path and source: 'autodetect' (matching the pattern used in the other tests). Locate the test using resolver.resolveCodexBinaryPath, fileExistsSpy and mockLogger and add an expectation that mockLogger.info received an argument with the path '/usr/local/bin/codex' and source: 'autodetect'.packages/providers/src/codex/binary-resolver.ts (1)
`145-165`: Optional: probe linuxbrew / be defensive on unsupported platforms.

Two small follow-ups you may want to consider; neither is blocking:
- On Linux, `/home/linuxbrew/.linuxbrew/bin/codex` is a fairly common npm prefix when Node is installed via Homebrew on Linux. The deferred-cases doc-comment doesn't mention it; worth either adding a probe or calling it out alongside the other deferred items.
- `getAutodetectPaths()` falls into the POSIX branch for any non-`win32` platform (e.g., FreeBSD, AIX), even though `getVendorBinaryName()` rejects them via `SUPPORTED_PLATFORMS`. Probing a couple of POSIX paths is harmless, but you could short-circuit on unsupported platforms for symmetry with tier 3 — your call.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/providers/src/codex/binary-resolver.ts` around lines 145 - 165, getAutodetectPaths() currently treats any non-win32 as POSIX and misses the common Linuxbrew prefix; update getAutodetectPaths() to (1) add '/home/linuxbrew/.linuxbrew/bin/codex' to the Linux-specific probe (i.e., when process.platform === 'linux') so the resolver will find Node/Homebrew installs, and (2) optionally short-circuit and return an empty array if process.platform is not listed in SUPPORTED_PLATFORMS (used by getVendorBinaryName()) to keep behavior symmetric with platform support checks; reference getAutodetectPaths(), SUPPORTED_PLATFORMS, and getVendorBinaryName() when making the change.packages/server/src/index.ts (1)
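Both follow-ups together amount to a few lines. A hypothetical sketch of the tier-4 list (the path set and platform gate are illustrative, not the resolver's actual code):

```typescript
// Hypothetical tier-4 probe list: short-circuits unsupported platforms for
// symmetry with the vendor tier, and adds the linuxbrew prefix on Linux.
const SUPPORTED_PLATFORMS = new Set(['darwin', 'linux', 'win32']);

function getAutodetectPaths(platform: string, home: string): string[] {
  if (!SUPPORTED_PLATFORMS.has(platform)) return []; // FreeBSD, AIX, etc.
  if (platform === 'win32') return [`${home}\\AppData\\Roaming\\npm\\codex.cmd`];
  const paths = [`${home}/.npm-global/bin/codex`, '/usr/local/bin/codex'];
  if (platform === 'linux') paths.push('/home/linuxbrew/.linuxbrew/bin/codex');
  return paths;
}

console.log(getAutodetectPaths('freebsd', '/home/u')); // []
console.log(getAutodetectPaths('linux', '/home/u'));
```

The early return keeps unsupported platforms from probing paths that can never yield a usable binary.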
403-403: Normalize Discord failure log event name to the standard convention.
`discord.start_failed_continuing_without_adapter` does not follow the `{domain}.{action}_{state}` shape used for consistent log filtering and alerting.

Proposed fix

```diff
-    getLog().error({ err, hint }, 'discord.start_failed_continuing_without_adapter');
+    getLog().error({ err, hint }, 'discord.start_failed');
```

As per coding guidelines: `**/*.ts`: Use structured logging with Pino; event naming format: `{domain}.{action}_{state}`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/index.ts` at line 403, Replace the nonconforming event string in the logging call so it follows the {domain}.{action}_{state} convention: update the getLog().error invocation that currently uses 'discord.start_failed_continuing_without_adapter' to a normalized name such as 'discord.start_failed_without_adapter' (keep existing structured payload { err, hint } intact); change only the event-name string in the getLog().error call so log filtering/alerting uses the normalized symbol.

packages/docs-web/src/content/docs/reference/variables.md (1)
67-69: **Strengthen the safety guidance — embedding raw values in a template literal is also unsafe.**

The current text correctly tells authors to use `JSON.parse` rather than shell interpolation, but the example workflows in this PR (`dag-workflow.yaml`, `archon-workflow-builder.yaml`) recommend wrapping the substitution in a ``String.raw`...` `` template. That's still a JS interpolation surface — `${...}` is evaluated and backticks terminate the string regardless of `String.raw`. Since this is the canonical reference, it's worth calling out the pitfall here so the example workflows follow.

♻️ Suggested addition
```diff
-`$nodeId.output` values are **auto shell-quoted** (single-quoted, with embedded `'` escaped) when substituted into `bash:` scripts, so the value is always safe to embed in a shell command. They are **not** shell-quoted when substituted into `script:` bodies — the raw value is embedded as-is. For script nodes, treat substituted values as untrusted input and parse them with language features (e.g. `JSON.parse`), not by interpolating into shell syntax.
+`$nodeId.output` values are **auto shell-quoted** (single-quoted, with embedded `'` escaped) when substituted into `bash:` scripts, so the value is always safe to embed in a shell command. They are **not** quoted or escaped in any way when substituted into `script:` bodies — the raw bytes are pasted directly into the source.
+
+For script nodes, treat substituted values as untrusted input. **Do not** embed them inside string literals or template literals (including `` String.raw`...` ``): a backtick or `${...}` in upstream output will terminate the string or evaluate as a JS/Python expression. Instead, read the upstream value from `stdin` (recommended) or pass it through a JSON-encoded substitution form, then `JSON.parse` / `json.loads` it.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/docs-web/src/content/docs/reference/variables.md` around lines 67 - 69, Update the safety guidance to explicitly warn that embedding substituted values into JavaScript template literals (e.g. using String.raw`...` or backtick interpolation like `${...}`) is unsafe for script: nodes because `${...}` is evaluated and backticks can be terminated by the value; state that $nodeId.output is only auto-quoted for bash: and is raw for script:, and instruct authors to treat substituted values as untrusted and parse them with language parsers (e.g. JSON.parse) or pass them through a safe serialization/deserialization API instead of interpolating into template literals (mention String.raw, template literal/backtick interpolation, and JSON.parse by name so readers can find the relevant code/examples to change).

.claude/skills/archon/references/troubleshooting.md (1)
9-11: **Add a language tag to the plain-text fenced blocks (markdownlint MD040).**

Static analysis flags lines 9 and 42 as missing a code-block language. Use `text` to keep the lint clean:

♻️ Diff

````diff
-```
+```text
 ~/.archon/workspaces/<owner>/<repo>/logs/<run-id>.jsonl
````

````diff
-```
+```text
 ~/.archon/workspaces/<owner>/<repo>/artifacts/runs/<run-id>/
````

Also applies to: 42-44

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.claude/skills/archon/references/troubleshooting.md around lines 9 - 11, The two plain-text fenced code blocks in troubleshooting.md are missing a language tag (markdownlint MD040); update both fenced blocks that show paths like "~/.archon/workspaces/<owner>/<repo>/logs/<run-id>.jsonl" and "~/.archon/workspaces/<owner>/<repo>/artifacts/runs/<run-id>/" to use the "text" language tag (i.e., change `` ``` `` to `` ```text `` for those blocks) so the markdown linter no longer flags them.

.claude/skills/archon/references/repo-init.md (1)

`90-94`: **Add a language tag to the log-output fenced block (markdownlint MD040).**

♻️ Diff

````diff
-```
+```text
 [archon] loaded N keys from ~/.archon/.env
 [archon] loaded M keys from /path/to/repo/.archon/.env
 [archon] stripped K keys from /path/to/repo (ANTHROPIC_API_KEY, OPENAI_API_KEY, ...)
 ```
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In @.claude/skills/archon/references/repo-init.md around lines 90 - 94, The fenced code block containing the sample log output (the triple-backtick block that starts with "[archon] loaded N keys...") is missing a language tag and triggers markdownlint MD040; update that fence to include a language tag (e.g., change `` ``` `` to `` ```text ``) for the block showing the log lines and search for any other unlabeled fenced blocks in the same document to add appropriate language tags as well.

.claude/skills/archon/references/cli-commands.md (1)

`35-46`: **Slight inconsistency between the `--resume` flag description and the auto-resume note.**

Line 35 describes `--resume` as "Resume the last failed run of this workflow at this cwd (skips completed nodes)", but line 45 then explains that *auto-resume already does that* and `--resume` is only needed to "force resume a specific failed run or to reuse the worktree from that run." Readers will be unclear on what `--resume` actually adds over the default behavior.

♻️ Suggested phrasing

```diff
-| `--resume` | Resume the last failed run of this workflow at this cwd (skips completed nodes) |
+| `--resume` | Force-resume a prior failed run (and reuse its worktree). Most workflows auto-resume without this flag — see note below |
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In @.claude/skills/archon/references/cli-commands.md around lines 35 - 46, The `--resume` flag description conflicts with the "Auto-resume without `--resume`" note; update the `--resume` line to clearly state that it forces resuming a specific prior failed run or reuses that run's worktree (e.g., "Force resume a specific failed run at this cwd or reuse its worktree; skips completed nodes"), and leave the "Auto-resume without `--resume`" paragraph to explain the default automatic skipping of completed nodes for the most recent failed invocation; change either occurrence (the `--resume` bullet and the "Auto-resume without `--resume`" heading) so both consistently convey that normal invocations auto-resume the last failed run but `--resume` targets a specific run/worktree.

.claude/skills/archon/references/good-practices.md (1)

`224-239`: **Consider clarifying the MCP wildcard anti-pattern.**

This example is labeled "BAD" but then explains that `allowed_tools: []` actually works because "Archon auto-adds mcp__<server>__* wildcards." The anti-pattern is "forgetting and manually adding Read/Write/Bash" when you only want MCP. This might confuse readers who see "BAD" but then read that it works. Consider restructuring to make the actual anti-pattern clearer.

Clarification suggestion: consider rewording the comment to lead with what works, then explain what the anti-pattern is:

```diff
 # BAD — no tools available, including MCP
 - id: analyze
   prompt: "Use the Postgres MCP to query users"
   mcp: .archon/mcp/postgres.json
-  allowed_tools: [] # OOPS — disables EVERYTHING, including MCP tools
+  allowed_tools: [Read, Write, Bash] # OOPS — forgot that mcp: auto-adds MCP tools
+  # Now you've allowed file ops when you only wanted MCP

-# FIXED — Archon auto-adds mcp__<server>__* wildcards when mcp: is set,
-# so this actually works out of the box. The anti-pattern is forgetting
-# and manually adding Read/Write/Bash/etc. when you only want MCP.
+# CORRECT — Archon auto-adds mcp__<server>__* wildcards when mcp: is set
 - id: analyze
   prompt: "Use Postgres MCP to query users"
   mcp: .archon/mcp/postgres.json
-  allowed_tools: [] # correct — MCP tools auto-attached
+  allowed_tools: [] # MCP tools auto-attached, everything else denied
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In @.claude/skills/archon/references/good-practices.md around lines 224 - 239, The example labeled "BAD" is confusing because it shows allowed_tools: [] with mcp: set but then states Archon auto-adds mcp__<server>__* wildcards so the config actually works; update the text and examples around the analyze entry (id: analyze, mcp: .archon/mcp/postgres.json, allowed_tools) to clearly show the real anti-pattern: demonstrate the incorrect manual addition of Read/Write/Bash tools when only MCP is intended and mark that as BAD, and mark the version that relies on Archon's auto-added mcp__<server>__* wildcards (i.e., allowed_tools: [] with only mcp set) as the correct/fixed example, adjusting comments to explicitly state that Archon auto-attaches MCP wildcards.

.claude/skills/archon/references/parameter-matrix.md (1)

`155-155`: **Minor style: Add period after "etc."**

In American English, abbreviations like "etc." require a period.

Style fix

```diff
-| Variable reference (`$ARGUMENTS`, `$ARTIFACTS_DIR`, etc) | `variables.md` |
+| Variable reference (`$ARGUMENTS`, `$ARTIFACTS_DIR`, etc.) | `variables.md` |
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In @.claude/skills/archon/references/parameter-matrix.md at line 155, Update the markdown table cell that currently reads "| Variable reference (`$ARGUMENTS`, `$ARTIFACTS_DIR`, etc) | `variables.md` |" to add the missing period after "etc" so it reads "etc."; edit the line containing the table entry (the string "Variable reference (`$ARGUMENTS`, `$ARTIFACTS_DIR`, etc)") and replace "etc" with "etc." to conform to American English abbreviation style.

packages/docs-web/src/content/docs/book/dag-workflows.md (1)

`240-240`: **Minor formatting: Remove trailing space in code example.**

The script node syntax has a trailing space before the closing backtick: `script: "..." ` — should be `script: "..."`.

Minor fix

```diff
-| **Script** | `script: "..." ` + `runtime: bun \| uv` | Run TypeScript/JavaScript (bun) or Python (uv) without AI. Inline code or named reference to `.archon/scripts/`. Stdout captured as `$nodeId.output`. See [Script Nodes](/guides/script-nodes/). |
+| **Script** | `script: "..."` + `runtime: bun \| uv` | Run TypeScript/JavaScript (bun) or Python (uv) without AI. Inline code or named reference to `.archon/scripts/`. Stdout captured as `$nodeId.output`. See [Script Nodes](/guides/script-nodes/). |
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@packages/docs-web/src/content/docs/book/dag-workflows.md` at line 240, In the markdown table row containing the script node example, remove the stray space at the end of the inline code token `script: "..." ` so it reads `script: "..."` (i.e., delete the trailing space before the closing backtick) to fix the formatting in the docs; update the table cell where the snippet `script: "..." ` + `runtime: bun | uv` appears.

packages/providers/src/community/pi/provider.test.ts (2)

`216-229`: **Shim regression test — well-targeted.**

Driving the assertion through the fast-fail "no model" path is a nice trick: it sidesteps any need to script a successful Pi run while still proving that `ensurePiPackageDirShim()` runs before the no-model error.

One small thing — `process.env.PI_PACKAGE_DIR` is left set after this test completes; later tests don't read it so there's no functional consequence today, but consider adding a `delete process.env.PI_PACKAGE_DIR` to the `beforeEach` block (alongside the existing `GEMINI_API_KEY` / `ANTHROPIC_API_KEY` deletes) to keep the test environment hermetic against future tests that might assert on its absence.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@packages/providers/src/community/pi/provider.test.ts` around lines 216 - 229, Add cleanup for PI_PACKAGE_DIR in the test suite setup: in the existing beforeEach block that clears GEMINI_API_KEY / ANTHROPIC_API_KEY, also delete process.env.PI_PACKAGE_DIR so the test "sendQuery installs PI_PACKAGE_DIR shim before Pi SDK loads" does not leave PI_PACKAGE_DIR set for subsequent tests; reference the beforeEach setup and the test name/sendQuery and the environment variable process.env.PI_PACKAGE_DIR when locating where to add the deletion.

`389-402`: **Test header comment is slightly inaccurate.**

The comment "the adapter rejects unknown providers" overstates what the adapter does — there's no provider allowlist; an unknown `provider` segment just causes `modelRegistry.find()` to return `undefined`, which is the same path being exercised here. The reason to use `mockImplementationOnce(() => undefined)` against a *mapped* provider like `google` is so that the credential-resolution path (which depends on `PI_PROVIDER_ENV_VARS[parsed.provider]`) gets exercised in earlier tests with real fixtures, not because the adapter rejects unknown providers. Consider tightening the comment to reflect this.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@packages/providers/src/community/pi/provider.test.ts` around lines 389 - 402, The test header comment is inaccurate: update it to say that the adapter does not reject unknown providers but that an unknown provider segment causes ModelRegistry.find to return undefined, so we use mockModelRegistryFind.mockImplementationOnce(() => undefined) with a mapped provider like 'google' to exercise the credential-resolution path (which depends on PI_PROVIDER_ENV_VARS[parsed.provider]) rather than to simulate an adapter-level provider rejection; mention PiProvider.sendQuery, mockModelRegistryFind, and the 'google/unknown-model-id' modelId to make the intent clear.

packages/providers/src/community/pi/provider.ts (1)

`97-101`: **Move the value import to the top of the file.**

While the import placement itself is not idiomatic (imports should appear at the top of the file), the code organization concern is valid. However, the `import/first` ESLint rule referenced in the original suggestion is not enabled in this repository — `eslint-plugin-import` is not installed or configured. Keep the `export { augmentPromptForJsonSchema };` re-export at its current location with the documentation comment for clarity, but move the binding import up with the other imports for consistency with standard module organization.

♻️ Proposed refactor

```diff
 import { existsSync, mkdirSync, writeFileSync } from 'node:fs';
 import { tmpdir } from 'node:os';
 import { join } from 'node:path';
 import { createLogger } from '@archon/paths';
 import type {
   IAgentProvider,
   MessageChunk,
   ProviderCapabilities,
   SendQueryOptions,
 } from '../../types';
 import { PI_CAPABILITIES } from './capabilities';
 import { parsePiConfig } from './config';
 import { parsePiModelRef } from './model-ref';
+import { augmentPromptForJsonSchema } from '../../shared/structured-output';
@@
-// Structured-output prompt augmentation is shared across providers. Import
-// once for local use and re-export so existing callers and tests keep their
-// import path stable; new providers should import from `../../shared/structured-output`.
-import { augmentPromptForJsonSchema } from '../../shared/structured-output';
-export { augmentPromptForJsonSchema };
+// Re-export the shared structured-output helper so existing Pi callers/tests
+// keep their import path stable; new providers should import directly from
+// `../../shared/structured-output`.
+export { augmentPromptForJsonSchema };
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@packages/providers/src/community/pi/provider.ts` around lines 97 - 101, Move the binding import for augmentPromptForJsonSchema up into the existing top-of-file import block (so add "import { augmentPromptForJsonSchema } from '../../shared/structured-output';" alongside the other imports), but leave the re-export line "export { augmentPromptForJsonSchema };" and its documentation comment exactly where it is; this keeps module organization consistent while preserving the current exported API and existing tests/callers.

.archon/scripts/maintainer-standup-git-status.ts (1)

`20-27`: **Surface captured stderr when `git` invocations fail.**

The helper pipes stderr to a buffer but discards it on the catch path. When `pull_status` ends up `pull_failed` (line 52) or `current_dev_sha` is empty because `git rev-parse origin/dev` failed (line 55), the user has no diagnostic to act on. Forwarding the error message to `process.stderr` keeps the function's `{ stdout, ok }` contract intact while making failures debuggable.

♻️ Proposed refactor

```diff
 function git(args: string[]): { stdout: string; ok: boolean } {
   try {
     const out = execFileSync('git', args, { stdio: ['ignore', 'pipe', 'pipe'] }).toString();
     return { stdout: out, ok: true };
-  } catch {
+  } catch (e) {
+    const stderr = (e as { stderr?: Buffer }).stderr?.toString() ?? (e as Error).message;
+    process.stderr.write(`git ${args.join(' ')} failed: ${stderr}\n`);
     return { stdout: '', ok: false };
   }
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In @.archon/scripts/maintainer-standup-git-status.ts around lines 20 - 27, The git helper currently swallows stderr on failure; update the git(args: string[]) function so that in the catch path it surfaces the captured stderr (and/or error message) to process.stderr before returning { stdout: '', ok: false } — preserve the return contract but write err.stderr (or err.message if stderr is absent) to process.stderr (e.g., process.stderr.write(...)) so callers like pull_status and the current_dev_sha lookup (git rev-parse origin/dev) get useful diagnostics when ok is false.

.archon/workflows/maintainer/maintainer-review-pr.yaml (2)

`199-233`: **Mandatory `code-review` is only enforced by prompt — workflow can silently dead-end the review branch.**

The classifier prompt (Line 159) asserts code-review is mandatory, but the workflow gates `code-review` with `when: "$review-classify.output.run_code_review == 'true'"`. If the LLM ever returns `run_code_review: 'false'` (LLM disobedience, schema-coercion edge case, etc.), every aspect node is skipped, `synthesize-review`'s `trigger_rule: one_success` cannot fire, `post-review` never runs, and `report` is also skipped because none of `[post-review, post-decline, approve-unclear]` succeeded. The review path silently produces no output.

Drop the `when:` on `code-review` so the workflow contract matches the prompt. The other four aspects are genuinely conditional and should keep theirs.

♻️ Suggested fix

```diff
 - id: code-review
   command: maintainer-review-code-review
   depends_on: [review-classify]
-  when: "$review-classify.output.run_code_review == 'true'"
   context: fresh
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In @.archon/workflows/maintainer/maintainer-review-pr.yaml around lines 199 - 233, The workflow declares code-review mandatory in the classifier but gates the node with when: "$review-classify.output.run_code_review == 'true'", which can silently skip all downstream steps; remove the when condition from the node with id "code-review" (command "maintainer-review-code-review") so that "code-review" always runs while leaving the other aspect nodes (error-handling, test-coverage, comment-quality, docs-impact) and their when conditions untouched; ensure "code-review" still depends_on "review-classify" and retains context "fresh" so "synthesize-review" (id "synthesize-review") can trigger as expected with its trigger_rule.

`65-82`: **Diff truncation is invisible to the gate — append a marker when capped.**

`head -2500` silently drops lines past the cap. The gate then produces a verdict on a partial diff with no signal that truncation happened, which contradicts the comment on Line 70 ("would produce a confident verdict on no evidence") — same risk applies to "confident verdict on partial evidence." Cheap to fix: emit a trailing marker only when the diff exceeds the cap.

♻️ Suggested fix

```diff
 - id: fetch-diff
   bash: |
     PR_NUM=$(cat "$ARTIFACTS_DIR/.pr-number")
-    # Don't redirect stderr — let auth / network / deleted-PR failures surface
-    # as a node failure rather than feeding an empty diff to the gate (which
-    # would produce a confident verdict on no evidence).
     if ! diff_output=$(gh pr diff "$PR_NUM"); then
       echo "ERROR: gh pr diff failed for PR #$PR_NUM" >&2
       exit 1
     fi
-    # Cap at 2500 lines to keep prompt size bounded; gate cares about shape, not every line.
     if [ -z "$diff_output" ]; then
       echo "(empty diff — PR has no changes)"
     else
-      echo "$diff_output" | head -2500
+      total_lines=$(printf '%s\n' "$diff_output" | wc -l)
+      printf '%s\n' "$diff_output" | head -2500
+      if [ "$total_lines" -gt 2500 ]; then
+        echo "... (diff truncated: showing 2500 of ${total_lines} lines — gate verdict should treat this PR as 'too broad to assess from diff alone')"
+      fi
     fi
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In @.archon/workflows/maintainer/maintainer-review-pr.yaml around lines 65 - 82, The fetch-diff step currently truncates output with `echo "$diff_output" | head -2500` without signaling truncation; change it to detect when the diff exceeds the cap and append a visible marker (e.g., "--TRUNCATED--: diff exceeds 2500 lines") only in that case. Concretely, after populating `diff_output` (and before printing), compute the line count (e.g., via `wc -l` or similar) and if it is greater than 2500 print the first 2500 lines and then the marker; otherwise print the full `diff_output` as now. Keep identifiers `fetch-diff`, `PR_NUM`, and `diff_output` unchanged so the logic is easy to locate.

.archon/workflows/maintainer/maintainer-standup.yaml (1)

`150-162`: **Narrow `err` before reading `.message`.**

In the catch on Line 150, `err` is `unknown` under strict TypeScript / Bun. `err.message` will either fail typecheck or, if loosened, evaluate to `undefined` for non-Error throws (e.g., a thrown string), making the recovery line less useful right at the moment it matters most.

♻️ Suggested fix

```diff
 } catch (err) {
-  // Synthesis (Sonnet, ~5 min) is the expensive part. If persist fails
-  // (disk full, read-only fs, permission), dump the brief + state to
-  // stderr so the run isn't a total loss — they're recoverable from logs.
-  process.stderr.write(`PERSIST FAILED: ${err.message}\n`);
+  // Synthesis (Sonnet, ~5 min) is the expensive part. If persist fails
+  // (disk full, read-only fs, permission), dump the brief + state to
+  // stderr so the run isn't a total loss — they're recoverable from logs.
+  const message = err instanceof Error ? err.message : String(err);
+  process.stderr.write(`PERSIST FAILED: ${message}\n`);
   process.stderr.write('--- BEGIN brief_markdown (recoverable from logs) ---\n');
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In @.archon/workflows/maintainer/maintainer-standup.yaml around lines 150 - 162, The catch currently accesses err.message unsafely; change the catch to accept err: unknown and narrow it before reading .message (e.g., use if (err instanceof Error) { message = err.message } else { message = String(err) }) and then use that safe message in the process.stderr.write call(s) instead of err.message; locate the catch block handling persistence failure (the one writing PERSIST FAILED and using process.stderr.write with data.brief_markdown and JSON.stringify(data.next_state)) and apply this narrowing so non-Error throws don't produce undefined.

.archon/commands/maintainer-review-report.md (1)

`26-30`: **Disambiguate `decline` vs `needs_split` in branch detection.**

Phase 1 lists three possibilities, but Phase 2 reports four branches (`review | decline | needs_split | unclear`). In the workflow both `decline` and `needs_split` produce `decline-comment.md` and run `post-decline`, so artifact existence alone cannot tell them apart — the report has to read the verdict from `gate-decision.md` to pick the correct label. Worth calling out explicitly here so the LLM doesn't always emit `decline`.

📝 Suggested clarification

```diff
 1. **Review branch ran**: `$ARTIFACTS_DIR/review/synthesis.md` exists.
-2. **Decline branch ran**: `$ARTIFACTS_DIR/decline-comment.md` exists with non-placeholder content; the post-decline bash node already posted to GitHub.
+2. **Decline / needs_split branch ran**: `$ARTIFACTS_DIR/decline-comment.md` exists with non-placeholder content and the post-decline bash node already posted to GitHub. Read the verdict from `$ARTIFACTS_DIR/gate-decision.md` to distinguish `decline` from `needs_split` — both share this path.
 3. **Unclear branch ran**: gate verdict was `unclear` and the maintainer was prompted to decide manually.
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In @.archon/commands/maintainer-review-report.md around lines 26 - 30, The branch-detection currently infers labels from artifact presence (e.g., review/synthesis.md and decline-comment.md) which cannot distinguish "decline" vs "needs_split"; update the report generation logic in maintainer-review-report to read the gate verdict from gate-decision.md and use that value to choose between "decline" and "needs_split" (emit "needs_split" only when gate-decision.md contains that verdict, otherwise emit "decline"), while still treating the presence of review/synthesis.md as "review" and preserving "unclear" when the gate verdict is `unclear`.
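The disambiguation the report prompt describes can be sketched in a few lines of shell. Everything here is hypothetical scaffolding (the demo seeds its own `gate-decision.md` and `decline-comment.md`); the real artifact layout and verdict format are whatever the workflow actually writes:

```shell
# Demo fixture: in the real run these files come from the gate / decline nodes.
ARTIFACTS_DIR="${ARTIFACTS_DIR:-/tmp/archon-demo-report}"
mkdir -p "$ARTIFACTS_DIR"
printf 'verdict: needs_split\n' > "$ARTIFACTS_DIR/gate-decision.md"
printf 'This PR bundles unrelated changes.\n' > "$ARTIFACTS_DIR/decline-comment.md"

if [ -f "$ARTIFACTS_DIR/review/synthesis.md" ]; then
  branch="review"
elif [ -f "$ARTIFACTS_DIR/decline-comment.md" ]; then
  # decline and needs_split share this artifact path, so the label must
  # come from the gate verdict, not from file existence alone.
  if grep -q 'needs_split' "$ARTIFACTS_DIR/gate-decision.md"; then
    branch="needs_split"
  else
    branch="decline"
  fi
else
  branch="unclear"
fi
echo "branch=$branch"
```

With the seeded fixture above, the verdict read wins over artifact presence and the report would be labeled `needs_split` rather than `decline`.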
```bash
PR_NUMBER=$(cat $ARTIFACTS_DIR/.pr-number)
```
Add error handling for missing or empty PR number file.
If $ARTIFACTS_DIR/.pr-number doesn't exist or is empty, the workflow will fail later with unclear errors when attempting to use $PR_NUMBER. Add an explicit check and fail-fast message.
🛡️ Suggested fix with error handling
```diff
-PR_NUMBER=$(cat $ARTIFACTS_DIR/.pr-number)
+if [[ ! -f "$ARTIFACTS_DIR/.pr-number" ]]; then
+ echo "Error: $ARTIFACTS_DIR/.pr-number not found. Gate step may have failed."
+ exit 1
+fi
+PR_NUMBER=$(cat "$ARTIFACTS_DIR/.pr-number")
+if [[ -z "$PR_NUMBER" ]]; then
+ echo "Error: PR number is empty in $ARTIFACTS_DIR/.pr-number"
+ exit 1
+fi
```
Based on learnings: "Fail Fast + Explicit Errors: prefer throwing early with a clear error for unsupported or unsafe states — never silently swallow errors."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.archon/commands/maintainer-review-code-review.md around lines 18 - 20, The
script currently reads PR_NUMBER directly from $ARTIFACTS_DIR/.pr-number without
validating it; add a pre-check to ensure the file exists and is non-empty before
assigning PR_NUMBER: test for the existence of "$ARTIFACTS_DIR/.pr-number" and
exit with a clear error message if missing, then read PR_NUMBER and validate it
is not empty (exit with a clear error if it is). Ensure you reference and use
the same variables PR_NUMBER and ARTIFACTS_DIR used in the script so callers
fail fast with an explicit message rather than proceeding with an unset/empty
PR_NUMBER.
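The two fail-fast checks (file exists, value non-empty) can be collapsed into one helper. A sketch with an invented `require_file` name, not part of the workflow's actual scripts; the demo seeds its own artifact file so it is self-contained:

```shell
# Sketch of a shared fail-fast helper; "require_file" is a hypothetical name.
require_file() {
  path="$1"
  if [ ! -s "$path" ]; then   # -s: file exists AND is non-empty
    echo "Error: required artifact '$path' is missing or empty" >&2
    exit 1
  fi
}

# Demo fixture: in the real workflow the gate step writes this file.
ARTIFACTS_DIR="${ARTIFACTS_DIR:-/tmp/archon-demo-artifacts}"
mkdir -p "$ARTIFACTS_DIR"
printf '123\n' > "$ARTIFACTS_DIR/.pr-number"

require_file "$ARTIFACTS_DIR/.pr-number"
PR_NUMBER=$(cat "$ARTIFACTS_DIR/.pr-number")
echo "PR_NUMBER=$PR_NUMBER"
```

Using `-s` covers both the missing-file and empty-file cases in a single test, which keeps each call site to one line.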
```bash
cat $ARTIFACTS_DIR/gate-decision.md
```
Add error handling for missing gate decision file.
If the gate step failed or was skipped, $ARTIFACTS_DIR/gate-decision.md may not exist. Add an explicit check to fail fast with a clear message.
🛡️ Suggested fix with error handling
```diff
+if [[ ! -f "$ARTIFACTS_DIR/gate-decision.md" ]]; then
+ echo "Error: $ARTIFACTS_DIR/gate-decision.md not found. Gate step may not have completed."
+ exit 1
+fi
 cat $ARTIFACTS_DIR/gate-decision.md
```
Based on learnings: "Fail Fast + Explicit Errors: prefer throwing early with a clear error for unsupported or unsafe states — never silently swallow errors."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.archon/commands/maintainer-review-code-review.md around lines 28 - 30, Add
an explicit existence check before attempting to read the gate decision file:
verify that "$ARTIFACTS_DIR/gate-decision.md" exists and if not emit a clear
error message and exit non‑zero, then only run the existing cat
$ARTIFACTS_DIR/gate-decision.md; specifically update the snippet that currently
calls cat to first test [[ -f "$ARTIFACTS_DIR/gate-decision.md" ]] (or
equivalent) and fail fast with a descriptive message if the file is missing so
the script does not silently continue.
| ```bash | ||
| gh pr diff $PR_NUMBER | ||
| ``` | ||
|
|
||
| If the diff is too large to reason about cleanly, sample: read the diff against each changed file individually with `gh pr diff $PR_NUMBER -- <path>`. | ||
|
|
There was a problem hiding this comment.
Add error handling for gh command failures and clarify sampling strategy.
Two issues:
- The
gh pr diffcommand can fail due to auth issues, network problems, or an invalid PR number. Add error handling to fail fast with a clear message. - The guidance on "too large to reason about cleanly" is subjective. Consider providing a concrete threshold (e.g., file count or line count) or suggesting a systematic sampling approach.
🛡️ Suggested fix with error handling
```bash
-gh pr diff $PR_NUMBER
+if ! gh pr diff "$PR_NUMBER" > /tmp/pr-diff.txt 2>&1; then
+ echo "Error: Failed to fetch PR diff for #$PR_NUMBER. Check gh auth status."
+ exit 1
+fi
+cat /tmp/pr-diff.txt-If the diff is too large to reason about cleanly, sample: read the diff against each changed file individually with gh pr diff $PR_NUMBER -- <path>.
+If the diff is too large to reason about cleanly (e.g., > 50 changed files or > 5,000 lines), sample: read the diff against each changed file individually with gh pr diff "$PR_NUMBER" -- <path>. Prioritize files with the most changes or those in critical paths (e.g., packages/core, packages/workflow).
</details>
Based on learnings: "Fail Fast + Explicit Errors: prefer throwing early with a clear error for unsupported or unsafe states."
<details>
<summary>🤖 Prompt for AI Agents</summary>
Verify each finding against the current code and only fix it if needed.
In @.archon/commands/maintainer-review-code-review.md around lines 36 - 41, update the docs/commands to add explicit failure handling and a concrete sampling guideline: instruct the script/command that invokes `gh pr diff` to check the command's exit status and fail fast with a clear error message when `gh pr diff "$PR_NUMBER"` fails (mentioning auth/network/invalid PR as possible causes), and to capture its output to a temporary file for inspection; also replace the vague "too large" guidance with a concrete threshold (e.g., >50 changed files or 5,000 lines) and recommend systematic sampling by running `gh pr diff "$PR_NUMBER" -- <path>` for files prioritized by change volume or critical directories (e.g., packages/core, packages/workflow).
</details>
## Phase 3: WRITE FINDINGS

Write `$ARTIFACTS_DIR/review/code-review-findings.md` with this structure:

```markdown
# Code Review — PR #<n>

## Summary
<1-2 sentences. State the overall verdict: ready-to-merge / minor-fixes-needed / blocking-issues.>

## Findings

### CRITICAL
- **<file:line>**: <description>
  - **Why it matters**: <impact>
  - **Suggested fix**: <concrete change>

### HIGH
- (same format)

### MEDIUM
- (same format)

### LOW / NITPICK
- (same format — combine if many)

## CLAUDE.md compliance
<bullet list of any violations, or "Compliant.">

## Notes for synthesizer
<anything the synthesize step should know — e.g., a pattern that needs broader review, a finding that overlaps with another aspect.>
```

If you find nothing to flag, write the file with `## Findings\n\nNone — code looks clean.` and stop. Don't manufacture issues.
Ensure the review subdirectory exists before writing findings.
The instructions specify writing to `$ARTIFACTS_DIR/review/code-review-findings.md`, but there's no guarantee the `review/` subdirectory exists. Add a directory creation step to avoid write failures.
Additionally, clarify that the template placeholder `<n>` in line 79 should be replaced with the actual `$PR_NUMBER`.
📁 Suggested fix to create directory

````diff
 ## Phase 3: WRITE FINDINGS

-Write `$ARTIFACTS_DIR/review/code-review-findings.md` with this structure:
+Create the review directory and write `$ARTIFACTS_DIR/review/code-review-findings.md` with this structure:
+
+```bash
+mkdir -p "$ARTIFACTS_DIR/review"
+```

 ```markdown
-# Code Review — PR #<n>
+# Code Review — PR #${PR_NUMBER}
````

Based on learnings: "Fail Fast + Explicit Errors: prefer throwing early with a clear error for unsupported or unsafe states."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.archon/commands/maintainer-review-code-review.md around lines 74 - 108, In
the WRITE FINDINGS step ensure the review output directory exists and the PR
placeholder is substituted: before writing
$ARTIFACTS_DIR/review/code-review-findings.md add mkdir -p
"$ARTIFACTS_DIR/review"; validate required variables (e.g., $ARTIFACTS_DIR and
$PR_NUMBER) and fail fast with a clear error if missing; and replace the
template header "# Code Review — PR #<n>" with "# Code Review — PR
#${PR_NUMBER}" so the actual PR number is written.
```ts
// ── Maintainer's recent commits on dev (what you shipped) ──
let myRecentCommits = '';
if (ghHandle) {
  const since = lastRunAt || '7 days ago';
  try {
    myRecentCommits = execFileSync(
      'git',
      ['log', 'origin/dev', `--since=${since}`, `--author=${ghHandle}`, '--no-decorate', '--format=%h %s'],
      { stdio: ['ignore', 'pipe', 'pipe'] },
    ).toString();
  } catch {
    myRecentCommits = '';
  }
}
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Verify the profile.md schema referenced by other scripts/docs and confirm
# whether a git_author field already exists or only gh_handle.
fd -t f 'profile.md' .archon
rg -nP '^(gh_handle|git_author|git_name|git_email)\s*:' --type=md
rg -nP -C2 'gh_handle' .archon
```

Repository: popemkt/Archon
Length of output: 3502
🏁 Script executed:

```bash
#!/bin/bash
# 1. Check if there are actual profile.md files (not .example) in the repo
echo "=== Looking for actual profile.md files ==="
find .archon -name "profile.md" -not -name "*.example"

# 2. Check for tests related to maintainer-standup-gh-data.ts
echo -e "\n=== Looking for tests for gh-data script ==="
fd -t f "test\|spec" | grep -i "maintainer\|gh-data" | head -20

# 3. Check how my_recent_commits is used downstream
echo -e "\n=== Usage of my_recent_commits ==="
rg "my_recent_commits" --type ts --type js -C2

# 4. Check if there's any documentation or comments about git author vs gh_handle
echo -e "\n=== Comments about author/handle mapping ==="
rg -i "git.*author|author.*name|author.*email" .archon --type ts --type md | head -20
```

Repository: popemkt/Archon
Length of output: 326
🌐 Web query:
git log --author flag behavior documentation
💡 Result:
The --author flag in git log limits output to commits whose author header lines match the given regular expression pattern. Multiple --author= options select commits matching any pattern (logical OR). By default, it matches against canonical author names after .mailmap normalization; use --no-use-mailmap to match raw commit data instead. The pattern is a regex, not a substring search (e.g., --author=John matches commits with author "John Doe", but --author=d* matches all due to zero or more 'd's).
Citations:
- 1: https://git-scm.com/docs/git-log.html
- 2: https://git-scm.com/docs/git-log
- 3: https://git-scm.com/docs/git-log/2.37.0.html
- 4: https://www.kernel.org/pub/software/scm/git/docs/git-log.html
- 5: https://git-scm.com/docs/git-log/2.42.1.html
- 6: https://blog.plover.com/prog/git-author-address.html
🏁 Script executed:

```bash
#!/bin/bash
# 1. Check the full output structure of the gh-data script
echo "=== Final JSON output structure (lines 182-200) ==="
sed -n '182,200p' .archon/scripts/maintainer-standup-gh-data.ts

# 2. Search for usage of my_recent_commits in synthesis or output processing
echo -e "\n=== Check for synthesis or output processing of gh-data ==="
rg "recent_commits\|my_recent_commits" .archon --type ts --type js --type md -C1

# 3. Look for any examples or documentation showing the field being used
echo -e "\n=== Documentation examples ==="
rg -A10 "recent_commits\|my_recent_commits\|recent unlabeled" .archon --type md
```

Repository: popemkt/Archon
Length of output: 622
`git log --author=<gh_handle>` won't match commits — git expects author name/email, not GitHub handle.
`--author` in `git log` matches the commit author name/email via regex, not the GitHub username. For most contributors, the GH handle (e.g. popemkt) is not a substring of their git author name (e.g. Pope MKT <pope@example.com>), so my_recent_commits will silently be empty even when the maintainer has shipped commits.
A few options:
- Read the maintainer's git author from `profile.md` frontmatter (`git_author:`, separate from `gh_handle:`), and pass that to `--author`.
- Drop `--author` here and have the synthesis step join commits to GH PRs by SHA via the `gh` data already collected.
- Fall back to `git config user.email`/`user.name` when no explicit `git_author` is set, and log a warning if neither resolves.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.archon/scripts/maintainer-standup-gh-data.ts around lines 167 - 180, The
current git log call in the myRecentCommits block uses --author with the GitHub
handle (ghHandle), but git expects commit author name/email so this often yields
no results; fix by resolving a git author identifier (prefer profile frontmatter
git_author if present, otherwise fallback to git config user.name or user.email)
and pass that value to execFileSync's git log --author argument (use lastRunAt
as before), or alternatively remove the --author filter here and perform SHA→PR
joining later; update the myRecentCommits code path (where ghHandle, lastRunAt
and execFileSync are used) to use the resolved git_author fallback chain and log
a warning if no author info is found.
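The fallback chain suggested above can be sketched in TypeScript (`resolveGitAuthor` and `profileGitAuthor` are illustrative names, not the script's actual API):

```typescript
import { execFileSync } from 'node:child_process';

// Resolve a git author identifier: prefer an explicit profile value,
// else fall back to the local git identity.
function resolveGitAuthor(profileGitAuthor?: string): string | undefined {
  if (profileGitAuthor) return profileGitAuthor;
  for (const key of ['user.email', 'user.name']) {
    try {
      const value = execFileSync('git', ['config', key], {
        stdio: ['ignore', 'pipe', 'ignore'],
      })
        .toString()
        .trim();
      if (value) return value;
    } catch {
      // key unset — try the next candidate
    }
  }
  return undefined; // caller logs a warning and skips the --author filter
}
```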
```yaml
- id: approve-unclear
  approval:
    message: |
      Gate could not classify this PR confidently. Read the raw gate output
      and any artifacts in $ARTIFACTS_DIR/, then decide manually.

      Approve (with optional comment) = workflow ends here (no comment posted,
      no review run). Your comment is captured as $approve-unclear.output and
      the report node will include it.
      Reject (with reason) = workflow is cancelled; reasoning is recorded in
      the run.
    capture_response: true
  depends_on: [gate]
  when: "$gate.output.verdict == 'unclear'"
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Find the approval-node executor / DAG runner to confirm reject-without-on_reject semantics.
rg -nP --type=ts -C5 '\bapproval\b.*\b(on_reject|reject|max_attempts)\b' -g '!**/test/**'
rg -nP --type=ts -C3 'capture_response|approval(Node|Executor|Result)' -g '!**/test/**'
```

Repository: popemkt/Archon
Length of output: 50373
🏁 Script executed:

```bash
# Find the complete executeApprovalNode function, especially what happens when rejection occurs without on_reject
rg -A 50 'if \(rejectionReason !== .* && node\.approval\.on_reject\)' packages/workflows/src/dag-executor.ts | head -100
```

Repository: popemkt/Archon
Length of output: 2034

🏁 Script executed:

```bash
# Search for what happens after the approval node handles rejection — the return statement and full control flow
rg -A 100 'async function executeApprovalNode' packages/workflows/src/dag-executor.ts | tail -80
```

Repository: popemkt/Archon
Length of output: 2953

🏁 Script executed:

```bash
# Look for test cases of approval node rejection WITHOUT on_reject to see expected behavior
rg -B 5 -A 15 'approval.*reject.*on_reject' packages/workflows/src/dag-executor.test.ts
```

Repository: popemkt/Archon
Length of output: 40

🏁 Script executed:

```bash
# Get the full executeApprovalNode function to see what happens when rejectionReason is set but on_reject is undefined
sed -n '2174,2360p' packages/workflows/src/dag-executor.ts
```

Repository: popemkt/Archon
Length of output: 6062

🏁 Script executed:

```bash
# Also check if there's any special handling for approval rejection in the DAG runner that treats rejection as failure
rg -B 5 -A 10 'rejection|pauseWorkflowRun' packages/workflows/src/dag-executor.ts | grep -A 10 'state.*completed\|state.*failed' | head -50
```

Repository: popemkt/Archon
Length of output: 959

🏁 Script executed:

```bash
# Check if there's any rejection count or max-attempts logic outside of the on_reject handling
rg -B 3 -A 5 'rejection_count|rejection_reason' packages/workflows/src/dag-executor.ts | grep -A 5 -B 5 'max_attempts\|cancel'
```

Repository: popemkt/Archon
Length of output: 586

🏁 Script executed:

```bash
# Search pauseWorkflowRun implementation to see if it has any auto-cancel logic for repeated rejections
rg -A 30 'pauseWorkflowRun' packages/workflows/src/stores/ | head -80
```

Repository: popemkt/Archon
Length of output: 129

🏁 Script executed:

```bash
# Find where store files are located
find . -type f -name '*.ts' -path '*/store*' | grep -v node_modules | grep -v test | head -10
```

Repository: popemkt/Archon
Length of output: 217

🏁 Script executed:

```bash
# Search for pauseWorkflowRun implementation across the codebase
rg -l 'pauseWorkflowRun' --type=ts | grep -v test | grep -v node_modules
```

Repository: popemkt/Archon
Length of output: 208

🏁 Script executed:

```bash
# Check pauseWorkflowRun in the main store implementation
rg -A 30 'pauseWorkflowRun' packages/workflows/src/store.ts | head -80
```

Repository: popemkt/Archon
Length of output: 1289

🏁 Script executed:

```bash
# Also check the store-adapter to see if there's rejection handling there
rg -A 30 'pauseWorkflowRun|rejection' packages/core/src/workflows/store-adapter.ts | head -80
```

Repository: popemkt/Archon
Length of output: 1181

🏁 Script executed:

```bash
# Find the actual pauseWorkflowRun implementation in the database layer
rg -A 40 'pauseWorkflowRun' packages/core/src/db/workflows.ts
```

Repository: popemkt/Archon
Length of output: 1628

🏁 Script executed:

```bash
# Also search for any rejection logic that might happen outside the approval node execution
rg -B 5 -A 10 'rejection_count|rejection_reason' packages/core/src/db/workflows.ts
```

Repository: popemkt/Archon
Length of output: 40
User-facing message claims cancellation without `on_reject` configured — but code only cancels when `on_reject` is present.
The message states "Reject = workflow is cancelled," but when `on_reject` is undefined, rejection triggers a re-pause at the approval gate, not cancellation. The condition on line 2208 of dag-executor.ts is `if (rejectionReason !== '' && node.approval.on_reject)` — rejection handling only activates when both are true. Without `on_reject`, a rejected approval re-enters the standard pause logic indefinitely (until the max approval timeout or the user approves), leaving the report node blocked.
Either configure `on_reject` with a rejection handler, or revise the message to reflect actual behavior: rejection pauses for human re-review, not immediate cancellation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.archon/workflows/maintainer/maintainer-review-pr.yaml around lines 309 -
322, The approval block "approve-unclear" claims "Reject = workflow is
cancelled" but the runtime (see dag-executor.ts check using
node.approval.on_reject and the rejectionReason logic) only cancels when an
on_reject handler is configured; otherwise a reject returns the flow to the
approval pause. Fix by either adding an on_reject handler to the approve-unclear
approval (so rejection actually cancels) or update the approval.message text to
accurately state that "Reject" will re-pause the workflow for manual re-review
(not immediately cancel) and include guidance to configure on_reject if
cancellation is desired; reference the approval block id approve-unclear and the
node.approval.on_reject behavior in dag-executor.ts when making the change.
```yaml
# 1. Connectivity — does Pi resolve the model and stream a response?
- id: hello
  prompt: 'What is 2+2? Answer with just the number, nothing else.'
  allowed_tools: []
```
🛠️ Refactor suggestion | 🟠 Major
`allowed_tools` is Claude-only — remove on Pi nodes.
The workflow runs against `provider: pi`, but `allowed_tools: []` is set on each prompt node. Per coding guidelines, `allowed_tools`/`denied_tools` are Claude-only restrictions, so on Pi this is either silently ignored (misleading intent) or rejected at workflow load time. The prompts already enforce "no tools" textually (e.g., line 38), so the YAML key is redundant.
♻️ Proposed fix

```diff
 - id: hello
   prompt: 'What is 2+2? Answer with just the number, nothing else.'
-  allowed_tools: []
   effort: low
   idle_timeout: 60000
@@
 - id: identify
   prompt: 'Without using any tools, on a single short line, tell me which model and provider you are.'
-  allowed_tools: []
   idle_timeout: 60000
   depends_on: [hello]
@@
   - "ok": always true (boolean)
-  allowed_tools: []
   idle_timeout: 60000
   depends_on: [hello]
```

As per coding guidelines: "allowed_tools/denied_tools per-node restrictions (Claude only)".
Also applies to: 39-39, 51-51
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.archon/workflows/test-workflows/e2e-minimax-smoke.yaml at line 28, Remove
the Claude-only per-node restriction keys from this Pi workflow: delete the
allowed_tools entries (and any denied_tools if present) on the prompt nodes
referenced (the entries currently set near the prompt nodes that also state "no
tools" textually), since provider: pi ignores these keys; leave the textual "no
tools" instructions in the prompt content intact and ensure no other Claude-only
keys remain in the same prompt nodes.
Extends the smoke from 4 to 7 live capabilities, plus a cleanup pass.
New nodes:
- setup_mcp_fixture / setup_skills_fixture (bash) — stage the
fixtures at run time so they don't pollute the repo when idle.
- mcp_demo (AI, mcp: ./.archon/test-fixtures/copilot-mcp.json) —
spawns the canonical @modelcontextprotocol/server-everything stdio
MCP server and asks the model to call its add(2,3) tool.
- skills_demo (AI, skills: [copilot-smoke]) — uses a staged SKILL.md
with a fixed marker.
- agents_demo (AI, inline agents: { smoke-helper: ... }) — asks the
model to invoke a custom sub-agent via the Task tool and surface
its marker.
- cleanup (bash, trigger_rule: all_done) — removes the fixtures
regardless of pass/fail.
Assert hardening:
- Drops the redundant outer single quotes around $nodeId.output
references. The DAG executor already shell-escapes for bash nodes
(coleam00#591 + dag-executor.ts:282 substituteNodeOutputRefs with
escapedForBash=true). Wrapping the engine's pre-quoted value in a
second pair of quotes broke parsing on outputs containing
apostrophes / parens (the `it's the "copilot-smoke" skill (listed
in available skills)` case).
- Routes the human-readable results table to stderr so it surfaces in
the terminal — successful bash node stdout is captured silently as
the node's output for downstream substitution.
- Skills check is soft: assert non-empty + log a WARN if the model
paraphrases. The deterministic proof is the Pino
copilot.skills_resolved event (resolved:1, missing:[]). gpt-5-mini
is known to paraphrase rather than emit exact SKILL.md markers, so
failing the workflow on model-adherence flake would be a false
negative for the wiring itself.
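The double-quoting failure mode described in the first hardening bullet can be reproduced in isolation (illustrative values; this is not the executor's substitution code):

```shell
# Suppose the engine has already shell-escaped a node output containing an
# apostrophe, and substituted the pre-quoted token into the script body:
escaped="'it'\''s (parens)'"

# Correct: use the token as-is; it expands back to the raw output.
eval "echo $escaped"

# Broken (the old assert): wrapping the pre-quoted token in a second pair of
# single quotes re-opens quoting at the apostrophe and mangles the value:
#   eval "echo '$escaped'"
```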
Live evidence on Linux + Copilot CLI 1.0.36 + gpt-5-mini:
- copilot.agents_registered { count: 1, names: ["smoke-helper"] }
- copilot.mcp_loaded { serverNames: ["everything"], missingVars: [] }
- copilot.skills_resolved { resolved: 1, missing: [] }
- mcp_demo returned `5` (model invoked the MCP add tool).
- agents_demo returned `COPILOT_AGENT_MARKER_OK`.
- skills_demo returned skill description chitchat — soft warn, expected.
- Workflow exit: PASS, ~110s end-to-end.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…s/experimental/

Move two repo-scoped workflows that were sitting untracked at the workflow root into a dedicated subfolder. Subfolder grouping is supported by the loader (1 level deep, resolution by filename), so workflow names are unchanged and the /release skill still resolves archon-release correctly.

Files moved:
- archon-fix-github-issue-experimental.yaml — Path-A variant of the issue-fix workflow used today to land coleam00#1434, coleam00#1435, coleam00#1438.
- archon-release.yaml — the live release workflow used by the /release skill end-to-end (validate -> binary smoke -> version bump -> changelog -> approval -> commit -> PR -> tag -> Homebrew formula update).
Skill assertion was previously soft (warn-not-fail) because gpt-5-mini
paraphrased the SKILL.md body instead of emitting the marker verbatim.
Agent assertion regressed in the prior strengthening pass — model
confabulated the skill marker rather than invoking the registered agent.
Two changes pin both checks deterministically:
1. Stronger SKILL.md and prompt — the description now carries an
explicit "Use when..." trigger so the metadata scan picks the right
invocation; the body has the directive in three places (frontmatter,
## Output, ## Behavior) so the lazy-loaded body is unambiguous; the
skills_demo prompt explicitly tells the model to "invoke the
copilot-smoke skill" so the SDK loads the body (Copilot lazy-loads
skill bodies on invocation, like Claude's progressive disclosure).
2. Unguessable random tokens. Skill emits SK_a8f3kL2qZTOK; agent
emits AG_n5k7HpT3wAGOK. The previous COPILOT_SKILL_MARKER_OK /
COPILOT_AGENT_MARKER_OK pattern was guessable from training data —
gpt-5-mini confabulated the skill marker even when only the agent
was registered (no skills loaded for that node). Random tokens
make confabulation vanishingly unlikely: a token's presence
in the response is strong evidence the SDK actually loaded that
capability's content into the model's context.
Also: agent renamed `smoke-helper` → `task-responder` so its name no
longer pattern-matches as a skill, and the kebab-case validator (added
in the loader) accepts it. agents_demo prompt now mandates Task-tool
invocation explicitly.
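If the fixtures ever move from hard-coded to generated markers, minting one at staging time is a one-liner (a sketch — the committed smoke hard-codes its tokens instead):

```shell
# Mint a short unguessable marker for a staged SKILL.md (illustrative —
# the committed smoke hard-codes SK_a8f3kL2qZTOK rather than generating it).
marker="SK_$(openssl rand -hex 6)"
echo "$marker"
```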
Live evidence on Linux + Copilot CLI 1.0.36 + gpt-5-mini, fresh DB:
── results ──
hello = PONG
reasoning = 391
restricted = DENIED_OK
json.model = gpt-5-mini, json.ok = true
mcp_demo = 5
skills_demo = SK_a8f3kL2qZTOK
agents_demo = ...AG_n5k7HpT3wAGOKAG_n5k7HpT3wAGOK
PASS: all seven capabilities exercised end-to-end
Pino confirms wiring (visible in run output):
copilot.skills_resolved { resolved: 1, missing: [] }
copilot.mcp_loaded { serverNames: ["everything"], missingVars: [] }
copilot.agents_registered { count: 1, names: ["task-responder"] }
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…s in smoke
Doc clarification:
Unlike Claude (OAuth subscription OR ANTHROPIC_API_KEY) and Codex (OAuth OR
OPENAI_API_KEY), GitHub Copilot has only ONE auth model: GitHub OAuth bound
to a Copilot subscription. The four "options" previously listed as a flat
bullet list were misleading — they're different *delivery paths for the
same OAuth token*, not separate auth schemes.
The doc now leads with that fact and presents the four paths as a When-to-
Use table, with an explicit warning that GitHub Actions' workflow-scoped
${{ github.token }} does NOT carry Copilot scope.
Smoke quoting fix:
e2e-copilot-smoke.yaml had the same redundant-outer-single-quote pattern
we already fixed in e2e-copilot-all-features (coleam00#591 + dag-executor's
substituteNodeOutputRefs already shell-quotes for bash). Bumped
idle_timeout from 30s → 60s — gpt-5-mini occasionally pauses past the 30s
default on a cold session, leading to the smoke failing with empty output.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
|
Thanks @Wirasm — got everything you asked for plus a bit more. Latest commit is Models tested (live, Linux + Copilot CLI 1.0.36)
The PR description also lists three earlier scenarios on Windows + Copilot CLI 1.0.31 with Configs usedAuth. A clarification I'd like to fold into your mental model: GitHub Copilot has only one auth model (GitHub OAuth, billed via your Copilot subscription). The four "options" in the docs ( MCP / skills / agents. Staged at runtime by the smoke workflow itself (no fixtures committed to the repo):
The unguessable tokens are deliberate — they make confabulation impossible. A model that didn't actually receive the SKILL.md body or the agent's A ResultsCapability matrix (7 live, 9 unit-tested, 4 deliberately deferred)
Captured run output (
|
|
Thanks @Wirasm — quick run-down on your four asks. Latest commit: Models tested: Configs: Results: one DAG covers all 7 wired capabilities — chat, Run output —
|
|
Thanks @Wirasm — quick run-down on your four asks. Models tested: Configs: Results: one DAG covers all 7 wired capabilities — chat, Smoke files: Full run output —
|
…des (coleam00#1387)

executeBashNode previously only merged explicit envVars on top of process.env. The three well-known workflow directories (artifactsDir, logDir, baseBranch) were passed as function parameters and used for compile-time substitution of $ARTIFACTS_DIR / $LOG_DIR / $BASE_BRANCH in the script body, but were never added to the subprocess environment.

As a result, any script that relied on shell-runtime expansion — e.g. JSON_FILE="${ARTIFACTS_DIR}/foo.output.json" inside a heredoc, an inherited helper script, or a `bash -c` subshell — saw the variable unset and silently fell back to its default (typically an empty string or "."), writing artifacts to the workflow cwd instead of the nominal artifacts directory.

Always build subprocessEnv from process.env plus the three well-known directories, then allow explicit envVars to override. Compile-time substitution behavior is unchanged; existing scripts that do not reference these variables are unaffected; user-supplied envVars still win on conflict.
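The merge order described above can be sketched as follows (names mirror the commit message, not necessarily the actual dag-executor code):

```typescript
// Build the subprocess environment: process.env first, then the three
// well-known workflow directories, then explicit envVars last so
// user-supplied values win on conflict.
function buildSubprocessEnv(
  artifactsDir: string,
  logDir: string,
  baseBranch: string,
  envVars: Record<string, string> = {},
): Record<string, string | undefined> {
  return {
    ...process.env,
    ARTIFACTS_DIR: artifactsDir,
    LOG_DIR: logDir,
    BASE_BRANCH: baseBranch,
    ...envVars, // explicit envVars override everything above
  };
}
```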
…oleam00#1426)

* fix(workflow): substitute \$nodeId.output refs in approval messages

Approval node messages were emitted as raw strings, bypassing the substituteNodeOutputRefs() pass that prompt/bash/loop/cancel nodes all run. This made interactive workflows like atlas-onboard show literal "\$gather-context.output.repo_name" placeholders to humans at HITL gates, leaving them unable to know what they were approving.

Fix: rendered the approval.message through substituteNodeOutputRefs once at the top of the standard approval gate path, then used the resolved string in all 4 emission sites (safeSendMessage, createWorkflowEvent, pauseWorkflowRun, event-emitter).

Test: new dag-executor.test case wires a structured-output upstream node into an approval node and asserts pauseWorkflowRun receives the substituted message ("Repo: hcr-els | App: CCELS | Port: 3012") rather than the literal placeholders.

Repro: any workflow with an approval node whose message references \$nodeId.output[.field]. Observed in the wild on atlas-onboard's confirm-context HITL gate.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(workflow): extend approval-substitution test to cover all 4 emission sites

Per CodeRabbit review: the original test only verified pauseWorkflowRun received the substituted message, but the fix touches 4 emission sites. A future regression at safeSendMessage / createWorkflowEvent / event-emitter would silently leave the test passing while users still saw raw $node.output placeholders.

Adds two additional assertions:
- platform.sendMessage prompt contains the substituted message and does NOT contain literal $gather-context.output placeholders
- The persisted approval_requested workflow event's data.message is substituted

Event-emitter assertion deferred (no existing pattern for spying on the global emitter in this test file). Covering two of the three secondary surfaces closes the practical regression risk — both are user-visible (chat prompt + audit-log event); the emitter is internal only.

Test count: 7 pass / 22 expect() (was 18). Full suite 193 pass / 353 expect() — no regressions.

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…m00#1286) (coleam00#1367)

* feat(workflows): expose $LOOP_PREV_OUTPUT in loop node prompts (coleam00#1286)

Adds a new substitution variable that carries the previous loop iteration's cleaned output into the next iteration's prompt. Empty on iteration 1; the prior iteration's output (after stripCompletionTags) on iteration 2+.

Why: fresh_context: true loops have no way to reference what the previous pass produced or why it failed without dragging the full session forward. $LOOP_PREV_OUTPUT closes that gap with zero session-cost — same trust boundary as $nodeId.output, no new external surface.

Changes:
- packages/workflows/src/executor-shared.ts: substituteWorkflowVariables accepts a 10th positional loopPrevOutput arg and substitutes $LOOP_PREV_OUTPUT (defaults to '').
- packages/workflows/src/dag-executor.ts: executeLoopNode passes lastIterationOutput on iteration 2+ (and explicit '' on iteration 1 / the first iteration of an interactive resume, since lastIterationOutput is a per-call variable that does not survive resume metadata).
- Unit tests: 3 new cases in executor-shared.test.ts.
- Integration tests: 2 new cases in dag-executor.test.ts verifying the prompt sent to the AI on iter 1 vs iter 2, and that the value reflects cleaned output (no <promise> tags).
- Docs: variables.md, loop-nodes.md (new "Retry-on-failure" pattern), CLAUDE.md variable reference.

Backward compatibility: prompts that don't reference $LOOP_PREV_OUTPUT are unaffected. All 843 workflow tests + type-check + lint + format:check + bun run validate pass locally.

* docs: address coderabbit review on variables/loop-nodes
- variables.md: include $LOOP_PREV_OUTPUT in substitution-order list and availability table to match the new variable row at line 30
- loop-nodes.md: document the interactive-resume exception where the first iteration after an approval-gate resume still receives an empty $LOOP_PREV_OUTPUT regardless of iteration number (per dag-executor.ts L1781-1783 where i === startIteration always clears prev output)

* docs(changelog): add Unreleased entry for $LOOP_PREV_OUTPUT (coleam00#1367 review)

* test(loop): add resume-from-approval integration test for $LOOP_PREV_OUTPUT (coleam00#1367 review)

Per maintainer-review-pr suggestion (Wirasm): two-call integration test covering the resume-from-approval scenario.
- Call 1: fresh interactive loop pauses at the gate after iteration 1 and asserts $LOOP_PREV_OUTPUT substitutes to empty on iter 1 (no prior output), plus the gate pause is recorded.
- Call 2: resumed run with metadata.approval populated. The first resumed iteration must substitute $LOOP_PREV_OUTPUT to '', NOT to the paused run's iter-1 output (which lived in a different process and is not persisted). $LOOP_USER_INPUT still flows through as normal.

Locks the documented invariant at dag-executor.ts:1769-1772.

---------

Co-authored-by: voidborne-d <DottyEstradalco@allergist.com>
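The retry-on-failure pattern that the loop-nodes.md entry describes might look like this (a sketch — fields other than `fresh_context` and the `$LOOP_PREV_OUTPUT` variable are assumptions about the loop-node schema):

```yaml
- id: fix-tests
  loop:
    fresh_context: true
    max_iterations: 3        # assumed field name
  prompt: |
    Run the test suite and fix any remaining failures.
    Output from the previous attempt (empty on iteration 1):
    $LOOP_PREV_OUTPUT
```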
…oleam00#1457)

The brief was missing a key signal — when contributors reply on PRs or issues, the maintainer wouldn't see it explicitly. Empirically, reviewed-PR replies were buried under aggregate updatedAt timestamps with no indication of WHO replied or WHAT they said.

This adds a new "Replies waiting on you" section to the daily brief, sourced from two paginated GitHub API calls scoped by since=last_run_at:
- /repos/{o}/{r}/issues/comments — PR + issue conversation comments
- /repos/{o}/{r}/pulls/comments — inline code-review comments

Filters applied:
- Skip the maintainer's own comments (gh_handle from profile.md)
- Skip GitHub bot accounts (login ending in [bot]) — coderabbitai, chatgpt-codex-connector, dependabot, etc. They post a constant churn of automated review tooling that drowns out human replies; the maintainer wants the latter.

Output is grouped by PR/issue number with kind classification:
- issue — comment on a non-PR issue
- pr_conversation — PR conversation-level comment
- pr_review — inline code-review comment (most actionable — usually needs a code-level response, so kind upgrades to pr_review whenever review comments arrive on a PR that also has conversation ones)

Sorted by recency (newest reply first). Synthesizer reads gh-data.output.replies_since_last_run and renders a section.

Verified on a backdated state.json (last_run_at = yesterday morning): 22 human replies on 22 PRs/issues, bot noise filtered (32 → 22 after the [bot] filter). Surfaces exactly the contributor responses to yesterday's review comments and direction questions.
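The two filters and the kind-upgrade rule can be sketched as follows. Field names mirror the GitHub REST comment payload shape (`user.login`), but the helpers themselves are illustrative, not the workflow's actual code.

```typescript
// Illustrative sketch of the reply filters described above.
interface GhComment { user: { login: string }; body: string }

function isSurfacedReply(c: GhComment, maintainerHandle: string): boolean {
  if (c.user.login === maintainerHandle) return false; // skip the maintainer's own comments
  if (c.user.login.endsWith('[bot]')) return false;    // skip bot churn (coderabbitai[bot], etc.)
  return true;
}

// A PR with both conversation and inline review comments is classified
// pr_review, the most actionable kind; classification never downgrades.
type Kind = 'issue' | 'pr_conversation' | 'pr_review';
function upgradeKind(current: Kind, incoming: Kind): Kind {
  return current === 'pr_review' || incoming === 'pr_review' ? 'pr_review' : current;
}

const comments: GhComment[] = [
  { user: { login: 'coderabbitai[bot]' }, body: 'automated review' },
  { user: { login: 'contributor-a' }, body: 'pushed a fix' },
  { user: { login: 'the-maintainer' }, body: 'my own note' },
];
const kept = comments.filter((c) => isSurfacedReply(c, 'the-maintainer'));
console.log(kept.map((c) => c.user.login)); // only the human contributor survives
```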
The maintainer-standup brief had no signal for "I already triaged that
PR via maintainer-review-pr 2 days ago" — it just kept listing reviewed
PRs in P1-P4 with no acknowledgement of prior work. Result: maintainer
ends up re-skimming the same PR several mornings in a row.
This adds a shared persistent state file at:
.archon/maintainer-standup/reviewed-prs.json (gitignored, per-maintainer)
shape:
{
"1338": {
"reviewed_at": "2026-04-27T16:34:57Z",
"gate_verdict": "review", // review | decline | needs_split | unclear
"run_id": "..."
},
...
}
Three pieces:
1. WRITER — new `record-review` script node in maintainer-review-pr.yaml,
runs after whichever branch fired (post-review / post-decline /
approve-unclear) with trigger_rule: one_success. Inline bun script;
reads $gate.output.verdict, $ARTIFACTS_DIR/.pr-number, and
$WORKFLOW_ID; appends/upserts the entry. report node now depends on
record-review so the state write happens before the run completes.
2. READER — read-context.ts loads reviewed-prs.json into a new
reviewed_prs field on the standup gather output. Same pattern as
prior_state and recent_briefs.
3. SURFACE — maintainer-standup command file gets a Phase 2h rule:
when listing PRs in P1-P4 / Polite-decline sections, append:
- "✓ reviewed Nd ago" for review-branch entries
- "✓ declined Nd ago" for decline / needs_split branches
- "✓ triaged Nd ago (unclear)" for unclear branch
and a STALENESS marker — compare reviewed_at to PR's updatedAt; if
contributor pushed since the prior review, append
"⚠ contributor pushed since" so the maintainer knows the prior pass
may need to be re-run.
Plus a one-shot backfill script:
.archon/scripts/maintainer-standup-backfill-reviews.ts
Scans the maintainer's gh comments in the last 7 days, pattern-matches
"## Review Summary" / direction-clause-citation / split-up wording, and
populates reviewed-prs.json. Idempotent; existing entries (from real
workflow runs) take precedence over backfilled ones (the writer-node
record is more authoritative than a body-pattern guess). Uses 64MB
maxBuffer on the gh exec because --paginate over 7 days of an active
repo's comments easily exceeds Node's default 1MB.
Backfill verified: 363 comments scanned, 18 matched, 17 unique PRs
populated — exactly the 17 PRs we reviewed via the workflow yesterday.
The new state file is gitignored alongside the existing per-maintainer
files (profile.md, state.json, briefs/).
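The writer node's upsert over reviewed-prs.json can be sketched against the state shape shown above. The real writer is an inline bun script in maintainer-review-pr.yaml that reads $gate.output.verdict and $WORKFLOW_ID; this in-memory version is a hypothetical stand-in.

```typescript
// Hypothetical sketch of the record-review upsert over reviewed-prs.json,
// matching the state shape documented in the commit message above.
type Verdict = 'review' | 'decline' | 'needs_split' | 'unclear';
interface ReviewEntry { reviewed_at: string; gate_verdict: Verdict; run_id: string }
type ReviewedPrs = Record<string, ReviewEntry>;

function upsertReview(state: ReviewedPrs, prNumber: string, verdict: Verdict, runId: string): ReviewedPrs {
  // upsert: a later run for the same PR replaces the prior entry, which is
  // why workflow-written records beat backfilled body-pattern guesses
  return {
    ...state,
    [prNumber]: { reviewed_at: new Date().toISOString(), gate_verdict: verdict, run_id: runId },
  };
}

let state: ReviewedPrs = {};
state = upsertReview(state, '1338', 'review', 'run-abc');
state = upsertReview(state, '1338', 'decline', 'run-def'); // re-review overwrites
console.log(state['1338'].gate_verdict); // 'decline'
```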
…oleam00#1460) Both SDKs were ~30 patch releases behind. Validation suite passes (type-check, lint, format, tests across all 10 packages) without code changes. The only sustained Claude SDK behavior change in the range — v0.2.111's options.env overlay/replace flap, since reverted to overlay — is a no-op for Archon, which already passes { ...process.env } as the SDK env.
…oleam00#1461)

* fix(claude): stop passing --no-env-file to native binary in dev mode

The Claude Agent SDK switched from shipping `cli.js` inside the package to per-platform native binaries via optional deps somewhere in the 0.2.x series. As of 0.2.121 there is no `cli.js` in the SDK package; dev mode resolves to `@anthropic-ai/claude-agent-sdk-darwin-arm64/claude` (Mach-O). That native binary rejects `--no-env-file` with `error: unknown option '--no-env-file'` and the subprocess exits 1.

`shouldPassNoEnvFile` was returning true on `cliPath === undefined` on the assumption that "dev mode = JS executable run via Bun". That assumption is dead. Tighten the predicate to only return true on an explicit `.js` suffix, so we only emit the flag when the SDK is going to spawn a Bun-runnable script.

CWD `.env` leak protection is unaffected. `stripCwdEnv()` in `@archon/paths` (coleam00#1067) deletes Bun-auto-loaded `.env`/`.env.local`/`.env.development`/`.env.production` keys from `process.env` at every Archon entry point before any subprocess is spawned. The native Claude binary does not auto-load `.env` from its cwd either. `--no-env-file` was belt-and-suspenders for the JS-via-Bun case only.

Verified end-to-end with a sentinel: added a unique `ARCHON_LEAK_SENTINEL_$$` to Archon's `.env`, ran e2e-claude-smoke with a bash probe checking the subprocess env. stderr shows `[archon] stripped 23 keys from /Users/rasmus/Projects/cole/Archon (.env, .env.local)` — sentinel was deleted. Bash node prints `PASS: simple='4', no sentinel leak`. Workflow completes cleanly, no `--no-env-file` rejection from the SDK binary. bun run validate: green across all 10 packages.

* fix(claude): address review on coleam00#1461 (stale docs + test gaps)

Critical: file-level JSDoc at provider.ts:18 still claimed dev mode resolves cli.js. Updated to reflect SDK 0.2.x's switch to per-platform native binaries.

Important: security.md still listed --no-env-file as item 2 of target-repo .env isolation. Scoped that bullet to legacy Bun-runnable JS entry points and called out that native binaries don't auto-load .env from cwd. Added an Unreleased Fixed entry to CHANGELOG.md. Updated binary-resolver.ts JSDoc title that referenced cli.js.

Polish: widened the predicate to accept .mjs and .cjs (also Bun-runnable JS — matches the SDK's own internal extension list). Dropped the redundant `passesNoEnvFile` log field that mirrored `isJsExecutable`. Added unit cases for .mjs/.cjs (now true) and .ts/.tsx/.jsx (deliberately false — never SDK entry points). Added an integration test that mocks resolveClaudeBinaryPath to return a .js path and asserts executableArgs: ['--no-env-file'] flows through buildBaseClaudeOptions all the way to the SDK call — catches future regressions in the conditional spread.

bun run validate: green across all 10 packages.
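The tightened predicate, including the polish-phase .mjs/.cjs widening, amounts to a small extension check. A minimal sketch, assuming the same name as the commit describes; the real function lives in the Claude provider and may differ in detail:

```typescript
// Sketch of the tightened shouldPassNoEnvFile predicate: only explicit
// Bun-runnable JS extensions (.js/.mjs/.cjs) get --no-env-file. undefined
// (dev mode now resolves a native binary) and .ts/.tsx/.jsx do not.
function shouldPassNoEnvFile(cliPath: string | undefined): boolean {
  if (cliPath === undefined) return false; // dev mode is no longer "JS via Bun"
  return /\.(js|mjs|cjs)$/.test(cliPath);
}

console.log(shouldPassNoEnvFile(undefined));      // false
console.log(shouldPassNoEnvFile('/sdk/cli.js'));  // true
console.log(shouldPassNoEnvFile('/sdk/claude'));  // false: native Mach-O/ELF binary
```

The `undefined` branch is the actual bug fix; previously it returned true and the flag reached a native binary that rejects it.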
# Conflicts:
#	bun.lock
* refactor(workflows): trust the SDK for model validation

Drops cross-provider model inference and hard-coded model allow-lists. The string a workflow author writes in `model:` is forwarded to the SDK unchanged; the SDK and its API decide whether the model exists. Provider identity is the only thing Archon validates at load time — typos like `provider: claud` are caught early; everything else fails at runtime through the SDK's normal error path.

Why this matters: a recent run on Sasha showed `provider: claude` + `model: opus[1m]` getting silently routed to Codex (because Codex's isModelCompatible was defined as the complement of Claude's, so anything not literally `sonnet|opus|haiku` matched). Codex then rejected the model as a `⚠️` system warning and the node "completed" in 2.1 seconds with empty output, after which the workflow opened a hallucinated PR. Three stacked bugs and two amplifiers; this commit removes all five.

Changes:
- Delete model-validation.ts entirely (inferProviderFromModel and isModelCompatible are gone). Drop the matching field from ProviderRegistration and from the claude/codex/pi entries.
- Replace the resolver in executor.ts and dag-executor.ts (both the per-node and per-loop paths) with a flat `node.provider ?? workflow.provider ?? config.assistant`. Model never influences provider selection; load-time validation is just isRegisteredProvider on the resolved provider id.
- Remove the dag-node Zod superRefine that recomputed model-compat — load-time provider validation moved to loader.ts.
- Codex provider: stream loop now matches Claude's contract. error events that aren't followed by turn.completed yield `result.isError: true` (subtype `codex_stream_incomplete`) so the dag-executor's existing isError path catches them. turn.failed becomes `codex_turn_failed` with the same shape. Iterator close without a terminal event is itself a fail-stop. MCP-client errors remain filtered (Codex retries those internally).
- dag-executor: AI nodes that exit the streaming loop with empty assistant text and no structured output now fail with `dag.node_empty_output` instead of completing silently — the Sasha bug's final amplifier. Bash/script/approval nodes are unaffected.

Tests: model-validation.test.ts and isPiModelCompatible block deleted; codex provider tests rewritten to assert the new fail-stop contract; dag-executor empty-output test flipped to assert failure; new tests cover (a) loader rejecting unknown provider, (b) loader accepting any model string with a known provider, (c) executor passing provider+model through without re-routing, (d) executor throwing on unknown provider, (e) Codex synthesizing fail-stop on iterator close. Two cost-tracking tests adjusted to yield non-empty assistant text since their intent was cost accumulation, not empty-output handling.

bun run validate: green (check:bundled, type-check, lint --max-warnings 0, format:check, all packages' test suites — 0 fail).

End-to-end smoke (.archon/workflows/test-workflows/):
- e2e-deterministic: PASS (engine healthy)
- e2e-codex-smoke: PASS (Codex sendQuery + structured output work)
- e2e-claude-smoke: FAIL with `error: unknown option '--no-env-file'` — this is a regression from the SDK 0.2.121 bump (coleam00#1460), not from this redesign. The Claude provider source is unchanged on this branch. To be fixed separately.

* fix(workflows): address review on coleam00#1463

Critical:
- C1: empty-output guard now skips idle-timeout completions. The on-screen message says "completed via idle timeout"; flipping that to a failure contradicted the user-facing log. Added !nodeIdleTimedOut to the guard.
- C2: per-node provider identity is now validated at YAML load time. Loader iterates dagNodes after parsing and rejects any unknown provider id with "Node 'X': unknown provider 'Y'. Registered: ...". The dag-executor's runtime check stays as defense-in-depth.

Important:
- I1: CHANGELOG entry under [Unreleased] > Changed describing the resolver redesign + an explicit migration line for workflows that relied on cross-provider model inference.
- I2: restored the dropped mockLogger.error('turn_failed') assertion in the turn.failed-without-error-message test.
- I3: empty-output test now also asserts store.failWorkflowRun was called, matching the parallel error_max_budget_usd test pattern.
- I4: new test that proves a node yielding zero assistant text but a valid structuredOutput is treated as a successful completion (not caught by the empty-output guard).
- I5: rewrote the post-loop comment in codex/provider.ts to be precise about which dag-executor branch catches the synthesized result chunk (the throwing msg.isError branch, distinct from the empty-output guard's { state: 'failed' } return).
- I6: removed PR-era "redesign" / "Sasha workflow" references from three test-file comments.
- I7: docs sweep for the deleted isModelCompatible field — six files updated (CLAUDE.md, two docs guides, quick-reference, contributing guide, architecture reference).

Polish:
- S3: dropped the dead sawTerminal flag in streamCodexEvents — both terminal branches `return`, so reaching the post-loop block always means no terminal fired. Pure simplification.
- S4: dropped parsePiModelRef and PiModelRef from community/pi/index.ts exports. The parser is consumed only by Pi's provider.ts; making it package-internal narrows the public surface.
- S6: new Codex test for the bare-stream-close case (zero events, iterator just ends) — locks in the default fallback message used when no captured non-MCP error is available.
- S7: new dag-executor test for per-node unknown-provider at runtime. Bypasses the loader to exercise resolveNodeProviderAndModel's throw, asserts the node_failed event carries the "unknown provider 'claud'" detail (the workflow-level fail message is a generic summary).

bun run validate green across all 10 packages.
* fix(workflows): address CodeRabbit review on coleam00#1463

Two real issues from CodeRabbit's automated pass on db95e8a:

1. Empty-output fail-stop now applies to loop iterations too. The single-shot AI-node guard at executeNodeInternal only covered prompt/command nodes; executeLoopNode has its own streaming path, so a provider that closed cleanly with zero content could pause an interactive loop with a blank gate or burn the full max_iterations budget. Mirrors the contract of the single-shot guard: `fullOutput.trim() === '' && !iterationIdleTimedOut` fails the iteration with a `loop_iteration_failed` event carrying a clear error. Idle-timeout exits remain exempt for the same reason as single-shot nodes — the on-screen "completed via idle timeout" message would otherwise contradict the failure.

2. Unknown loop providers now throw instead of return-failed. The early-return path bypassed the layer dispatch's outer catch at line 2870, so loop nodes with an invalid per-node `provider:` field skipped the standard `node_failed` event, the user-facing message, and the pre-execution log entry. Throwing reuses the common failure path — same shape as resolveNodeProviderAndModel uses for non-loop nodes.

Both align with CLAUDE.md's "fail fast, explicit errors, never silently swallow" principle. The third CodeRabbit finding (boundary violation for `@archon/providers` import in loader.ts) is consistent with existing precedent — `dag-executor.ts`, `executor.ts`, and `validator.ts` already import from the same path; the runtime contract (every entrypoint bootstraps the registry before parseWorkflow runs) is already enforced in tests and documented at `loader.test.ts:31`.

bun run validate green across all 10 packages.
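The flat resolver at the center of this redesign can be sketched in a few lines. Names here (registeredProviders, resolveProvider) are illustrative stand-ins for isRegisteredProvider and the inlined resolution in executor.ts/dag-executor.ts:

```typescript
// Illustrative sketch of the flat provider resolver: model never influences
// provider selection, and the only load-time check is registry membership.
const registeredProviders = new Set(['claude', 'codex', 'pi']);

function resolveProvider(
  nodeProvider: string | undefined,
  workflowProvider: string | undefined,
  configAssistant: string,
): string {
  const provider = nodeProvider ?? workflowProvider ?? configAssistant;
  if (!registeredProviders.has(provider)) {
    // fail fast at load time: catches typos like `provider: claud`
    throw new Error(`unknown provider '${provider}'. Registered: ${[...registeredProviders].join(', ')}`);
  }
  return provider; // the model string is forwarded to the SDK unvalidated
}

console.log(resolveProvider(undefined, 'claude', 'codex')); // workflow-level beats config
```

Compare this with the deleted isModelCompatible scheme, where an unmatched model string could silently re-route a node to a different provider.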
… crash on missing source (coleam00#1394)

The 18 top-level `import … with { type: 'text' }` statements in `bundled-skill.ts` resolve at module load. For `bun build --compile` that's build time, so the binary embeds the strings and works regardless of any on-disk skill files. For `bun link` (linked-source) installs that's every `archon` invocation — including `archon --help`, which doesn't even use the skill content. If any of the 18 source files are missing or moved, the import fails and the CLI cannot start at all.

The skill content is data the binary deploys via `archon setup`, not data the CLI needs at runtime. There's only one consumer in production code: `copyArchonSkill()` in `setup.ts`. Moving the import into that function as a dynamic import preserves the compiled-binary behavior (Bun's bundler statically analyses literal-string `import()` and embeds the chunk — verified by grepping the SKILL.md frontmatter out of a freshly compiled binary) while making the linked-source install resilient: only `archon setup` triggers the bundled-skill module load now.

Verified: a known skill string appears in the compiled binary 1×, and `archon --help` no longer needs the source files to start. `copyArchonSkill()` becomes async because the dynamic import is a Promise. The single production call site is already in an async function and gets an `await`. The four `setup.test.ts` cases become async too.
…art (coleam00#1307)

* fix(docker): register safe.directory for all repos on bind-mount restart (coleam00#1279)

On macOS bind mounts (VirtioFS), host UIDs do not map to the container appuser (1001). Git 2.35.2+ rejects operations with "dubious ownership". The Dockerfile RUN-layer gitconfig is not inherited by bind mounts on restart, and worktree paths are unknown at build time.

Add a find loop in docker-entrypoint.sh that dynamically registers every .git directory under /.archon as a safe directory after the chown block. This is idempotent and handles worktrees at any depth.

* chore: add -prune to avoid scanning .git internals
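A minimal sketch of the entrypoint loop, runnable against throwaway directories. The paths and structure here are hypothetical; the real loop lives in docker-entrypoint.sh and targets /.archon. A temp HOME is used so `git config --global` writes to a disposable gitconfig:

```shell
# Sketch of the safe.directory registration loop (paths are illustrative).
export HOME="$(mktemp -d)"
root="$(mktemp -d)"
mkdir -p "$root/repo-a/.git" "$root/nested/repo-b/.git"

# -prune stops find from descending into .git internals once the
# directory itself has matched, so only one path per repo is emitted.
find "$root" -type d -name .git -prune | while read -r gitdir; do
  git config --global --add safe.directory "${gitdir%/.git}"
done

git config --global --get-all safe.directory
```

Re-running the loop adds duplicate entries to the gitconfig, but git treats the multi-valued key permissively, which is the sense in which the entrypoint loop is idempotent.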
…mat persist (coleam00#1480)

The original maintainer-standup workflow relies on Claude SDK-enforced output_format (one big JSON wrapping the brief markdown + state). It works on Claude but hangs in nested Claude Code sessions (coleam00#1067), and Pi/Minimax can't reliably emit a 30KB JSON wrapper around markdown content.

Changes:
- Drop output_format from synthesize node. Synthesis now emits brief markdown plain, followed by a delimited ARCHON_STATE_JSON_BEGIN/END block.
- Replace persist (script:) with bash: that pipes synth output into a new .archon/scripts/maintainer-standup-persist.ts. The script tries the delimiter format first and falls back to the legacy {brief_markdown, next_state} JSON wrapper Pi/Minimax tends to emit regardless of prompt instructions. Either format yields the same brief + state.json on disk.
- Add maintainer-standup-minimax.yaml — a 1:1 copy with provider: pi, model: minimax/MiniMax-M2.7 — for use when the Claude variant hangs or when the maintainer wants to spend Pi tokens instead.
- Update the maintainer-standup synthesizer prompt: top-of-file output format directive, explicit "do not wrap in JSON object" guidance, delimiter-based example.

Tested end-to-end on the Minimax variant against today's PR/issue payload; brief was extracted via the JSON-wrapper fallback path and written to .archon/maintainer-standup/briefs/2026-04-29.md.
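The two-format extraction the persist script performs can be sketched like this. The delimiter names come from the commit; the parsing code itself is a hypothetical stand-in for maintainer-standup-persist.ts:

```typescript
// Illustrative two-tier extraction: delimiter block first, then the
// legacy JSON-wrapper format Pi/Minimax tends to emit anyway.
function extractBriefAndState(raw: string): { brief: string; state: unknown } | undefined {
  const m = raw.match(/([\s\S]*?)ARCHON_STATE_JSON_BEGIN([\s\S]*?)ARCHON_STATE_JSON_END/);
  if (m) {
    try {
      return { brief: m[1].trim(), state: JSON.parse(m[2]) };
    } catch { /* malformed state block: try the legacy format */ }
  }
  // Legacy: one JSON object wrapping the markdown + state
  try {
    const parsed = JSON.parse(raw) as { brief_markdown?: string; next_state?: unknown };
    if (typeof parsed.brief_markdown === 'string') {
      return { brief: parsed.brief_markdown, state: parsed.next_state };
    }
  } catch { /* neither format matched */ }
  return undefined;
}

const delimited = '# Brief\nAll clear.\nARCHON_STATE_JSON_BEGIN\n{"last_run_at":"2026-04-29"}\nARCHON_STATE_JSON_END';
console.log(extractBriefAndState(delimited));
```

Either path yields the same brief + state pair, which is what lets the workflow swap providers without changing the persist step.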
…nses (coleam00#1440)

* fix(providers/pi): tolerate prose preamble in structured-output responses

Pi has no SDK-level JSON-mode parameter and Minimax's Anthropic-compat proxy doesn't expose response_format support, so reasoning models like Minimax M2.7 routinely "think out loud" before emitting the JSON we asked for, e.g.:

"Now I have all the inputs. Let me evaluate the three gates: **Gate A — Direction alignment**: ... {\"verdict\":\"review\",\"direction_alignment\":\"aligned\",...}"

The previous tryParseStructuredOutput stripped fences then JSON.parse'd the whole string, which fails the moment a single character of prose appears before the {. The maintainer-review-pr workflow's gate node was hitting this on ~70% of runs, propagating to 7-of-10 silent failures in a sequential review batch this afternoon.

Add two preamble-tolerant fallback tiers:
- Tier 1 (existing): clean JSON.parse — fast path for compliant models
- Tier 2 (new): scan backward to last `{`, parse from there — flat JSON with reasoning preamble (Minimax M2.7 case)
- Tier 3 (new): scan forward to first `{`, parse from there — nested JSON with preamble (Tier 2 lands inside a child object and fails)

The existing "trailing-prose-too" failure case (preamble + JSON + postamble) still returns undefined — handling it would require brace-depth tracking and isn't worth the cost. The two new tiers cover the failure mode actually observed in production.

Tested against the real Minimax preamble pattern and a synthetic preamble + nested-JSON case. 39 event-bridge tests pass. No SDK changes, no Pi extensions, no Minimax API dependencies. Pure Archon-side parser hardening — backward-compatible and benefits every verbose-preamble model routed through Pi, not just Minimax.

* fix(providers/pi): address CodeRabbit review on coleam00#1440

- Drop the redundant `firstBrace !== lastBrace` guard on Tier 3. When the only `{` past position 0 is the same one Tier 2 just tried, Tier 3 will re-run JSON.parse on the same input and fail identically — one redundant call accepted for simpler control flow. (CodeRabbit cleanup)
- Tighten the Tier 1 comment to drop the model name list (will date) and tighten the Tier 2 comment to scope its claim to "preamble + flat JSON" rather than overgeneralized "nested JSON". (CodeRabbit comment-quality cleanup)
- Replace the bare `// Fall through...` comments with `// fall through` inside the catch blocks. Necessary because eslint's `no-empty` rule flags bare `catch {}` blocks. (lint compliance)
- Add a regression test for `{` inside a string value that pins the Tier 2 → Tier 3 cascade composition: lastIndexOf lands inside the string brace, JSON.parse rejects the resulting `{ inside","ok":true}` fragment, Tier 3 forward-scans to the JSON object's outer `{` and parses cleanly. Note: the preamble must not itself contain `{`, otherwise Tier 3 lands on it instead of on the JSON object — covered in the test comment so the constraint isn't lost. (CodeRabbit test coverage)
- Update `packages/docs-web/src/content/docs/getting-started/ai-assistants.md:378` — the previous "degrades cleanly when the model emits prose" claim is now partially false (preamble-only patterns are recovered). New text enumerates the three handled forms (bare, fenced, prose-preamble) and clarifies the trailing-text-interleaved case still degrades. (CodeRabbit docs-impact)
- Add a CHANGELOG entry under `## [Unreleased] / Fixed`.

CodeRabbit's "critical" suffix-check suggestion (`cleaned.slice(brace).trimEnd().endsWith('}')`) is not applied. JSON.parse already rejects trailing non-whitespace, so any successful parse from a `{`-prefixed slice already ends with `}` after trim — the check is tautological in all cases the function actually reaches it. The actual concern (a self-contained `{...}` fragment in preamble that happens to be valid JSON, with no real answer in the response) wouldn't be caught by the suffix check either, since the fragment IS a valid JSON object that ends with `}`. Real defense against that case would require schema validation, which lives at the call site (executor's `structured_output_missing` warning path), not in this parser.

Tests: 40 pass / 0 fail. Lint, format, type-check all clean.

* fix(providers/pi): drop Tier 2 backward-scan to avoid brace-postamble footgun

CodeRabbit flagged a real correctness issue: a backward scan from the last `{` silently returns the wrong JSON object when the response contains a brace-bearing example after the real payload (e.g. `{"actual":1}\n For example: {"x":2}` parses the trailing example instead of failing). This breaks the conservative-failure contract callers rely on.

Tier 2 was always strictly worse than Tier 3 anyway: every preamble pattern Tier 2 handled, Tier 3 handles too, and Tier 3 doesn't have the multi-fragment hazard. Dropping Tier 2 entirely removes the footgun and keeps the parser simple (clean parse → forward scan → undefined).

Tests updated:
- Renamed/clarified existing tests to reference the surviving forward-scan tier instead of the deleted backward-scan tier (no behavior change for the cases they cover — preamble has no extra braces).
- Added a regression test for the brace-postamble footgun: input with a trailing example like `{"actual":"value"}\nFor example: {"verdict":"review"}` must return undefined under the new contract.
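The surviving contract (clean parse, then forward scan, then undefined) can be sketched as follows. This is a simplified stand-in for the real tryParseStructuredOutput, which also strips code fences before parsing:

```typescript
// Sketch of the final two-tier contract: clean JSON.parse, then a
// forward scan to the first '{'. Anything else stays undefined.
function tryParseStructuredOutput(text: string): unknown {
  const cleaned = text.trim();
  try {
    return JSON.parse(cleaned); // tier 1: compliant models
  } catch { /* fall through */ }
  const brace = cleaned.indexOf('{');
  if (brace > 0) {
    try {
      return JSON.parse(cleaned.slice(brace)); // tier 2: prose preamble before the JSON
    } catch { /* fall through */ }
  }
  return undefined; // conservative failure: preamble + JSON + postamble, etc.
}

// Minimax-style reasoning preamble before the JSON payload: recovered
console.log(tryParseStructuredOutput('Let me evaluate the gates... {"verdict":"review"}'));
// Brace-bearing postamble: must stay undefined, never parse the wrong object
console.log(tryParseStructuredOutput('{"actual":1}\nFor example: {"x":2}'));
```

Note how the brace-postamble case falls out naturally: the first `{` is at position 0, which tier 1 already tried, so the forward scan never fires and the function returns undefined.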
…m00#1389) (coleam00#1393)

* fix(workflows): concise failure messages for bash/script nodes (coleam00#1389)

When a `bash:` or `script:` node fails, the executor was embedding `err.message` verbatim into the user-visible error. For inline scripts run via `bash -c <body>` / `bun -e <body>`, Node's `promisify(execFile)` puts the entire substituted script body into `err.message`, `err.cmd`, and the first line of `err.stack` — and the Pino log serialized all three, repeating the body ~3× in one structured log line. The actionable diagnostic was buried under kilobytes of echoed source.

Route both catch-block default branches through a new `formatSubprocessFailure(err, label)` helper in `executor-shared.ts` that:
- strips the `Command failed: <cmd>` prefix line (which carries the body)
- prefers `err.stderr` — Bun/bash emit the actionable diagnostic there
- tail-truncates at 2 KB with a `…[truncated]` marker
- returns a controlled `logFields` subset (`exitCode`, `killed`, `stderrTail`) so Pino never re-serializes `err.message` / `err.stack` / `err.cmd`

Also drops the script-node `stderrHint` concatenation — stderr is already handled by the helper, so the previous code appended it twice. Timeout / ENOENT / EACCES branches are preserved verbatim.

Fixes coleam00#1389

* fix(workflows): address PR review feedback for coleam00#1389

- Run formatSubprocessFailure unconditionally so timeout / ENOENT / EACCES branches also get sanitized log fields (the timeout message also embeds the `Command failed: bash -c <body>` line).
- Drop `errType: err.constructor.name` (always 'Error' in production) and replace with `nodeType: 'bash' | 'script'` for actual discriminating value.
- Replace chained `||` + ternary in diagnostic selection with explicit if/else for readability.
- Simplify exit suffix guard: `err.code != null` instead of double typeof.
- Make stderrTail emptiness check explicit.
- Drop `RawSubprocessError` export (no external consumer) and widen `code` to `number | string | null` to mirror Node's ExecFileException.
- Tighten test length bound to <2100 (was <4000 / <2200) so a doubling of SUBPROCESS_ERROR_MAX_CHARS would actually trip the assertion.
- Replace broad regex assertion with `.toContain('[eval]')` — the location marker is the strongest signal that diagnostic content survived.
- Strip issue-number citations from describe block and test comments.
- Update script-nodes.md to distinguish stderr behavior on success vs failure paths.
- Add CHANGELOG Unreleased / Fixed entry for the user-visible change.
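A rough reconstruction of the helper's shape from the description above. The real formatSubprocessFailure in executor-shared.ts handles more branches; the prefix-stripping regex, the tail-keeping truncation, and the 2 KB constant below are assumptions, not the actual implementation:

```typescript
// Illustrative sketch of formatSubprocessFailure, per the commit above.
const SUBPROCESS_ERROR_MAX_CHARS = 2048; // assumed value for the 2 KB bound

interface SubprocessError {
  message: string;
  stderr?: string;
  code?: number | string | null; // widened to mirror Node's ExecFileException
  killed?: boolean;
}

function formatSubprocessFailure(err: SubprocessError, label: string) {
  let diagnostic: string;
  if (err.stderr && err.stderr.trim() !== '') {
    diagnostic = err.stderr.trim(); // bash/bun put the actionable diagnostic here
  } else {
    // strip the 'Command failed: <cmd>' prefix line that echoes the script body
    diagnostic = err.message.replace(/^Command failed:[^\n]*\n?/, '').trim();
  }
  if (diagnostic.length > SUBPROCESS_ERROR_MAX_CHARS) {
    diagnostic = diagnostic.slice(-SUBPROCESS_ERROR_MAX_CHARS) + '…[truncated]';
  }
  const exitSuffix = err.code != null ? ` (exit ${err.code})` : '';
  return {
    message: `${label} failed${exitSuffix}: ${diagnostic}`,
    // controlled subset: Pino never sees err.message / err.stack / err.cmd
    logFields: {
      exitCode: err.code ?? null,
      killed: err.killed ?? false,
      stderrTail: err.stderr?.slice(-SUBPROCESS_ERROR_MAX_CHARS) ?? '',
    },
  };
}

const formatted = formatSubprocessFailure(
  { message: 'Command failed: bash -c <huge script body>\n', stderr: 'line 3: foo: command not found', code: 127 },
  'bash node',
);
console.log(formatted.message);
```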
…o prevent infinite failure loop (coleam00#1294)

* fix(orchestrator): clear stale session ID on error_during_execution to prevent infinite failure loop

When a Claude API session expires (e.g. after container restart), the orchestrator persists the new (failed) session ID from the error result, causing every subsequent message in that conversation to hit the same error — an infinite failure loop.

Fix: on error_during_execution result, set assistant_session_id to NULL instead of persisting the failed session ID. The next message starts a fresh session with full context rebuilt from the DB. Conversation history is unaffected since it lives in remote_agent_messages, independent of the Claude session.

Changes:
- updateSession() and tryPersistSessionId() now accept string | null
- Both handleStreamMode and handleBatchMode clear session ID on error_during_execution

Fixes coleam00#1280

* test(orchestrator): add stale session clearing tests + address review feedback

Co-Authored-By: Claude Opus 4 (1M context) <noreply@anthropic.com>
Signed-off-by: kagura-agent <kagura.agent.ai@gmail.com>

---------

Signed-off-by: kagura-agent <kagura.agent.ai@gmail.com>
Co-authored-by: Claude Opus 4 (1M context) <noreply@anthropic.com>
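The fix reduces to one branch in the session-persistence step. An in-memory sketch, with a Map standing in for the database column; names are illustrative:

```typescript
// Illustrative sketch: on error_during_execution the persisted session id
// is cleared (null) rather than overwritten with the failed id.
type SessionStore = Map<string, string | null>;

function persistSessionAfterResult(
  store: SessionStore,
  conversationId: string,
  result: { subtype: string; sessionId: string },
): void {
  if (result.subtype === 'error_during_execution') {
    store.set(conversationId, null); // next message starts a fresh session
    return;
  }
  store.set(conversationId, result.sessionId);
}

const store: SessionStore = new Map([['conv-1', 'sess-old']]);
persistSessionAfterResult(store, 'conv-1', { subtype: 'error_during_execution', sessionId: 'sess-failed' });
console.log(store.get('conv-1')); // null, so the failed id is never reused
```

Persisting `sess-failed` here is exactly the loop the commit describes: every later message would resume the dead session and hit the same error.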
coleam00#1481)

* fix(claude): honor CLAUDE_BIN_PATH in dev mode for libc-mismatch hosts

The Claude Agent SDK auto-resolves its bundled native binary in [linux-x64-musl, linux-x64] order. On glibc Linux hosts (Ubuntu/Debian/Fedora), Bun installs both via optionalDependencies and the musl variant is picked first; its ELF interpreter (/lib/ld-musl-x86_64.so.1) does not exist on glibc, so spawn fails and the SDK reports a misleading "binary not found" — the file is on disk, the loader is not.

The documented escape hatch CLAUDE_BIN_PATH was dead code in dev mode: the resolver early-returned undefined when BUNDLED_IS_BINARY=false before ever reading the env var. The only workaround was patching node_modules.

Move the env-var block above the BUNDLED_IS_BINARY return. Config-file path stays binary-mode-only — it's per-repo, not per-machine; env is the right knob for libc mismatches.

Behavior preserved:
- env unset → unchanged (undefined in dev, autodetect/throw in binary)
- env set + file exists → resolved (was binary-only; now also dev)
- env set + file missing → clear error (was binary-only; now also dev)

Closes coleam00#1474

* chore(claude): address CodeRabbit review on coleam00#1481

- CHANGELOG entry under [Unreleased] / Fixed describing the dev-mode CLAUDE_BIN_PATH escape hatch (previously ignored). Notes that config-file path remains binary-mode-only and that env-loading + target-repo .env isolation are unchanged downstream.
- Empty-string test pinning that CLAUDE_BIN_PATH='' falls through to undefined rather than throwing — protects against a future predicate typo that would treat empty as "set".
- One-line note in ai-assistants.md "Binary path configuration" section pointing dev-mode users at the env-var override for the glibc/musl mismatch case.

Skipped from the review:
- The other two docs-page rewrites (configuration.md / troubleshooting.md): the error message itself names CLAUDE_BIN_PATH, and coleam00#1474 documents the use case publicly. One mention in ai-assistants.md is enough for discovery.
- Type-style consistency tweaks in the test file: pure bikeshed.
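The reordering can be sketched as a small resolver. This is a hedged stand-in for binary-resolver.ts, reduced to the two branches the commit touches; the binary-mode config/vendor/PATH resolution is elided:

```typescript
import { existsSync } from 'node:fs';

// Sketch of the resolver ordering after the fix: the env var is consulted
// BEFORE the dev-mode early return. Function and flag names are illustrative.
function resolveClaudeBinaryPath(
  env: Record<string, string | undefined>,
  bundledIsBinary: boolean,
): string | undefined {
  const envPath = env.CLAUDE_BIN_PATH;
  if (envPath) { // empty string deliberately falls through (pinned by test)
    if (!existsSync(envPath)) {
      throw new Error(`CLAUDE_BIN_PATH points at a missing file: ${envPath}`);
    }
    return envPath;
  }
  if (!bundledIsBinary) return undefined; // dev mode: SDK autodetects
  // binary mode: config-file / vendor / PATH resolution would follow here
  return undefined;
}

console.log(resolveClaudeBinaryPath({ CLAUDE_BIN_PATH: '' }, false)); // undefined, not an error
```

Before the fix, the `!bundledIsBinary` return sat above the env-var block, which is why the escape hatch was dead code in dev mode.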
# Conflicts:
#	packages/providers/src/community/pi/event-bridge.ts
`refactor(workflows): trust the SDK for model validation` (bf1f471) removed `isModelCompatible` from `ProviderRegistration` entirely; Pi got the same treatment in its own registration. Mirror that for Copilot: drop the field from `registerProvider({...})`, the helper function and its export, and the registry-test block that exercised it. The `getProviderInfoList` projection assertion stays as a forward guard, matching the upstream Pi convention. Reported in PR coleam00#1351 review by @danielscholl. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…iled-binary boot

Mirror the Pi precedent (coleam00#1355, v0.3.7). The Copilot SDK has not been audited for module-load side effects, and Pi's symptom — file I/O at import time crashing compiled Archon binaries with ENOENT before any command runs — is a class of bug that doesn't surface in dev mode.

The defensive shape:
- Top-of-file imports converted to type-only.
- `CopilotClient` and `approveAll` value bindings dynamic-imported once at the start of `sendQuery()`'s inner async block. Module-namespace binding (`copilotSdk.CopilotClient`) avoids fighting the camelCase naming-convention lint rule on local variables.
- `approveAll` threaded into `buildSessionConfig()` as a parameter so no helper has to dynamic-import the SDK again.

Adds `provider-lazy-load.test.ts` (its own `bun test` invocation, like Pi's, because `mock.module` is process-wide). The test walks the same `registerCommunityProviders()` path the CLI and server take and asserts the SDK module never resolved.

Reported in PR coleam00#1351 review by @danielscholl.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
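The lazy-load shape can be demonstrated with a local stand-in module so the example is self-contained; the real provider dynamic-imports `@github/copilot-sdk` inside sendQuery(), and everything below is illustrative:

```typescript
// Minimal sketch of the lazy-load pattern: registration touches no SDK
// code; the module (and any load-time side effects) resolves only when
// sendQuery() actually runs. loadSdk() is a stand-in for import('...').
type CopilotModule = {
  CopilotClient: new () => { ping(): string };
  approveAll: () => string;
};

let loaded = false; // tracks whether the "SDK" module has been evaluated
async function loadSdk(): Promise<CopilotModule> {
  loaded = true; // module-load side effects would run here
  return {
    CopilotClient: class { ping() { return 'pong'; } },
    approveAll: () => 'approve-all',
  };
}

// Registration is cheap: no SDK import on the CLI/server boot path.
function registerCopilotProvider() { return { id: 'copilot', builtIn: false }; }

async function sendQuery(): Promise<string> {
  const copilotSdk = await loadSdk(); // namespace-style binding, per the commit
  const client = new copilotSdk.CopilotClient();
  return client.ping();
}

registerCopilotProvider();
console.log('sdk loaded after registration?', loaded); // still false
await sendQuery();
console.log('sdk loaded after sendQuery?', loaded);    // true
```

This is the same property provider-lazy-load.test.ts pins: walking the registration path must leave the SDK module unresolved.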
… tokens

Previously any of `COPILOT_GITHUB_TOKEN` / `GH_TOKEN` / `GITHUB_TOKEN` in the resolved env would force `useLoggedInUser: false`, silently overriding an explicit `useLoggedInUser: true` in the user's `.archon/config.yaml`. The real-world case: a user who has `GH_TOKEN` exported for the `gh` CLI on a *different* GitHub account than their Copilot subscription cannot direct Copilot to their logged-in session.

Invert the precedence so explicit config wins:

useLoggedInUser: copilotConfig.useLoggedInUser ?? !githubToken

Default behavior is unchanged when the user sets nothing — env token present means `useLoggedInUser: false`, otherwise `true`. Locked in with four test cases covering the cross-product of (env-token? × config?).

Reported in PR coleam00#1351 review by @danielscholl.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
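The precedence line isolates to a one-expression function, and the four-case cross-product from the commit can be walked directly. The function name and config shape are illustrative:

```typescript
// The precedence from the commit, isolated: explicit config wins; the
// env token only decides when the user has set nothing.
function resolveUseLoggedInUser(
  configValue: boolean | undefined,
  githubToken: string | undefined,
): boolean {
  return configValue ?? !githubToken;
}

// cross-product of (env-token? × config?)
console.log(resolveUseLoggedInUser(undefined, undefined)); // true:  default, no token
console.log(resolveUseLoggedInUser(undefined, 'ghp_x'));   // false: token routes auth
console.log(resolveUseLoggedInUser(true, 'ghp_x'));        // true:  config beats token
console.log(resolveUseLoggedInUser(false, undefined));     // false: config beats default
```

`??` (nullish coalescing) is what makes `useLoggedInUser: false` in config distinguishable from "unset", which a `||`-based version would have collapsed.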
Summary
GitHub shipped an official SDK (`@github/copilot-sdk`) in 2025 that drives the Copilot CLI through a supported JSON-RPC bridge. Archon had no way to use it — the only path to Copilot would have been screen-scraping the interactive TUI. The SDK unlocks `gpt-5`, `gpt-5-mini`, and cross-family models (Claude via Copilot) under an existing GitHub billing relationship. This is the second community provider after Pi (feat(providers): add Pi community provider (@mariozechner/pi-coding-agent), coleam00/Archon#1270) and validates that the community-provider seam scales.

New surface: `packages/providers/src/community/copilot/` wrapping `@github/copilot-sdk`, registered via `registerCopilotProvider()` with `builtIn: false`. Seven phased commits: initial scaffold → tool restrictions → MCP → skills → structured output → hardening fixes from CodeRabbit/Devin → sub-agents. Nine of the thirteen capability flags are wired end-to-end; the rest are not yet at parity with the `builtIn: true` providers. Those remain deliberate follow-ups (plan preserved at `.claude/archon/plans/copilot-provider-parity.md`). Claude and Codex code paths untouched. Pi provider's behavior is preserved byte-for-byte via re-exports.

Architecture Diagram
Before
After
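The original diagrams did not survive the export; the after-state can be approximated from the summary as a module layout (annotations only, paths as named elsewhere in this PR):

```
packages/providers/src/
├── shared/
│   ├── skills.ts               # resolveSkillDirectories (extracted from Pi)
│   └── structured-output.ts    # augmentPromptForJsonSchema, tryParseStructuredOutput
├── claude/index.ts             # public loadMcpConfig re-export (reused, not extracted)
├── pi/                         # re-exports shared utils under historical names
└── community/copilot/          # new: provider, config, binary resolver, translators
```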
Shared extractions (no behavior change to Pi):
- `shared/skills.ts::resolveSkillDirectories` ← `pi/options-translator.ts::resolvePiSkills`
- `shared/structured-output.ts::augmentPromptForJsonSchema` ← `pi/provider.ts`
- `shared/structured-output.ts::tryParseStructuredOutput` ← `pi/event-bridge.ts`

MCP reuse (no extraction): Copilot imports Claude's existing public `loadMcpConfig` re-export (`packages/providers/src/claude/index.ts:4`) directly. Env-var expansion and missing-var detection behave identically across providers.

Capability matrix
| Capability | Wired | Notes |
| --- | --- | --- |
| sessionResume | yes | `sessionId`, reused on resume |
| envInjection | yes | |
| effortControl | yes | |
| thinkingControl | yes | `thinking:` accepted; object form warns (Claude-specific) |
| toolRestrictions | yes | `allowed_tools` → `availableTools`, `denied_tools` → `excludedTools` |
| mcp | yes | `nodeConfig.mcp` JSON → `SessionConfig.mcpServers`, `$VAR` expanded |
| skills | yes | `skills: [name]` → absolute dirs containing `SKILL.md`, missing names warn |
| structuredOutput | yes | best-effort |
| agents | yes | `nodeConfig.agents` → `customAgents`; Claude-specific fields (`model`, `disallowedTools`, `skills`, `maxTurns`) warn per agent |
| hooks | no | `SessionHooks` event vocabulary diverges from Archon's Claude-shaped hook schema; deferred to avoid partial mapping |
| fallbackModel | no | |
| costControl | no | no `maxBudgetUsd` enforcement |
| sandbox | no | `onPermissionRequest` is policy control, not sandbox semantics; intentionally not declared equivalent |

Implementation (7 commits)
1. `9546ea75` — initial provider. `CopilotProvider` + `parseCopilotConfig` + config-aware `resolveCopilotCliPath` (env > config > vendor in binary mode > PATH > dev-undefined > throw). Auth: `copilot login`, `COPILOT_GITHUB_TOKEN`, `GH_TOKEN`, `GITHUB_TOKEN`. Registration wired at `registerCommunityProviders()`. `SAFE_ASSISTANT_FIELDS` gains `copilot: ['model']`.
2. `f412b835` — refactor + tool restrictions. Extracted single-source-of-truth `buildSessionConfig()` with a `ProviderWarning[]` collector, so each subsequent phase plugs in as one `applyX(sessionConfig, …, warnings)` call. `applyToolRestrictions` passes `allowed_tools` / `denied_tools` through verbatim (SDK tool names are canonical).
3. `94b7f472` — MCP. `applyMcpServers` reuses Claude's public `loadMcpConfig`. Missing env vars surface as a consolidated system-warning chunk; IO / JSON errors propagate. `enableConfigDiscovery` stays `false` by default (see trust-boundary note in docs).
4. `a19829064` — skills. Extracted Pi's `resolvePiSkills` → `shared/skills.ts::resolveSkillDirectories`. Pi re-exports under the historical name so every existing Pi test passes unchanged. Copilot's `applySkills` maps names → absolute paths → `SessionConfig.skillDirectories`; unresolved names warn.
5. `d7719bba` — structured output (best-effort). Extracted Pi's `augmentPromptForJsonSchema` + `tryParseStructuredOutput` → `shared/structured-output.ts`. Same prompt-engineering + JSON-parse approach as Pi (feat(providers/pi): best-effort structured output via prompt engineering, coleam00/Archon#1297). Parse failure leaves `structuredOutput` unset so the executor's existing `dag.structured_output_missing` path handles degradation.
6. `8a3504d7` — PR-review hardening. Addresses CodeRabbit + Devin findings:
   - Check `abortSignal.aborted` before awaiting `sendAndWait`, since `addEventListener('abort', ...)` is a no-op on already-aborted signals and would otherwise enter the 24-hour timeout path after cancel.
   - Wrap `session.disconnect()` and `client.stop()` each in their own try/catch-log so a cleanup throw can't replace the primary result / friendly error.
   - Set `sawAssistantContent = true` when emitting the final-message fallback, preventing a spurious `session.error` warning from double-firing.
   - `SessionConfig.model` handling hardened.
   - Replaced `existsSync()` with `isExecutableFile()` (isFile + exec bit on posix, isFile on win32) so env/config/vendor overrides fail early.
   - Per-process temp dir via `mkdtempSync` + `chmod 0o755` — no shared `/tmp/.archon` state.
   - `enableConfigDiscovery` kept `false` by default.
7. `31d94d4d` — sub-agents. `applyAgents` maps `nodeConfig.agents` to `SessionConfig.customAgents` with `name` / `description` / `prompt` / `tools` (allowlist) passed through. Archon fields Copilot can't represent (`model`, `disallowedTools`, `skills`, `maxTurns`) warn per-agent. Intentionally does NOT set `SessionConfig.agent` — Archon invokes sub-agents via the Task tool, not by switching session-level agent.

Validation Evidence
- `bun run validate` — fully green end-to-end.
- `bun run type-check` — all 10 packages exit 0; `bun run lint` — 0 errors, 0 warnings; `bun run format:check` — clean.
- `bun --filter @archon/providers test` passes every per-file batch:
  - `community/copilot/config.test.ts`
  - `community/copilot/binary-resolver.test.ts` (incl. `isExecutableFile`)
  - `community/copilot/provider.test.ts`
  - `community/copilot/tool-restrictions.test.ts`
  - `community/copilot/mcp-translation.test.ts`
  - `community/copilot/skills-translation.test.ts`
  - `community/copilot/structured-output.test.ts`
  - `community/copilot/agents-translation.test.ts`
  - `community/copilot/provider-hardening.test.ts`

Plus 6 new tests in `registry.test.ts` exercising Copilot registration, idempotence, capability flags, and `isModelCompatible`. All 12 Pi tests that exercise the extracted shared utilities via the Pi re-exports pass unchanged (byte-for-byte identical function bodies).

Not yet validated (manual): end-to-end run of `.archon/workflows/e2e-copilot-smoke.yaml` with a real Copilot CLI auth. The smoke workflow asserts the happy path returns `COPILOT_OK`; the first real run happens when a reviewer executes it locally after `copilot login` (or setting `COPILOT_GITHUB_TOKEN` / `GH_TOKEN` / `GITHUB_TOKEN`).

Security Impact
- Tokens are read from `process.env` (`COPILOT_GITHUB_TOKEN`, `GH_TOKEN`, `GITHUB_TOKEN`) and passed to `CopilotClient({ githubToken, ... })`. Not persisted by Archon.
- `useLoggedInUser: true` is the default when no token env var is present, delegating auth to the Copilot CLI's existing credential store.
- Sessions run in `cwd` like other providers.
- `enableConfigDiscovery: false` is Archon's default, preventing the CLI from auto-loading repo `.mcp.json` / skill dirs outside `nodeConfig.mcp` / `nodeConfig.skills` (documented as a trust boundary).
- `isExecutableFile` validates the resolved CLI path is a regular file with the exec bit set (posix), not a directory or non-executable, so env/config/vendor overrides fail early rather than passing a bad path to the SDK.

Compatibility / Migration
- Opt-in: add `assistants.copilot.model` to `.archon/config.yaml` and supply auth via `copilot login` or one of the three env vars. Nothing required for existing installs.
- `pi/options-translator.ts`, `pi/provider.ts`, and `pi/event-bridge.ts` re-export the extracted functions under their historical names. No external import paths change.

Human Verification
Verified scenarios (unit tests against mocked Copilot SDK + real filesystem/env-var fixtures):
- Session create + resume (`resumeSession` path)
- Effort/thinking: `low|medium|high|xhigh|max` → `reasoningEffort`; unsupported thinking shapes warn; `off` disables
- `isExecutableFile` rejects directories, missing paths, and non-executable files on posix
- `allowed_tools` / `denied_tools` pass-through on both create and resume paths
- MCP: `$VAR` expansion, missing-var warning, relative-path resolution, malformed-file error
- Skills: resolution under `agents/skills/` / `.claude/skills/` (project or home), missing-skill warning
- Structured output: fenced `json` blocks parse; prose-wrapped → no `structuredOutput`; absent `outputFormat` → never set
- Agents: `description` + `prompt` + `tools` pass through; `model` / `disallowedTools` / `skills` / `maxTurns` warn; empty object / absent → omitted; `SessionConfig.agent` never set
- Pre-aborted signal throws `AbortError` and never enters `sendAndWait`
- Model precedence between `requestOptions.model` and `assistantConfig.model`
- Cleanup (`disconnect` + `stop`) failures logged and swallowed, never mask the primary result or the friendly auth/model-access error
- Final-message fallback sets `sawAssistantContent`, preventing double-emission of a `session.error` warning

What was NOT verified:
- Real model access (`gpt-5`, `gpt-5-mini`, Claude-via-Copilot aliases)
- Live permission behavior (`approveAll`, matching Claude / Codex's `bypassPermissions` default)

Side Effects / Blast Radius
- Inert unless a workflow selects `provider: copilot`. Claude, Codex, and Pi code paths are untouched at runtime. The two Pi shared-utility call sites go through re-exports, so Pi's observable behavior is byte-identical.
- New dependency: `@github/copilot-sdk` (~a few MB, includes the CLI bridge types). Loaded only when `CopilotProvider.sendQuery` runs.
- `GET /api/providers` returns 4 entries (`claude`, `codex`, `copilot`, `pi`). The web UI already iterates generically.
- New log events: `copilot.binary_resolved`, `copilot.mcp_loaded`, `copilot.mcp_env_vars_missing` (warn), `copilot.skills_resolved`, `copilot.agents_registered`, `copilot.abort_failed` (warn), `copilot.disconnect_failed` (warn), `copilot.client_stop_errors` (warn), `copilot.client_stop_threw` (warn). All under the `provider.copilot` Pino module, matching the `{domain}.{action}_{state}` convention.

Rollback Plan
- All changes land on `dev`. Seven commits, purely additive, no schema or interface changes — revert is clean.
- Alternatively, remove `provider: copilot` from workflow YAML or `assistants.copilot` from `.archon/config.yaml` to disable without reverting.
- Signals to watch: `copilot.*_failed` / `copilot.*_errors` / `copilot.*_threw` log events under `provider.copilot`; ⚠️ system-chunk messages for missing skills, missing MCP env vars, unsupported thinking / effort shapes, and ignored per-agent fields; `provider: copilot` nodes failing with friendly messages for auth errors, model-access errors, or binary-not-found.

Risks and Mitigations
- Risk: `@github/copilot-sdk` is pre-1.0 (v0.2.2) — breaking changes possible on minor bumps. Mitigation: pinned at `^0.2.2`; the 52 new Copilot tests plus type-check gate any API-surface drift before merge.
- Risk: an abandoned `AbortSignal` can leave a Copilot session holding resources until `sendAndWait` resolves (up to 24 h). Mitigation: the executor always passes an `AbortSignal`, limiting blast radius. A follow-up can add an internal `AbortController` wired into the generator's `finally` to cover the abandonment path. Tracked in a review comment on this PR.
- Risk: structured output is best-effort and can fail to parse. Mitigation: parse failure degrades to the existing `dag.structured_output_missing` warning. Documented as "best-effort" in the capability matrix and docs.
- Risk: `enableConfigDiscovery: true` would let the Copilot CLI load repo-level config (MCP servers, skill dirs) outside Archon's workflow validation surface. Mitigation: default is `false`. Docs carry an explicit trust-boundary note (`packages/docs-web/.../ai-assistants.md`).
- Risk: `loadMcpConfig` is imported directly from Claude's module instead of extracted to `shared/`. Mitigation: `loadMcpConfig` is a public re-export in `packages/providers/src/claude/index.ts`, not an internal detail. By CLAUDE.md's Rule of Three, extraction to `shared/mcp.ts` can wait until a third provider needs it.

🤖 Generated with Claude Code