
fix(workflows): kill script/bash node grandchildren via process-group on timeout (#1242)#1248

Open
shaun0927 wants to merge 28 commits into coleam00:dev from shaun0927:fix/script-node-kill-process-group

Conversation

shaun0927 commented Apr 16, 2026

Summary

  • Problem: executeBashNode and executeScriptNode use execFileAsync(cmd, args, { timeout }) which only sends SIGTERM to the direct child. Wrapper runtimes (uv run python, bun run, npm run) fork the real interpreter as a grandchild. When timeout fires, the wrapper dies but the grandchild continues — producing side effects (file writes, network calls, git pushes) long after the node is marked timed_out.
  • Why it matters: Misleading failure attribution, resource leaks, and side-effect leaks after cancellation. Reproduced locally: wrapper killed at 2s, grandchild completed its full 10s sleep and wrote a marker file.
  • What changed: Extracted subprocess.exec() into packages/workflows/src/subprocess.ts. Uses spawn with detached: true to create a process group. On timeout: SIGTERM the entire group (-pid), then SIGKILL after 5s grace. Both executeBashNode and executeScriptNode now call subprocess.exec() instead of execFileAsync. Updated 2 env-passthrough tests to spy on the new module.
  • What did not change: AI node execution (uses Claude SDK, not subprocess). Loop node until_bash (uses execFileAsync for lightweight predicate checks — no wrapper runtimes). Error classification and timeout detection logic in the catch blocks. The subprocess module is exported as an object for test spyability, matching the fileExists pattern in codex-binary-resolver.ts.
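The timeout flow described above can be sketched as follows. This is a hedged, self-contained approximation, not the actual subprocess.ts source: the function name, error messages, and exact option handling are illustrative.

```typescript
import { spawn } from "node:child_process";

// Illustrative sketch: spawn a detached process group and, on timeout,
// signal the WHOLE group (-pid), not just the direct child.
function execWithGroupKill(
  cmd: string,
  args: string[],
  timeoutMs: number,
): Promise<{ stdout: string; stderr: string }> {
  return new Promise((resolve, reject) => {
    // detached: true makes the child a process-group leader on POSIX, so
    // process.kill(-pid) reaches wrapper runtimes AND their grandchildren.
    const child = spawn(cmd, args, { detached: true });
    let stdout = "";
    let stderr = "";
    child.stdout?.on("data", (c: Buffer) => { stdout += c.toString(); });
    child.stderr?.on("data", (c: Buffer) => { stderr += c.toString(); });

    let timedOut = false;
    const timer = setTimeout(() => {
      timedOut = true;
      if (child.pid === undefined) return;
      try { process.kill(-child.pid, "SIGTERM"); } catch { /* ESRCH: already gone */ }
      // Escalate after a 5s grace; deliberately NOT cancelled when the
      // wrapper closes early, since descendants may still be alive.
      setTimeout(() => {
        try { process.kill(-child.pid!, "SIGKILL"); } catch { /* ignore */ }
      }, 5000).unref();
    }, timeoutMs);

    child.on("close", (code) => {
      clearTimeout(timer);
      if (timedOut) reject(new Error(`timed out after ${timeoutMs}ms`));
      else if (code !== 0) reject(new Error(`exit ${code}: ${stderr}`));
      else resolve({ stdout, stderr });
    });
  });
}
```

The key design choice mirrored here is that the SIGKILL escalation timer is never cancelled once the timeout flow starts, even if the wrapper process closes first.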

UX Journey

Before

archon workflow run my-workflow
  → script node starts: uv run python long_task.py (timeout: 30s)
  → 30s: SIGTERM sent to uv (direct child)
  → uv dies, node marked "timed_out"
  → python (grandchild, pid=34143) keeps running...
  → 40s: python writes results to disk
  → 50s: python pushes to git
  → user thinks the workflow was stopped at 30s

After

archon workflow run my-workflow
  → script node starts: uv run python long_task.py (timeout: 30s, detached group)
  → 30s: SIGTERM sent to process group (-pid) — uv AND python both receive it
  → 35s: SIGKILL sent to group (grace expired) — guaranteed kill
  → node marked "timed_out", no orphan processes

Files Changed

  • packages/workflows/src/subprocess.ts — new (+97): process-group spawn helper
  • packages/workflows/src/dag-executor.ts — (+2 −3): import + use subprocess.exec()
  • packages/workflows/src/dag-executor.test.ts — (+4 −3): spy on subprocess.exec instead of git.execFileAsync

Testing

  • bun run type-check — all 10 packages clean
  • bun run lint — zero warnings
  • bun test packages/workflows/src/dag-executor.test.ts — 153 pass, 0 fail

Closes #1242

Summary by CodeRabbit

  • Refactor
    • Centralized subprocess execution with improved timeouts, whole-process-tree termination on kill, enforced output-size limits, and clearer error signals for failed or killed runs.
  • Tests
    • Updated tests to validate the new execution path and preserved env/stdout behavior.
    • Added integration tests covering success, non-zero exits, timeouts that kill descendants, and max-buffer rejection.

Architecture Diagram

Before

executeBashNode / executeScriptNode
       │
       ▼
  execFileAsync('bash' | 'uv' | 'bun', args, { cwd, timeout })
       │
       ▼            (timeout)              (huge stdout)
  child  ──── SIGTERM only to direct child   silently truncated
       │            │                        promise resolves with partial
       ▼            ▼                        data → workflow misreads $output
  grandchild   wrapper dies, grandchild keeps running:
   (uv/bun)    file writes, network calls, git pushes survive cancel

Windows: same code path, but process.kill(-pid) is POSIX-only — there was no real tree-kill on Windows beyond a NOTE comment.

After

executeBashNode / executeScriptNode
       │
       ▼
  subprocess.exec(cmd, args, { cwd, timeout, env? })
       │
       ▼
  spawn(detached: true on POSIX, windowsHide on Win32)
       │
   ┌───┴───────────────────────────────────────────────────────┐
   │                                                           │
   ▼                              ▼                            ▼
(timeout fires)          (capture > 1 MiB)              (normal exit)
  POSIX: kill(-pid, SIGTERM)        force-kill tree     resolve(stdout, stderr)
  Win32: taskkill /T /F /PID <pid>  (POSIX or Win32)
  + always schedule SIGKILL after 5s grace (do NOT
   cancel even if wrapper closes early — descendants
   may still be alive)
                                    │
                                    ▼
                          reject with code='ERR_MAXBUFFER'
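The platform split in the diagram can be sketched as a small helper. The helper name is assumed for illustration; the actual module's internals may differ.

```typescript
import { spawn } from "node:child_process";

// Hedged sketch of a cross-platform tree kill (name is illustrative).
function killProcessTree(pid: number, signal: NodeJS.Signals = "SIGTERM"): void {
  if (process.platform === "win32") {
    // taskkill /T terminates the whole tree rooted at PID; /F forces it.
    // If taskkill is unavailable, swallow the error and fall through to
    // the child's natural close (matching the pre-fix behaviour).
    spawn("taskkill", ["/T", "/F", "/PID", String(pid)]).on("error", () => {
      /* taskkill missing: no tree-kill, same as before this PR */
    });
  } else {
    // A negative pid targets the whole POSIX process group.
    try {
      process.kill(-pid, signal);
    } catch {
      /* ESRCH: the group already exited */
    }
  }
}
```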

Connection inventory:

| From | To | Status | Notes |
| --- | --- | --- | --- |
| dag-executor.ts (bash/script nodes) | subprocess.exec (./subprocess.ts) | modified | already in initial commit; preserved |
| subprocess.ts | process.kill(-pid) | modified | now a POSIX-only branch |
| subprocess.ts | taskkill /T /F /PID <pid> | new | Windows tree-kill fallback |
| subprocess.ts | bounded stdout/stderr capture (1 MiB) + ERR_MAXBUFFER reject | new | replaces silent truncation |
| script-node-deps.test.ts | subprocess.exec mock | modified | was mocking @archon/git::execFileAsync; tests script-node command construction at the new layer |
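The bounded-capture row above can be sketched as a small collector. MAX_CAPTURE_BYTES matches the constant named in this PR; the rest (collector shape, per-stream cap shown here for brevity where the PR caps combined output) is an assumption for illustration.

```typescript
// Illustrative sketch of bounded output capture with an overflow callback.
const MAX_CAPTURE_BYTES = 1024 * 1024; // 1 MiB

function makeCollector(onOverflow: () => void) {
  let captured = 0;
  let out = "";
  return {
    push(chunk: Buffer): void {
      if (captured >= MAX_CAPTURE_BYTES) return; // already over: drop
      if (captured + chunk.length > MAX_CAPTURE_BYTES) {
        // Keep only the bytes that fit, then hand off to the caller,
        // which would kill the tree and reject with code='ERR_MAXBUFFER'.
        out += chunk.toString("utf-8", 0, MAX_CAPTURE_BYTES - captured);
        captured = MAX_CAPTURE_BYTES;
        onOverflow();
        return;
      }
      captured += chunk.length;
      out += chunk.toString();
    },
    get value() { return out; },
  };
}
```

The important contract, per the review thread below, is that overflow must reject the promise rather than silently resolve with truncated data.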

Label Snapshot

  • Risk: risk: medium
  • Size: size: M
  • Scope: workflows
  • Module: workflows:subprocess, workflows:dag-executor

Change Metadata

  • Change type: bug
  • Primary scope: workflows
  • Files: 5 across the branch (subprocess.ts, subprocess.test.ts (new), dag-executor.ts, dag-executor.test.ts, script-node-deps.test.ts)

Linked Issue

  • #1242

Validation Evidence

bun run type-check                              # clean
bun run lint                                    # zero warnings
bun test packages/workflows/src/subprocess.test.ts          # 5/5 pass
bun test packages/workflows/src/script-node-deps.test.ts    # 7/7 pass
bun test packages/workflows/src/dag-executor.test.ts        # 153/153 pass

subprocess.test.ts is a new file that exercises real bash subprocesses on POSIX and asserts: success path, non-zero exit, process-tree timeout kill (the wrapper-then-grandchild scenario), and ERR_MAXBUFFER overflow rejection.

Security Impact

  • New permissions/capabilities? No
  • New external network calls? No
  • Secrets/tokens handling changed? No
  • File system access scope changed? No
  • Resource-exhaustion exposure shrinks in two places: (1) noisy commands no longer accumulate unbounded memory in the worker process, (2) timed-out grandchildren can no longer keep performing side effects (file writes, git pushes, network calls) after the user expects the run to be stopped.

Compatibility / Migration

  • Backward compatible? Yes — well-formed workflows are unaffected. Workflows whose bash/script nodes intentionally produced > 1 MiB of stdout will now fail with ERR_MAXBUFFER instead of silently truncating, which surfaces a real misconfiguration.
  • Config/env changes? No
  • Database migration needed? No

Human Verification

  • Verified scenarios:
    • uv run python long_task.py with a 30s sleep behind a 5s timeout → both uv AND python are killed (verified via ps snapshot and pidfile checks).
    • bash -c 'sleep 30 & wait' with 300ms timeout → wrapper-and-child both gone within timeout + 5s grace.
    • yes a (unbounded stdout) against the 1 MiB cap → rejects with code: 'ERR_MAXBUFFER', no memory growth, producer killed.
    • 8 KiB bounded output → resolves normally with full data.
  • Windows: not yet directly run on a Windows host; the taskkill /T /F fallback is spawned via the standard child_process API, and when it fails (taskkill missing) we fall through to natural close, matching the previous behaviour rather than regressing it.

Side Effects / Blast Radius

  • Affected subsystems: bash/script nodes only (loop until_bash and git operations still go through execFileAsync and are unchanged).
  • Potential unintended effects: a workflow that had been silently truncating output and depending on the truncated string will now fail loudly. This is the intended behaviour (the previous truncation produced incorrect downstream $nodeId.output).
  • Guardrails for early detection: error message includes the exact byte cap; dag-executor already routes failures through structured logs.

Rollback Plan

  • Fast rollback: revert this commit on the branch. The previous version (silent truncate, no Windows tree-kill, SIGKILL fallback cancelled in timeout flow) is restored verbatim.
  • Operational signal that triggers rollback: an unexpected ERR_MAXBUFFER spike in production would indicate either (a) a misconfigured workflow needing a fix or (b) the cap being too small. The cap is a single constant (MAX_CAPTURE_BYTES) easy to bump without reverting the rest of the change.

Risks and Mitigations

  • Risk: existing workflows depended on multi-MiB stdout capture. Mitigation: ERR_MAXBUFFER is loud and immediate; cap is one-line tunable; downstream pattern in this repo is artifact files for large outputs, not stdout pipes.
  • Risk: taskkill is missing on a Windows install. Mitigation: error handler on the spawn() call falls back to natural process exit — no regression vs. the prior NOTE-only state.
  • Risk: SIGKILL after grace might race with a process that exits cleanly during the grace window. Mitigation: process.kill on a dead PID throws ESRCH which we ignore.
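The ESRCH mitigation in the last bullet amounts to treating "no such process group" as a successful no-op. A minimal sketch, with an assumed helper name:

```typescript
// Signalling a group that already exited must be a no-op, not a crash.
function safeKillGroup(pid: number, signal: NodeJS.Signals): boolean {
  try {
    process.kill(-pid, signal);
    return true;
  } catch (err) {
    // ESRCH: no such process group; it exited during the grace window.
    if ((err as NodeJS.ErrnoException).code === "ESRCH") return false;
    throw err; // anything else (e.g. EPERM) is a real error
  }
}
```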


coderabbitai Bot commented Apr 16, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Replaced direct child-process invocations with a new subprocess.exec helper that spawns detached process groups, enforces a combined stdout/stderr cap, and kills the whole process tree on timeout; tests were updated to mock/spy the new helper and a Bun integration test for subprocess.exec was added.

Changes

  • Subprocess implementation (packages/workflows/src/subprocess.ts): new subprocess export and MAX_CAPTURE_BYTES. Uses spawn with detached groups, accumulates stdout/stderr with a byte cap, enforces the timeout (SIGTERM → SIGKILL after 5s), kills the process tree on over-cap/timeout, and resolves/rejects with enriched error/result shapes.
  • Executor integration (packages/workflows/src/dag-executor.ts): replaced execFileAsync(...) with subprocess.exec(...) for bash and script nodes while preserving cwd, timeout, env, and existing stdout/stderr/error handling.
  • Tests: adapt & add (packages/workflows/src/dag-executor.test.ts, packages/workflows/src/script-node-deps.test.ts, packages/workflows/src/subprocess.test.ts): updated executor/script tests to mock/spy ./subprocess.exec instead of @archon/git.execFileAsync; added subprocess.test.ts (Bun integration) to validate success, non-zero exits, timeouts (tree kill), max-buffer enforcement, and the exported MAX_CAPTURE_BYTES.
  • Package script (packages/workflows/package.json): prepended src/subprocess.test.ts to the package test script so the new subprocess tests run first.

Sequence Diagram(s)

sequenceDiagram
    participant DAG as DagExecutor
    participant Sub as subprocess.exec
    participant Child as Child Process (detached group)
    participant OS as Operating System

    DAG->>Sub: exec(cmd, args, {cwd, timeout, env})
    Sub->>Child: spawn(cmd, args, {detached: true})
    Child-->>Sub: stream stdout/stderr
    alt normal exit (code 0)
        Sub-->>DAG: resolve {stdout, stderr}
    else non-zero exit
        Sub-->>DAG: reject Error(code, stderr, signal)
    end
    alt timeout triggered
        Sub->>OS: process.kill(-Child.pid, "SIGTERM")
        Note right of OS: 5s grace for group shutdown
        Sub->>OS: process.kill(-Child.pid, "SIGKILL")
        Sub-->>DAG: reject Error(killed=true, stderr=..., signal=SIGKILL)
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Poem

🐇
I nudged the rambunctious spawn to sleep,
wrapped every child in a careful sweep,
SIGTERM hummed, five ticks of grace,
SIGKILL finished the scattered race,
now the burrow's quiet, tidy, and deep.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 40.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Title check: the title directly and clearly summarizes the primary change, introducing process-group-based killing of script/bash node grandchildren on timeout, closing issue #1242.
  • Description check: the PR description is comprehensive and follows the template structure with all major sections completed: Summary (problem, why it matters, what changed, scope boundary), UX Journey (Before/After flows), Architecture Diagram with connection inventory, Label Snapshot, Change Metadata, Linked Issues, Validation Evidence, Security Impact, Compatibility/Migration, Human Verification, Side Effects/Blast Radius, Rollback Plan, and Risks/Mitigations.
  • Linked Issues check: the PR addresses all coding requirements from issue #1242: spawns with a detached process group, signals the entire group on timeout (SIGTERM to -pid on POSIX, taskkill on Windows), schedules SIGKILL after a grace period, implements bounded capture with ERR_MAXBUFFER rejection, and updates both executeBashNode and executeScriptNode to use the new subprocess.exec helper.
  • Out of Scope Changes check: all changes are within scope: subprocess.ts (new process-group spawn helper), dag-executor.ts (bash/script node refactoring to use subprocess.exec), test files (dag-executor.test.ts, script-node-deps.test.ts, subprocess.test.ts), and package.json (test script). AI node execution, loop node until_bash, and error classification logic remain unchanged as documented.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 71943eaa2f

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread packages/workflows/src/subprocess.ts Outdated
const pid = child.pid;
if (pid !== undefined) {
try {
process.kill(-pid, 'SIGTERM');

P1: Handle timeout killing without POSIX process groups

subprocess.exec() implements timeout termination with process.kill(-pid, ...), which relies on POSIX process-group semantics. On native Windows this does not kill the spawned tree, so timed-out script/bash nodes can continue running until they exit naturally, meaning a short configured timeout can still block for the command’s full runtime and leak side effects. Add a Windows-specific kill path before treating timeout enforcement as complete.

Useful? React with 👍 / 👎.

Comment on lines +34 to +38
child.stdout.on('data', (chunk: Buffer) => {
stdout += chunk.toString();
});
child.stderr.on('data', (chunk: Buffer) => {
stderr += chunk.toString();

P2: Reinstate output buffer limits for spawned subprocesses

The new implementation concatenates stdout/stderr into in-memory strings with no size bound. The previous execFileAsync path inherited execFile’s maxBuffer safeguard, but this replacement removes that protection, so verbose commands can grow memory usage unbounded and destabilize long-running workflow workers. Add an explicit byte cap (or stream output to disk) to keep subprocess output handling bounded.

Useful? React with 👍 / 👎.


coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
packages/workflows/src/subprocess.ts (1)

16-20: Use interfaces for object shapes in this TS module.

Please replace inline object type literals for options and result payload with named interfaces to match repo TS conventions.

🔧 Proposed refactor
+interface ExecOptions {
+  cwd: string;
+  timeout: number;
+  env?: NodeJS.ProcessEnv;
+}
+
+interface ExecResult {
+  stdout: string;
+  stderr: string;
+}
+
 function execWithProcessGroupKill(
   cmd: string,
   args: string[],
-  options: { cwd: string; timeout: number; env?: NodeJS.ProcessEnv }
-): Promise<{ stdout: string; stderr: string }> {
+  options: ExecOptions
+): Promise<ExecResult> {
As per coding guidelines `**/*.{ts,tsx}`: TypeScript: Use `interface` for defining object shapes in TypeScript.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/workflows/src/subprocess.ts` around lines 16 - 20, The function
execWithProcessGroupKill currently uses inline object type literals for the
options parameter and the Promise return payload; define two named interfaces
(e.g., ExecOptions and ExecResult) matching those shapes, replace the inline
types in the execWithProcessGroupKill signature with ExecOptions and
Promise<ExecResult>, and update any references inside the function to use the
new interfaces so the module follows the repo convention of using interface for
object shapes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/workflows/src/subprocess.ts`:
- Around line 29-39: The stdout/stderr accumulation is unbounded in the
child.stdout.on('data', ...) and child.stderr.on('data', ...) handlers; add a
safe cap (e.g. MAX_OUTPUT_BYTES) and change the handlers in subprocess.ts to
append only until the cap is reached (or keep only the tail), and once the cap
is exceeded stop appending further data and proactively kill the child process
(using child.kill()) and set killed = true and clear the killTimer if any; also
ensure the handlers remove themselves after killing so memory isn't retained and
surface a clear error/exit reason including that output was truncated.
- Around line 66-69: The code clears the SIGKILL fallback timer (killTimer) even
when the timeout flow has already set killed = true, which can cancel the
pending SIGKILL; in the child.on('close', ...) handler leave timeoutTimer
clearing as-is but only clearTimeout(killTimer) when killed is false (i.e., move
or guard clearTimeout(killTimer) so it does not run when killed === true) so the
scheduled SIGKILL for stubborn descendants is not cancelled; refer to
variables/handlers child.on('close', ...), timeoutTimer, killTimer, and the
killed flag when making this change.

---

Nitpick comments:
In `@packages/workflows/src/subprocess.ts`:
- Around line 16-20: The function execWithProcessGroupKill currently uses inline
object type literals for the options parameter and the Promise return payload;
define two named interfaces (e.g., ExecOptions and ExecResult) matching those
shapes, replace the inline types in the execWithProcessGroupKill signature with
ExecOptions and Promise<ExecResult>, and update any references inside the
function to use the new interfaces so the module follows the repo convention of
using interface for object shapes.
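The killTimer guard the prompt above describes can be sketched end-to-end. This is a hedged, self-contained approximation under the assumption of a killed flag and a scheduled killTimer (variable names follow the review text, not the actual source):

```typescript
import { spawn } from "node:child_process";

// Sketch: keep the SIGKILL fallback armed in the timeout flow; only
// cancel it on a clean (non-timeout) close.
function runWithGuardedKill(timeoutMs: number, graceMs: number): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn("bash", ["-c", "sleep 30 & wait"], { detached: true });
    let killed = false;
    let killTimer: ReturnType<typeof setTimeout> | undefined;

    const timeoutTimer = setTimeout(() => {
      killed = true;
      try { process.kill(-child.pid!, "SIGTERM"); } catch { /* ESRCH */ }
      killTimer = setTimeout(() => {
        try { process.kill(-child.pid!, "SIGKILL"); } catch { /* ESRCH */ }
      }, graceMs);
      killTimer.unref();
    }, timeoutMs);

    child.on("close", () => {
      clearTimeout(timeoutTimer);
      // Guard: the wrapper closing early does NOT mean descendants are
      // dead, so never cancel the SIGKILL fallback once killed === true.
      if (!killed) clearTimeout(killTimer);
      killed ? reject(new Error("timed out")) : resolve("ok");
    });
  });
}
```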

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 842f13fb-b4f2-4f1f-96ea-bdc0e09e9a36

📥 Commits

Reviewing files that changed from the base of the PR and between 3dedc22 and 71943ea.

📒 Files selected for processing (3)
  • packages/workflows/src/dag-executor.test.ts
  • packages/workflows/src/dag-executor.ts
  • packages/workflows/src/subprocess.ts


chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: da0bd69648


Comment thread packages/workflows/src/subprocess.ts Outdated
Comment on lines +39 to +43
if (capturedBytes >= MAX_CAPTURE_BYTES) return;
if (capturedBytes + chunk.length > MAX_CAPTURE_BYTES) {
stdout += chunk.toString('utf-8', 0, MAX_CAPTURE_BYTES - capturedBytes);
capturedBytes = MAX_CAPTURE_BYTES;
return;

P1: Reject subprocess when output exceeds capture limit

When capturedBytes reaches MAX_CAPTURE_BYTES, the handler just truncates and returns, so the promise can still resolve as success with partial stdout/stderr. The previous execFileAsync path would raise ERR_CHILD_PROCESS_STDIO_MAXBUFFER (and terminate the child), but this implementation silently turns oversized output into a "completed" node result, which can corrupt downstream workflow logic that depends on full script/bash output.

Useful? React with 👍 / 👎.

github-actions Bot and others added 13 commits April 22, 2026 11:26
…be (coleam00#1359)

The pre-flight binary smoke does a bare `bun build --compile` — it
deliberately skips `scripts/build-binaries.sh` to stay fast. That means
packages/paths/src/bundled-build.ts retains its dev defaults, including
BUNDLED_IS_BINARY = false.

version.ts branches on BUNDLED_IS_BINARY: when true it returns the
embedded string; when false it calls getDevVersion(), which reads
package.json at `SCRIPT_DIR/../../../../package.json`. Inside a compiled
binary SCRIPT_DIR resolves under `$bunfs/root/`, the walk produces a CWD-
relative path that doesn't exist, and the smoke aborts with "Failed to
read version: package.json not found" — a false positive.

Hit during the 0.3.8 release attempt: the real Pi lazy-load fix was
working end-to-end; the smoke test was the only thing failing.

Use --help instead. It exercises the same module-init graph (so it still
catches the real failure modes the skill lists — Pi package.json init
crash, Bun --bytecode bugs, CJS wrapper issues, circular imports under
minify) but has no dev/binary branch, so no false positive.

Also add a longer comment block explaining why --help is preferred, so
this doesn't get "normalized" back to `version` by a future drive-by.
The brew path of /test-release runs `brew uninstall` in Phase 5 to leave the
system in its pre-test state. For operators using the dual-homebrew pattern
(renamed brew binary at `/opt/homebrew/bin/archon-stable` so it coexists with
a `bun link` dev `archon`), that uninstall wipes the Cellar dir the
`archon-stable` symlink points into → `archon-stable` becomes dangling →
`brew cleanup` sweeps it away on the next brew op. Next time the operator
wants stable, they have to manually re-run `brew-upgrade-archon`.

Fix: make the skill aware of `archon-stable` and restore it transparently.

- Phase 2 item 4: detect the `archon-stable` symlink before any brew op;
  export `ARCHON_STABLE_WAS_INSTALLED=yes` so Phase 5 knows to restore it.
  Only triggers for the brew path (curl-mac/curl-vps don't touch brew so
  they leave `archon-stable` alone).
- Phase 5 brew path: after `brew uninstall + untap`, if the flag was set,
  re-tap + re-install + rename. Verifies the restored `archon-stable`
  reports a version and warns (non-fatal) if the rename target is missing.
  Documents the tradeoff: the restored version is "whatever the tap ships
  today", not necessarily the pre-test version — usually that's what the
  operator wants (the release they just tested becomes stable) but the
  back-version-QA case requires a manual `brew-upgrade-archon` after.
- Phase 1 confirmation banner now mentions that `archon-stable` will be
  preserved so the operator isn't surprised by the reinstall during Phase 5.

No changes to curl-mac/curl-vps paths. No changes to Phase 4 test suite.
… a compiled binary (coleam00#1360)

v0.3.9 made Pi boot-safe: lazy-loading its imports meant `archon version`
no longer crashed on `@mariozechner/pi-coding-agent/dist/config.js`'s
module-init `readFileSync(getPackageJsonPath())`. That's what the
`provider-lazy-load.test.ts` regression test guards.

The fix was only half the problem though. When a Pi workflow actually
runs, sendQuery() triggers the dynamic import — and Pi's config.js
module-init fires then, hitting the exact same ENOENT on
`dirname(process.execPath)/package.json`. Discovered by running
`archon workflow run test-pi` against a locally-compiled 0.3.9 binary:

    [main] Failed: ENOENT: no such file or directory,
           open '/private/tmp/package.json'
        at readFileSync (unknown)
        at <anonymous> (/$bunfs/root/archon-providertest:184:7889)
        at init_config

Boot-safe ≠ runtime-safe. The `/test-release` run for 0.3.9 passed
because it only exercised `archon-assist` (Claude); Pi was never
actually invoked on the released binary.

Fix: before the dynamic `import('@mariozechner/pi-coding-agent')` in
sendQuery, install a PI_PACKAGE_DIR shim. Pi's config.js checks
`process.env.PI_PACKAGE_DIR` first in its `getPackageDir()` and
short-circuits the `dirname(process.execPath)` walk. We write a
minimal `{name, version, piConfig:{}}` stub to
`tmpdir()/archon-pi-shim/package.json` (idempotent — existsSync check)
and set the env var. Pi only reads `piConfig.name`, `piConfig.configDir`,
and `version` from that file, all optional, so the stub surface is
genuinely minimal.

Localized to PiProvider: no global state, no mutation of any shared
config, no upstream fork. Claude and Codex providers are unaffected
(their SDKs don't have this class of module-init side effect).

Verified end-to-end: built a compiled archon binary with this patch,
ran `archon workflow run test-pi --no-worktree` (Pi workflow with
model `anthropic/claude-haiku-4-5`), got a clean response. Before the
patch, same binary crashed at `dag_node_started` with the ENOENT above.

Regression test added: asserts `PI_PACKAGE_DIR` is set after sendQuery
hits even its fast-fail "no model" path. Together with the existing
`provider-lazy-load.test.ts` (boot-safe) this covers both halves.
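The shim this commit describes can be sketched as follows. The helper name is assumed; only the mechanism (stub package.json under tmpdir, PI_PACKAGE_DIR env var, idempotent existsSync check) is taken from the commit message.

```typescript
import { existsSync, mkdirSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Pi's config.js checks PI_PACKAGE_DIR before walking
// dirname(process.execPath), so pointing it at a minimal stub avoids
// the ENOENT inside a compiled binary.
function installPiPackageDirShim(): string {
  const shimDir = join(tmpdir(), "archon-pi-shim");
  const stub = join(shimDir, "package.json");
  if (!existsSync(stub)) { // idempotent: only write once
    mkdirSync(shimDir, { recursive: true });
    // Pi only reads piConfig.name, piConfig.configDir, and version,
    // all optional, so this stub surface is genuinely minimal.
    writeFileSync(
      stub,
      JSON.stringify({ name: "archon-pi-shim", version: "0.0.0", piConfig: {} }),
    );
  }
  process.env.PI_PACKAGE_DIR = shimDir;
  return shimDir;
}
```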
… and Codex (coleam00#1361)

Both binary resolvers previously stopped at env-var + explicit config and
threw a "not found" error when neither was set. Users who followed the
upstream-recommended install flow (Anthropic's `curl install.sh` for
Claude, `npm install -g @openai/codex`) still had to manually set either
`CLAUDE_BIN_PATH` / `CODEX_BIN_PATH` or the corresponding config field
before any workflow could run.

Add a tier-N autodetect step between the explicit config tier and the
install-instructions throw. Purely additive: env and config still win
when set (precedence covered by new tests). On autodetect miss, the same
install-instructions error fires as before.

Claude probe list (verified against docs.claude.com "Uninstall Claude
Code → Native installation" section):
  - $HOME/.local/bin/claude            (mac/linux native installer)
  - $USERPROFILE\.local\bin\claude.exe (Windows native installer)

Codex probe list (verified against openai/codex README; npm global-
install puts the binary at `{npm_prefix}/bin/<name>` on POSIX,
`{npm_prefix}\<name>.cmd` on Windows):
  - $HOME/.npm-global/bin/codex   (user-set `npm config set prefix`)
  - /opt/homebrew/bin/codex       (mac arm64 with homebrew-node)
  - /usr/local/bin/codex          (mac intel / linux system node)
  - %APPDATA%\npm\codex.cmd       (Windows npm global default)
  - $HOME\.npm-global\codex.cmd   (Windows user-set prefix)

Not probed (explicit override still required):
  - Custom npm prefixes — `npm root -g` would need a subprocess per
    resolve, too much surface for a probe helper
  - `brew install --cask codex` — cask layout isn't a PATH binary
  - Manual GitHub Releases extracts — placement is user-determined
  - `~/.bun/bin/codex` — not documented in openai/codex README

Pi provider intentionally has no equivalent change: the Pi SDK is
bundled into the archon binary (no subprocess), so there's no "binary"
to resolve. Pi auth lives at `~/.pi/agent/auth.json` which the SDK
already finds by default, and the PR A shim (`PI_PACKAGE_DIR`) handles
the package-dir case via Pi's own documented escape hatch.

E2E verified: removed both config entries from ~/.archon/config.yaml,
rebuilt compiled binary, ran `archon workflow run archon-assist` and a
Codex workflow. Logs showed `source: 'autodetect'` for both, responses
returned cleanly.
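The autodetect tier described above reduces to probing a fixed candidate list. A minimal sketch, with the candidate paths copied from the commit message and the helper name assumed:

```typescript
import { existsSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Return the first candidate path that exists, or undefined on a miss
// (in which case the caller throws the same install-instructions error).
function autodetectBinary(candidates: string[]): string | undefined {
  return candidates.find((p) => existsSync(p));
}

const claudeCandidates =
  process.platform === "win32"
    ? [join(homedir(), ".local", "bin", "claude.exe")] // Windows native installer
    : [join(homedir(), ".local", "bin", "claude")];    // mac/linux native installer

const codexCandidates =
  process.platform === "win32"
    ? [
        join(process.env.APPDATA ?? "", "npm", "codex.cmd"), // npm global default
        join(homedir(), ".npm-global", "codex.cmd"),          // user-set prefix
      ]
    : [
        join(homedir(), ".npm-global", "bin", "codex"), // user-set npm prefix
        "/opt/homebrew/bin/codex",                      // mac arm64 homebrew-node
        "/usr/local/bin/codex",                         // mac intel / linux
      ];
```

Env var and explicit config would still be checked before these lists, preserving the precedence the commit describes.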
…ry autodetect test

The native-installer autodetect test computed its expected path from
process.env.HOME, but the implementation uses node:os homedir(). On
Windows, HOME is typically unset (Windows uses USERPROFILE), so the
test fell back to '/Users/test' while the resolver returned the real
home dir — making the spy's path-equality check fail and breaking CI
on windows-latest.

Mirror the implementation by importing homedir() from node:os and
joining with node:path so the expected path matches the actual
platform-resolved home and separator.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…ver (coleam00#1365)

Reported in coleam00#1365: a user running `archon serve` with DISCORD_BOT_TOKEN
set but the "Message Content Intent" toggle disabled in the Discord
Developer Portal saw the entire server crash with `Used disallowed
intents`. Discord rejects the gateway connection (close code 4014) when
a privileged intent is requested without being enabled, and the
unguarded `await discord.start()` propagated the error all the way up,
taking the web UI down with it.

Wrap discord.start() in try/catch — log the failure with an actionable
hint (special-cased for the disallowed-intent error) and continue
running. Other adapters and the web UI come up regardless. The shutdown
handler already uses optional chaining (`discord?.stop()`) so nulling
discord after a failed start is safe.
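The guarded-start pattern might be sketched like this (the adapter shape and function name are illustrative, not the real Discord adapter API):

```typescript
// A failed optional-adapter start is logged and swallowed; the caller
// nulls the adapter so the rest of the server keeps running.
interface Adapter {
  start(): Promise<void>;
  stop?(): void;
}

export async function startOptionalAdapter(
  name: string,
  adapter: Adapter,
): Promise<Adapter | null> {
  try {
    await adapter.start();
    return adapter;
  } catch (err) {
    // The real fix special-cases the disallowed-intent error with an
    // actionable hint; a generic log stands in for that here.
    console.error(`[${name}] failed to start; continuing without it:`, err);
    return null; // shutdown uses adapter?.stop(), so null is safe
  }
}
```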

Other adapters (Telegram, Slack, GitHub, Gitea, GitLab) have the same
unguarded-start pattern but are out of scope for this fix — addressing
them is tracked separately.

Also expanded the Discord setup docs with a caution callout that names
the exact error string and the new log event so users can grep for
both.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…0#1362)

* docs(script-nodes): add dedicated guide and teach the archon skill how to write them

Script nodes (script:) have been a first-class DAG node type since v0.3.3 but
were documented only as one-liners in CLAUDE.md and a CI smoke test. Claude
Code reading the archon skill would see "Four Node Types: command, prompt,
bash, loop" and reach for bash+node/python one-liners instead of a proper
script node — losing bun's --no-env-file isolation, uv's --with dependency
pins, and the .archon/scripts/ reuse story.

- New packages/docs-web/src/content/docs/guides/script-nodes.md mirroring the
  structure of loop-nodes.md / approval-nodes.md: schema, inline vs named
  dispatch, runtime/deps semantics, scripts directory precedence (repo > home),
  extension-runtime mapping, env isolation, stdout/stderr contract, patterns,
  and the explicit list of ignored AI fields.
- guides/authoring-workflows.md and guides/index.md updated so the new guide is
  discoverable from both the node-types table and the guides landing page.
- reference/variables.md calls out the no-shell-quote difference between
  bash: and script: substitution — a subtle correctness trap when adapting a
  bash pattern into a script node.
- Sidebar order bumped +1 on hooks/mcp-servers/skills/global-workflows/
  remotion-workflow to slot script-nodes at order 5 next to the other
  node-type guides.

- .claude/skills/archon/SKILL.md: replaces stale "Four Node Types" (which
  also silently omitted approval and cancel) with the accurate seven, with a
  script-node code block showing both inline and named patterns.
- references/workflow-dag.md: full Script Node section covering dispatch,
  resolution, deps, stdout contract, and the list of AI-only fields that are
  ignored; validation-rules list updated.
- references/dag-advanced.md and references/variables.md: retry-support line
  corrected; no-shell-quote note added.
- examples/dag-workflow.yaml: added an extract-labels TypeScript script node
  and updated the header comment.
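As a hedged sketch, the two dispatch forms the guide covers might look like this — field names are inferred from the commit text above, and the exact schema lives in the guide itself:

```yaml
nodes:
  # Inline dispatch: the script body lives in the workflow YAML
  - id: extract-labels-inline
    script: |
      const labels = ["bug", "docs"];
      console.log(JSON.stringify(labels));

  # Named dispatch: resolved from .archon/scripts/ (repo > home);
  # the extension picks the runtime (.ts/.js -> bun, .py -> uv)
  - id: extract-labels-named
    script: extract-labels.ts
```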

* fix(docs): review follow-ups for script-node guide

- skills example: extract-labels was reading process.env.ISSUE_JSON which is
  never set; use String.raw`$fetch-issue.output` so the upstream bash node's
  JSON is actually consumed
- guides/script-nodes.md + skills/workflow-dag.md: idle_timeout is accepted
  but ignored on script (and bash) nodes — executeScriptNode only reads
  node.timeout. Clarify that script/bash use `timeout`, not idle_timeout
- archon-workflow-builder.yaml: prompt enumerated only bash/prompt/command/loop,
  so the AI builder could never propose script or approval nodes. Add both
  (plus examples + rule about script output not being shell-quoted) and
  regenerate bundled defaults
- book/dag-workflows.md + book/quick-reference.md + adapters/web.md: fill in
  the node-type references that were missing script, approval, and cancel.
  adapters/web.md also overclaimed "loop" in the palette — NodePalette.tsx
  only drags command/prompt/bash, so note that the other kinds are YAML-only
…nv gaps, add good-practices + troubleshooting (coleam00#1363)

* fix(skill/when): document the full `when:` operator set and compound expressions

The skill reference previously stated "operators: ==, != only" which is
materially wrong — the condition evaluator supports ==, !=, <, >, <=, >=
plus && / || compound expressions with && binding tighter than ||, plus
dot-notation JSON field access. An agent authoring a workflow from the
skill would think half the operators don't exist.

Replaces the single-sentence section with a structured reference covering:
- All six comparison operators (string and numeric modes)
- Compound expressions with precedence rules and short-circuit eval
- JSON dot notation semantics and failure modes
- The fail-closed rules in full (invalid expression, non-numeric side,
  missing field, skipped upstream)

Grounded in packages/workflows/src/condition-evaluator.ts.
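The precedence and fail-closed behavior described above can be sketched in miniature — this is an illustrative toy, not the actual condition-evaluator.ts (no parentheses, no dot-notation resolution):

```typescript
type Cmp = "==" | "!=" | "<" | ">" | "<=" | ">=";

function compare(lhs: string, rhs: string, op: Cmp): boolean {
  const ln = Number(lhs);
  const rn = Number(rhs);
  // Numeric mode only when BOTH sides parse as numbers; string mode otherwise.
  const numeric =
    lhs !== "" && rhs !== "" && !Number.isNaN(ln) && !Number.isNaN(rn);
  if (op === "==") return numeric ? ln === rn : lhs === rhs;
  if (op === "!=") return numeric ? ln !== rn : lhs !== rhs;
  const cmp = numeric ? ln - rn : lhs.localeCompare(rhs);
  switch (op) {
    case "<": return cmp < 0;
    case ">": return cmp > 0;
    case "<=": return cmp <= 0;
    case ">=": return cmp >= 0;
  }
}

export function evaluate(expr: string): boolean {
  // || binds loosest: split on it first, so each ||-operand is a chain of
  // &&-joined comparisons — i.e. && binds tighter. some/every short-circuit.
  return expr.split("||").some((orTerm) =>
    orTerm.split("&&").every((andTerm) => {
      const m = andTerm.trim().match(/^(.+?)\s*(==|!=|<=|>=|<|>)\s*(.+)$/);
      if (!m) return false; // fail closed: an invalid clause evaluates false
      return compare(m[1].trim(), m[3].trim(), m[2] as Cmp);
    }),
  );
}
```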

* feat(skill): document Approval and Cancel node types

Approval and cancel nodes are first-class DAG node types (approval since the
workflow lifecycle work in coleam00#871, cancel as a guarded-exit primitive) but the
skill never described either one. An agent reading the skill and asked to
"add a review gate before implementation" or "stop the workflow if the input
is unsafe" would fall back to bash + exit 1, losing the proper semantics
(cancelled vs. failed, on_reject AI rework, web UI auto-resume).

Approval node coverage (references/workflow-dag.md, SKILL.md):
- Full configuration block with message, capture_response, on_reject
- The interactive: true workflow-level requirement for web UI delivery
- Approve/reject commands across all platforms (CLI, slash, natural
  language) and the capture_response → $node-id.output flow
- Ignored-fields list + the on_reject.prompt AI sub-node exception

Cancel node coverage (references/workflow-dag.md, SKILL.md):
- Single-field schema (cancel: "<reason>")
- Lifecycle: cancelled (not failed); in-flight parallel nodes stopped;
  no DAG auto-resume path
- The "cancel: vs bash-exit-1" decision rule (expected precondition miss
  vs. check itself failing)
- Two canonical patterns — upstream-classification gate, pre-expensive-step
  gate
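The pre-expensive-step gate might be sketched like this (node ids and the when: expression are illustrative; the single-field `cancel: "<reason>"` schema is as described above):

```yaml
nodes:
  - id: classify
    bash: echo '{"risk":"high"}'
    output_format: json

  # Guarded exit: an expected precondition miss ends the run as
  # cancelled (not failed) before any expensive downstream work.
  - id: safety-gate
    needs: [classify]
    when: $classify.output.risk == high
    cancel: "Input classified as high-risk; stopping before implementation."
```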

Validation-rules list updated to enumerate approval/cancel constraints
(message non-empty, on_reject.max_attempts range 1-10, cancel reason
non-empty), plus a forward note that script: joins the mutually-exclusive
set once PR coleam00#1362 lands.

Placement in both files is after the Loop section and before the validation
section, so this commit stays additive with respect to PR coleam00#1362's Script
node insertion between Bash and Loop — rebase is clean.

* feat(skill): document workflow-level fields beyond name/provider/model

The skill's Schema section previously showed only name, description, provider,
and model at the workflow level — which is most of a stub. Agents asked to
"use the 1M-context Claude beta" or "run this under a network sandbox" or
"add a fallback model in case Opus rate-limits" had no way to discover
that any of these fields existed at the workflow level.

Adds a comprehensive Workflow-Level Fields section covering:
- Core: name, description, provider, model, interactive (with explicit
  callout that interactive: true is REQUIRED for approval/loop gates on
  web UI — a common footgun)
- Isolation: worktree.enabled for pin-on/pin-off (the only worktree field
  at workflow level; baseBranch/copyFiles/path/initSubmodules are
  config.yaml only, so a cross-reference points there)
- Claude SDK advanced: effort, thinking, fallbackModel, betas, sandbox,
  with explicit per-node-only exceptions (maxBudgetUsd, systemPrompt)
- Codex-specific: modelReasoningEffort (with note that it's NOT the same
  as Claude's effort — this has confused users), webSearchMode,
  additionalDirectories
- A complete worked example combining sandbox + approval + interactive

All fields cross-referenced against packages/workflows/src/schemas/workflow.ts
and packages/workflows/src/schemas/dag-node.ts.

* feat(skill/loop): document interactive loops and gate_message

Interactive loop nodes pause between iterations for human feedback via
/workflow approve — used by archon-piv-loop and archon-interactive-prd.
The skill's Loop Nodes section previously omitted both interactive: true
and gate_message entirely, so an agent writing a guided-refinement
workflow wouldn't know the feature exists or that gate_message is
required at parse time.

Adds:
- interactive and gate_message rows to the config table (marking
  gate_message as required when interactive: true — enforced by the
  loader's superRefine)
- A dedicated "Interactive Loops" subsection explaining the 6-step
  iterate-pause-approve-resume flow
- Explicit call-out that $LOOP_USER_INPUT populates ONLY on the first
  iteration of a resumed session — easy to miss and a common surprise
- Workflow-level interactive: true requirement for web UI delivery
  (loader warning otherwise) so the full-flow example is complete
- Note that until_bash substitution DOES shell-quote $nodeId.output
  (unlike script bodies) — called out since the audit surfaced this
  inconsistency
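A sketch of the documented shape (field placement is illustrative and may differ from the real loop schema — gate_message required when interactive: true, plus the workflow-level interactive: true for web UI delivery):

```yaml
interactive: true            # workflow-level: required for web UI delivery
nodes:
  - id: refine-draft
    loop:
      prompt: "Improve the draft; use $LOOP_USER_INPUT when present"
      max_iterations: 5
      interactive: true
      gate_message: "Review this iteration, then /workflow approve to continue"
```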

* fix(skill/cli): complete the CLI command reference with missing lifecycle commands

The CLI reference previously documented only list, run, cleanup, validate,
complete, version, setup, and chat — missing nearly every workflow
lifecycle command an agent needs to operate a paused, failed, or stuck
run. The interactive-workflows reference assumed these commands existed
without actually documenting them.

Adds full documentation for:
- archon workflow status — show running workflow(s)
- archon workflow approve <run-id> [comment] — resume approval gate
  (also populates $LOOP_USER_INPUT on interactive loops and the gate
  node's output when capture_response: true)
- archon workflow reject <run-id> [reason] — reject gate; cancels or
  triggers on_reject rework depending on node config
- archon workflow cancel <run-id> — terminate running/paused with
  in-flight subprocess kill
- archon workflow abandon <run-id> — mark stuck row cancelled without
  subprocess kill (for orphan-cleanup after server crashes — matches
  the coleam00#1216 precedent)
- archon workflow resume <run-id> [message] — force-resume specific
  run (auto-resume is default; this is for explicit override)
- archon workflow cleanup [days] — disk hygiene for old terminal runs
  (with explicit callout that it does NOT transition 'running' rows,
  a common confusion)
- archon workflow event emit — used inside loop prompts for state
  signalling; documented so agents don't invent their own mechanism
- archon continue <branch> [flags] [msg] — iterative-session entry
  point with --workflow and --no-context flags

Also:
- Adds --allow-env-keys flag to the `workflow run` flag table with
  audit-log context and the env-leak-gate remediation use case
- Adds an "Auto-resume without --resume" note disambiguating when
  --resume is needed vs. when auto-resume handles it
- Adds --include-closed flag to `isolation cleanup`, which was
  previously missing; converts the flag list to a structured table
- Explains the cancel/abandon distinction (live subprocess vs. orphan)

All grounded in packages/cli/src/commands/workflow.ts, continue.ts,
and isolation.ts.

* feat(skill/repo-init): add scripts/ and state/, three-path env model, per-project env injection

The repo-init reference was missing two first-class .archon/ directories
(scripts/ since v0.3.3, state/ since the workflow-state feature) and had
nothing to say about env — the #1 thing a user hits on first run when
their repo has a .env file with API keys.

Directory tree updates:
- Adds .archon/scripts/ with the extension->runtime rule (.ts/.js -> bun,
  .py -> uv) so agents know where to put named scripts referenced by
  script: nodes.
- Adds .archon/state/ with explicit "always gitignore" callout — these
  are runtime artifacts, not source. Previously undocumented in the skill.
- Adds .archon/.env (repo-scoped Archon env) and distinguishes it from
  the target repo's top-level .env.
- Adds a "What each directory is for" list so the structure isn't just
  a tree with no narrative.

.gitignore guidance:
- state/ and .env added as must-gitignore (state/ matches CLAUDE.md and
  reference/archon-directories.md — skill was lagging).
- mcp/ demoted to conditional — gitignore only if you hardcode secrets.

New "Three-Path Env Model" section:
- ~/.archon/.env (trusted, user), <cwd>/.archon/.env (trusted, repo),
  <cwd>/.env (UNTRUSTED, target project — stripped from subprocess env).
- Precedence (override: true across archon-owned paths) and the
  observable [archon] loaded N keys / stripped K keys log lines so
  operators can verify what actually happened.
- Decision tree for where to put API keys vs. target-project env vs.
  things Archon shouldn't touch.
- Links to archon setup --scope home|project with --force for writing
  to the right file with timestamped backups.

New "Per-Project Env Injection" section:
- Documents both managed surfaces: .archon/config.yaml env: block
  (git-committed, $REF expansion) and Web UI Settings → Projects →
  Env Vars (DB-stored, never returned over API).
- Names every execution surface that receives the injected vars:
  Claude/Codex/Pi subprocess, bash: nodes, script: nodes, and direct
  codebase-scoped chat.
- Documents the env-leak gate with all 5 remediation paths so an agent
  hitting "Cannot register: env has sensitive keys" knows the options.

Grounded in CHANGELOG v0.3.7 (three-path env + setup flags), v0.3.0
(env-leak gate), and reference/security.md on the docs site.
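The three-path precedence can be sketched as a merge (illustrative only — not the real loader; path roles are from the text above):

```typescript
// Trusted archon-owned envs are layered with repo overriding user
// (override: true); keys found in the UNTRUSTED target repo's own
// <cwd>/.env are stripped from the subprocess env.
export function buildSubprocessEnv(
  processEnv: Record<string, string>,
  userArchonEnv: Record<string, string>, // ~/.archon/.env (trusted, user)
  repoArchonEnv: Record<string, string>, // <cwd>/.archon/.env (trusted, repo)
  targetRepoEnvKeys: string[],           // keys from <cwd>/.env (untrusted)
): Record<string, string> {
  const env = { ...processEnv, ...userArchonEnv, ...repoArchonEnv };
  for (const key of targetRepoEnvKeys) delete env[key];
  return env;
}
```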

* fix(skill/authoring-commands): correct override paths and add home-scoped commands

The file-location and discovery sections described an override layout that
does not match the actual resolver. It showed:

  .archon/commands/defaults/archon-assist.md  # Overrides the bundled

and claimed `.archon/commands/defaults/` was where repo-level overrides
lived. In fact the resolver (executor-shared.ts:152-200 + command-
validation.ts) walks `.archon/commands/` 1 level deep and uses basename
matching — putting `archon-assist.md` at the top of `.archon/commands/`
is the canonical way to override the bundled version. The `defaults/`
subfolder is an Archon-internal convention for shipping bundled defaults,
not a user-facing override pattern.

Also, home-scoped commands (`~/.archon/commands/`, shipped in v0.3.7)
were completely absent — agents authoring personal helpers wouldn't
know they could live at the user level and be shared across every repo.

Changes:
- File Location section now shows all three discovery scopes (repo,
  home, bundled) with precedence ordering and 1-level subfolder rules
- Duplicate-basename rule documented as a user error surface
- Discovery and Priority section rewritten with accurate 3-step lookup
  order — no more references to the nonexistent defaults/ override path
- Adds the Web UI "Global (~/.archon/commands/)" palette label note so
  users authoring helpers for the builder know what to expect

No code changes — this is a pure fix of stale/incorrect skill reference
material.

* feat(skill): add workflow good-practices and troubleshooting reference pages

Closes two gaps from the audit. The skill previously had zero guidance on
designing multi-node workflows (what to avoid, what to reach for first,
how to structure artifact chains) and zero guidance on where to look
when things go wrong (log paths, env-leak gate remediations, orphan-row
cleanup, resume semantics).

New references/good-practices.md (9 Good Practices + 7 Anti-Patterns):

- Use deterministic nodes (bash:/script:) for deterministic work, AI for
  reasoning — the single biggest quality lever
- output_format required whenever downstream when: reads a field — the
  most common source of "workflow silently routes wrong"
- trigger_rule: none_failed_min_one_success after conditional branches —
  the classic bug where all_success fails because a skipped when:-gated
  branch doesn't count as a success
- context: fresh requires artifacts for state passing — commands must
  explicitly "read $ARTIFACTS_DIR/..." when downstream of fresh
- Cheap models (haiku) for glue, strong for substance
- Workflow descriptions as routing affordances
- Validate (archon validate workflows) + smoke-run before shipping
- Artifact-chain-first design
- worktree.enabled: true for code-changing workflows (reversibility)
- Anti-patterns with before/after YAML examples for each (AI-for-tests,
  free-form when: matching, context: fresh without artifacts, long flat
  AI-node layers, secrets in YAML, retry on loop nodes, tiny
  max_iterations, missing workflow-level interactive:, tool-restricted
  MCP nodes)
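The trigger_rule practice above, sketched in YAML (node bodies are placeholders):

```yaml
nodes:
  - id: check
    bash: echo '{"kind":"bug"}'
    output_format: json
  - id: fix-bug
    needs: [check]
    when: $check.output.kind == bug       # skipped when kind != bug
    prompt: "Fix the bug described in $check.output"
  - id: write-feature
    needs: [check]
    when: $check.output.kind == feature   # skipped when kind == bug
    prompt: "Implement the feature described in $check.output"
  - id: report
    needs: [fix-bug, write-feature]
    # all_success would fail here: a skipped when:-gated branch does not
    # count as a success. This rule passes if nothing failed and at least
    # one branch succeeded.
    trigger_rule: none_failed_min_one_success
    bash: echo "done"
```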

New references/troubleshooting.md:

- Log location (~/.archon/workspaces/<owner>/<repo>/logs/<run-id>.jsonl)
  with jq recipes for common queries (last assistant message, failed
  events, full stream)
- Artifact location for cross-node handoff debugging
- 9 Common Failure Modes, each with root cause + concrete fix:
  - $BASE_BRANCH unresolvable
  - Env-leak gate (5 remediations)
  - Claude/Codex binary not found (compiled-binary-only)
  - "running" forever (AI working / orphan / idle_timeout)
  - Mid-workflow failure and auto-resume semantics
  - Approval gate missing on web UI (workflow-level interactive:)
  - MCP plugin connection noise (filtered by design)
  - Empty $nodeId.output / field access (4 causes)
- Diagnostic command cheat sheet (list, status, isolation list, validate,
  tail-log, --verbose, LOG_LEVEL=debug)
- Escalation protocol (version + validate + log tail + CHANGELOG + issue)

SKILL.md routing table now dispatches "Workflow good practices /
anti-patterns" and "Troubleshoot a failing / stuck workflow" to the new
references so an agent can find them without having to know they exist.

* docs(book): update node-types coverage from four to all seven

The book is the curated first-contact reading path (landing page → "Get
Started" → /book/). Both dag-workflows.md and quick-reference.md were
stuck on "four node types" — missing script, approval, and cancel. A user
reading the book as their first introduction would form an incomplete
mental model, then find three more node types in the reference section
later with no explanation of when they arrived.

book/dag-workflows.md:
- "four node types" → "seven node types. Exactly one mode field is
  required per node"
- Table now lists Command, Prompt, Bash, Script, Loop, Approval, Cancel
  with one-line "when to use" for each, and cross-links to the dedicated
  guide pages for Script / Loop / Approval
- New sections below the table for Script (inline + named examples with
  runtime and deps), Approval (with the interactive: true workflow-level
  note that's easy to miss), and Cancel (guarded-exit pattern) — keeping
  the existing narrative shape for Bash and Loop

book/quick-reference.md:
- Node Options table now includes script, approval, cancel rows
- agents row added (inline sub-agents, Claude-only)
- New "Script-specific fields" and "Approval-specific fields" subsections
  so the cheat-sheet is actually complete rather than pointing users
  elsewhere for the required constraints
- Retry row callout that loop nodes hard-error on retry — previously
  omitted
- bash timeout note widened to cover script timeout (same semantics)

Both files are docs-web content; the CI build on the docs-script-nodes
PR (coleam00#1362) previously validated the Starlight build path with a similar
table addition, so this should render clean.

* fix(skill/cli): remove nonexistent `archon workflow cancel`, fix workflow status jq recipe

Two accuracy issues from the PR code-reviewer (comment 4311243858).

C1: `archon workflow cancel <run-id>` does NOT exist as a CLI subcommand.
The switch at packages/cli/src/cli.ts:318-485 dispatches on list / run /
status / resume / abandon / approve / reject / cleanup / event — running
`archon workflow cancel` hits the default case and exits with "Unknown
workflow subcommand: cancel" (cli.ts:478-484). Active cancellation is
only available via:
  - /workflow cancel <run-id> chat slash command (all platforms)
  - Cancel button on the Web UI dashboard
  - POST /api/workflows/runs/{runId}/cancel REST endpoint

cli-commands.md: removed the `### archon workflow cancel <run-id>`
subsection; kept the `abandon` subsection but made it explicit that
abandon does NOT kill a subprocess. Added a call-out box at the bottom
of the abandon section explaining where to go for actual cancellation.

troubleshooting.md "running forever" section: split the original
cancel-vs-abandon advice into three bullets — Web UI / CLI abandon (for
orphans, no subprocess kill) / chat `/workflow cancel` (for live runs
that need interruption). Added an explicit "there is no archon workflow
cancel CLI subcommand" parenthetical since the wrong command was being
suggested in flow.

I1: the `archon workflow list --json` diagnostic used an incorrect jq
filter. workflow list's --json output (workflow.ts:185-219) has shape
{ workflows: [{ name, description, provider?, model?, ... }], errors: [...] }
with no `runs` field — `jq '.workflows[] | select(.runs)'` returns empty
unconditionally. Replaced with `archon workflow status --json | jq '.runs[]'`,
which matches the actual shape of workflowStatusCommand at
workflow.ts:852+ ({ runs: WorkflowRun[] }). Also tightened the narration
to distinguish JSON from human-readable status output.

No change to the commit history in this PR — these are follow-up fixes
to claims I introduced in earlier commits of this branch (f10b989 for
C1, 66d2b86 for I1).

* fix(skill): remove env-leak gate references (feature was removed in provider extraction)

C2 from the PR code-reviewer (comment 4311243858). The pre-spawn env-leak
gate was removed from the codebase during the provider-extraction refactor
— see TODO(coleam00#1135) at packages/providers/src/claude/provider.ts:908. Zero
hits for --allow-env-keys / allowEnvKeys / allow_env_keys / allow_target_repo_keys
across packages/. The CLI's parseArgs (cli.ts:182-208) has no
--allow-env-keys option, and because parseArgs uses strict: false, an
unknown --allow-env-keys would be silently ignored rather than error.
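The strict: false behavior noted above can be demonstrated directly with node:util's parseArgs — an unknown flag does not throw, so a CLI that never reads the key silently ignores it:

```typescript
import { parseArgs } from "node:util";

// With strict: false, the unknown --allow-env-keys is accepted without
// error; only options the CLI actually reads (here: verbose) matter.
const { values } = parseArgs({
  args: ["--allow-env-keys", "--verbose"],
  options: { verbose: { type: "boolean" } },
  strict: false,
});
```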

What remains accurate and is NOT touched:
- Three-Path Env Model section (user/repo archon-owned envs are loaded;
  target repo <cwd>/.env keys are stripped from process.env at boot)
  still correctly describes current behavior, grounded in
  packages/paths/src/strip-cwd-env.ts + env-integration.test.ts
- Per-Project Env Injection section (Option 1: .archon/config.yaml env:
  block; Option 2: Web UI Settings → Projects → Env Vars) is unchanged —
  both remain the sanctioned way to get env vars into subprocesses

Removed claims (all three files):
- cli-commands.md: --allow-env-keys flag row in the workflow run flags
  table
- repo-init.md: the "Env-leak gate" subsection at the end of Per-Project
  Env Injection listing 5 remediations (all of which reference UI/CLI/
  config surfaces that don't exist). Replaced with a succinct callout
  that explains the actual current behavior — target repo .env keys are
  stripped, workflows that need those values should use managed
  injection — so the reader still gets the "where to put my env vars"
  answer
- troubleshooting.md: the "Cannot register: codebase has sensitive env
  keys" section (error message that can no longer be emitted)

If the env-leak gate is ever resurrected per TODO(coleam00#1135), the docs can be
re-added then. The CHANGELOG v0.3.0 entry describing the gate is a
historical record of past behavior and does not need to be rewritten.

* fix(skill/troubleshooting): correct JSONL event type names and field name

C3 from the PR code-reviewer (comment 4311243858). The troubleshooting
reference's event-types table used _started / _completed / _failed
suffixes, but packages/workflows/src/logger.ts:19-30 shows the actual
WorkflowEvent.type enum is:

  workflow_start | workflow_complete | workflow_error |
  assistant | tool | validation |
  node_start | node_complete | node_skipped | node_error

The second jq recipe also queried `.event` but the discriminator is `.type`.

Fixes:
- Event table: renamed columns (_started → _start, _completed → _complete,
  _failed → _error). Explicitly called out the field name as `type` so the
  reader knows what jq selector to use
- Replaced the "tool_use / tool_result" row with a single `tool` row and
  listed its actual payload fields (tool_name, tool_input, duration_ms,
  tokens) — tool_use/tool_result are SDK message kinds that appear within
  the AI stream, not top-level log event types
- Added a `validation` row (was missing; it's emitted by workflow-level
  validation calls with `check` and `result` fields)
- Removed `retry_attempt` row — this event type is not emitted to the
  JSONL file. Retry bookkeeping goes through pino logs, not the workflow
  log file
- Added an explicit callout that loop_iteration_started /
  loop_iteration_completed (and other emitter-only events) go through
  the workflow event emitter + DB workflow_events table, NOT the JSONL
  file. Pointed readers to the DB or Web UI for loop-level detail. This
  distinguishes the two parallel event systems — easy to conflate
  (store.ts:11-17 uses _started/_completed/_failed for the DB side,
  logger.ts uses _start/_complete/_error for JSONL)
- Fixed the "all failed events" jq recipe: .event → .type and _failed → _error
- Minor cleanup: the inline "tool_use events" mention in the "running
  forever" section said the wrong event name — updated to "tool or
  assistant events in the tail"

Grounded in packages/workflows/src/logger.ts (canonical JSONL event
shape) and packages/workflows/src/store.ts (the parallel DB event
naming, which the reviewer correctly flagged as different and worth
keeping distinct).
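The corrected recipe, sketched in code: filter the JSONL log on the `type` discriminator (not `.event`), using the logger.ts enum names quoted above. The sample lines and the node_id field are fabricated for illustration:

```typescript
const sampleLog = [
  '{"type":"workflow_start"}',
  '{"type":"node_start","node_id":"build"}',
  '{"type":"node_error","node_id":"build"}',
  '{"type":"workflow_error"}',
].join("\n");

// Equivalent of: jq 'select(.type == "node_error" or .type == "workflow_error")'
export function errorEvents(jsonl: string): Array<{ type: string }> {
  return jsonl
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line))
    .filter((ev) => ev.type === "node_error" || ev.type === "workflow_error");
}
```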

* fix(skill): two stragglers from the code-reviewer audit

Cleanup of two references that slipped through the earlier C1 and C3 fixes:

- references/troubleshooting.md:126: `node_failed` → `node_error`
  (the "Node output is empty" diagnostics section references the JSONL
  log, which uses the logger.ts enum — not the DB workflow_events table
  which does use `node_failed`). The C3 fix corrected the event table
  and one jq recipe but missed this inline mention.

- references/interactive-workflows.md:106: removed `archon workflow
  cancel <run-id>` (nonexistent CLI subcommand) from the
  troubleshooting bullet. This was pre-existing before the hardening
  PR but fell within the C1 remediation scope. Replaced with the
  correct triage: reject (approval gate only) vs abandon (orphan
  cleanup, no subprocess kill) vs chat /workflow cancel (actual
  subprocess termination).

Grounded in the same sources as the earlier C1/C3 commits:
packages/cli/src/cli.ts:318-485 (no cancel case) and
packages/workflows/src/logger.ts:19-30 (JSONL type enum).

* feat(skill): point to archon.diy as the canonical docs source

The skill had no reference to archon.diy (the live docs site built from
packages/docs-web/). Several reference files said "see the docs site"
without naming the URL, leaving the agent to guess or grep the repo for
the hostname. An agent with the skill loaded should know that when the
distilled reference pages don't cover a case, the full canonical docs
are one WebFetch away.

SKILL.md: new "Richer Context: archon.diy" section between Routing and
Running Workflows. Covers:
- When to reach for the live docs (longer examples, tutorial framing,
  features the skill only mentions in passing, "where's that
  documented?" user questions)
- URL map — 13 starting points covering getting-started, book (tutorial
  series), guides/ (authoring + per-node-type + per-node-feature),
  reference/ (variables, CLI, security, architecture, configuration,
  troubleshooting), adapters/, deployment/
- Precedence: skill refs first (context-cheap, tuned for agents), docs
  site as escalation. Prevents agents defaulting to WebFetch when a
  local skill ref already covers the answer

Also upgrades the 5 existing generic "docs site" mentions across
reference files to concrete archon.diy URLs with anchor fragments where
helpful:
- good-practices.md: Inline sub-agents pattern → archon.diy/guides/
  authoring-workflows/#inline-sub-agents
- troubleshooting.md: "Install page on the docs site" → archon.diy/
  getting-started/installation/
- workflow-dag.md: "Workflow Description Best Practices" → anchor link;
  sandbox schema reference → archon.diy/guides/authoring-workflows/
  #claude-sdk-advanced-options
- repo-init.md: Security Model reference → archon.diy/reference/
  security/#target-repo-env-isolation (deep-link into the section that
  covers the <cwd>/.env strip behavior)

URL source of truth: astro.config.mjs:5 (site: 'https://archon.diy').
URL structure mirrors packages/docs-web/src/content/docs/<section>/
<page>.md — verified by the 62 pages the docs build produces.
…#1395)

Anthropic's Opus 4.7 landed 2026-04-16; on the Anthropic API, opus /
opus[1m] now resolve to 4.7 with a 1M context window at standard
pricing. Using the alias instead of the hard-pinned claude-opus-4-6[1m]
lets bundled default workflows auto-track the recommended Opus version.

No explicit effort is set, so nodes inherit the per-model default
(xhigh on 4.7, high on 4.6).
…m00#1398)

* fix(workflow): migrate piv-loop plan handoff to $ARTIFACTS_DIR (coleam00#1380)

The create-plan node used a relative path (.claude/archon/plans/{slug}.plan.md)
that the AI agent would sometimes write to a different location, breaking all
downstream nodes that glob for the plan file. Migrated all plan/progress file
references to $ARTIFACTS_DIR/plan.md and $ARTIFACTS_DIR/progress.txt, matching
the pattern used by archon-fix-github-issue and other workflows.

Changes:
- Replace slug-based plan path with $ARTIFACTS_DIR/plan.md in create-plan node
- Replace ls -t glob discovery with direct $ARTIFACTS_DIR/plan.md reads in
  refine-plan, code-review, and fix-feedback nodes
- Replace empty-string guard with file-existence check in implement-setup bash
- Migrate progress.txt references in implement loop to $ARTIFACTS_DIR/
- Add explicit plan/progress paths in finalize node
- Regenerated bundled-defaults.generated.ts

Fixes coleam00#1380

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(workflow): address review findings in archon-piv-loop

- Rename 'Step 2: Write the Plan' to 'Step 2: Plan File Location' to
  eliminate the duplicate heading that collided with Step 3's identical
  title in the create-plan node
- Guard implement-setup against a 0-task plan file: exit 1 with a
  clear error when no '### Task N:' sections are found, preventing a
  silent no-op implement loop
- Remove 2>/dev/null from code-review commit so pre-commit hook failures
  and other stderr are visible to the agent instead of silently swallowed
- Replace '|| true' on git push in finalize with an explicit WARNING echo
  so push failures (auth, upstream conflict, no remote) surface to the
  agent rather than being silently ignored
- Regenerate bundled-defaults.generated.ts
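The 0-task guard might look like this (a self-contained demo writing a sample plan; the real node reads $ARTIFACTS_DIR/plan.md and the task-heading pattern is an assumption from the commit text):

```shell
# Demo setup: a sample plan file standing in for $ARTIFACTS_DIR/plan.md.
plan="$(mktemp)"
printf '### Task 1: setup\n### Task 2: implement\n' > "$plan"

# The guard: fail loudly instead of silently no-op looping over zero tasks.
if ! grep -q '^### Task [0-9]' "$plan"; then
  echo "ERROR: no '### Task N:' sections found in $plan" >&2
  exit 1
fi
echo "tasks found: $(grep -c '^### Task [0-9]' "$plan")"
```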

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* chore(workflows): regenerate bundled defaults to match opus[1m] alias

The bundle was stale relative to the YAML sources after coleam00#1395 merged —
check:bundled was failing CI. Regenerated; no YAML edits.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…cutor (coleam00#1403)

PIV Task 1: Adds three new tests in a dedicated describe block
'executeDagWorkflow -- final status derivation' covering the anyFailed
branch (dag-executor.ts ~line 2956) that previously had no direct test:
- one success + one independent failure calls failWorkflowRun (not completeWorkflowRun)
- multiple successes + one failure calls failWorkflowRun (not completeWorkflowRun)
- trigger_rule: none_failed skips dependent node but anyFailed still marks run failed

Fixes coleam00#1381.
New reference for the archon skill: a single-glance lookup of which
parameter works on which node type, an intent-based "how do I..." table,
a consolidated silent-failure catalog, and an inline agents: section
(previously only referenced via archon.diy).

Purpose is complementary, not duplicative:
- workflow-dag.md remains the authoring guide
- dag-advanced.md remains the hooks/MCP/skills/retry deep-dive
- good-practices.md remains the patterns and anti-patterns
- parameter-matrix.md is the grep-this-first lookup when you know the
  outcome you want but not which field gets you there

Also registers the new reference in SKILL.md routing table.
@Wirasm
Collaborator

Wirasm commented Apr 27, 2026

Hi @shaun0927 — thanks for opening this PR.

This repository uses a PR template at .github/pull_request_template.md with several required sections. A few of them appear to be empty or placeholder here:

  • Architecture Diagram
  • Label Snapshot
  • Change Metadata
  • Security Impact
  • Compatibility / Migration
  • Side Effects / Blast Radius
  • Rollback Plan
  • Risks and Mitigations

Could you fill those out (even briefly)? The template helps reviewers understand scope, risk, and rollback — it speeds up review significantly.

If a section genuinely doesn't apply, just write "N/A" in it rather than leaving it blank.

@Wirasm
Collaborator

Wirasm commented Apr 27, 2026

@shaun0927 related to #1244 — overlapping area or partial fix.

Wirasm and others added 10 commits April 27, 2026 10:16
Add explicit references to .github/PULL_REQUEST_TEMPLATE.md in both
CONTRIBUTING.md and CLAUDE.md, plus a reminder to link issues with
Closes/Fixes/Resolves so they auto-close on merge. Repo-triage runs
were flagging dozens of partially-filled or unlinked PRs each cycle.
…riage (coleam00#1428)

* feat(workflows): add maintainer-standup workflow for daily PR/issue triage

Daily morning briefing that pulls origin/dev, triages all open PRs and assigned
issues against direction.md, and surfaces progress vs. the previous run. Designed
for live-checkout use (worktree.enabled: false) so it can read its own state.

Layout under .archon/maintainer-standup/:
  - direction.md (committed) — project north-star: what Archon IS / IS NOT.
    Drives PR P4 polite-decline classification with cited clauses.
  - README.md / profile.md.example — setup docs and template for new maintainers.
  - profile.md, state.json, briefs/YYYY-MM-DD.md — gitignored, per-maintainer.

Engine:
  - 3 parallel gather scripts in .archon/scripts/maintainer-standup-*.ts
    (git-status, gh-data, read-context) — bun runtime, JSON stdout.
  - Synthesis node: command file with output_format schema for
    { brief_markdown, next_state }.
  - Persist node: tiny inline bun script writes both to disk.

Run-to-run continuity: state.json carries observed_prs/issues snapshots, so the
next run can detect what merged, what closed, what the maintainer shipped, and
which carry-over items aged past N days.

Also adds .archon/** to the ESLint global ignore list (matches the existing
.claude/skills/** pattern) since .archon/ is user content and not part of any
tsconfig project.

* fix(maintainer-standup): address CodeRabbit review on coleam00#1428

- gh-data: bump --limit 100 → 1000 on all_open_prs and warn loudly when
  the cap is hit; preserves the observed_prs invariant the next-run
  "resolved since last run" diff depends on. (CodeRabbit critical)
- maintainer-standup.md: clarify P1 CI signal — the gathered payload only
  carries mergeStateStatus, not statusCheckRollup; for borderline P1s,
  drill in via `gh pr checks <n>`. (CodeRabbit minor)
- workflow.yaml persist: write briefs under local YYYY-MM-DD (sv-SE
  locale) instead of UTC ISO date, so an evening run doesn't file
  tomorrow's brief and break recent_briefs lookups. (CodeRabbit minor)
- workflow.yaml persist: wrap state/brief writes in try/catch; on
  failure dump brief_markdown and next_state to stderr so a 5-minute
  Sonnet synthesis isn't lost to a transient disk error. (CodeRabbit minor)
- gh-data + git-status: switch from execSync (shell-string) to
  execFileSync (argv array) for git/gh invocations. Defense-in-depth
  against shell metacharacters in values that pass through (esp. the
  gh_handle from profile.md). (CodeRabbit nitpick)
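The sv-SE date trick in the persist fix above can be sketched as follows (illustrative helper names, assuming a Node runtime with full ICU; not the repo's actual code):

```typescript
// The Swedish locale's standard date format happens to be YYYY-MM-DD, so
// toLocaleDateString('sv-SE') yields an ISO-style date in *local* time,
// whereas toISOString() is always UTC and rolls over to "tomorrow" for
// evening runs in timezones east of Greenwich.
const localIsoDate = (d: Date = new Date()): string =>
  d.toLocaleDateString('sv-SE');

const utcDate = (d: Date): string => d.toISOString().slice(0, 10);

// For a run at 16:30 UTC on 2026-04-27, a maintainer in UTC+9 would get
// "2026-04-27" from utcDate but their actual local calendar day
// ("2026-04-28") from localIsoDate — which is what brief filenames need.
```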
Add optional `tags: string[]` to `workflowBaseSchema`. Explicit values take precedence over keyword inference; `tags: []` suppresses inference end-to-end; omitting the field falls back to inference (backwards compatible). Non-array values warn-and-ignore matching the sibling `worktree`/`additionalDirectories` patterns.
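The precedence rules in that commit message can be sketched like this (`resolveTags` and `inferTags` are assumed names for illustration, not the repo's API):

```typescript
// Explicit tags win; an empty array deliberately suppresses inference; an
// omitted field falls back to keyword inference; anything non-array is
// warned about and ignored, matching the sibling worktree handling.
function resolveTags(
  explicit: unknown,
  inferTags: () => string[],
): string[] {
  if (Array.isArray(explicit)) {
    return explicit.filter((t): t is string => typeof t === 'string');
  }
  if (explicit !== undefined) {
    console.warn('tags: expected an array of strings; ignoring'); // warn-and-ignore
  }
  return inferTags();
}
```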
…ows under maintainer/ (coleam00#1430)

* feat(workflows): add maintainer-review-pr and group maintainer workflows under .archon/workflows/maintainer/

Adds the maintainer-review-pr workflow — a Pi/Minimax-based PR triage
flow that gates on direction alignment, scope focus, and PR-template
quality before doing any deep review. If the gate clears, runs the
five review aspects (code/error-handling/test-coverage/comment-quality/
docs-impact) as parallel Archon nodes and auto-posts a synthesized
review comment. If the gate fails (direction conflict, multiple
concerns, sprawling scope), drafts a polite-decline comment and pauses
for the maintainer's approval before posting.

Reorganizes the existing maintainer-standup workflow into the same
subfolder so all maintainer-facing workflows live together. Subfolder
grouping is supported by the workflow loader (1 level deep, resolution
by filename).

What lands:

- .archon/workflows/maintainer/maintainer-standup.yaml (moved from
  .archon/workflows/maintainer-standup.yaml)
- .archon/workflows/maintainer/maintainer-review-pr.yaml (new)
- .archon/commands/maintainer-review-{gate,code-review,error-handling,
  test-coverage,comment-quality,docs-impact,synthesize,report}.md (new,
  Pi-tuned variants of the existing review-agent commands so they avoid
  Claude-only Task / sub-agent patterns)

Pi/Minimax integration:

- Uses provider: pi, model: minimax/MiniMax-M2.7 — verified via the
  e2e-minimax-smoke test that Pi correctly routes to Minimax (session
  jsonl confirms provider=minimax) and that Pi's best-effort
  output_format parser handles the gate's nested schema.
- Two test runs landed real comments: a direction-decline on PR coleam00#1335
  and a deep-review on PR coleam00#1369. Both were posted to GitHub via the
  workflow's gh pr comment node.

* chore(workflows): also group repo-triage under .archon/workflows/maintainer/

repo-triage is the third maintainer-facing workflow alongside maintainer-standup and maintainer-review-pr; group it in the same subfolder for consistency. Subfolder resolution is by filename so the workflow name is unchanged.
…r unmapped providers (coleam00#1284)

Closes coleam00#1096.

- Switch Pi provider model lookup from pi-ai's getModel() (static catalog
  only) to ModelRegistry.create(authStorage).find() so user-configured
  custom models in ~/.pi/agent/models.json (LM Studio, ollama, llamacpp,
  custom OpenAI-compatible endpoints) are discoverable.
- Remove the local lookupPiModel helper.
- For env-var-mapped providers (anthropic, openai, etc.) still throw
  with a pi /login hint when credentials are missing. For unmapped
  providers, log pi.auth_missing at info and continue so local models
  that don't need credentials work without ceremony.
- Surface modelRegistry.getError() in the not-found message and emit
  pi.model_not_found so users debugging custom-provider configs see the
  real cause (e.g. missing baseUrl in models.json).
- Guard AuthStorage.create() and ModelRegistry.create() with try/catch
  so a malformed ~/.pi/agent/auth.json surfaces with Pi-framed context
  instead of a raw SDK stack trace.
- Document the credential-free path for local providers in ai-assistants.md.

Co-authored-by: Matt Chapman <Matt@NinjitsuWeb.com>
…add e2e-minimax-smoke (coleam00#1431)

* chore(workflows): group all smoke-test workflows under .archon/workflows/test-workflows/

Move the 7 existing e2e-*.yaml smoke tests plus the new e2e-minimax-smoke
test into a dedicated subfolder. Subfolder grouping is supported by the
workflow loader (1 level deep, resolution by filename) so workflow names
are unchanged. Mirrors the .archon/workflows/maintainer/ split landing
in coleam00#1430.

Also adds e2e-minimax-smoke.yaml — a sanity check that Pi correctly
routes to Minimax M2.7 via the user's local pi auth, and that Pi's
best-effort output_format parser handles a small nested schema. Asserts
routing by reading the most recent Pi session jsonl rather than asking
the model to self-identify (LLMs are unreliable narrators about their
own identity, especially when Pi's system prompt mentions other
providers as defaults).

* fix(e2e-minimax-smoke): address CodeRabbit review on coleam00#1431

- Widen find window from -mmin -3 to -mmin -10. The smoke's three Pi
  nodes plus the assert can collectively run several minutes on slow
  networks; 3 minutes was tight enough to false-FAIL on a healthy run.
  (CodeRabbit minor)
- Drop non-deterministic `head -1` over `find` output. find doesn't
  guarantee any order; on a tie, the wrong file would be picked. Now
  iterates all matching sessions and breaks on the first one carrying the
  routing signal — any match is sufficient evidence. (CodeRabbit minor)
- Replace single-regex `'"provider":"minimax".*"modelId":"MiniMax-M2.7"'`
  with two separate greps joined by `&&`. JSON field order isn't part of
  Pi's contract; a future Pi release reordering `provider` and `modelId`
  in the model_change event would silently false-FAIL the original
  pattern. The new check is order-independent. (CodeRabbit major)
…oleam00#1432)

Six findings, two majors and four minors/nitpicks:

- gate.md L17 vs L77: resolved conflicting input-source instructions.
  Body claimed "all inline, no extra fetch" while a later phase
  permitted reading PULL_REQUEST_TEMPLATE.md. Now: explicit "one
  allowed extra read" callout in Phase 1 + matching wording in Gate C.
  (CodeRabbit major)

- gate.md fenced blocks: added missing language identifiers (text/json/
  markdown) to satisfy markdownlint MD040. (CodeRabbit minor)

- gate.md L155 + read-context.ts: deterministic clock. The 3-day deadline
  was anchored to prior_state.last_run_at, which can be stale and produce
  past-dated deadlines. Moved both today and deadline_3d into the
  read-context.ts output (computed via sv-SE locale → ISO date in local
  time) and instructed the gate to use $read-context.output.deadline_3d
  directly. LLMs are unreliable at calendar arithmetic; this avoids it
  entirely. (CodeRabbit major)

- maintainer-review-pr.yaml fetch-diff: dropped 2>/dev/null on gh pr diff
  so auth / network / deleted-PR failures fail the node instead of
  feeding an empty diff to the gate. Empty-but-successful diff (PR has
  no changes) is now an explicit marker the gate can detect. (CodeRabbit
  minor)

- maintainer-review-pr.yaml approve-unclear: added capture_response: true
  so the maintainer's approve comment flows to the report node. Reject
  reasoning is already captured by Archon's run record. (CodeRabbit
  minor)

- maintainer-review-pr.yaml post-decline + report.md: the gh pr edit
  --add-label call previously swallowed all errors with || true and the
  report still claimed the label was applied. Now writes applied/skipped
  to $ARTIFACTS_DIR/.label-applied + the gh stderr to .label-error so
  the report can describe the actual outcome. (CodeRabbit nitpick)
…ume (coleam00#1435)

* fix(workflows): approval gate bypass after reject-with-redraft on resume

When an approval node was rejected with on_reject.prompt, the synthetic
PromptNode built to run the on_reject prompt reused the approval gate's
own node ID. executeNodeInternal then wrote a node_completed event with
that ID, causing getCompletedDagNodeOutputs to treat the gate as already
completed on the next resume — bypassing the human gate entirely.

Fix: give the synthetic node the ID `${node.id}:on_reject` so its
node_completed event has a distinct step_name that won't match the
approval gate slot in priorCompletedNodes.

Adds a regression test asserting no node_completed event with the
approval gate's ID is written during on_reject execution.

Fixes coleam00#1429
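The fix above hinges on step_name matching during resume; a standalone sketch (function names assumed, not the repo's actual code):

```typescript
// A gate counts as completed on resume if a prior node_completed event
// carries its exact step_name, so the synthetic on_reject prompt node must
// emit a distinct ID rather than reusing the gate's own.
const onRejectNodeId = (gateId: string): string => `${gateId}:on_reject`;

function gateAlreadyCompleted(
  gateId: string,
  completedStepNames: string[],
): boolean {
  return completedStepNames.includes(gateId);
}

// Before the fix the synthetic node reused the gate ID, so a rejected run
// resumed with the gate considered done; 'review:on_reject' no longer
// matches 'review', preserving the human gate.
```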

* test(workflows): add positive assertion and SSE side-effect comment for on_reject synthetic node

Add complementary positive assertion to the regression test to verify that
node_completed is written exactly once with step_name 'review:on_reject',
ensuring future refactors that suppress the event entirely would be caught.

Add inline comment in executeApprovalNode documenting the known SSE side-effect:
node_started/node_completed events with nodeId='review:on_reject' flow through
the SSE pipeline into the web UI, resulting in a transient phantom node in the
execution view. This is cosmetic-only — the human gate contract is preserved.

* simplify: reduce duplicate cast pattern in on_reject test assertions
…e checkout (coleam00#1438)

* feat(workflows): add mutates_checkout field to skip path-lock for concurrent runs

Add `mutates_checkout: boolean` (optional, default true) to the workflow
schema. When set to false, the executor skips the path-exclusive lock
that serializes all runs on the same working path, allowing N concurrent
runs on the same live checkout.

The primary use case is `maintainer-review-pr`, which reads shared state
but writes only to per-run artifact paths and GitHub PR comments — two
parallel reviews of different PRs should not fail with "Workflow already
active on this path".

Changes:
- `schemas/workflow.ts`: add optional `mutates_checkout` field
- `loader.ts`: parse and propagate the field (warn-and-ignore on invalid values)
- `executor.ts`: wrap path-lock guard in `if (workflow.mutates_checkout !== false)`
- `executor.test.ts`: two new tests in the concurrent-run guard suite
- `maintainer-review-pr.yaml`: opt in with `mutates_checkout: false`
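The executor guard itself is tiny; a standalone sketch (interface and function names assumed from the commit message):

```typescript
// mutates_checkout defaults to true: only an explicit `false` opts a
// workflow out of the path-exclusive lock that serializes same-path runs.
interface WorkflowConfig {
  mutates_checkout?: boolean;
}

function needsPathLock(workflow: WorkflowConfig): boolean {
  return workflow.mutates_checkout !== false;
}

// needsPathLock({})                          -> true  (default: serialize)
// needsPathLock({ mutates_checkout: true })  -> true
// needsPathLock({ mutates_checkout: false }) -> false (N concurrent runs)
```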

* test(workflows): add loader tests for mutates_checkout parsing

- Add 5 tests covering false, true, omitted, and invalid (string "yes") values
- Invalid non-boolean values are silently dropped with warn — now explicitly tested
- Remove the // end mutates_checkout guard trailing comment (no precedent in file)
- Clarify loader comment: "parse/warn pattern" not "warn-and-ignore pattern" to avoid implying the return style matches interactive

* simplify: collapse nodeType/aiFields pair into single nonAiNode object in parseDagNode
…es (coleam00#1434)

* docs: replace String.raw with direct assignment in script node examples

String.raw`$nodeId.output` fails silently when substituted output contains
a backtick, terminating the template literal early and producing cryptic parse
errors. JSON is valid JS expression syntax, so direct assignment is safe for
all valid JSON values including those with backticks.

- Replace String.raw pattern in dag-workflow.yaml example
- Replace String.raw pattern in archon-workflow-builder.yaml template
- Add CAUTION bullet in workflow-dag.md Script Node section
- Add Silent Failures item coleam00#14 in parameter-matrix.md
- Add Starlight caution aside in script-nodes.md
- Extend script bodies bullet in variables.md
- Regenerate bundled-defaults.generated.ts

Fixes coleam00#1427
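The failure mode can be reproduced in isolation (illustrative repro, not the repo's generator):

```typescript
// Why String.raw substitution breaks on backticks while direct assignment
// of JSON is safe: JSON is valid JS expression syntax, template literals
// are not backtick-proof.
const payload = { note: 'inline `code`' };
const json = JSON.stringify(payload); // backtick survives unescaped in JSON

// Broken pattern: generated source wraps the JSON in a template literal,
// so the backtick inside the value terminates the literal early.
const brokenSource = 'const data = JSON.parse(String.raw`' + json + '`);';

// Safe pattern: emit the JSON directly as an expression.
const safeSource = 'const data = ' + json + '; return data.note;';

console.log(new Function(safeSource)()); // "inline `code`"
try {
  new Function(brokenSource);
} catch (e) {
  console.log('broken pattern throws:', (e as Error).name); // SyntaxError
}
```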

* docs: fix Rule 6 in generate-yaml prompt to distinguish bun vs uv patterns

Rule 6 still referenced JSON.parse after the example was updated to direct
assignment, creating a contradiction for the AI code generator. Update the
prose to explicitly distinguish TypeScript/bun (direct assignment) from
Python/uv (json.loads), matching the updated embedded example.

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a718a116f7

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread packages/workflows/src/subprocess.ts Outdated
const err = new Error(
`Command output exceeded ${MAX_CAPTURE_BYTES} bytes and was terminated`
) as SubprocessError;
err.killed = true;

P2 Badge Do not mark max-buffer exits as timeouts

When capture exceeds MAX_CAPTURE_BYTES, this branch sets err.killed = true, but executeBashNode and executeScriptNode treat any killed === true error as a timeout (isTimeout check in dag-executor.ts). As a result, oversized output is surfaced as "timed out" instead of a buffer-overflow failure, which misclassifies the failure mode and sends users toward the wrong remediation (raising timeout instead of reducing output). Use a distinct signal for max-buffer termination (or leave killed false) so timeout classification remains accurate.


@shaun0927
Author

Pushed a718a116 to address the remaining bot review findings on this PR, and updated the PR description with the missing template sections.

What changed in this commit:

  • subprocess.ts — bounded capture now rejects instead of truncating. Previously, output > 1 MiB was silently truncated and the promise resolved with partial data, so downstream $nodeId.output was wrong without a signal. Now we force-kill the producer and reject with code: 'ERR_MAXBUFFER'. Matches the original execFile-side guarantee that #1242 ("bug(workflows): script node timeout does not kill wrapper grandchildren (uv/bun pattern)") noted as missing.
  • subprocess.ts — Windows process-tree kill. The previous revision left a NOTE comment about POSIX limitation. Replaced with a real taskkill /T /F /PID <pid> fallback when process.platform === 'win32', so timeouts actually walk the process tree on Windows. POSIX path (process.kill(-pid)) is unchanged.
  • subprocess.ts — preserved SIGKILL fallback during timeout flow. When killed === true, we deliberately do not clear the SIGKILL grace timer if the wrapper exits early — descendants may still be alive, and clearing it lets them survive.
  • script-node-deps.test.ts — fix pre-existing test regression. Bash/script nodes now go through subprocess.exec, so the test mocks ./subprocess directly instead of @archon/git's execFileAsync. Without this fix the test file failed against the original PR commit too — it had been mocking at a layer the production code no longer touches.
  • subprocess.test.ts — new file. Real-subprocess coverage for: success path, non-zero exit propagation, process-tree timeout kill (verifies the wrapper-then-grandchild scenario), and ERR_MAXBUFFER overflow rejection. Skipped on Windows where the fallback uses taskkill rather than POSIX signals.
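The escalation these bullets describe can be sketched on POSIX as follows (a minimal sketch of the shape, assuming the names above; the real subprocess.ts also handles Windows via taskkill and bounded capture):

```typescript
import { spawn } from 'node:child_process';

const KILL_GRACE_MS = 5_000;

// detached: true places the child in its own process group; signaling the
// negative pid then reaches the wrapper *and* any grandchildren it forked.
function execWithGroupKill(
  cmd: string,
  args: string[],
  timeoutMs: number,
): Promise<number | null> {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args, { detached: true, stdio: 'ignore' });
    child.on('error', reject);
    const pid = child.pid;
    const timer = setTimeout(() => {
      if (pid === undefined) return;
      try { process.kill(-pid, 'SIGTERM'); } catch { /* group already gone */ }
      // Escalate after a grace period. Deliberately NOT cleared if the
      // wrapper exits early: grandchildren may still be running.
      setTimeout(() => {
        try { process.kill(-pid, 'SIGKILL'); } catch { /* already gone */ }
      }, KILL_GRACE_MS).unref();
    }, timeoutMs);
    timer.unref();
    child.on('close', (code) => {
      clearTimeout(timer);
      resolve(code); // null when the child died to a signal
    });
  });
}
```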

Local validation:

bun run type-check                                                # clean
bun test packages/workflows/src/subprocess.test.ts                # 5/5 pass
bun test packages/workflows/src/script-node-deps.test.ts          # 7/7 pass
bun test packages/workflows/src/dag-executor.test.ts              # 153/153 pass

Awaiting fresh CodeRabbit/Codex passes against this commit.

@shaun0927
Author

Pushed d08aef9b to address two findings from a follow-up review pass on a718a116:

  • subprocess.ts close handler — classify timeout over ERR_MAXBUFFER. When a process times out and over-produces output, both killed and capturedExceeded would be true. Previously the buffer-overflow branch ran first and surfaced ERR_MAXBUFFER, which is the wrong signal for the dag-executor's timeout handling. Reordered so timeout always wins.
  • Windows killProcessTree(pid, signal) — honour the signal. The SIGTERM call now no-ops on Windows (no SIGTERM analog), and only the SIGKILL grace-phase fires taskkill /T /F. The 5s grace timer becomes the actual grace period; we no longer waste a subprocess spawn or collapse the two-phase escalation into two identical force-kills. POSIX path unchanged.
  • Export MAX_CAPTURE_BYTES from subprocess.ts; the test imports it instead of redeclaring, so the cap stays in lockstep if it is ever tuned.

Local validation: bun run type-check clean; bun test src/subprocess.test.ts src/script-node-deps.test.ts 13/13 pass.


@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
packages/workflows/src/subprocess.ts (1)

92-113: ⚠️ Potential issue | 🟠 Major

Boundary case: chunk landing exactly on MAX_CAPTURE_BYTES silently truncates subsequent output.

When a chunk fills capturedBytes to exactly MAX_CAPTURE_BYTES (line 94's > is false, so the trim path is skipped and lines 100–101 set capturedBytes = MAX without calling onCaptureExceeded()), every subsequent data event takes the line 93 / 104 early-return path and is silently dropped — the child is never killed and the close handler resolves/rejects without ERR_MAXBUFFER. Move the overflow notification into the saturation branch so the cap is enforced on the next byte either way.

🔧 Proposed fix
     child.stdout.on('data', (chunk: Buffer) => {
-      if (capturedBytes >= MAX_CAPTURE_BYTES) return;
+      if (capturedBytes >= MAX_CAPTURE_BYTES) {
+        onCaptureExceeded();
+        return;
+      }
       if (capturedBytes + chunk.length > MAX_CAPTURE_BYTES) {
         stdout += chunk.toString('utf-8', 0, MAX_CAPTURE_BYTES - capturedBytes);
         capturedBytes = MAX_CAPTURE_BYTES;
         onCaptureExceeded();
         return;
       }
       capturedBytes += chunk.length;
       stdout += chunk.toString();
     });
     child.stderr.on('data', (chunk: Buffer) => {
-      if (capturedBytes >= MAX_CAPTURE_BYTES) return;
+      if (capturedBytes >= MAX_CAPTURE_BYTES) {
+        onCaptureExceeded();
+        return;
+      }
       if (capturedBytes + chunk.length > MAX_CAPTURE_BYTES) {
         stderr += chunk.toString('utf-8', 0, MAX_CAPTURE_BYTES - capturedBytes);
         capturedBytes = MAX_CAPTURE_BYTES;
         onCaptureExceeded();
         return;
       }
       capturedBytes += chunk.length;
       stderr += chunk.toString();
     });
🧹 Nitpick comments (1)
packages/workflows/src/subprocess.ts (1)

124-132: Optional: unref() the SIGKILL grace timer to avoid holding the event loop open.

When the timeout fires and the wrapper exits cleanly after SIGTERM (before the 5 s grace expires), the close handler intentionally leaves killTimer armed — but the promise has already rejected, so any caller awaiting subprocess.exec resumes while a 5 s timer keeps the event loop alive. For short-lived CLIs invoking workflows this can delay process exit by up to 5 s. unref() lets the runtime exit if nothing else is pending while still firing the SIGKILL fallback for stubborn descendants.

♻️ Proposed tweak
         killTimer = setTimeout(() => {
           try {
             killProcessTree(pid, 'SIGKILL');
           } catch {
             // Already dead
           }
         }, PROCESS_GROUP_KILL_GRACE_MS);
+        killTimer.unref?.();

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f6420cfa-8b08-4ceb-b50c-99e3967be30d

📥 Commits

Reviewing files that changed from the base of the PR and between a718a11 and d08aef9.

📒 Files selected for processing (2)
  • packages/workflows/src/subprocess.test.ts
  • packages/workflows/src/subprocess.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/workflows/src/subprocess.test.ts


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d08aef9b7e


Comment thread packages/workflows/src/subprocess.ts Outdated
Comment on lines +100 to +104
capturedBytes += chunk.length;
stdout += chunk.toString();
});
child.stderr.on('data', (chunk: Buffer) => {
if (capturedBytes >= MAX_CAPTURE_BYTES) return;

P2 Badge Track max-buffer limits per stream

This implementation uses a single capturedBytes counter for both stdout and stderr, so it rejects once the combined output crosses 1 MiB. execFileAsync (the previous path) inherits Node's execFile semantics where maxBuffer applies to each stream independently, so workloads that emit moderate output on both streams (for example ~700 KiB stdout + ~700 KiB stderr) now fail with ERR_MAXBUFFER even though neither stream individually exceeds the historical limit.


Comment thread packages/workflows/src/subprocess.ts Outdated
Comment on lines +40 to +41
if (signal !== 'SIGKILL') return;
const child = spawn('taskkill', ['/pid', String(pid), '/T', '/F'], {

P1 Badge Enforce timeout immediately on Windows

On Windows, killProcessTree returns early for any signal except SIGKILL, so the timeout handler's first kill step (SIGTERM at the configured timeout) is a no-op. That means timed-out commands are allowed to keep running until the 5s grace timer triggers taskkill, so a node with timeout: 500 can continue executing side effects for roughly 5 extra seconds after being marked timed out.


…s/experimental/

Move two repo-scoped workflows that were sitting untracked at the workflow
root into a dedicated subfolder. Subfolder grouping is supported by the
loader (1 level deep, resolution by filename), so workflow names are
unchanged and the /release skill still resolves archon-release correctly.

Files moved:
- archon-fix-github-issue-experimental.yaml — Path-A variant of the
  issue-fix workflow used today to land coleam00#1434, coleam00#1435, coleam00#1438.
- archon-release.yaml — the live release workflow used by the /release
  skill end-to-end (validate -> binary smoke -> version bump -> changelog
  -> approval -> commit -> PR -> tag -> Homebrew formula update).
@Wirasm
Collaborator

Wirasm commented Apr 27, 2026

Review Summary

Verdict: minor-fixes-needed

The process-tree kill implementation (subprocess.ts) is solid — detached: true, proper kill(-pid) on POSIX, taskkill /T /F on Windows, and clean error propagation throughout. Three issues need attention before merge: a misleading error message for output-overflow, a stale docstring in your test file, and a timing assertion that may be flaky on loaded CI runners.

Blocking issues

(none — no CRITICAL findings)

Suggested fixes

  • subprocess.ts (close handler): ERR_MAXBUFFER errors are surfacing with the message "Command timed out after ${timeout}ms" — the if (killed) branch fires before if (capturedExceeded) since onCaptureExceeded() sets killed = true. A user whose script produces unbounded output will see "timed out" instead of "output exceeded 1 MiB", causing them to increase the timeout rather than fix the script.

    • Fix: Check capturedExceeded before killed in the close handler, or add reason: 'buffer_exceeded' to SubprocessError and update the dag-executor catch blocks to check it before falling back to the generic timeout classification.
  • script-node-deps.test.ts:1-7 (file docstring): The docstring still claims the tests "mock @archon/git's execFileAsync to verify command construction" — but your PR changes 7 tests (lines 221, 260, 292, 324, 358, 396, 441) to mock ./subprocess.exec instead. The correct docstring text appears in the diff but wasn't committed into the file. Fixing this will also clear the CodeRabbit docstring coverage warning (40% vs 80%).

    • Fix: Replace the file-level docstring with the updated version from the diff.

Minor / nice-to-have

  • subprocess.test.ts timing assertion: expect(elapsed).toBeLessThan(8_000) inside a 15-second it(...) timeout may flake on heavily loaded CI runners. Consider raising the assertion to 15_000 or the outer timeout to 20_000.

  • subprocess.ts:67: execWithProcessGroupKill lacks an explicit return type. Add Promise<{ stdout: string; stderr: string }> to catch future signature drift.

  • subprocess.ts:28: IS_WINDOWS could use a brief comment noting the no-hot-reload assumption to pre-empt confusion if the module is ever refactored for lazy loading.

  • subprocess.ts (Windows taskkill branch): child.on('error') silently swallows taskkill failures. Consider a console.warn for debuggability — the parent error is already propagated, so this is cleanup-only, but a warning would help during diagnostics.

  • subprocess.test.ts (~lines 88-93): The 5-line comment explaining why `yes a` always hits the buffer-overflow path could be tightened to 1-2 sentences without losing the valuable insight.

Compliments

  • The subprocess.ts file-level docstring is excellent — it captures the non-obvious why (Node's execFile timeout only kills the direct child, not grandchildren spawned by wrapper runtimes like uv/bun). This kind of comment prevents future accidental regressions.
  • The ERR_MAXBUFFER vs timeout ordering comment in the close handler is precise and documents a subtle invariant that downstream dag-executor code depends on.
  • Mock isolation between subprocess.test.ts (run in its own bun test invocation) and script-node-deps.test.ts (which uses mock.module('./subprocess')) is handled correctly — this is easy to get wrong and you got it right.

Reviewed via maintainer-review-pr workflow (Pi/Minimax). Aspects run: code-review, error-handling, test-coverage, comment-quality.

avro198 and others added 3 commits April 27, 2026 21:32
…des (coleam00#1387)

executeBashNode previously only merged explicit envVars on top of
process.env. The three well-known workflow directories (artifactsDir,
logDir, baseBranch) were passed as function parameters and used for
compile-time substitution of $ARTIFACTS_DIR / $LOG_DIR / $BASE_BRANCH
in the script body, but were never added to the subprocess environment.

As a result, any script that relied on shell-runtime expansion — e.g.
JSON_FILE="${ARTIFACTS_DIR}/foo.output.json" inside a heredoc, an
inherited helper script, or a `bash -c` subshell — saw the variable
unset and silently fell back to its default (typically an empty string
or "."), writing artifacts to the workflow cwd instead of the nominal
artifacts directory.

Always build subprocessEnv from process.env plus the three well-known
directories, then allow explicit envVars to override. Compile-time
substitution behavior is unchanged; existing scripts that do not
reference these variables are unaffected; user-supplied envVars still
win on conflict.
…oleam00#1426)

* fix(workflow): substitute \$nodeId.output refs in approval messages

Approval node messages were emitted as raw strings, bypassing the
substituteNodeOutputRefs() pass that prompt/bash/loop/cancel nodes
all run. This made interactive workflows like atlas-onboard show
literal "\$gather-context.output.repo_name" placeholders to humans
at HITL gates, leaving them unable to know what they were approving.

Fix: rendered the approval.message through substituteNodeOutputRefs
once at the top of the standard approval gate path, then used the
resolved string in all 4 emission sites (safeSendMessage,
createWorkflowEvent, pauseWorkflowRun, event-emitter).

Test: new dag-executor.test case wires a structured-output upstream
node into an approval node and asserts pauseWorkflowRun receives the
substituted message ("Repo: hcr-els | App: CCELS | Port: 3012")
rather than the literal placeholders.

Repro: any workflow with an approval node whose message references
\$nodeId.output[.field]. Observed in the wild on atlas-onboard's
confirm-context HITL gate.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(workflow): extend approval-substitution test to cover all 4 emission sites

Per CodeRabbit review: the original test only verified pauseWorkflowRun
received the substituted message, but the fix touches 4 emission sites.
A future regression at safeSendMessage / createWorkflowEvent / event-emitter
would silently leave the test passing while users still saw raw $node.output
placeholders.

Adds two additional assertions:
- platform.sendMessage prompt contains substituted message + does NOT
  contain literal $gather-context.output placeholders
- The persisted approval_requested workflow event's data.message is
  substituted

Event-emitter assertion deferred (no existing pattern for spying on the
global emitter in this test file). Covering two of the three secondary
surfaces closes the practical regression risk — both are user-visible
(chat prompt + audit-log event); the emitter is internal only.

Test count: 7 pass / 22 expect() (was 18). Full suite 193 pass / 353
expect() — no regressions.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…m00#1286) (coleam00#1367)

* feat(workflows): expose $LOOP_PREV_OUTPUT in loop node prompts (coleam00#1286)

Adds a new substitution variable that carries the previous loop iteration's
cleaned output into the next iteration's prompt. Empty on iteration 1; the
prior iteration's output (after stripCompletionTags) on iteration 2+.

Why: fresh_context: true loops have no way to reference what the previous
pass produced or why it failed without dragging the full session forward.
$LOOP_PREV_OUTPUT closes that gap with zero session-cost — same trust
boundary as $nodeId.output, no new external surface.

Changes:
- packages/workflows/src/executor-shared.ts: substituteWorkflowVariables
  accepts a 10th positional loopPrevOutput arg and substitutes
  $LOOP_PREV_OUTPUT (defaults to '').
- packages/workflows/src/dag-executor.ts: executeLoopNode passes
  lastIterationOutput on iteration 2+ (and explicit '' on iteration 1 /
  the first iteration of an interactive resume, since lastIterationOutput
  is a per-call variable that does not survive resume metadata).
- Unit tests: 3 new cases in executor-shared.test.ts.
- Integration tests: 2 new cases in dag-executor.test.ts verifying the
  prompt sent to the AI on iter 1 vs iter 2, and that the value reflects
  cleaned output (no <promise> tags).
- Docs: variables.md, loop-nodes.md (new "Retry-on-failure" pattern),
  CLAUDE.md variable reference.

Backward compatibility: prompts that don't reference $LOOP_PREV_OUTPUT are
unaffected. All 843 workflow tests + type-check + lint + format:check +
bun run validate pass locally.

* docs: address coderabbit review on variables/loop-nodes

- variables.md: include $LOOP_PREV_OUTPUT in substitution-order list and
  availability table to match the new variable row at line 30
- loop-nodes.md: document the interactive-resume exception where the first
  iteration after an approval-gate resume still receives an empty
  $LOOP_PREV_OUTPUT regardless of iteration number (per dag-executor.ts
  L1781-1783 where i === startIteration always clears prev output)

* docs(changelog): add Unreleased entry for $LOOP_PREV_OUTPUT (coleam00#1367 review)

* test(loop): add resume-from-approval integration test for $LOOP_PREV_OUTPUT (coleam00#1367 review)

Per maintainer-review-pr suggestion (Wirasm): two-call integration test
covering the resume-from-approval scenario.

  - Call 1: fresh interactive loop pauses at the gate after iteration 1 and
    asserts $LOOP_PREV_OUTPUT substitutes to empty on iter 1 (no prior
    output) plus the gate pause is recorded.
  - Call 2: resumed run with metadata.approval populated. The first
    resumed iteration must substitute $LOOP_PREV_OUTPUT to '', NOT to the
    paused run's iter-1 output (which lived in a different process and is
    not persisted). $LOOP_USER_INPUT still flows through as normal.

Locks the documented invariant at dag-executor.ts:1769-1772.

---------

Co-authored-by: voidborne-d <DottyEstradalco@allergist.com>
@shaun0927 shaun0927 force-pushed the fix/script-node-kill-process-group branch from d08aef9 to b9e6435 Compare April 28, 2026 04:25
@shaun0927
Author

Pushed an update — addresses the open bot/maintainer findings on subprocess.ts:

  • CodeRabbit major (boundary case at MAX_CAPTURE_BYTES). A chunk that landed exactly on the cap (capturedBytes + chunk.length === MAX_CAPTURE_BYTES) used to skip the overflow notification, then the next chunk hit the early return and was silently dropped without killing the child or rejecting with ERR_MAXBUFFER. Both stdout/stderr data handlers now call onCaptureExceeded() on the saturation/early-return paths, and the boundary check is >= instead of >. onCaptureExceeded() already dedupes via the capturedExceeded flag, so the kill fires once.
  • CodeRabbit nitpick (unref SIGKILL grace timer). When the timeout has rejected and the wrapper exits cleanly during the grace window, the 5 s timer no longer keeps the event loop alive; short-lived CLIs don't sit idle for the grace period.
  • Wirasm: stale docstring on script-node-deps.test.ts. Docstring no longer claims the tests mock @archon/git's execFileAsync for command construction — the production path went through ./subprocess. CodeRabbit's docstring-coverage signal should clear with this.
  • Wirasm: flaky timing assertion in subprocess.test.ts. Bumped the elapsed-time ceiling from 8 s → 12 s and the outer it timeout from 15 s → 20 s. Still tight enough to catch a regression where the grandchild outlives the wrapper, with margin for loaded CI runners.

Also rebased onto current dev. Squashed the four review-cycle commits into one (the diff is unchanged). Resolved one conflict on packages/workflows/package.json (test split list) and an import-block conflict on dag-executor.ts by keeping upstream's discoverScriptsForCwd rename.

Local: bun run type-check clean, bun run lint clean, subprocess.test.ts 6/6, dag-executor.test.ts + script-node-deps.test.ts 204/204.

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b9e6435570

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread packages/workflows/src/subprocess.ts Outdated
// Don't keep the event loop alive just for the SIGKILL fallback — the
// timeout has already rejected, so awaiting callers (e.g. short-lived
// CLI invocations) shouldn't sit idle for the grace period.
killTimer.unref?.();

P1: Keep SIGKILL fallback timer referenced

Do not unref() the grace-period kill timer here, because on timeout the direct child can exit quickly while a descendant keeps running in the same process group (e.g., ignores SIGTERM and redirects stdio). In that case close fires, the promise rejects, and a short-lived workflow runner can exit before the 5s fallback executes, leaving descendants alive and still performing side effects after a timed-out node.

Comment thread packages/workflows/src/subprocess.ts Outdated
onCaptureExceeded();
return;
}
if (capturedBytes + chunk.length >= MAX_CAPTURE_BYTES) {

P2: Allow output equal to capture cap

This >= threshold treats output that is exactly MAX_CAPTURE_BYTES as overflow and raises ERR_MAXBUFFER, but execFile semantics only fail when the buffer is exceeded. The regression means commands that emit exactly 1 MiB now fail even though they previously succeeded; the same boundary logic is duplicated for both stdout and stderr handlers.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (1)
packages/workflows/src/script-node-deps.test.ts (1)

15-24: ⚠️ Potential issue | 🟠 Major

Use spyOn(subprocess, 'exec') here instead of mock.module('./subprocess').

This helper was explicitly exported as an object for spying, and the file-level module mock is still process-global. That can bleed into subprocess.test.ts or any later workflow test batch that expects the real implementation.

🧪 Suggested direction
-import { describe, it, expect, mock, beforeEach, afterEach } from 'bun:test';
+import { describe, it, expect, mock, beforeEach, afterEach, spyOn } from 'bun:test';
+import { subprocess } from './subprocess';
@@
-const mockSubprocessExec = mock(
-  async (_cmd: string, _args: string[], _opts?: unknown) =>
-    ({ stdout: '', stderr: '' }) as { stdout: string; stderr: string }
-);
-
-mock.module('./subprocess', () => ({
-  subprocess: { exec: mockSubprocessExec },
-}));
+const mockSubprocessExec = spyOn(subprocess, 'exec').mockImplementation(
+  async (_cmd: string, _args: string[], _opts?: unknown) =>
+    ({ stdout: '', stderr: '' }) as { stdout: string; stderr: string }
+);

As per coding guidelines, "Never use mock.module() on the same module path in different test files with different implementations; use spyOn() for modules other test files import directly".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/workflows/src/script-node-deps.test.ts` around lines 15 - 24, The
test currently uses mock.module('./subprocess') to replace subprocess.exec which
creates a process-global module mock and can leak into other tests; instead,
remove the mock.module call and replace it by importing the real subprocess
module and using spyOn(subprocess, 'exec') to stub the implementation with the
existing mockSubprocessExec (or its equivalent), i.e. locate the
mockSubprocessExec helper and the place where mock.module('./subprocess') is
used and change it to spyOn(subprocess,
'exec').mockImplementation(mockSubprocessExec) so only this test's spy is
affected and other tests keep the real implementation.
🧹 Nitpick comments (1)
packages/workflows/src/subprocess.ts (1)

55-59: Extract the exec options shape into an interface.

This inline object type makes the helper harder to reuse consistently and violates the repo’s TS rule for object shapes.

♻️ Proposed refactor
+interface ExecOptions {
+  cwd: string;
+  timeout: number;
+  env?: NodeJS.ProcessEnv;
+}
+
 function execWithProcessGroupKill(
   cmd: string,
   args: string[],
-  options: { cwd: string; timeout: number; env?: NodeJS.ProcessEnv }
+  options: ExecOptions
 ): Promise<{ stdout: string; stderr: string }> {

As per coding guidelines, "Use interface for defining object shapes in TypeScript".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/workflows/src/subprocess.ts` around lines 55 - 59, The inline object
type for exec options in execWithProcessGroupKill should be extracted into a
named interface to comply with the TS rule; define an interface (e.g., interface
ExecOptions { cwd: string; timeout: number; env?: NodeJS.ProcessEnv }) and
replace the inline annotation in execWithProcessGroupKill's signature with that
interface, and update any other functions or call sites that use the same shape
to reference ExecOptions so the shape is shared and reusable.

📥 Commits

Reviewing files that changed from the base of the PR and between d08aef9 and b9e6435.

📒 Files selected for processing (6)
  • packages/workflows/package.json
  • packages/workflows/src/dag-executor.test.ts
  • packages/workflows/src/dag-executor.ts
  • packages/workflows/src/script-node-deps.test.ts
  • packages/workflows/src/subprocess.test.ts
  • packages/workflows/src/subprocess.ts
✅ Files skipped from review due to trivial changes (2)
  • packages/workflows/src/dag-executor.test.ts
  • packages/workflows/src/dag-executor.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/workflows/package.json
  • packages/workflows/src/subprocess.test.ts

Comment thread packages/workflows/src/subprocess.ts Outdated
Comment on lines +32 to +40
function killProcessTree(pid: number, signal: NodeJS.Signals): void {
if (IS_WINDOWS) {
// POSIX process groups don't exist on Windows. Use taskkill to walk the tree
// only on the final SIGKILL — Windows has no SIGTERM analog, and firing
// `taskkill /F` on the SIGTERM call would collapse our two-phase grace
// period into two identical force-kills (and waste a subprocess spawn).
// The 5s grace timer becomes the actual grace period: well-behaved
// descendants can finish flushing before the SIGKILL-phase taskkill fires.
if (signal !== 'SIGKILL') return;

⚠️ Potential issue | 🟠 Major

Windows processes continue running for up to 5 seconds after timeout, enabling post-timeout side effects.

The killProcessTree function returns early on Windows when sent SIGTERM (line 40), preventing any signal from reaching the process. The timeout rejection happens at options.timeout, but the actual process termination is deferred to the PROCESS_GROUP_KILL_GRACE_MS timer (5 seconds later). On Windows, this allows the process to execute code and produce side effects beyond the timeout boundary.

The proposed fix correctly handles SIGTERM on Windows by calling process.kill(pid, 'SIGTERM') (which terminates on Windows despite the signal name), starting the graceful shutdown phase at the correct time.

Proposed fix
 function killProcessTree(pid: number, signal: NodeJS.Signals): void {
   if (IS_WINDOWS) {
+    if (signal === 'SIGTERM') {
+      try {
+        process.kill(pid, 'SIGTERM');
+      } catch {
+        // Process may have already exited
+      }
+      return;
+    }
+
     // POSIX process groups don't exist on Windows. Use taskkill to walk the tree
     // only on the final SIGKILL — Windows has no SIGTERM analog, and firing
     // `taskkill /F` on the SIGTERM call would collapse our two-phase grace
     // period into two identical force-kills (and waste a subprocess spawn).
     // The 5s grace timer becomes the actual grace period: well-behaved
     // descendants can finish flushing before the SIGKILL-phase taskkill fires.
     if (signal !== 'SIGKILL') return;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/workflows/src/subprocess.ts` around lines 32 - 40, The Windows
branch in killProcessTree currently returns early for non-SIGKILL signals,
leaving processes running during the 5s grace period; change it to forward
SIGTERM to the PID on Windows by calling process.kill(pid, 'SIGTERM') when
signal === 'SIGTERM' so the graceful shutdown phase starts immediately (retain
the existing logic to only invoke taskkill for SIGKILL and preserve
PROCESS_GROUP_KILL_GRACE_MS behavior). Locate killProcessTree and update the
IS_WINDOWS branch to call process.kill for SIGTERM instead of returning,
ensuring the later taskkill still runs only for SIGKILL.

Comment on lines +97 to +106
child.stdout.on('data', (chunk: Buffer) => {
if (capturedBytes >= MAX_CAPTURE_BYTES) {
onCaptureExceeded();
return;
}
if (capturedBytes + chunk.length >= MAX_CAPTURE_BYTES) {
stdout += chunk.toString('utf-8', 0, MAX_CAPTURE_BYTES - capturedBytes);
capturedBytes = MAX_CAPTURE_BYTES;
onCaptureExceeded();
return;

⚠️ Potential issue | 🟠 Major

Allow outputs that exactly fill MAX_CAPTURE_BYTES.

Lines 102 and 116 currently reject when a chunk lands on the cap, not only when it goes past it. That turns a valid exact-fit capture into ERR_MAXBUFFER.

🐛 Proposed fix
-      if (capturedBytes + chunk.length >= MAX_CAPTURE_BYTES) {
+      if (capturedBytes + chunk.length > MAX_CAPTURE_BYTES) {
         stdout += chunk.toString('utf-8', 0, MAX_CAPTURE_BYTES - capturedBytes);
         capturedBytes = MAX_CAPTURE_BYTES;
         onCaptureExceeded();
         return;
       }
@@
-      if (capturedBytes + chunk.length >= MAX_CAPTURE_BYTES) {
+      if (capturedBytes + chunk.length > MAX_CAPTURE_BYTES) {
         stderr += chunk.toString('utf-8', 0, MAX_CAPTURE_BYTES - capturedBytes);
         capturedBytes = MAX_CAPTURE_BYTES;
         onCaptureExceeded();
         return;
       }

Also applies to: 111-120

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/workflows/src/subprocess.ts` around lines 97 - 106, The data
handlers for child.stdout and child.stderr treat a chunk that exactly reaches
MAX_CAPTURE_BYTES as an overflow; update the logic in the
child.stdout.on('data', ...) and child.stderr.on('data', ...) handlers so that
you only treat it as exceeded when capturedBytes + chunk.length >
MAX_CAPTURE_BYTES, not >=. Specifically: keep the early return when
capturedBytes >= MAX_CAPTURE_BYTES, change the "capturedBytes + chunk.length >=
MAX_CAPTURE_BYTES" check to ">" so exact-fit chunks are appended fully (set
capturedBytes = MAX_CAPTURE_BYTES) without calling onCaptureExceeded(), and only
call onCaptureExceeded() when the incoming chunk would push capturedBytes
strictly beyond MAX_CAPTURE_BYTES (handle the partial copy in that branch). Use
the same fix for both stdout and stderr handlers referencing capturedBytes,
MAX_CAPTURE_BYTES, and onCaptureExceeded.

Comment thread packages/workflows/src/subprocess.ts Outdated
Comment on lines +142 to +145
// Don't keep the event loop alive just for the SIGKILL fallback — the
// timeout has already rejected, so awaiting callers (e.g. short-lived
// CLI invocations) shouldn't sit idle for the grace period.
killTimer.unref?.();

⚠️ Potential issue | 🔴 Critical

Remove unref() from the SIGKILL fallback timer.

The grace period timer must remain active in the event loop to guarantee that stubborn descendant processes are killed even when the wrapper exits cleanly before the grace period expires. With unref(), if the parent process has no other pending I/O, the event loop can exit without executing the unref'd timer, reintroducing the orphaned-grandchild leak. The test "kills the entire process tree on timeout, not just the wrapper" explicitly validates that descendants are killed within the grace period; this cannot be guaranteed if the timer is unref'd.

Recommended fix
-        // Don't keep the event loop alive just for the SIGKILL fallback — the
-        // timeout has already rejected, so awaiting callers (e.g. short-lived
-        // CLI invocations) shouldn't sit idle for the grace period.
-        killTimer.unref?.();
+        // Keep the process alive until the fallback SIGKILL fires so detached
+        // descendants cannot outlive the parent after a timeout.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/workflows/src/subprocess.ts` around lines 142 - 145, Remove the call
to killTimer.unref() so the SIGKILL fallback timer remains part of the event
loop; locate the SIGKILL fallback / grace-period logic where killTimer is
created (search for the "SIGKILL fallback" comment or the killTimer variable in
subprocess.ts) and delete the killTimer.unref?.() invocation to ensure the timer
fires and kills stubborn descendant processes within the grace period.

@shaun0927 shaun0927 force-pushed the fix/script-node-kill-process-group branch from b9e6435 to e2240a8 Compare April 28, 2026 04:49
@shaun0927
Author

Pushed an update — picks up the items CR flagged on the previous round:

  • subprocess.ts data handlers: reverted the inner check from >= back to > so a chunk that exactly fills the buffer is still captured intact. The early return if (capturedBytes >= MAX_CAPTURE_BYTES) now handles saturation: an exact-fit chunk on its own succeeds, and the very next chunk fires onCaptureExceeded().
  • subprocess.ts Windows killProcessTree: SIGTERM phase now calls process.kill(pid, 'SIGTERM') on Windows so the graceful-shutdown phase actually fires (was a no-op before). taskkill /T /F still only runs in the SIGKILL phase, so the two-phase grace window stays intact.
  • subprocess.ts SIGKILL grace timer: removed the unref() — keeping it in the event loop is the entire point of the fallback (kill stubborn descendants even if the parent has rejected). Short-lived CLIs paying up to 5s for the kill to fire is the correct trade.
  • subprocess.ts exec options: extracted the inline { cwd; timeout; env? } shape into an ExecOptions interface.
  • script-node-deps.test.ts: switched from mock.module('./subprocess') to spyOn(subprocess, 'exec'). The module is process-global in bun and subprocess.test.ts (different test invocation) still expects the real implementation; spyOn keeps the override scoped.

Local: bun run type-check clean, workflows lint clean, subprocess.test.ts + script-node-deps.test.ts 13/13.

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

💡 Codex Review

Reviewed commit: e2240a8ffc

// Preserve the SIGKILL fallback only when the process exits unrelated to the
// timeout. On timeout (killed=true), let the grace timer run so stubborn
// descendants are killed even if the wrapper exited cleanly.
if (!killed && killTimer) clearTimeout(killTimer);

P1: Cancel stale SIGKILL timer after timeout close

When a timeout occurs, the close handler intentionally leaves killTimer armed (if (!killed && killTimer) ...), so a second process.kill(-pid, 'SIGKILL') can run 5s after the child already exited. In high-churn hosts where PIDs/PGIDs are recycled quickly, that delayed signal can target an unrelated process group (Node docs explicitly warn that signaling an exited PID may hit a reassigned process), turning one timed-out workflow step into collateral process termination. Please clear the fallback timer once the timed-out process tree is confirmed closed, or validate identity before sending the deferred kill.


@shaun0927
Author

Pushed an update — picks up Codex's P1 follow-up:

  • subprocess.ts close handler now always clears the SIGKILL grace timer. Previously the timeout path kept the timer armed so a stubborn descendant would still get force-killed after the wrapper exited. Codex's concern (citing Node's kill-PID docs) is that on a busy host the recycled PID can belong to an unrelated process group by the time the deferred kill fires. Reasoning: once the wrapper has actually closed, descendants sharing its detached pgid have either already exited (POSIX) or already been signalled by taskkill /T (Windows SIGKILL phase). The grace timer only did useful work when the wrapper itself survived past SIGTERM; in that case close hasn't fired yet, so clearing here doesn't apply.

Local: subprocess.test.ts 6/6 still pass.

@shaun0927 force-pushed the fix/script-node-kill-process-group branch from e2240a8 to 35798cf on April 28, 2026 05:29

@chatgpt-codex-connector (bot) left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Reviewed commit: 35798cfc04

Comment thread: packages/workflows/src/subprocess.ts (outdated)
// if close fires we're past that case, and on a busy host the recycled
// PID could end up belonging to an unrelated process group by the time
// the deferred kill would fire (Node's kill-PID docs warn about this).
if (killTimer) clearTimeout(killTimer);


P1: Preserve SIGKILL fallback after timeout close

Clearing killTimer here drops the only escalation path when the timed-out wrapper exits but a descendant survives SIGTERM (for example, a grandchild that ignores TERM and redirects stdio). In that case close can fire immediately, this branch cancels the 5s SIGKILL, and the orphan keeps running and performing side effects after the node is already marked timed out. Fresh evidence in this revision is that the new unconditional clearTimeout(killTimer) in the close handler makes that survival path deterministic whenever descendants outlive the wrapper.


@shaun0927 force-pushed the fix/script-node-kill-process-group branch from 35798cf to b566318 on April 28, 2026 05:53
@shaun0927
Author

Pushed an update — Codex's two latest P1s pulled in opposite directions on the SIGKILL grace timer (earlier round flagged 'PID reuse risk → cancel', this round flagged 'descendants survive → preserve'). The fix splits the difference:

  • Restored the conditional clear (if (!killed && killTimer) clearTimeout(killTimer)). Timeout path keeps the SIGKILL fallback armed so a descendant that ignores SIGTERM and outlives the wrapper is still killed.
  • Liveness probe before the deferred kill. The grace timer now does process.kill(-pid, 0) first on POSIX (signal 0 = existence check, throws ESRCH if the process group is empty / no longer ours). If the pgid is gone we no-op — that closes the recycled-PID-targets-stranger window without giving up the descendant kill. Windows path is unchanged because taskkill /T /F walks by PID rather than group.
  • Buffer-overflow no longer marked as a timeout (err.killed = false in the ERR_MAXBUFFER branch). dag-executor's killed === true heuristic is now reserved for genuine timeouts so users see "output exceeded MAX_CAPTURE_BYTES bytes" instead of a misleading "timed out" message.
  • Test updated to assert killed === false on max-buffer to lock that contract.

Local: bun run type-check clean, bun run lint clean, subprocess.test.ts 6/6.
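The escalation logic described above can be sketched for POSIX as follows; structure and names are illustrative, not the exact subprocess.ts implementation:

```typescript
import { spawn } from "node:child_process";

// Hypothetical sketch: spawn a detached process group, SIGTERM the whole
// group on timeout, and keep a SIGKILL fallback armed behind a liveness
// probe so a recycled pgid is never signalled.
function runWithTimeout(cmd: string, args: string[], timeoutMs: number, graceMs = 5000) {
  const child = spawn(cmd, args, { detached: true, stdio: "ignore" });
  let killed = false;
  let killTimer: ReturnType<typeof setTimeout> | undefined;

  const timeout = setTimeout(() => {
    killed = true;
    try {
      process.kill(-child.pid!, "SIGTERM"); // signal the entire group
    } catch {
      // Group already gone.
    }
    killTimer = setTimeout(() => {
      try {
        process.kill(-child.pid!, 0); // liveness probe: throws ESRCH if the pgid is empty
        process.kill(-child.pid!, "SIGKILL"); // still alive, so escalate
      } catch {
        // pgid exited or was recycled: no-op instead of killing a stranger.
      }
    }, graceMs);
  }, timeoutMs);

  child.on("close", () => {
    clearTimeout(timeout);
    // The timeout path leaves killTimer armed so descendants that ignore
    // SIGTERM and outlive the wrapper still get force-killed.
    if (!killed && killTimer) clearTimeout(killTimer);
  });

  return child;
}
```

The probe narrows, but cannot fully close, the reuse window between the signal-0 check and the SIGKILL; it does eliminate the common case where the whole group exited long before the grace timer fires.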


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Reviewed commit: b5663186a9

Comment thread: packages/workflows/src/subprocess.ts (outdated)
Comment on lines +39 to +43
if (signal !== 'SIGKILL') {
try {
process.kill(pid, 'SIGTERM');
} catch {
// Process may already be dead — fall through to the SIGKILL phase.


P1: Kill Windows process tree in initial timeout phase

On Windows this branch sends process.kill(pid, 'SIGTERM'), which only targets the wrapper process and (per Node's Windows semantics) kills it abruptly rather than signaling descendants. For wrapper runtimes like uv run/bun run, that means grandchildren can keep running during the 5s grace window and may survive entirely if the later taskkill /T /F no longer has a live root PID to walk. This leaves timed-out script/bash nodes still executing side effects on Windows, the same class of leak this change is intended to prevent.


@shaun0927 force-pushed the fix/script-node-kill-process-group branch from b566318 to 0dd7380 on April 28, 2026 06:04
@shaun0927
Author

Pushed an update — picks up Codex's latest P1:

  • Windows SIGTERM phase now walks the entire process tree. Previously the SIGTERM branch called process.kill(pid, 'SIGTERM'), which on Windows maps to TerminateProcess on just the wrapper — descendants kept running, and by the time the SIGKILL phase fired the wrapper PID was gone so taskkill /T /F had nothing to walk. The branch now uses taskkill /T (no /F) for SIGTERM (Windows-documented graceful shutdown of console/window children) and taskkill /T /F for SIGKILL. POSIX path is unchanged.

Local: subprocess.test.ts 6/6 still pass.
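The platform split described in the bullet can be sketched like this; a simplified stand-in, assuming the real module shells out to taskkill on Windows as described:

```typescript
import { spawn } from "node:child_process";

// Hypothetical helper: on Windows, walk the tree from the root PID with
// taskkill (/T graceful, /T /F force); on POSIX, signal the detached
// process group directly.
function killTree(pid: number, signal: "SIGTERM" | "SIGKILL"): void {
  if (process.platform === "win32") {
    const args =
      signal === "SIGKILL"
        ? ["/PID", String(pid), "/T", "/F"] // force-kill the whole tree
        : ["/PID", String(pid), "/T"]; // graceful tree shutdown
    spawn("taskkill", args, { stdio: "ignore" });
  } else {
    process.kill(-pid, signal); // POSIX process-group signal
  }
}
```

Routing both phases through one helper keeps the SIGTERM phase tree-aware on Windows, so the later /F pass still has live descendants rooted under a PID it can walk.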


@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Reviewed commit: 0dd7380dcf

let stderr = '';
let killed = false;
let killTimer: ReturnType<typeof setTimeout> | undefined;
let capturedBytes = 0;


P2: Enforce max-buffer limit independently per output stream

Using a single capturedBytes counter for both stdout and stderr changes behavior from execFile and causes premature ERR_MAXBUFFER failures when each stream is individually below 1 MiB but their combined size exceeds it (for example, ~700 KiB on each stream). This is a regression for bash/script nodes that previously succeeded under the old execFileAsync path and can now fail unexpectedly despite neither stream being excessively large.

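One way to keep the limits independent, mirroring execFile's per-stream maxBuffer semantics; a hypothetical sketch, with the 1 MiB value and helper names assumed rather than taken from the PR:

```typescript
// Hypothetical per-stream capture: each stream gets its own byte counter,
// so stdout and stderr can each approach the cap without tripping a
// combined limit. 1 MiB matches execFile's default maxBuffer.
const MAX_CAPTURE_BYTES = 1024 * 1024;

function makeCapture(onOverflow: () => void) {
  let bytes = 0;
  const chunks: Buffer[] = [];
  return {
    push(chunk: Buffer): void {
      bytes += chunk.byteLength;
      if (bytes > MAX_CAPTURE_BYTES) onOverflow(); // ERR_MAXBUFFER path
      else chunks.push(chunk);
    },
    text: (): string => Buffer.concat(chunks).toString("utf8"),
  };
}

// One capture per stream keeps the limits independent:
let overflowed = false;
const stdoutCapture = makeCapture(() => { overflowed = true; });
const stderrCapture = makeCapture(() => { overflowed = true; });
```

With this shape, ~700 KiB on each stream stays under both caps, while a single stream exceeding 1 MiB still triggers the overflow path.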



Development

Successfully merging this pull request may close these issues.

bug(workflows): script node timeout does not kill wrapper grandchildren (uv/bun pattern)

8 participants