
fix(workflows): propagate cancellation to bash/script nodes and mid-layer execution (#1244)#1249

Open
shaun0927 wants to merge 28 commits into coleam00:dev from shaun0927:fix/cancel-propagation-bash-script-nodes

Conversation

@shaun0927 shaun0927 commented Apr 16, 2026

Summary

  • Problem: The DAG executor checked cancel status only between layers (line 2624, after Promise.allSettled completes). Bash/script nodes had no cancel awareness — unlike AI nodes which have AbortController + 10s polling (lines 576-614). If a user cancelled a workflow during a layer with a slow bash/script node, all nodes in that layer ran to completion before the cancel was noticed.
  • Why it matters: Cancel is a user-facing promise. When a 5-minute script node is running and the user cancels at second 2, they expect prompt response — not waiting 5 minutes. Cost (LLM calls in loop iterations) and side effects (file writes, git pushes) continue unnecessarily.
  • What changed: Two additions: (1) Pre-execution cancel check in executeBashNode and executeScriptNode — if the workflow is already cancelled/cancelling when a node is about to start, return immediately with a descriptive error instead of starting the subprocess. (2) Mid-layer cancel watcher — a 10s polling interval that races against Promise.allSettled via Promise.race. If cancel is detected mid-layer, the DAG loop breaks promptly with a user-visible message, without waiting for all layer nodes to complete.
  • What did not change: the AI node cancel mechanism (already has its own AbortController), the loop node's between-iteration cancel check (line 1508), the between-layer cancel check (line 2624, still present as a secondary gate), and the execFileAsync call signature. Running subprocesses are not forcefully killed mid-execution by this PR; that is the scope of #1242's process-group kill (script node timeout does not kill wrapper grandchildren, uv/bun pattern). This PR prevents new work from starting and detects cancellation promptly; #1242 handles killing in-flight subprocesses.
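The pre-execution gate described in "What changed" can be sketched as follows. This is a hedged illustration, not the executor's actual code: preExecGate, the RunStatus union, and the NodeResult shape are hypothetical stand-ins, and the 'skipped' state mirrors the reviewer-suggested non-failure result.

```typescript
// Hedged sketch of the pre-execution cancel gate; names and the result
// shape are assumptions, not the real dag-executor API.
type RunStatus = 'running' | 'paused' | 'cancelling' | 'cancelled' | null;

interface NodeResult {
  state: 'completed' | 'failed' | 'skipped';
  output: string;
  error?: string;
}

// Returns undefined when the node may proceed, or a skip result otherwise.
async function preExecGate(
  nodeId: string,
  getStatus: () => Promise<RunStatus>,
): Promise<NodeResult | undefined> {
  const status = await getStatus();
  if (status === 'running') return undefined; // proceed with the subprocess
  // null means the run row was deleted; fail closed instead of spawning work.
  const label = status ?? 'deleted';
  return {
    state: 'skipped',
    output: '',
    error: `Node '${nodeId}' skipped: workflow ${label}`,
  };
}
```

Because the gate runs before any node_started log or event emission, a cancelled or deleted run produces no false-start observability at all.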

UX Journey

Before

Layer 3: [fast-node (2s), slow-script (300s)]
  → user cancels at t=3s
  → fast-node completes at t=2s
  → slow-script keeps running until t=300s (or its timeout)
  → Promise.allSettled completes at t=300s
  → cancel check fires at line 2624: "oh, it was cancelled"
  → 298 seconds of unnecessary execution

After

Layer 3: [fast-node (2s), slow-script (300s)]
  → user cancels at t=3s
  → fast-node already completed at t=2s
  → cancel watcher polls at t=10s: status !== 'running'
  → Promise.race resolves with cancel sentinel
  → DAG loop breaks: "Workflow cancelled during layer 3/5"
  → 7 seconds of unnecessary execution (1 poll interval) instead of 298

  If slow-script hasn't started yet (queued in the layer):
  → pre-execution check detects cancel immediately
  → returns without spawning subprocess
  → 0 seconds of unnecessary execution

Files Changed

1 file: packages/workflows/src/dag-executor.ts

  • executeBashNode: +8 lines (pre-exec cancel check)
  • executeScriptNode: +8 lines (pre-exec cancel check)
  • Layer execution loop: +25 lines (cancel watcher + Promise.race + break handling)
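The layer-loop addition (cancel watcher + Promise.race) can be sketched like this. Function names, the sentinel, and the pollMs parameter are illustrative assumptions; only the 10s cadence and the race-against-allSettled shape come from the description.

```typescript
// Hedged sketch of the mid-layer cancel watcher; names are assumptions.
const cancelSentinel = Symbol('workflow-cancelled');

function raceLayerAgainstCancel<T>(
  layerPromise: Promise<T>,
  getStatus: () => Promise<string | null>,
  pollMs = 10_000, // matches the existing AI-node poll cadence
): Promise<T | typeof cancelSentinel> {
  let timer: ReturnType<typeof setInterval> | undefined;
  const watcher = new Promise<typeof cancelSentinel>((resolve) => {
    timer = setInterval(async () => {
      const status = await getStatus();
      // Any non-running status (cancelled, deleted, ...) stops the layer.
      if (status !== 'running') resolve(cancelSentinel);
    }, pollMs);
  });
  // Whichever settles first wins; always clear the poll so nothing leaks.
  return Promise.race([layerPromise, watcher]).finally(() => {
    if (timer !== undefined) clearInterval(timer);
  });
}
```

If the race resolves with the sentinel, the DAG loop breaks and emits the user-visible "cancelled during layer" message while any in-flight node promises settle in the background.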

Relationship to Other Issues

Testing

  • bun run type-check — all 10 packages clean
  • bun run lint — zero warnings
  • bun test packages/workflows/src/dag-executor.test.ts — 153 pass, 0 fail

Closes #1244

Summary by CodeRabbit

  • Bug Fixes

    • Improved cancellation responsiveness so running workflows check status before starting subprocess nodes and skip them when not running.
    • Layer execution now stops dispatching new nodes promptly when a cancellation is detected during concurrent execution, logging progress while in-flight nodes finish.
  • Tests

    • Added a test verifying paused workflow runs do not start subprocess nodes or emit start/failure/cancel events.

Architecture Diagram

Before

DAG layer execution
  ├─ Promise.allSettled(layer.map(node => runNode(node)))     ← cancel-blind
  │      │
  │      ├─ AI node:   has nodeAbortController + 10s poll  ✓ honors cancel
  │      ├─ bash node: execFileAsync(bash, ['-c', script], { cwd, timeout })
  │      │             no signal, no cancel polling           ✗ runs to natural end
  │      └─ script node: execFileAsync(cmd, args, { cwd, timeout })
  │                                                          ✗ runs to natural end
  ▼
between-layer cancel check (only fires AFTER allSettled)

For bash/script nodes, a user pressing "cancel" mid-layer was effectively cosmetic.

After

DAG layer execution
  ├─ shared layerAbortController + 10s status poll
  │      │
  │      ▼
  ├─ Promise.allSettled(layer.map(node => runNodeWithSignal(node, layerCtl.signal)))
  │      │
  │      ├─ AI node:   nodeAbortController.abort() on layer abort ✓
  │      ├─ bash node: execFileAsync(..., { signal: layerCtl.signal }) ✓
  │      └─ script node: execFileAsync(..., { signal: layerCtl.signal }) ✓
  │
  ├─ pre-execution gate runs BEFORE node_started log/event emission
  │      ├─ status === 'running' → proceed
  │      ├─ status === 'paused' → return early, do NOT emit failure
  │      ├─ status === 'cancelling'/'cancelled' → return early, log skip
  │      └─ status === null (deleted)  → fail-closed
  ▼
between-layer cancel check (now redundant for in-flight nodes, kept for double-defense)
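The bash/script arrows above lean on Node's documented AbortSignal support in child_process.execFile. A minimal sketch under those assumptions (runBash and the result shape are hypothetical, not the executor's code):

```typescript
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

// Hedged sketch: pass a shared layer AbortSignal into execFile so a layer
// abort terminates the subprocess. Node rejects the promise with an
// AbortError (code ABORT_ERR) when the signal fires.
async function runBash(script: string, signal: AbortSignal, timeoutMs = 60_000) {
  try {
    const { stdout } = await execFileAsync('bash', ['-c', script], {
      timeout: timeoutMs,
      signal,
    });
    return { state: 'completed' as const, output: stdout };
  } catch (err) {
    const e = err as NodeJS.ErrnoException;
    if (e.name === 'AbortError' || e.code === 'ABORT_ERR') {
      return { state: 'skipped' as const, output: '', error: 'workflow cancelled' };
    }
    throw err; // genuine failures (non-zero exit, timeout kill) still surface
  }
}
```

One shared AbortController per layer means a single abort() call fans out to every subprocess in that layer, which is what makes the between-layer check redundant for in-flight nodes.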

Connection inventory:

  • dag-executor layer loop → shared AbortController per layer (new): abort signal flows into every node in the layer
  • bash/script nodes → execFileAsync({ signal }) (new): node-level subprocess kill on layer cancel
  • AI node nodeAbortController → layer abort signal (new): layer-cancel triggers in-flight LLM stream abort (paused excluded)
  • pre-execution status check → runs before node_started event/log emit (modified): observability matches behaviour (no false starts on paused/cancelled)
  • finalization guard → treats paused as non-terminal (modified): approval-gate flows resume cleanly

Label Snapshot

  • Risk: risk: medium
  • Size: size: M
  • Scope: workflows
  • Module: workflows:dag-executor

Change Metadata

  • Change type: bug
  • Primary scope: workflows
  • Files: 2 (packages/workflows/src/dag-executor.ts, dag-executor.test.ts)

Linked Issue

Validation Evidence

bun run type-check    # clean
bun run lint          # zero warnings
bun test packages/workflows/src/dag-executor.test.ts   # 153/153 pass

Targeted regressions: cancel mid-layer kills bash & script subprocesses; paused approval node does NOT cause sibling bash/script nodes to be marked failed; AI streaming aborts on cancel but NOT on paused.

Security Impact

  • New permissions/capabilities? No
  • New external network calls? No
  • Secrets/tokens handling changed? No
  • File system access scope changed? No
  • Resource-cost shrinks: cancelled runs no longer continue spending LLM tokens on AI nodes or executing further side effects from in-flight bash/script nodes.

Compatibility / Migration

  • Backward compatible? Yes — workflows that never get cancelled, paused, or deleted are functionally unchanged.
  • Config/env changes? No
  • Database migration needed? No

Human Verification

  • Verified scenarios:
    • Run a workflow with a 5-minute bash sleep node, press cancel at second 2 → run stops within ≤ 12s (10s poll + grace) instead of 5 minutes.
    • Approval gate paused while a sibling bash node is in flight → bash node completes naturally; the run remains in paused and resumes cleanly on approval.
    • Workflow run row deleted mid-flight → in-flight nodes finalize quickly, no orphan subprocesses, no error spam.
    • AI streaming cancelled → SDK stream is aborted, partial cost is recorded, run marked cancelled.
  • Edge cases: a node that exits exactly as the cancel poll fires (race) is handled by treating process.kill(-pid) on a dead PID as a no-op (the resulting ESRCH error is swallowed).
  • Not verified: behaviour under sustained heavy concurrent runs (no shared global state changed).
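That dead-PID edge case works only if ESRCH is caught: Node's process.kill throws for a nonexistent process group rather than silently no-oping. A hedged sketch of such a guard (killProcessGroup is a hypothetical helper; the actual process-group kill is the scope of #1242):

```typescript
// Hedged sketch: signal an entire process group, treating an already-dead
// group as a no-op. Node's process.kill throws ESRCH in that case, so the
// "no-op" behaviour has to be implemented explicitly.
function killProcessGroup(pid: number, signal: NodeJS.Signals = 'SIGTERM'): boolean {
  try {
    process.kill(-pid, signal); // negative PID signals the whole group
    return true;
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code === 'ESRCH') return false; // group already gone
    throw err; // anything else (e.g. EPERM) is a real error
  }
}
```

Combined with spawn(..., { detached: true }), the child leads its own process group, so signalling -pid reaches wrapper grandchildren (the uv/bun pattern) as well.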

Side Effects / Blast Radius

  • Affected subsystems: DAG executor only. The linear (non-DAG) executor is unchanged, as are loop until_bash predicate runs (which already complete in milliseconds).
  • Potential unintended effects: a node that intentionally survives cancellation no longer can; this is the intended behaviour but should be documented for any external operators who relied on it.
  • Guardrails: existing workflow.cancel_propagated log event added next to the abort call; per-layer poll interval is 10s (matches existing AI-node pattern).

Rollback Plan

  • Fast rollback: revert this branch's commits. Previous behaviour (cancel ignored mid-flight for bash/script nodes) is restored.
  • Operational signal: a sudden surge in dag_node_pre_execution_failed for paused runs would indicate the paused-as-non-terminal logic regressed; specific test asserts this.

Risks and Mitigations

  • Risk: layer cancel poll adds one DB roundtrip per 10s. Mitigation: matches the AI-node poll cadence already in production; negligible compared to LLM costs.
  • Risk: SIGTERM via signal may not kill grandchildren on its own. Mitigation: combined with PR #1248's process-tree kill (kill script/bash node grandchildren via process-group on timeout, #1242), the full tree is signalled.
  • Risk: Treating paused as non-terminal in the finalization guard could let a buggy approval flow keep state in DB longer than intended. Mitigation: explicit terminal-state assertions in tests + the existing cleanup-service that sweeps stale runs by age.


coderabbitai Bot commented Apr 16, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Preflight-cancel checks added for bash/script nodes (skip if run status != 'running'), DAG layer execution now races node dispatch against a 10s polling cancel watcher that stops iteration when the workflow status becomes non-running, and tests cover pre-run paused cancellation behavior. (34 words)

Changes

  • Dag executor core (packages/workflows/src/dag-executor.ts): Reworked the layer loop to race Promise.allSettled(...) against a cancel watcher that polls getWorkflowRunStatus every 10s and resolves when status != 'running', causing the executor to stop iterating layers and emit a "stopped during layer" message.
  • Node execution / cancellation (packages/workflows/src/dag-executor.ts: executeBashNode, executeScriptNode, node dispatch): Added pre-execution run-status checks (mapping null to 'deleted'); if status is not 'running', log a node-specific "cancelled_before_exec" and return a completed/skipped result instead of launching subprocesses. Adjusted in-layer cancel handling to break out while in-flight promises may still settle.
  • Streaming/cancel gating (packages/workflows/src/dag-executor.ts: executeNodeInternal and related streaming logic): Permitted workflow status 'paused' where streaming-cancel checks previously only allowed 'running'; added an error-logging path if the status check throws, otherwise proceeding.
  • Tests (packages/workflows/src/dag-executor.test.ts): Added a test mocking getWorkflowRunStatus to return 'paused' pre-execution, asserting that no bash subprocess starts, no start/fail/cancel events are emitted, and no run-state transitions are invoked.

Sequence Diagram(s)

sequenceDiagram
  participant Executor as DagExecutor
  participant Store as WorkflowStore
  participant Node as NodeWorker

  Executor->>Store: start cancelWatcher (poll every 10s)
  Executor->>Store: preflight getWorkflowRunStatus before node exec
  alt status == 'running'
    Executor->>Node: start node subprocess / worker
    Node-->>Executor: node result (settled)
    Executor->>Store: record node result
  else status != 'running'
    Store-->>Executor: watcher resolves (status != 'running')
    Executor->>Node: skip start / return completed-skipped
    Executor->>Executor: emit "workflow stopped during layer" and break loop
  end
Loading

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Poem

🐇 I hopped through DAGs with gentle taps,

I polled the store in ten-second naps,
Skipped bash runs when the signal said no,
Stopped the layer race when the watcher said whoa,
Now canceled flows end with softer claps.

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed):

  • Title check: The title clearly and specifically summarizes the main change: propagating cancellation to bash/script nodes and mid-layer execution in the DAG executor.
  • Description check: The PR description is comprehensive and follows the required template structure with all major sections complete: Summary, UX Journey, Architecture Diagram, Change Metadata, Linked Issues, Validation Evidence, Security Impact, Compatibility, Human Verification, Side Effects, and Rollback Plan.
  • Linked Issues check: The PR fully addresses all coding objectives from issue #1244: pre-execution cancel checks in bash/script nodes, a mid-layer cancel watcher via Promise.race, proper handling of paused/cancelled/deleted states, and preservation of existing AI-node behavior.
  • Out of Scope Changes check: All changes are scoped to DAG executor cancellation propagation. Process-group kill (#1242) and chat-level abort (#1239) are intentionally deferred as complementary work, not out-of-scope creep.
  • Docstring Coverage: Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
packages/workflows/src/dag-executor.ts (1)

2209-2221: Carry the actual workflow status through the mid-layer watcher.

Right now the watcher resolves a sentinel, so the handler can only say “cancelled” even for deleted/failed runs. That also forces paused to be excluded here, which means a parallel layer containing an approval or interactive-loop gate still waits for the slowest sibling before the pause is observed.

Resolve the observed status instead of a bare sentinel, then stop on paused too and format the user message from that concrete state.

Also applies to: 2644-2654

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/workflows/src/dag-executor.ts` around lines 2209 - 2221, The watcher
currently resolves a Symbol (cancelSentinel) so callers only know “cancelled”;
change the Promise to resolve the actual observed status from
deps.store.getWorkflowRunStatus(workflowRun.id) (e.g., 'running' | 'paused' |
'failed' | 'deleted' | null or your workflow status type) instead of the symbol,
stop polling when the status becomes anything other than 'running' (including
'paused') and clear layerCancelPoll; update the cancelWatcher declaration/type
and its resolve(...) call to pass the concrete status value and apply the same
change to the other similar watcher around the 2644-2654 logic so handlers can
format user messages based on the real state.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/workflows/src/dag-executor.ts`:
- Around line 1058-1065: The guard that checks workflow run status should
normalize non-running statuses (including null) and avoid returning state:
'failed'; update the check around getWorkflowRunStatus(workflowRun.id) used in
the bash/bun/uv execution block (and the similar guard at 1227-1234) to treat
preStatus === null as a non-running/stopped status (e.g., normalize to
'stopped') and short-circuit by returning a non-failure result (for example
state: 'skipped' or 'stopped' with an empty output and an error/message like
`Bash node '${node.id}' skipped: workflow ${normalizedStatus}`) instead of
'failed'; apply the same normalization and return-value change in
executeScriptNode(...) so deleted/paused runs never spawn subprocesses and do
not surface as failed DAG nodes.

---

Nitpick comments:
In `@packages/workflows/src/dag-executor.ts`:
- Around line 2209-2221: The watcher currently resolves a Symbol
(cancelSentinel) so callers only know “cancelled”; change the Promise to resolve
the actual observed status from deps.store.getWorkflowRunStatus(workflowRun.id)
(e.g., 'running' | 'paused' | 'failed' | 'deleted' | null or your workflow
status type) instead of the symbol, stop polling when the status becomes
anything other than 'running' (including 'paused') and clear layerCancelPoll;
update the cancelWatcher declaration/type and its resolve(...) call to pass the
concrete status value and apply the same change to the other similar watcher
around the 2644-2654 logic so handlers can format user messages based on the
real state.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 22e2fa6c-cfc3-4b32-8423-54f7e68bd51d

📥 Commits

Reviewing files that changed from the base of the PR and between 3dedc22 and 8474051.

📒 Files selected for processing (1)
  • packages/workflows/src/dag-executor.ts

Comment thread packages/workflows/src/dag-executor.ts

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 847405130a

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread packages/workflows/src/dag-executor.ts Outdated
Comment on lines +1059 to +1060
if (preStatus !== null && preStatus !== 'running') {
getLog().info({ nodeId: node.id, status: preStatus }, 'bash_node.cancelled_before_exec');


P2: Treat paused runs as resumable in pre-exec guard

The new pre-execution gate classifies every non-running status as a node failure, which now includes paused. In a valid parallel layer (e.g., an approval node plus a bash/script node), if the approval node pauses the run first, sibling bash/script nodes that reach this check later are marked failed instead of being deferred, leading to misleading failure reporting on a normal pause/resume path. The same condition appears in both executeBashNode and executeScriptNode, so this should be narrowed to true cancellation/terminal statuses rather than all non-running states.

Useful? React with 👍 / 👎.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/workflows/src/dag-executor.ts (1)

2724-2731: ⚠️ Potential issue | 🔴 Critical

Critical: paused workflows can now be incorrectly finalized as completed.

Line 2726 treats 'paused' as an allowed “unchanged” status, so the finalization path can continue and call completeWorkflowRun(...) after a pause gate. This breaks resume semantics for approval/interactive-loop pauses.

💡 Proposed fix
 async function skipIfStatusChanged(logEvent: string): Promise<boolean> {
   const status = await deps.store.getWorkflowRunStatus(workflowRun.id);
-  if (status === null || (status !== 'running' && status !== 'paused')) {
+  if (status === null || status !== 'running') {
     getLog().info({ workflowRunId: workflowRun.id, status: status ?? 'deleted' }, logEvent);
     getWorkflowEventEmitter().unregisterRun(workflowRun.id);
     return true;
   }
   return false;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/workflows/src/dag-executor.ts` around lines 2724 - 2731, The current
skipIfStatusChanged function incorrectly treats 'paused' as an unchanged state
allowing execution to proceed and eventually call completeWorkflowRun after a
pause; change the status check so only 'running' is considered unchanged (i.e.,
if status === null || status !== 'running' then log, call
getWorkflowEventEmitter().unregisterRun(workflowRun.id) and return true) so
paused runs are treated as changed and will not continue to finalization paths
like completeWorkflowRun; update the conditional in skipIfStatusChanged
accordingly and keep the same logging/unregister behavior for the paused case.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/workflows/src/dag-executor.ts`:
- Around line 1053-1069: The pre-exec workflow-run status guard is running after
you emit start events—move the status check (the call to
deps.store.getWorkflowRunStatus and its preStatus handling) to run before
getLog().info(...) and await logNodeStart(...) so canceled runs never emit start
logs; in the catch blocks that currently swallow errors, log the exception (use
getLog().error with context { nodeId: node.id, err }) and surface a sensible
fallback or rethrow as appropriate; apply the same reorder-and-log pattern to
executeScriptNode(...) and the matching blocks referenced (the other occurrence
around logNodeStart/getWorkflowRunStatus) to ensure failures are visible.

---

Outside diff comments:
In `@packages/workflows/src/dag-executor.ts`:
- Around line 2724-2731: The current skipIfStatusChanged function incorrectly
treats 'paused' as an unchanged state allowing execution to proceed and
eventually call completeWorkflowRun after a pause; change the status check so
only 'running' is considered unchanged (i.e., if status === null || status !==
'running' then log, call getWorkflowEventEmitter().unregisterRun(workflowRun.id)
and return true) so paused runs are treated as changed and will not continue to
finalization paths like completeWorkflowRun; update the conditional in
skipIfStatusChanged accordingly and keep the same logging/unregister behavior
for the paused case.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 518d0332-70df-4966-9a22-bd4fc0651fa1

📥 Commits

Reviewing files that changed from the base of the PR and between 8474051 and 79ce615.

📒 Files selected for processing (1)
  • packages/workflows/src/dag-executor.ts

Comment thread packages/workflows/src/dag-executor.ts Outdated

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 79ce615261


Comment thread packages/workflows/src/dag-executor.ts Outdated
  async function skipIfStatusChanged(logEvent: string): Promise<boolean> {
    const status = await deps.store.getWorkflowRunStatus(workflowRun.id);
-   if (status === null || status !== 'running') {
+   if (status === null || (status !== 'running' && status !== 'paused')) {


P1: Treat paused runs as status-changed in finalization guard

The new skipIfStatusChanged condition now treats paused as an active state, but the loop above still breaks when dagStatus is paused, so approval/interactive-gate flows fall through into the completion path instead of returning early. In that scenario completeWorkflowRun throws (it only updates rows in running), yet this function still sends the "workflow completed" warning and emits workflow_completed, which is a contradictory terminal signal for a paused run and can mislead clients listening to workflow events.


shaun0927 added a commit to shaun0927/Archon that referenced this pull request Apr 17, 2026
This follow-up finishes the cancellation propagation PR by moving bash/script preflight status checks ahead of node-start side effects, carrying concrete workflow statuses through the mid-layer watcher, and treating paused runs as status changes during finalization. It also adds a regression test proving a paused run neither starts the bash node nor completes the workflow.

Constraint: Paused runs must remain resumable and must not be finalized as completed
Rejected: Leave paused as an allowed finalization state | breaks approval/resume semantics
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Only treat `running` as the unchanged workflow status in finalization/cancel-watch paths unless resume semantics are redesigned end-to-end
Tested: bun --filter @archon/workflows test packages/workflows/src/dag-executor.test.ts
Tested: bun --filter @archon/workflows type-check
Not-tested: Full monorepo test suite
Related: PR coleam00#1249
@shaun0927 (Author)

Follow-up pushed in 493a2713 after rechecking the review state from a fresh worktree.

What changed:

  • move the bash/script pre-execution workflow-status checks ahead of node-start side effects so paused/deleted runs do not emit node_started logs/events or spawn subprocesses
  • make the mid-layer cancel watcher carry the concrete workflow status instead of a sentinel and stop on any non-running state
  • treat paused as a changed status during finalization so paused runs are not completed/unregistered as if they finished
  • add a regression test proving a paused workflow neither starts a bash node nor completes the run

Validation:

  • bun --filter @archon/workflows test packages/workflows/src/dag-executor.test.ts
  • bun --filter @archon/workflows type-check

I also rechecked the unread PR notifications while doing this pass:

  • #1250 and #1251 do not appear to have new contributor-side action items beyond the follow-up comments/commits already pushed there
  • the linked CodeRabbit comment on #1249 itself was a summary comment, but the actionable cancellation-status findings behind it are what this follow-up addresses


@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
packages/workflows/src/dag-executor.test.ts (1)

1271-1279: Consider strengthening assertions to lock in "paused remains resumable" intent.

The test verifies the bash node didn't start and completeWorkflowRun wasn't called, but the PR objective explicitly states paused runs should "remain resumable (not finalized as completed)" and should not be treated as terminal. To guard against future regressions where a paused run might be mistakenly finalized as failed or cancelled, also assert those paths weren't taken.

♻️ Suggested additional assertions
     expect(eventTypes).not.toContain('node_started');
     expect((store.completeWorkflowRun as Mock<() => Promise<void>>).mock.calls.length).toBe(0);
+    expect((store.failWorkflowRun as Mock<() => Promise<void>>).mock.calls.length).toBe(0);
+    expect((store.cancelWorkflowRun as Mock<() => Promise<void>>).mock.calls.length).toBe(0);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/workflows/src/dag-executor.test.ts` around lines 1271 - 1279, Add
assertions to ensure a paused run wasn't finalized: after the existing checks,
assert that store.failWorkflowRun and store.cancelWorkflowRun mocks were not
called, and that the collected eventTypes (from store.createWorkflowEvent mock)
do not include terminal event types such as 'run_failed', 'run_cancelled', or
node-level terminal events like 'node_failed'; use the existing execSpy and the
same mock.call inspection pattern (referencing execSpy,
store.createWorkflowEvent, store.completeWorkflowRun, store.failWorkflowRun,
store.cancelWorkflowRun) to add these negative assertions so the test guarantees
paused runs remain resumable.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@packages/workflows/src/dag-executor.test.ts`:
- Around line 1271-1279: Add assertions to ensure a paused run wasn't finalized:
after the existing checks, assert that store.failWorkflowRun and
store.cancelWorkflowRun mocks were not called, and that the collected eventTypes
(from store.createWorkflowEvent mock) do not include terminal event types such
as 'run_failed', 'run_cancelled', or node-level terminal events like
'node_failed'; use the existing execSpy and the same mock.call inspection
pattern (referencing execSpy, store.createWorkflowEvent,
store.completeWorkflowRun, store.failWorkflowRun, store.cancelWorkflowRun) to
add these negative assertions so the test guarantees paused runs remain
resumable.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: fa9ec1bb-3ecc-41bc-a6e6-ddd72cf158e4

📥 Commits

Reviewing files that changed from the base of the PR and between 79ce615 and 493a271.

📒 Files selected for processing (2)
  • packages/workflows/src/dag-executor.test.ts
  • packages/workflows/src/dag-executor.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/workflows/src/dag-executor.ts


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 493a271361


Comment thread packages/workflows/src/dag-executor.ts (Outdated)

      try {
        const streamStatus = await deps.store.getWorkflowRunStatus(workflowRun.id);
    -   if (streamStatus === null || streamStatus !== 'running') {
    +   if (streamStatus === null || (streamStatus !== 'running' && streamStatus !== 'paused')) {

P1: Abort AI streaming when workflow enters paused state

This check now treats paused as an active status, so executeNodeInternal will not abort an in-flight AI node if another node pauses the run (for example, an approval node in the same parallel layer). In that scenario the workflow is expected to be waiting for user input, but this node can continue executing tools/LLM turns and still emit completion events, which bypasses the pause gate and can consume extra tokens or perform unintended side effects before resume.


shaun0927 added a commit to shaun0927/Archon that referenced this pull request Apr 17, 2026
…xits

The remaining CodeRabbit nit on PR coleam00#1249 was worth addressing: the paused-run
regression already proved that a bash node never started and the workflow was
not completed, but it did not explicitly lock in the broader resumability
contract. This follow-up adds negative assertions that a paused run was not
finalized as failed/cancelled and did not emit terminal failure/cancel events.

Constraint: Keep the follow-up limited to verification of the existing paused-run contract, not a redesign of pause semantics
Rejected: Fold in the newer AI streaming pause-behavior suggestion here | that changes runtime pause semantics and needs a separate design/validation pass
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: If pause semantics change later, update this regression test to make the new terminal/non-terminal contract explicit
Tested: bun --filter @archon/workflows test packages/workflows/src/dag-executor.test.ts
Tested: bun --filter @archon/workflows type-check
Not-tested: Full monorepo test suite; end-to-end approval/interactive pause flows
Related: PR coleam00#1249
@shaun0927
Author

Follow-up pushed in 26973b14.

What changed:

  • strengthened the paused-run regression so it now also proves the run was not finalized as failed/cancelled
  • added negative assertions for failWorkflowRun, cancelWorkflowRun, and terminal failure/cancel event types in the paused bash-node case

Validation:

  • bun --filter @archon/workflows test packages/workflows/src/dag-executor.test.ts
  • bun --filter @archon/workflows type-check

I did not fold the newer Codex paused-streaming suggestion into this PR. That comment would change runtime pause semantics for in-flight AI nodes, which is a larger design decision than the current cancellation-propagation scope. I wanted to keep this branch focused on the validated bash/script + layer/finalization fixes and tighten the paused-run contract with tests rather than widening behavior without a separate pause/resume design pass.
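
The negative assertions described in this follow-up can be sketched roughly as below. This is a framework-free illustration of the contract, not the actual bun:test code; the mock shapes mirror the names mentioned in the review thread (store.failWorkflowRun, store.cancelWorkflowRun, store.createWorkflowEvent) but are assumptions about their call-recording structure.

```typescript
// Hypothetical mock shapes: each mock records its calls as an array of argument lists.
type MockCall = unknown[];

interface StoreMocks {
  failWorkflowRun: { calls: MockCall[] };
  cancelWorkflowRun: { calls: MockCall[] };
  createWorkflowEvent: { calls: MockCall[] }; // each call: [{ type: string, ... }]
}

// Assert that a paused run was never finalized: no fail/cancel store writes,
// and no terminal event types among the emitted workflow events.
function assertPausedRunNotFinalized(store: StoreMocks): void {
  if (store.failWorkflowRun.calls.length > 0) {
    throw new Error("paused run was finalized as failed");
  }
  if (store.cancelWorkflowRun.calls.length > 0) {
    throw new Error("paused run was finalized as cancelled");
  }
  const terminal = new Set(["run_failed", "run_cancelled", "node_failed"]);
  for (const call of store.createWorkflowEvent.calls) {
    const type = (call[0] as { type: string }).type;
    if (terminal.has(type)) {
      throw new Error(`terminal event emitted while paused: ${type}`);
    }
  }
}
```

In the real test this would be expressed as `expect(...).not.toHaveBeenCalled()` plus an `eventTypes` inclusion check, as the CodeRabbit nit suggested.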

github-actions Bot and others added 13 commits April 22, 2026 11:26
…be (coleam00#1359)

The pre-flight binary smoke does a bare `bun build --compile` — it
deliberately skips `scripts/build-binaries.sh` to stay fast. That means
packages/paths/src/bundled-build.ts retains its dev defaults, including
BUNDLED_IS_BINARY = false.

version.ts branches on BUNDLED_IS_BINARY: when true it returns the
embedded string; when false it calls getDevVersion(), which reads
package.json at `SCRIPT_DIR/../../../../package.json`. Inside a compiled
binary SCRIPT_DIR resolves under `$bunfs/root/`, the walk produces a CWD-
relative path that doesn't exist, and the smoke aborts with "Failed to
read version: package.json not found" — a false positive.

Hit during the 0.3.8 release attempt: the real Pi lazy-load fix was
working end-to-end; the smoke test was the only thing failing.

Use --help instead. It exercises the same module-init graph (so it still
catches the real failure modes the skill lists — Pi package.json init
crash, Bun --bytecode bugs, CJS wrapper issues, circular imports under
minify) but has no dev/binary branch, so no false positive.

Also add a longer comment block explaining why --help is preferred, so
this doesn't get "normalized" back to `version` by a future drive-by.
The brew path of /test-release runs `brew uninstall` in Phase 5 to leave the
system in its pre-test state. For operators using the dual-homebrew pattern
(renamed brew binary at `/opt/homebrew/bin/archon-stable` so it coexists with
a `bun link` dev `archon`), that uninstall wipes the Cellar dir the
`archon-stable` symlink points into → `archon-stable` becomes dangling →
`brew cleanup` sweeps it away on the next brew op. Next time the operator
wants stable, they have to manually re-run `brew-upgrade-archon`.

Fix: make the skill aware of `archon-stable` and restore it transparently.

- Phase 2 item 4: detect the `archon-stable` symlink before any brew op;
  export `ARCHON_STABLE_WAS_INSTALLED=yes` so Phase 5 knows to restore it.
  Only triggers for the brew path (curl-mac/curl-vps don't touch brew so
  they leave `archon-stable` alone).
- Phase 5 brew path: after `brew uninstall + untap`, if the flag was set,
  re-tap + re-install + rename. Verifies the restored `archon-stable`
  reports a version and warns (non-fatal) if the rename target is missing.
  Documents the tradeoff: the restored version is "whatever the tap ships
  today", not necessarily the pre-test version — usually that's what the
  operator wants (the release they just tested becomes stable) but the
  back-version-QA case requires a manual `brew-upgrade-archon` after.
- Phase 1 confirmation banner now mentions that `archon-stable` will be
  preserved so the operator isn't surprised by the reinstall during Phase 5.

No changes to curl-mac/curl-vps paths. No changes to Phase 4 test suite.
… a compiled binary (coleam00#1360)

v0.3.9 made Pi boot-safe: lazy-loading its imports meant `archon version`
no longer crashed on `@mariozechner/pi-coding-agent/dist/config.js`'s
module-init `readFileSync(getPackageJsonPath())`. That's what the
`provider-lazy-load.test.ts` regression test guards.

The fix was only half the problem though. When a Pi workflow actually
runs, sendQuery() triggers the dynamic import — and Pi's config.js
module-init fires then, hitting the exact same ENOENT on
`dirname(process.execPath)/package.json`. Discovered by running
`archon workflow run test-pi` against a locally-compiled 0.3.9 binary:

    [main] Failed: ENOENT: no such file or directory,
           open '/private/tmp/package.json'
        at readFileSync (unknown)
        at <anonymous> (/$bunfs/root/archon-providertest:184:7889)
        at init_config

Boot-safe ≠ runtime-safe. The `/test-release` run for 0.3.9 passed
because it only exercised `archon-assist` (Claude); Pi was never
actually invoked on the released binary.

Fix: before the dynamic `import('@mariozechner/pi-coding-agent')` in
sendQuery, install a PI_PACKAGE_DIR shim. Pi's config.js checks
`process.env.PI_PACKAGE_DIR` first in its `getPackageDir()` and
short-circuits the `dirname(process.execPath)` walk. We write a
minimal `{name, version, piConfig:{}}` stub to
`tmpdir()/archon-pi-shim/package.json` (idempotent — existsSync check)
and set the env var. Pi only reads `piConfig.name`, `piConfig.configDir`,
and `version` from that file, all optional, so the stub surface is
genuinely minimal.
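
A minimal sketch of that shim, assuming the stub contents and directory name described above (the real PiProvider code may differ in naming and error handling):

```typescript
import { existsSync, mkdirSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Write a minimal package.json stub and point PI_PACKAGE_DIR at it, so Pi's
// config.js short-circuits its dirname(process.execPath) walk inside a
// compiled binary. Field values here are placeholders.
function installPiPackageDirShim(): string {
  const shimDir = join(tmpdir(), "archon-pi-shim");
  const stubPath = join(shimDir, "package.json");
  if (!existsSync(stubPath)) {
    // idempotent: only write the stub on first call
    mkdirSync(shimDir, { recursive: true });
    writeFileSync(
      stubPath,
      JSON.stringify({ name: "archon-pi-shim", version: "0.0.0", piConfig: {} }),
    );
  }
  process.env.PI_PACKAGE_DIR = shimDir;
  return shimDir;
}
```

Calling this immediately before the dynamic `import('@mariozechner/pi-coding-agent')` keeps the workaround localized to the provider.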

Localized to PiProvider: no global state, no mutation of any shared
config, no upstream fork. Claude and Codex providers are unaffected
(their SDKs don't have this class of module-init side effect).

Verified end-to-end: built a compiled archon binary with this patch,
ran `archon workflow run test-pi --no-worktree` (Pi workflow with
model `anthropic/claude-haiku-4-5`), got a clean response. Before the
patch, same binary crashed at `dag_node_started` with the ENOENT above.

Regression test added: asserts `PI_PACKAGE_DIR` is set after sendQuery
hits even its fast-fail "no model" path. Together with the existing
`provider-lazy-load.test.ts` (boot-safe) this covers both halves.
… and Codex (coleam00#1361)

Both binary resolvers previously stopped at env-var + explicit config and
threw a "not found" error when neither was set. Users who followed the
upstream-recommended install flow (Anthropic's `curl install.sh` for
Claude, `npm install -g @openai/codex`) still had to manually set either
`CLAUDE_BIN_PATH` / `CODEX_BIN_PATH` or the corresponding config field
before any workflow could run.

Add a tier-N autodetect step between the explicit config tier and the
install-instructions throw. Purely additive: env and config still win
when set (precedence covered by new tests). On autodetect miss, the same
install-instructions error fires as before.

Claude probe list (verified against docs.claude.com "Uninstall Claude
Code → Native installation" section):
  - $HOME/.local/bin/claude            (mac/linux native installer)
  - $USERPROFILE\.local\bin\claude.exe (Windows native installer)

Codex probe list (verified against openai/codex README; npm global-
install puts the binary at `{npm_prefix}/bin/<name>` on POSIX,
`{npm_prefix}\<name>.cmd` on Windows):
  - $HOME/.npm-global/bin/codex   (user-set `npm config set prefix`)
  - /opt/homebrew/bin/codex       (mac arm64 with homebrew-node)
  - /usr/local/bin/codex          (mac intel / linux system node)
  - %APPDATA%\npm\codex.cmd       (Windows npm global default)
  - $HOME\.npm-global\codex.cmd   (Windows user-set prefix)

Not probed (explicit override still required):
  - Custom npm prefixes — `npm root -g` would need a subprocess per
    resolve, too much surface for a probe helper
  - `brew install --cask codex` — cask layout isn't a PATH binary
  - Manual GitHub Releases extracts — placement is user-determined
  - `~/.bun/bin/codex` — not documented in openai/codex README

Pi provider intentionally has no equivalent change: the Pi SDK is
bundled into the archon binary (no subprocess), so there's no "binary"
to resolve. Pi auth lives at `~/.pi/agent/auth.json` which the SDK
already finds by default, and the PR A shim (`PI_PACKAGE_DIR`) handles
the package-dir case via Pi's own documented escape hatch.

E2E verified: removed both config entries from ~/.archon/config.yaml,
rebuilt compiled binary, ran `archon workflow run archon-assist` and a
Codex workflow. Logs showed `source: 'autodetect'` for both, responses
returned cleanly.
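
The tier order described above (env var > explicit config > autodetect probes > install-instructions throw) can be sketched as follows; the function shape and error text are illustrative, not the real resolver's API:

```typescript
import { existsSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

interface Resolved {
  path: string;
  source: "env" | "config" | "autodetect";
}

// Tiered binary resolution: env and config always win when set; autodetect
// only runs when neither is present; a miss falls through to the same error
// the resolver threw before this change.
function resolveBinary(
  envValue: string | undefined,
  configValue: string | undefined,
  probes: string[],
): Resolved {
  if (envValue) return { path: envValue, source: "env" };          // tier 1
  if (configValue) return { path: configValue, source: "config" }; // tier 2
  for (const p of probes) {
    if (existsSync(p)) return { path: p, source: "autodetect" };   // tier 3
  }
  throw new Error("binary not found — set the env var/config field or install per the docs");
}

// Example probe for Claude's native-installer location (from the list above):
const claudeProbes = [join(homedir(), ".local", "bin", "claude")];
```

Note the probe path is built with `homedir()` from node:os rather than `process.env.HOME`, matching the Windows fix in the follow-up commit.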
…ry autodetect test

The native-installer autodetect test computed its expected path from
process.env.HOME, but the implementation uses node:os homedir(). On
Windows, HOME is typically unset (Windows uses USERPROFILE), so the
test fell back to '/Users/test' while the resolver returned the real
home dir — making the spy's path-equality check fail and breaking CI
on windows-latest.

Mirror the implementation by importing homedir() from node:os and
joining with node:path so the expected path matches the actual
platform-resolved home and separator.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…ver (coleam00#1365)

Reported in coleam00#1365: a user running `archon serve` with DISCORD_BOT_TOKEN
set but the "Message Content Intent" toggle disabled in the Discord
Developer Portal saw the entire server crash with `Used disallowed
intents`. Discord rejects the gateway connection (close code 4014) when
a privileged intent is requested without being enabled, and the
unguarded `await discord.start()` propagated the error all the way up,
taking the web UI down with it.

Wrap discord.start() in try/catch — log the failure with an actionable
hint (special-cased for the disallowed-intent error) and continue
running. Other adapters and the web UI come up regardless. The shutdown
handler already uses optional chaining (`discord?.stop()`) so nulling
discord after a failed start is safe.

Other adapters (Telegram, Slack, GitHub, Gitea, GitLab) have the same
unguarded-start pattern but are out of scope for this fix — addressing
them is tracked separately.

Also expanded the Discord setup docs with a caution callout that names
the exact error string and the new log event so users can grep for
both.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
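
The guarded-start pattern described above looks roughly like this; the adapter interface and log function are stand-ins for the real Discord adapter API, and only the error string is taken from the report:

```typescript
interface Adapter {
  start(): Promise<void>;
}

// Start the Discord adapter without letting a gateway rejection (e.g. close
// code 4014, "Used disallowed intents") crash the whole server. Returns null
// on failure so the existing `discord?.stop()` shutdown path stays safe.
async function startDiscordSafely(
  discord: Adapter | null,
  log: (msg: string) => void,
): Promise<Adapter | null> {
  if (!discord) return null;
  try {
    await discord.start();
    return discord;
  } catch (err) {
    const msg = err instanceof Error ? err.message : String(err);
    if (msg.includes("Used disallowed intents")) {
      // actionable hint: the privileged intent toggle is off in the portal
      log("Discord start failed: enable 'Message Content Intent' in the Discord Developer Portal");
    } else {
      log(`Discord start failed: ${msg}`);
    }
    return null;
  }
}
```

Other adapters with the same unguarded-start pattern could reuse this shape once they are brought in scope.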
…0#1362)

* docs(script-nodes): add dedicated guide and teach the archon skill how to write them

Script nodes (script:) have been a first-class DAG node type since v0.3.3 but
were documented only as one-liners in CLAUDE.md and a CI smoke test. Claude
Code reading the archon skill would see "Four Node Types: command, prompt,
bash, loop" and reach for bash+node/python one-liners instead of a proper
script node — losing bun's --no-env-file isolation, uv's --with dependency
pins, and the .archon/scripts/ reuse story.

- New packages/docs-web/src/content/docs/guides/script-nodes.md mirroring the
  structure of loop-nodes.md / approval-nodes.md: schema, inline vs named
  dispatch, runtime/deps semantics, scripts directory precedence (repo > home),
  extension-runtime mapping, env isolation, stdout/stderr contract, patterns,
  and the explicit list of ignored AI fields.
- guides/authoring-workflows.md and guides/index.md updated so the new guide is
  discoverable from both the node-types table and the guides landing page.
- reference/variables.md calls out the no-shell-quote difference between
  bash: and script: substitution — a subtle correctness trap when adapting a
  bash pattern into a script node.
- Sidebar order bumped +1 on hooks/mcp-servers/skills/global-workflows/
  remotion-workflow to slot script-nodes at order 5 next to the other
  node-type guides.

- .claude/skills/archon/SKILL.md: replaces stale "Four Node Types" (which
  also silently omitted approval and cancel) with the accurate seven, with a
  script-node code block showing both inline and named patterns.
- references/workflow-dag.md: full Script Node section covering dispatch,
  resolution, deps, stdout contract, and the list of AI-only fields that are
  ignored; validation-rules list updated.
- references/dag-advanced.md and references/variables.md: retry-support line
  corrected; no-shell-quote note added.
- examples/dag-workflow.yaml: added an extract-labels TypeScript script node
  and updated the header comment.

* fix(docs): review follow-ups for script-node guide

- skills example: extract-labels was reading process.env.ISSUE_JSON which is
  never set; use String.raw`$fetch-issue.output` so the upstream bash node's
  JSON is actually consumed
- guides/script-nodes.md + skills/workflow-dag.md: idle_timeout is accepted
  but ignored on script (and bash) nodes — executeScriptNode only reads
  node.timeout. Clarify that script/bash use `timeout`, not idle_timeout
- archon-workflow-builder.yaml: prompt enumerated only bash/prompt/command/loop,
  so the AI builder could never propose script or approval nodes. Add both
  (plus examples + rule about script output not being shell-quoted) and
  regenerate bundled defaults
- book/dag-workflows.md + book/quick-reference.md + adapters/web.md: fill in
  the node-type references that were missing script, approval, and cancel.
  adapters/web.md also overclaimed "loop" in the palette — NodePalette.tsx
  only drags command/prompt/bash, so note that the other kinds are YAML-only
…nv gaps, add good-practices + troubleshooting (coleam00#1363)

* fix(skill/when): document the full `when:` operator set and compound expressions

The skill reference previously stated "operators: ==, != only" which is
materially wrong — the condition evaluator supports ==, !=, <, >, <=, >=
plus && / || compound expressions with && binding tighter than ||, plus
dot-notation JSON field access. An agent authoring a workflow from the
skill would think half the operators don't exist.

Replaces the single-sentence section with a structured reference covering:
- All six comparison operators (string and numeric modes)
- Compound expressions with precedence rules and short-circuit eval
- JSON dot notation semantics and failure modes
- The fail-closed rules in full (invalid expression, non-numeric side,
  missing field, skipped upstream)

Grounded in packages/workflows/src/condition-evaluator.ts.
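
A toy evaluator illustrating the precedence and fail-closed rules documented here (&& binds tighter than ||, non-numeric sides fail ordered comparisons, invalid clauses evaluate false). This is a simplified sketch, not the real condition-evaluator.ts: it assumes no parentheses and no || / && inside values, and models dot-notation access as a flat variable lookup.

```typescript
function evalCondition(expr: string, vars: Record<string, string>): boolean {
  const cmp = (clause: string): boolean => {
    const m = clause.trim().match(/^(\S+)\s*(==|!=|<=|>=|<|>)\s*(\S+)$/);
    if (!m) return false; // fail closed: invalid expression
    const resolve = (t: string) => (t in vars ? vars[t] : t);
    const [, lhs, op, rhs] = m;
    const [lv, rv] = [resolve(lhs), resolve(rhs)];
    const [ln, rn] = [Number(lv), Number(rv)];
    const numeric = !Number.isNaN(ln) && !Number.isNaN(rn);
    switch (op) {
      case "==": return lv === rv;
      case "!=": return lv !== rv;
      // ordered operators fail closed when either side is non-numeric
      case "<":  return numeric && ln < rn;
      case ">":  return numeric && ln > rn;
      case "<=": return numeric && ln <= rn;
      case ">=": return numeric && ln >= rn;
      default:   return false;
    }
  };
  // || splits the outer level; && binds tighter inside each branch.
  // some/every also give the short-circuit evaluation described above.
  return expr.split("||").some((branch) => branch.split("&&").every(cmp));
}
```

Here `result.status` is treated as one variable key; the real evaluator performs actual JSON field traversal with its own missing-field failure mode.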

* feat(skill): document Approval and Cancel node types

Approval and cancel nodes are first-class DAG node types (approval since the
workflow lifecycle work in coleam00#871, cancel as a guarded-exit primitive) but the
skill never described either one. An agent reading the skill and asked to
"add a review gate before implementation" or "stop the workflow if the input
is unsafe" would fall back to bash + exit 1, losing the proper semantics
(cancelled vs. failed, on_reject AI rework, web UI auto-resume).

Approval node coverage (references/workflow-dag.md, SKILL.md):
- Full configuration block with message, capture_response, on_reject
- The interactive: true workflow-level requirement for web UI delivery
- Approve/reject commands across all platforms (CLI, slash, natural
  language) and the capture_response → $node-id.output flow
- Ignored-fields list + the on_reject.prompt AI sub-node exception

Cancel node coverage (references/workflow-dag.md, SKILL.md):
- Single-field schema (cancel: "<reason>")
- Lifecycle: cancelled (not failed); in-flight parallel nodes stopped;
  no DAG auto-resume path
- The "cancel: vs bash-exit-1" decision rule (expected precondition miss
  vs. check itself failing)
- Two canonical patterns — upstream-classification gate, pre-expensive-step
  gate

Validation-rules list updated to enumerate approval/cancel constraints
(message non-empty, on_reject.max_attempts range 1-10, cancel reason
non-empty), plus a forward note that script: joins the mutually-exclusive
set once PR coleam00#1362 lands.

Placement in both files is after the Loop section and before the validation
section, so this commit stays additive with respect to PR coleam00#1362's Script
node insertion between Bash and Loop — rebase is clean.

* feat(skill): document workflow-level fields beyond name/provider/model

The skill's Schema section previously showed only name, description, provider,
and model at the workflow level — which is most of a stub. Agents asked to
"use the 1M-context Claude beta" or "run this under a network sandbox" or
"add a fallback model in case Opus rate-limits" had no way to discover
that any of these fields existed at the workflow level.

Adds a comprehensive Workflow-Level Fields section covering:
- Core: name, description, provider, model, interactive (with explicit
  callout that interactive: true is REQUIRED for approval/loop gates on
  web UI — a common footgun)
- Isolation: worktree.enabled for pin-on/pin-off (the only worktree field
  at workflow level; baseBranch/copyFiles/path/initSubmodules are
  config.yaml only, so a cross-reference points there)
- Claude SDK advanced: effort, thinking, fallbackModel, betas, sandbox,
  with explicit per-node-only exceptions (maxBudgetUsd, systemPrompt)
- Codex-specific: modelReasoningEffort (with note that it's NOT the same
  as Claude's effort — this has confused users), webSearchMode,
  additionalDirectories
- A complete worked example combining sandbox + approval + interactive

All fields cross-referenced against packages/workflows/src/schemas/workflow.ts
and packages/workflows/src/schemas/dag-node.ts.

* feat(skill/loop): document interactive loops and gate_message

Interactive loop nodes pause between iterations for human feedback via
/workflow approve — used by archon-piv-loop and archon-interactive-prd.
The skill's Loop Nodes section previously omitted both interactive: true
and gate_message entirely, so an agent writing a guided-refinement
workflow wouldn't know the feature exists or that gate_message is
required at parse time.

Adds:
- interactive and gate_message rows to the config table (marking
  gate_message as required when interactive: true — enforced by the
  loader's superRefine)
- A dedicated "Interactive Loops" subsection explaining the 6-step
  iterate-pause-approve-resume flow
- Explicit call-out that $LOOP_USER_INPUT populates ONLY on the first
  iteration of a resumed session — easy to miss and a common surprise
- Workflow-level interactive: true requirement for web UI delivery
  (loader warning otherwise) so the full-flow example is complete
- Note that until_bash substitution DOES shell-quote $nodeId.output
  (unlike script bodies) — called out since the audit surfaced this
  inconsistency

* fix(skill/cli): complete the CLI command reference with missing lifecycle commands

The CLI reference previously documented only list, run, cleanup, validate,
complete, version, setup, and chat — missing nearly every workflow
lifecycle command an agent needs to operate a paused, failed, or stuck
run. The interactive-workflows reference assumed these commands existed
without actually documenting them.

Adds full documentation for:
- archon workflow status — show running workflow(s)
- archon workflow approve <run-id> [comment] — resume approval gate
  (also populates $LOOP_USER_INPUT on interactive loops and the gate
  node's output when capture_response: true)
- archon workflow reject <run-id> [reason] — reject gate; cancels or
  triggers on_reject rework depending on node config
- archon workflow cancel <run-id> — terminate running/paused with
  in-flight subprocess kill
- archon workflow abandon <run-id> — mark stuck row cancelled without
  subprocess kill (for orphan-cleanup after server crashes — matches
  the coleam00#1216 precedent)
- archon workflow resume <run-id> [message] — force-resume specific
  run (auto-resume is default; this is for explicit override)
- archon workflow cleanup [days] — disk hygiene for old terminal runs
  (with explicit callout that it does NOT transition 'running' rows,
  a common confusion)
- archon workflow event emit — used inside loop prompts for state
  signalling; documented so agents don't invent their own mechanism
- archon continue <branch> [flags] [msg] — iterative-session entry
  point with --workflow and --no-context flags

Also:
- Adds --allow-env-keys flag to the `workflow run` flag table with
  audit-log context and the env-leak-gate remediation use case
- Adds an "Auto-resume without --resume" note disambiguating when
  --resume is needed vs. when auto-resume handles it
- Adds --include-closed flag to `isolation cleanup`, which was
  previously missing; converts the flag list to a structured table
- Explains the cancel/abandon distinction (live subprocess vs. orphan)

All grounded in packages/cli/src/commands/workflow.ts, continue.ts,
and isolation.ts.

* feat(skill/repo-init): add scripts/ and state/, three-path env model, per-project env injection

The repo-init reference was missing two first-class .archon/ directories
(scripts/ since v0.3.3, state/ since the workflow-state feature) and had
nothing to say about env — the #1 thing a user hits on first-run when
their repo has a .env file with API keys.

Directory tree updates:
- Adds .archon/scripts/ with the extension->runtime rule (.ts/.js -> bun,
  .py -> uv) so agents know where to put named scripts referenced by
  script: nodes.
- Adds .archon/state/ with explicit "always gitignore" callout — these
  are runtime artifacts, not source. Previously undocumented in the skill.
- Adds .archon/.env (repo-scoped Archon env) and distinguishes it from
  the target repo's top-level .env.
- Adds a "What each directory is for" list so the structure isn't just
  a tree with no narrative.
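
The extension → runtime rule above can be encoded as a tiny mapping; this is illustrative only, and the function name and error behavior are assumptions rather than the real dispatcher:

```typescript
// .ts/.js scripts run under bun, .py scripts under uv, per the rule above.
function runtimeForScript(scriptPath: string): "bun" | "uv" {
  if (scriptPath.endsWith(".py")) return "uv";
  if (scriptPath.endsWith(".ts") || scriptPath.endsWith(".js")) return "bun";
  throw new Error(`no runtime mapping for: ${scriptPath}`);
}
```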

.gitignore guidance:
- state/ and .env added as must-gitignore (state/ matches CLAUDE.md and
  reference/archon-directories.md — skill was lagging).
- mcp/ demoted to conditional — gitignore only if you hardcode secrets.

New "Three-Path Env Model" section:
- ~/.archon/.env (trusted, user), <cwd>/.archon/.env (trusted, repo),
  <cwd>/.env (UNTRUSTED, target project — stripped from subprocess env).
- Precedence (override: true across archon-owned paths) and the
  observable [archon] loaded N keys / stripped K keys log lines so
  operators can verify what actually happened.
- Decision tree for where to put API keys vs. target-project env vs.
  things Archon shouldn't touch.
- Links to archon setup --scope home|project with --force for writing
  to the right file with timestamped backups.
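
The precedence and stripping behavior can be sketched as a pure merge over already-parsed layers; file loading and the actual log-line format are out of scope here, and the counts are only meant to mirror the "loaded N keys / stripped K keys" observability described above:

```typescript
// Three-path env model: archon-owned layers merge with repo overriding home;
// the target repo's top-level .env is counted but never forwarded.
function mergeEnvLayers(
  home: Record<string, string>,   // ~/.archon/.env (trusted, user)
  repo: Record<string, string>,   // <cwd>/.archon/.env (trusted, repo — wins)
  target: Record<string, string>, // <cwd>/.env (UNTRUSTED, target project)
): { env: Record<string, string>; loaded: number; stripped: number } {
  const env = { ...home, ...repo }; // later spread wins: repo overrides home
  return { env, loaded: Object.keys(env).length, stripped: Object.keys(target).length };
}
```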

New "Per-Project Env Injection" section:
- Documents both managed surfaces: .archon/config.yaml env: block
  (git-committed, $REF expansion) and Web UI Settings → Projects →
  Env Vars (DB-stored, never returned over API).
- Names every execution surface that receives the injected vars:
  Claude/Codex/Pi subprocess, bash: nodes, script: nodes, and direct
  codebase-scoped chat.
- Documents the env-leak gate with all 5 remediation paths so an agent
  hitting "Cannot register: env has sensitive keys" knows the options.

Grounded in CHANGELOG v0.3.7 (three-path env + setup flags), v0.3.0
(env-leak gate), and reference/security.md on the docs site.

* fix(skill/authoring-commands): correct override paths and add home-scoped commands

The file-location and discovery sections described an override layout that
does not match the actual resolver. It showed:

  .archon/commands/defaults/archon-assist.md  # Overrides the bundled

and claimed `.archon/commands/defaults/` was where repo-level overrides
lived. In fact the resolver (executor-shared.ts:152-200 + command-
validation.ts) walks `.archon/commands/` 1 level deep and uses basename
matching — putting `archon-assist.md` at the top of `.archon/commands/`
is the canonical way to override the bundled version. The `defaults/`
subfolder is an Archon-internal convention for shipping bundled defaults,
not a user-facing override pattern.

Also, home-scoped commands (`~/.archon/commands/`, shipped in v0.3.7)
were completely absent — agents authoring personal helpers wouldn't
know they could live at the user level and be shared across every repo.

Changes:
- File Location section now shows all three discovery scopes (repo,
  home, bundled) with precedence ordering and 1-level subfolder rules
- Duplicate-basename rule documented as a user error surface
- Discovery and Priority section rewritten with accurate 3-step lookup
  order — no more references to the nonexistent defaults/ override path
- Adds the Web UI "Global (~/.archon/commands/)" palette label note so
  users authoring helpers for the builder know what to expect

No code changes — this is a pure fix of stale/incorrect skill reference
material.

* feat(skill): add workflow good-practices and troubleshooting reference pages

Closes two gaps from the audit. The skill previously had zero guidance on
designing multi-node workflows (what to avoid, what to reach for first,
how to structure artifact chains) and zero guidance on where to look
when things go wrong (log paths, env-leak gate remediations, orphan-row
cleanup, resume semantics).

New references/good-practices.md (9 Good Practices + 7 Anti-Patterns):

- Use deterministic nodes (bash:/script:) for deterministic work, AI for
  reasoning — the single biggest quality lever
- output_format required whenever downstream when: reads a field — the
  most common source of "workflow silently routes wrong"
- trigger_rule: none_failed_min_one_success after conditional branches —
  the classic bug where all_success fails because a skipped when:-gated
  branch doesn't count as a success
- context: fresh requires artifacts for state passing — commands must
  explicitly "read $ARTIFACTS_DIR/..." when downstream of fresh
- Cheap models (haiku) for glue, strong for substance
- Workflow descriptions as routing affordances
- Validate (archon validate workflows) + smoke-run before shipping
- Artifact-chain-first design
- worktree.enabled: true for code-changing workflows (reversibility)
- Anti-patterns with before/after YAML examples for each (AI-for-tests,
  free-form when: matching, context: fresh without artifacts, long flat
  AI-node layers, secrets in YAML, retry on loop nodes, tiny
  max_iterations, missing workflow-level interactive:, tool-restricted
  MCP nodes)

New references/troubleshooting.md:

- Log location (~/.archon/workspaces/<owner>/<repo>/logs/<run-id>.jsonl)
  with jq recipes for common queries (last assistant message, failed
  events, full stream)
- Artifact location for cross-node handoff debugging
- 9 Common Failure Modes, each with root cause + concrete fix:
  - $BASE_BRANCH unresolvable
  - Env-leak gate (5 remediations)
  - Claude/Codex binary not found (compiled-binary-only)
  - "running" forever (AI working / orphan / idle_timeout)
  - Mid-workflow failure and auto-resume semantics
  - Approval gate missing on web UI (workflow-level interactive:)
  - MCP plugin connection noise (filtered by design)
  - Empty $nodeId.output / field access (4 causes)
- Diagnostic command cheat sheet (list, status, isolation list, validate,
  tail-log, --verbose, LOG_LEVEL=debug)
- Escalation protocol (version + validate + log tail + CHANGELOG + issue)

SKILL.md routing table now dispatches "Workflow good practices /
anti-patterns" and "Troubleshoot a failing / stuck workflow" to the new
references so an agent can find them without having to know they exist.

* docs(book): update node-types coverage from four to all seven

The book is the curated first-contact reading path (landing page → "Get
Started" → /book/). Both dag-workflows.md and quick-reference.md were
stuck on "four node types" — missing script, approval, and cancel. A user
reading the book as their first introduction would form an incomplete
mental model, then find three more node types in the reference section
later with no explanation of when they arrived.

book/dag-workflows.md:
- "four node types" → "seven node types. Exactly one mode field is
  required per node"
- Table now lists Command, Prompt, Bash, Script, Loop, Approval, Cancel
  with one-line "when to use" for each, and cross-links to the dedicated
  guide pages for Script / Loop / Approval
- New sections below the table for Script (inline + named examples with
  runtime and deps), Approval (with the interactive: true workflow-level
  note that's easy to miss), and Cancel (guarded-exit pattern) — keeping
  the existing narrative shape for Bash and Loop

book/quick-reference.md:
- Node Options table now includes script, approval, cancel rows
- agents row added (inline sub-agents, Claude-only)
- New "Script-specific fields" and "Approval-specific fields" subsections
  so the cheat-sheet is actually complete rather than pointing users
  elsewhere for the required constraints
- Retry row callout that loop nodes hard-error on retry — previously
  omitted
- bash timeout note widened to cover script timeout (same semantics)

Both files are docs-web content; the CI build on the docs-script-nodes
PR (coleam00#1362) previously validated the Starlight build path with a similar
table addition, so this should render clean.

* fix(skill/cli): remove nonexistent `archon workflow cancel`, fix workflow status jq recipe

Two accuracy issues from the PR code-reviewer (comment 4311243858).

C1: `archon workflow cancel <run-id>` does NOT exist as a CLI subcommand.
The switch at packages/cli/src/cli.ts:318-485 dispatches on list / run /
status / resume / abandon / approve / reject / cleanup / event — running
`archon workflow cancel` hits the default case and exits with "Unknown
workflow subcommand: cancel" (cli.ts:478-484). Active cancellation is
only available via:
  - /workflow cancel <run-id> chat slash command (all platforms)
  - Cancel button on the Web UI dashboard
  - POST /api/workflows/runs/{runId}/cancel REST endpoint

cli-commands.md: removed the `### archon workflow cancel <run-id>`
subsection; kept the `abandon` subsection but made it explicit that
abandon does NOT kill a subprocess. Added a call-out box at the bottom
of the abandon section explaining where to go for actual cancellation.

troubleshooting.md "running forever" section: split the original
cancel-vs-abandon advice into three bullets — Web UI / CLI abandon (for
orphans, no subprocess kill) / chat `/workflow cancel` (for live runs
that need interruption). Added an explicit "there is no archon workflow
cancel CLI subcommand" parenthetical since the wrong command was being
suggested in flow.

I1: the `archon workflow list --json` diagnostic used an incorrect jq
filter. workflow list's --json output (workflow.ts:185-219) has shape
{ workflows: [{ name, description, provider?, model?, ... }], errors: [...] }
with no `runs` field — `jq '.workflows[] | select(.runs)'` returns empty
unconditionally. Replaced with `archon workflow status --json | jq '.runs[]'`,
which matches the actual shape of workflowStatusCommand at
workflow.ts:852+ ({ runs: WorkflowRun[] }). Also tightened the narration
to distinguish JSON from human-readable status output.

No change to the commit history in this PR — these are follow-up fixes
to claims I introduced in earlier commits of this branch (f10b989 for
C1, 66d2b86 for I1).

* fix(skill): remove env-leak gate references (feature was removed in provider extraction)

C2 from the PR code-reviewer (comment 4311243858). The pre-spawn env-leak
gate was removed from the codebase during the provider-extraction refactor
— see TODO(coleam00#1135) at packages/providers/src/claude/provider.ts:908. Zero
hits for --allow-env-keys / allowEnvKeys / allow_env_keys / allow_target_repo_keys
across packages/. The CLI's parseArgs (cli.ts:182-208) has no
--allow-env-keys option, and because parseArgs uses strict: false, an
unknown --allow-env-keys would be silently ignored rather than error.

What remains accurate and is NOT touched:
- Three-Path Env Model section (user/repo archon-owned envs are loaded;
  target repo <cwd>/.env keys are stripped from process.env at boot)
  still correctly describes current behavior, grounded in
  packages/paths/src/strip-cwd-env.ts + env-integration.test.ts
- Per-Project Env Injection section (Option 1: .archon/config.yaml env:
  block; Option 2: Web UI Settings → Projects → Env Vars) is unchanged —
  both remain the sanctioned way to get env vars into subprocesses

Removed claims (all three files):
- cli-commands.md: --allow-env-keys flag row in the workflow run flags
  table
- repo-init.md: the "Env-leak gate" subsection at the end of Per-Project
  Env Injection listing 5 remediations (all of which reference UI/CLI/
  config surfaces that don't exist). Replaced with a succinct callout
  that explains the actual current behavior — target repo .env keys are
  stripped, workflows that need those values should use managed
  injection — so the reader still gets the "where to put my env vars"
  answer
- troubleshooting.md: the "Cannot register: codebase has sensitive env
  keys" section (error message that can no longer be emitted)

If the env-leak gate is ever resurrected per TODO(coleam00#1135), the docs can be
re-added then. The CHANGELOG v0.3.0 entry describing the gate is a
historical record of past behavior and does not need to be rewritten.

* fix(skill/troubleshooting): correct JSONL event type names and field name

C3 from the PR code-reviewer (comment 4311243858). The troubleshooting
reference's event-types table used _started / _completed / _failed
suffixes, but packages/workflows/src/logger.ts:19-30 shows the actual
WorkflowEvent.type enum is:

  workflow_start | workflow_complete | workflow_error |
  assistant | tool | validation |
  node_start | node_complete | node_skipped | node_error

The second jq recipe also queried `.event` but the discriminator is `.type`.

Fixes:
- Event table: renamed columns (_started → _start, _completed → _complete,
  _failed → _error). Explicitly called out the field name as `type` so the
  reader knows what jq selector to use
- Replaced the "tool_use / tool_result" row with a single `tool` row and
  listed its actual payload fields (tool_name, tool_input, duration_ms,
  tokens) — tool_use/tool_result are SDK message kinds that appear within
  the AI stream, not top-level log event types
- Added a `validation` row (was missing; it's emitted by workflow-level
  validation calls with `check` and `result` fields)
- Removed `retry_attempt` row — this event type is not emitted to the
  JSONL file. Retry bookkeeping goes through pino logs, not the workflow
  log file
- Added an explicit callout that loop_iteration_started /
  loop_iteration_completed (and other emitter-only events) go through
  the workflow event emitter + DB workflow_events table, NOT the JSONL
  file. Pointed readers to the DB or Web UI for loop-level detail. This
  distinguishes the two parallel event systems — easy to conflate
  (store.ts:11-17 uses _started/_completed/_failed for the DB side,
  logger.ts uses _start/_complete/_error for JSONL)
- Fixed the "all failed events" jq recipe: .event → .type and _failed → _error
- Minor cleanup: the inline "tool_use events" mention in the "running
  forever" section said the wrong event name — updated to "tool or
  assistant events in the tail"

Grounded in packages/workflows/src/logger.ts (canonical JSONL event
shape) and packages/workflows/src/store.ts (the parallel DB event
naming, which the reviewer correctly flagged as different and worth
keeping distinct).

* fix(skill): two stragglers from the code-reviewer audit

Cleanup of two references that slipped through the earlier C1 and C3 fixes:

- references/troubleshooting.md:126: \`node_failed\` → \`node_error\`
  (the "Node output is empty" diagnostics section references the JSONL
  log, which uses the logger.ts enum — not the DB workflow_events table
  which does use \`node_failed\`). The C3 fix corrected the event table
  and one jq recipe but missed this inline mention.

- references/interactive-workflows.md:106: removed \`archon workflow
  cancel <run-id>\` (nonexistent CLI subcommand) from the
  troubleshooting bullet. This was pre-existing before the hardening
  PR but fell within the C1 remediation scope. Replaced with the
  correct triage: reject (approval gate only) vs abandon (orphan
  cleanup, no subprocess kill) vs chat /workflow cancel (actual
  subprocess termination).

Grounded in the same sources as the earlier C1/C3 commits:
packages/cli/src/cli.ts:318-485 (no cancel case) and
packages/workflows/src/logger.ts:19-30 (JSONL type enum).

* feat(skill): point to archon.diy as the canonical docs source

The skill had no reference to archon.diy (the live docs site built from
packages/docs-web/). Several reference files said "see the docs site"
without naming the URL, leaving the agent to guess or grep the repo for
the hostname. An agent with the skill loaded should know that when the
distilled reference pages don't cover a case, the full canonical docs
are one WebFetch away.

SKILL.md: new "Richer Context: archon.diy" section between Routing and
Running Workflows. Covers:
- When to reach for the live docs (longer examples, tutorial framing,
  features the skill only mentions in passing, "where's that
  documented?" user questions)
- URL map — 13 starting points covering getting-started, book (tutorial
  series), guides/ (authoring + per-node-type + per-node-feature),
  reference/ (variables, CLI, security, architecture, configuration,
  troubleshooting), adapters/, deployment/
- Precedence: skill refs first (context-cheap, tuned for agents), docs
  site as escalation. Prevents agents defaulting to WebFetch when a
  local skill ref already covers the answer

Also upgrades the 5 existing generic "docs site" mentions across
reference files to concrete archon.diy URLs with anchor fragments where
helpful:
- good-practices.md: Inline sub-agents pattern → archon.diy/guides/
  authoring-workflows/#inline-sub-agents
- troubleshooting.md: "Install page on the docs site" → archon.diy/
  getting-started/installation/
- workflow-dag.md: "Workflow Description Best Practices" → anchor link;
  sandbox schema reference → archon.diy/guides/authoring-workflows/
  #claude-sdk-advanced-options
- repo-init.md: Security Model reference → archon.diy/reference/
  security/#target-repo-env-isolation (deep-link into the section that
  covers the <cwd>/.env strip behavior)

URL source of truth: astro.config.mjs:5 (site: 'https://archon.diy').
URL structure mirrors packages/docs-web/src/content/docs/<section>/
<page>.md — verified by the 62 pages the docs build produces.
…#1395)

Anthropic's Opus 4.7 landed 2026-04-16; on the Anthropic API, opus /
opus[1m] now resolve to 4.7 with a 1M context window at standard
pricing. Using the alias instead of the hard-pinned claude-opus-4-6[1m]
lets bundled default workflows auto-track the recommended Opus version.

No explicit effort is set, so nodes inherit the per-model default
(xhigh on 4.7, high on 4.6).
…m00#1398)

* fix(workflow): migrate piv-loop plan handoff to $ARTIFACTS_DIR (coleam00#1380)

The create-plan node used a relative path (.claude/archon/plans/{slug}.plan.md)
that the AI agent would sometimes write to a different location, breaking all
downstream nodes that glob for the plan file. Migrated all plan/progress file
references to $ARTIFACTS_DIR/plan.md and $ARTIFACTS_DIR/progress.txt, matching
the pattern used by archon-fix-github-issue and other workflows.

Changes:
- Replace slug-based plan path with $ARTIFACTS_DIR/plan.md in create-plan node
- Replace ls -t glob discovery with direct $ARTIFACTS_DIR/plan.md reads in
  refine-plan, code-review, and fix-feedback nodes
- Replace empty-string guard with file-existence check in implement-setup bash
- Migrate progress.txt references in implement loop to $ARTIFACTS_DIR/
- Add explicit plan/progress paths in finalize node
- Regenerated bundled-defaults.generated.ts

Fixes coleam00#1380

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(workflow): address review findings in archon-piv-loop

- Rename 'Step 2: Write the Plan' to 'Step 2: Plan File Location' to
  eliminate the duplicate heading that collided with Step 3's identical
  title in the create-plan node
- Guard implement-setup against a 0-task plan file: exit 1 with a
  clear error when no '### Task N:' sections are found, preventing a
  silent no-op implement loop
- Remove 2>/dev/null from code-review commit so pre-commit hook failures
  and other stderr are visible to the agent instead of silently swallowed
- Replace '|| true' on git push in finalize with an explicit WARNING echo
  so push failures (auth, upstream conflict, no remote) surface to the
  agent rather than being silently ignored
- Regenerate bundled-defaults.generated.ts

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* chore(workflows): regenerate bundled defaults to match opus[1m] alias

The bundle was stale relative to the YAML sources after coleam00#1395 merged —
check:bundled was failing CI. Regenerated; no YAML edits.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…cutor (coleam00#1403)

PIV Task 1: Adds three new tests in a dedicated describe block
'executeDagWorkflow -- final status derivation' covering the anyFailed
branch (dag-executor.ts ~line 2956) that previously had no direct test:
- one success + one independent failure calls failWorkflowRun (not completeWorkflowRun)
- multiple successes + one failure calls failWorkflowRun (not completeWorkflowRun)
- trigger_rule: none_failed skips dependent node but anyFailed still marks run failed

Fixes coleam00#1381.
New reference for the archon skill: a single-glance lookup of which
parameter works on which node type, an intent-based "how do I..." table,
a consolidated silent-failure catalog, and an inline agents: section
(previously only referenced via archon.diy).

Purpose is complementary, not duplicative:
- workflow-dag.md remains the authoring guide
- dag-advanced.md remains the hooks/MCP/skills/retry deep-dive
- good-practices.md remains the patterns and anti-patterns
- parameter-matrix.md is the grep-this-first lookup when you know the
  outcome you want but not which field gets you there

Also registers the new reference in SKILL.md routing table.
@Wirasm
Collaborator

Wirasm commented Apr 27, 2026

Hi @shaun0927 — thanks for opening this PR.

This repository uses a PR template at .github/pull_request_template.md with several required sections. A few of them appear to be empty or placeholder here:

  • Label Snapshot
  • Change Metadata
  • Architecture Diagram
  • Security Impact
  • Compatibility / Migration
  • Human Verification
  • Side Effects / Blast Radius
  • Rollback Plan
  • Risks and Mitigations

Could you fill those out (even briefly)? The template helps reviewers understand scope, risk, and rollback — it speeds up review significantly.

If a section genuinely doesn't apply, just write "N/A" in it rather than leaving it blank.

@Wirasm
Collaborator

Wirasm commented Apr 27, 2026

@shaun0927 related to #1242 — overlapping area or partial fix.

Wirasm and others added 3 commits April 27, 2026 10:16
Add explicit references to .github/PULL_REQUEST_TEMPLATE.md in both
CONTRIBUTING.md and CLAUDE.md, plus a reminder to link issues with
Closes/Fixes/Resolves so they auto-close on merge. Repo-triage runs
were flagging dozens of partially-filled or unlinked PRs each cycle.
…riage (coleam00#1428)

* feat(workflows): add maintainer-standup workflow for daily PR/issue triage

Daily morning briefing that pulls origin/dev, triages all open PRs and assigned
issues against direction.md, and surfaces progress vs. the previous run. Designed
for live-checkout use (worktree.enabled: false) so it can read its own state.

Layout under .archon/maintainer-standup/:
  - direction.md (committed) — project north-star: what Archon IS / IS NOT.
    Drives PR P4 polite-decline classification with cited clauses.
  - README.md / profile.md.example — setup docs and template for new maintainers.
  - profile.md, state.json, briefs/YYYY-MM-DD.md — gitignored, per-maintainer.

Engine:
  - 3 parallel gather scripts in .archon/scripts/maintainer-standup-*.ts
    (git-status, gh-data, read-context) — bun runtime, JSON stdout.
  - Synthesis node: command file with output_format schema for
    { brief_markdown, next_state }.
  - Persist node: tiny inline bun script writes both to disk.

Run-to-run continuity: state.json carries observed_prs/issues snapshots, so the
next run can detect what merged, what closed, what the maintainer shipped, and
which carry-over items aged past N days.

Also adds .archon/** to the ESLint global ignore list (matches the existing
.claude/skills/** pattern) since .archon/ is user content and not part of any
tsconfig project.

* fix(maintainer-standup): address CodeRabbit review on coleam00#1428

- gh-data: bump --limit 100 → 1000 on all_open_prs and warn loudly when
  the cap is hit; preserves the observed_prs invariant the next-run
  "resolved since last run" diff depends on. (CodeRabbit critical)
- maintainer-standup.md: clarify P1 CI signal — the gathered payload only
  carries mergeStateStatus, not statusCheckRollup; for borderline P1s,
  drill in via `gh pr checks <n>`. (CodeRabbit minor)
- workflow.yaml persist: write briefs under local YYYY-MM-DD (sv-SE
  locale) instead of UTC ISO date, so an evening run doesn't file
  tomorrow's brief and break recent_briefs lookups. (CodeRabbit minor)
- workflow.yaml persist: wrap state/brief writes in try/catch; on
  failure dump brief_markdown and next_state to stderr so a 5-minute
  Sonnet synthesis isn't lost to a transient disk error. (CodeRabbit minor)
- gh-data + git-status: switch from execSync (shell-string) to
  execFileSync (argv array) for git/gh invocations. Defense-in-depth
  against shell metacharacters in values that pass through (esp. the
  gh_handle from profile.md). (CodeRabbit nitpick)
Add optional `tags: string[]` to `workflowBaseSchema`. Explicit values take precedence over keyword inference; `tags: []` suppresses inference end-to-end; omitting the field falls back to inference (backwards compatible). Non-array values warn-and-ignore matching the sibling `worktree`/`additionalDirectories` patterns.
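The precedence rules above can be sketched as a small resolver. This is a minimal illustration of the described semantics, not the real schema code; `resolveTags`, `inferFromKeywords`, and the `warn` callback are hypothetical names.

```typescript
// Sketch of the tags resolution order: explicit values win; `[]` suppresses
// inference; an omitted field falls back to inference; non-array values
// warn-and-ignore (treated as omitted), matching the sibling field patterns.
function resolveTags(
  raw: unknown,
  inferFromKeywords: () => string[],
  warn: (msg: string) => void,
): string[] {
  if (raw === undefined) return inferFromKeywords(); // omitted → inference
  if (!Array.isArray(raw)) {
    warn("tags must be an array of strings; ignoring"); // warn-and-ignore
    return inferFromKeywords();
  }
  // Explicit values take precedence; an empty array stays empty,
  // suppressing inference end-to-end.
  return raw.map(String);
}
```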
Wirasm added 4 commits April 27, 2026 13:31
…oleam00#1432)

Six findings, two majors and four minors/nitpicks:

- gate.md L17 vs L77: resolved conflicting input-source instructions.
  Body claimed "all inline, no extra fetch" while a later phase
  permitted reading PULL_REQUEST_TEMPLATE.md. Now: explicit "one
  allowed extra read" callout in Phase 1 + matching wording in Gate C.
  (CodeRabbit major)

- gate.md fenced blocks: added missing language identifiers (text/json/
  markdown) to satisfy markdownlint MD040. (CodeRabbit minor)

- gate.md L155 + read-context.ts: deterministic clock. The 3-day deadline
  was anchored to prior_state.last_run_at, which can be stale and produce
  past-dated deadlines. Moved both today and deadline_3d into the
  read-context.ts output (computed via sv-SE locale → ISO date in local
  time) and instructed the gate to use $read-context.output.deadline_3d
  directly. LLMs are unreliable at calendar arithmetic; this avoids it
  entirely. (CodeRabbit major)

- maintainer-review-pr.yaml fetch-diff: dropped 2>/dev/null on gh pr diff
  so auth / network / deleted-PR failures fail the node instead of
  feeding an empty diff to the gate. Empty-but-successful diff (PR has
  no changes) is now an explicit marker the gate can detect. (CodeRabbit
  minor)

- maintainer-review-pr.yaml approve-unclear: added capture_response: true
  so the maintainer's approve comment flows to the report node. Reject
  reasoning is already captured by Archon's run record. (CodeRabbit
  minor)

- maintainer-review-pr.yaml post-decline + report.md: the gh pr edit
  --add-label call previously swallowed all errors with || true and the
  report still claimed the label was applied. Now writes applied/skipped
  to $ARTIFACTS_DIR/.label-applied + the gh stderr to .label-error so
  the report can describe the actual outcome. (CodeRabbit nitpick)
…ume (coleam00#1435)

* fix(workflows): approval gate bypass after reject-with-redraft on resume

When an approval node was rejected with on_reject.prompt, the synthetic
PromptNode built to run the on_reject prompt reused the approval gate's
own node ID. executeNodeInternal then wrote a node_completed event with
that ID, causing getCompletedDagNodeOutputs to treat the gate as already
completed on the next resume — bypassing the human gate entirely.

Fix: give the synthetic node the ID `${node.id}:on_reject` so its
node_completed event has a distinct step_name that won't match the
approval gate slot in priorCompletedNodes.

Adds a regression test asserting no node_completed event with the
approval gate's ID is written during on_reject execution.

Fixes coleam00#1429
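The ID-collision mechanics can be shown in a few lines. This is an illustrative sketch of the invariant, under the assumption that resume checks a set of completed step names; the function names are hypothetical, not the executor's API.

```typescript
// Sketch of the fix: the synthetic on_reject prompt node gets a derived ID,
// so its node_completed event can never satisfy the approval gate's slot.
function onRejectNodeId(approvalNodeId: string): string {
  return `${approvalNodeId}:on_reject`;
}

function gateAlreadyCompleted(approvalNodeId: string, completed: Set<string>): boolean {
  // Before the fix, the synthetic node wrote node_completed under the gate's
  // own ID, making this check wrongly return true on the next resume.
  return completed.has(approvalNodeId);
}
```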

* test(workflows): add positive assertion and SSE side-effect comment for on_reject synthetic node

Add complementary positive assertion to the regression test to verify that
node_completed is written exactly once with step_name 'review:on_reject',
ensuring future refactors that suppress the event entirely would be caught.

Add inline comment in executeApprovalNode documenting the known SSE side-effect:
node_started/node_completed events with nodeId='review:on_reject' flow through
the SSE pipeline into the web UI, resulting in a transient phantom node in the
execution view. This is cosmetic-only — the human gate contract is preserved.

* simplify: reduce duplicate cast pattern in on_reject test assertions
…e checkout (coleam00#1438)

* feat(workflows): add mutates_checkout field to skip path-lock for concurrent runs

Add `mutates_checkout: boolean` (optional, default true) to the workflow
schema. When set to false, the executor skips the path-exclusive lock
that serializes all runs on the same working path, allowing N concurrent
runs on the same live checkout.

The primary use case is `maintainer-review-pr`, which reads shared state
but writes only to per-run artifact paths and GitHub PR comments — two
parallel reviews of different PRs should not fail with "Workflow already
active on this path".

Changes:
- `schemas/workflow.ts`: add optional `mutates_checkout` field
- `loader.ts`: parse and propagate the field (warn-and-ignore on invalid values)
- `executor.ts`: wrap path-lock guard in `if (workflow.mutates_checkout !== false)`
- `executor.test.ts`: two new tests in the concurrent-run guard suite
- `maintainer-review-pr.yaml`: opt in with `mutates_checkout: false`
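The guard's default-true semantics can be captured in one predicate. A minimal sketch, assuming the parsed workflow carries the optional field; `needsPathLock` is an illustrative name, not the executor's real symbol.

```typescript
// Sketch of the opt-out guard: the path-exclusive lock is skipped only on an
// explicit `mutates_checkout: false`; omitted or true keeps serialization.
interface WorkflowConfig {
  mutates_checkout?: boolean;
}

function needsPathLock(workflow: WorkflowConfig): boolean {
  // Default is true: absence of the field must not change existing behavior.
  return workflow.mutates_checkout !== false;
}
```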

* test(workflows): add loader tests for mutates_checkout parsing

- Add 5 tests covering false, true, omitted, and invalid (string "yes") values
- Invalid non-boolean values are dropped with a warning — now explicitly tested
- Remove the // end mutates_checkout guard trailing comment (no precedent in file)
- Clarify loader comment: "parse/warn pattern" not "warn-and-ignore pattern" to avoid implying the return style matches interactive

* simplify: collapse nodeType/aiFields pair into single nonAiNode object in parseDagNode
…es (coleam00#1434)

* docs: replace String.raw with direct assignment in script node examples

String.raw`$nodeId.output` fails silently when substituted output contains
a backtick, terminating the template literal early and producing cryptic parse
errors. JSON is valid JS expression syntax, so direct assignment is safe for
all valid JSON values including those with backticks.

- Replace String.raw pattern in dag-workflow.yaml example
- Replace String.raw pattern in archon-workflow-builder.yaml template
- Add CAUTION bullet in workflow-dag.md Script Node section
- Add Silent Failures item coleam00#14 in parameter-matrix.md
- Add Starlight caution aside in script-nodes.md
- Extend script bodies bullet in variables.md
- Regenerate bundled-defaults.generated.ts

Fixes coleam00#1427
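The failure mode is plain JavaScript template-literal semantics and can be demonstrated directly. The snippet below simulates the two generated-script shapes with string concatenation (the real substitution lives in the executor) and checks which one parses.

```typescript
// A substituted JSON value containing a backtick — the character that
// terminates a String.raw template early.
const nodeOutput = JSON.stringify({ cmd: "run `bun test`" });

// Broken pattern: the first backtick inside the JSON ends the raw template,
// leaving trailing tokens that make the generated script a syntax error.
const brokenScript = "const data = JSON.parse(String.raw`" + nodeOutput + "`);";

// Safe pattern: JSON is valid JS expression syntax, so direct assignment
// parses for all valid JSON values, backticks included.
const safeScript = "const data = " + nodeOutput + ";";

function parses(src: string): boolean {
  try {
    new Function(src); // throws SyntaxError if the generated script is malformed
    return true;
  } catch {
    return false;
  }
}
```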

* docs: fix Rule 6 in generate-yaml prompt to distinguish bun vs uv patterns

Rule 6 still referenced JSON.parse after the example was updated to direct
assignment, creating a contradiction for the AI code generator. Update the
prose to explicitly distinguish TypeScript/bun (direct assignment) from
Python/uv (json.loads), matching the updated embedded example.
@shaun0927
Author

Updated the PR description with the missing template sections.

No additional code change in this comment — the bot's P1 findings (treat paused as resumable in pre-exec guard, treat paused as non-terminal in finalization guard, abort AI streaming on cancel but not on paused, run pre-exec guard before node_started log/event emission) were addressed in the prior follow-ups (70d5f0f5, 79ce6152, 493a2713, 26973b14).

Current behaviour on the branch HEAD:

  • AI streaming abort: if (streamStatus === null || (streamStatus !== 'running' && streamStatus !== 'paused')) { nodeAbortController.abort(); } explicitly excludes paused from the abort condition for AI nodes.
  • Finalization guard: if (effectiveStatus !== 'paused') keeps approval/interactive flows on the resumable path.
  • Pre-execution gate runs before node_started events for bash/script nodes; deleted runs (status null) fail closed.
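The gate behavior in those bullets can be sketched as a small decision function. This is an illustration of the described contract only; the status union and `shouldStartNode` name are assumptions, not the executor's real types.

```typescript
// Sketch of the fail-closed pre-execution gate for bash/script nodes:
// runs before any node_started log/event emission.
type RunStatus = "running" | "paused" | "cancelling" | "cancelled" | "completed" | "failed";

function shouldStartNode(status: RunStatus | null): { start: boolean; reason?: string } {
  if (status === null) {
    // Deleted run record: fail closed rather than start a subprocess.
    return { start: false, reason: "run record missing; failing closed" };
  }
  if (status === "cancelled" || status === "cancelling") {
    return { start: false, reason: `workflow is ${status}; node not started` };
  }
  // paused is resumable: the node may legitimately run after resume.
  return { start: true };
}
```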

@Wirasm
Collaborator

Wirasm commented Apr 27, 2026

Review Summary

Verdict: ready-to-merge

Your change adds proper cancellation propagation for bash and script nodes in the DAG executor, matching the pattern already used for AI nodes. The Promise.race mid-layer watcher is clean, the pre-execution status checks are well-placed, and CodeRabbit feedback has been fully addressed. The PR is solid — a couple of minor polish items below, neither blocking.

Blocking issues

(none)

Suggested fixes

  • packages/workflows/src/dag-executor.ts:1058–1066 and packages/workflows/src/dag-executor.ts:1238–1246: The pre-execution status-check catch blocks log the error but then proceed unconditionally. If getWorkflowRunStatus() fails (DB timeout, connection issue), the user gets no signal that their cancellation was supposed to stop execution but didn't. Consider surfacing a non-blocking warning via safeSendMessage in the catch block, or retrying the check once before proceeding. The executeBashNode and executeScriptNode paths are identical here — fix both.

Minor / nice-to-have

  • packages/workflows/src/dag-executor.ts:2253–2257: Mid-layer cancel watcher logs at warn level while the node-level checks use error level for the same class of failure. For consistency, align the level (or leave a comment explaining the intentional difference).

Compliments

  • The Promise.race([Promise.allSettled(...), cancelWatcher]) mid-layer cancellation pattern is elegant and well-explained in the PR description.
  • All existing error/event naming conventions are respected (bash_node.status_check_failed, dag.layer_cancel_status_check_failed, etc.).
  • The PR stays tightly scoped — all changes serve the single cancellation-propagation goal.

Reviewed via maintainer-review-pr workflow (Pi/Minimax). Aspects run: error-handling.
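The mid-layer pattern praised above can be sketched in isolation. A minimal, self-contained version under stated assumptions: `isCancelled` stands in for the executor's DB status poll, and the interval is a parameter rather than the real 10s constant.

```typescript
// Sketch of the Promise.race([Promise.allSettled(...), cancelWatcher]) pattern:
// the layer finishes normally, or the watcher wins the race on cancel.
type LayerOutcome = "layer_done" | "cancelled";

async function runLayerWithCancelWatcher(
  layerTasks: Array<Promise<unknown>>,
  isCancelled: () => Promise<boolean>,
  pollMs: number,
): Promise<LayerOutcome> {
  let timer: ReturnType<typeof setInterval> | undefined;
  const cancelWatcher = new Promise<LayerOutcome>((resolve) => {
    timer = setInterval(() => {
      void isCancelled().then((cancelled) => {
        if (cancelled) resolve("cancelled");
      });
    }, pollMs);
  });
  // allSettled never rejects, so the race ends only on completion or cancel.
  const layerDone: Promise<LayerOutcome> = Promise.allSettled(layerTasks).then(
    () => "layer_done",
  );
  try {
    return await Promise.race([layerDone, cancelWatcher]);
  } finally {
    if (timer !== undefined) clearInterval(timer); // stop polling either way
  }
}
```

Note the `finally` cleanup: without it, the interval would keep the process polling after the layer completes.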

…s/experimental/

Move two repo-scoped workflows that were sitting untracked at the workflow
root into a dedicated subfolder. Subfolder grouping is supported by the
loader (1 level deep, resolution by filename), so workflow names are
unchanged and the /release skill still resolves archon-release correctly.

Files moved:
- archon-fix-github-issue-experimental.yaml — Path-A variant of the
  issue-fix workflow used today to land coleam00#1434, coleam00#1435, coleam00#1438.
- archon-release.yaml — the live release workflow used by the /release
  skill end-to-end (validate -> binary smoke -> version bump -> changelog
  -> approval -> commit -> PR -> tag -> Homebrew formula update).
@Wirasm
Collaborator

Wirasm commented Apr 27, 2026

There is a merge conflict that needs resolving

avro198 and others added 3 commits April 27, 2026 21:32
…des (coleam00#1387)

executeBashNode previously only merged explicit envVars on top of
process.env. The three well-known workflow directories (artifactsDir,
logDir, baseBranch) were passed as function parameters and used for
compile-time substitution of $ARTIFACTS_DIR / $LOG_DIR / $BASE_BRANCH
in the script body, but were never added to the subprocess environment.

As a result, any script that relied on shell-runtime expansion — e.g.
JSON_FILE="${ARTIFACTS_DIR}/foo.output.json" inside a heredoc, an
inherited helper script, or a `bash -c` subshell — saw the variable
unset and silently fell back to its default (typically an empty string
or "."), writing artifacts to the workflow cwd instead of the nominal
artifacts directory.

Always build subprocessEnv from process.env plus the three well-known
directories, then allow explicit envVars to override. Compile-time
substitution behavior is unchanged; existing scripts that do not
reference these variables are unaffected; user-supplied envVars still
win on conflict.
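The merge order described above is a three-layer spread. This sketch assumes the well-known directory names from the commit message; `buildSubprocessEnv` is an illustrative name, not the executor's real helper.

```typescript
// Sketch of the env-merge order: process.env base, then the three well-known
// workflow directories, then explicit envVars (which win on conflict).
function buildSubprocessEnv(
  base: Record<string, string>,
  dirs: { artifactsDir: string; logDir: string; baseBranch: string },
  envVars: Record<string, string> = {},
): Record<string, string> {
  return {
    ...base,
    ARTIFACTS_DIR: dirs.artifactsDir,
    LOG_DIR: dirs.logDir,
    BASE_BRANCH: dirs.baseBranch,
    ...envVars, // user-supplied values still override on conflict
  };
}
```

With this order, shell-runtime expansions like `"${ARTIFACTS_DIR}/foo.output.json"` inside heredocs or subshells see the same value the compile-time substitution uses.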
…oleam00#1426)

* fix(workflow): substitute \$nodeId.output refs in approval messages

Approval node messages were emitted as raw strings, bypassing the
substituteNodeOutputRefs() pass that prompt/bash/loop/cancel nodes
all run. This made interactive workflows like atlas-onboard show
literal "\$gather-context.output.repo_name" placeholders to humans
at HITL gates, leaving them unable to know what they were approving.

Fix: rendered the approval.message through substituteNodeOutputRefs
once at the top of the standard approval gate path, then used the
resolved string in all 4 emission sites (safeSendMessage,
createWorkflowEvent, pauseWorkflowRun, event-emitter).

Test: new dag-executor.test case wires a structured-output upstream
node into an approval node and asserts pauseWorkflowRun receives the
substituted message ("Repo: hcr-els | App: CCELS | Port: 3012")
rather than the literal placeholders.

Repro: any workflow with an approval node whose message references
\$nodeId.output[.field]. Observed in the wild on atlas-onboard's
confirm-context HITL gate.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
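The render-once pattern can be sketched with a toy substitution. The regex and outputs shape below are assumptions for illustration; `substituteNodeOutputRefs` exists in the codebase, but its real signature is not shown here.

```typescript
// Minimal sketch: resolve $nodeId.output.field refs once, then reuse the
// resolved string at every emission site instead of emitting the raw message.
function substituteNodeOutputRefs(
  message: string,
  outputs: Record<string, Record<string, string>>,
): string {
  return message.replace(
    /\$([\w-]+)\.output\.([\w-]+)/g,
    (whole, nodeId: string, field: string) => outputs[nodeId]?.[field] ?? whole,
  );
}
```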

* test(workflow): extend approval-substitution test to cover all 4 emission sites

Per CodeRabbit review: the original test only verified pauseWorkflowRun
received the substituted message, but the fix touches 4 emission sites.
A future regression at safeSendMessage / createWorkflowEvent / event-emitter
would silently leave the test passing while users still saw raw $node.output
placeholders.

Adds two additional assertions:
- platform.sendMessage prompt contains substituted message + does NOT
  contain literal $gather-context.output placeholders
- The persisted approval_requested workflow event's data.message is
  substituted

Event-emitter assertion deferred (no existing pattern for spying on the
global emitter in this test file). Covering two of the three secondary
surfaces closes the practical regression risk — both are user-visible
(chat prompt + audit-log event); the emitter is internal-only.

Test count: 7 pass / 22 expect() (was 18). Full suite 193 pass / 353
expect() — no regressions.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…m00#1286) (coleam00#1367)

* feat(workflows): expose $LOOP_PREV_OUTPUT in loop node prompts (coleam00#1286)

Adds a new substitution variable that carries the previous loop iteration's
cleaned output into the next iteration's prompt. Empty on iteration 1; the
prior iteration's output (after stripCompletionTags) on iteration 2+.

Why: fresh_context: true loops have no way to reference what the previous
pass produced or why it failed without dragging the full session forward.
$LOOP_PREV_OUTPUT closes that gap with zero session-cost — same trust
boundary as $nodeId.output, no new external surface.

Changes:
- packages/workflows/src/executor-shared.ts: substituteWorkflowVariables
  accepts a 10th positional loopPrevOutput arg and substitutes
  $LOOP_PREV_OUTPUT (defaults to '').
- packages/workflows/src/dag-executor.ts: executeLoopNode passes
  lastIterationOutput on iteration 2+ (and explicit '' on iteration 1 /
  the first iteration of an interactive resume, since lastIterationOutput
  is a per-call variable that does not survive resume metadata).
- Unit tests: 3 new cases in executor-shared.test.ts.
- Integration tests: 2 new cases in dag-executor.test.ts verifying the
  prompt sent to the AI on iter 1 vs iter 2, and that the value reflects
  cleaned output (no <promise> tags).
- Docs: variables.md, loop-nodes.md (new "Retry-on-failure" pattern),
  CLAUDE.md variable reference.

Backward compatibility: prompts that don't reference $LOOP_PREV_OUTPUT are
unaffected. All 843 workflow tests + type-check + lint + format:check +
bun run validate pass locally.
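The variable's iteration semantics can be modeled in miniature. A simplified sketch only: the real `substituteWorkflowVariables` takes ten positional args, and only the new `loopPrevOutput` behavior is shown.

```typescript
// Sketch of $LOOP_PREV_OUTPUT substitution: empty on iteration 1 (and on the
// first iteration of an interactive resume, where prior output is not
// persisted), the previous iteration's cleaned output on iteration 2+.
function substituteLoopPrevOutput(prompt: string, loopPrevOutput = ""): string {
  return prompt.split("$LOOP_PREV_OUTPUT").join(loopPrevOutput);
}
```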

* docs: address coderabbit review on variables/loop-nodes

- variables.md: include $LOOP_PREV_OUTPUT in substitution-order list and
  availability table to match the new variable row at line 30
- loop-nodes.md: document the interactive-resume exception where the first
  iteration after an approval-gate resume still receives an empty
  $LOOP_PREV_OUTPUT regardless of iteration number (per dag-executor.ts
  L1781-1783 where i === startIteration always clears prev output)

* docs(changelog): add Unreleased entry for $LOOP_PREV_OUTPUT (coleam00#1367 review)

* test(loop): add resume-from-approval integration test for $LOOP_PREV_OUTPUT (coleam00#1367 review)

Per maintainer-review-pr suggestion (Wirasm): two-call integration test
covering the resume-from-approval scenario.

  - Call 1: fresh interactive loop pauses at the gate after iteration 1 and
    asserts $LOOP_PREV_OUTPUT substitutes to empty on iter 1 (no prior
    output) plus the gate pause is recorded.
  - Call 2: resumed run with metadata.approval populated. The first
    resumed iteration must substitute $LOOP_PREV_OUTPUT to '', NOT to the
    paused run's iter-1 output (which lived in a different process and is
    not persisted). $LOOP_USER_INPUT still flows through as normal.

Locks the documented invariant at dag-executor.ts:1769-1772.

---------

Co-authored-by: voidborne-d <DottyEstradalco@allergist.com>
@shaun0927 shaun0927 force-pushed the fix/cancel-propagation-bash-script-nodes branch from 26973b1 to 951ce5b on April 28, 2026 04:11
@shaun0927
Author

Rebased on latest dev. Squashed the five review-cycle commits into one so the history doesn't carry the incremental fixups; the diff is unchanged.

One small adaptation: the per-node streaming status check now calls the upstream shouldContinueStreamingForStatus helper that landed on dev — same intent (paused tolerated, everything else aborts the stream), just routed through the helper.

bun run validate clean locally (type-check, lint, workflows tests all pass).


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
packages/workflows/src/dag-executor.test.ts (1)

1275-1276: Prefer an interface for the mocked event shape.

This test repeats the inline object-shape type ({ event_type: string }); extract it to a named interface to align with the repo's TypeScript rules.

♻️ Proposed refactor
+    interface WorkflowEventTypeOnly {
+      event_type: string;
+    }
     const eventTypes = (
-      store.createWorkflowEvent as Mock<(event: { event_type: string }) => Promise<void>>
-    ).mock.calls.map((call: [{ event_type: string }]) => call[0].event_type);
+      store.createWorkflowEvent as Mock<(event: WorkflowEventTypeOnly) => Promise<void>>
+    ).mock.calls.map((call: [WorkflowEventTypeOnly]) => call[0].event_type);

As per coding guidelines, "Use interface for defining object shapes in TypeScript".


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a0d5414c-2b16-470a-87a5-a0374458f0b6

📥 Commits

Reviewing files that changed from the base of the PR and between 26973b1 and 951ce5b.

📒 Files selected for processing (2)
  • packages/workflows/src/dag-executor.test.ts
  • packages/workflows/src/dag-executor.ts

Comment thread packages/workflows/src/dag-executor.ts Outdated
Comment on lines +1226 to +1246
// Pre-execution cancel check — avoid starting work if workflow is already cancelled
try {
  const preStatus = await deps.store.getWorkflowRunStatus(workflowRun.id);
  if (preStatus !== 'running') {
    const effectiveStatus = preStatus ?? 'deleted';
    getLog().info(
      { nodeId: node.id, status: effectiveStatus },
      'bash_node.cancelled_before_exec'
    );
    return {
      state: 'completed',
      output: `Bash node '${node.id}' skipped: workflow ${effectiveStatus}`,
    };
  }
} catch (err) {
  getLog().error(
    { err: err as Error, workflowRunId: workflowRun.id, nodeId: node.id },
    'bash_node.status_check_failed'
  );
  // Non-fatal — proceed if status check fails
}


⚠️ Potential issue | 🟠 Major

Fail closed when the workflow-status probe errors.

Line 1245 and Line 1426 still proceed to spawn subprocess work after getWorkflowRunStatus() fails. That reintroduces the cancellation gap during store outages: a cancelled/paused/deleted run can still launch bash/bun/uv. Return a failed/skipped result or rethrow here instead of continuing.

Proposed fix
   } catch (err) {
     getLog().error(
       { err: err as Error, workflowRunId: workflowRun.id, nodeId: node.id },
       'bash_node.status_check_failed'
     );
-    // Non-fatal — proceed if status check fails
+    return {
+      state: 'failed',
+      output: '',
+      error: `Bash node '${node.id}' could not verify workflow status before execution`,
+    };
   }

Apply the same shape in executeScriptNode(...).

As per coding guidelines: "Prefer throwing early with a clear error for unsupported or unsafe states; never silently swallow errors (Fail Fast + Explicit Errors)".

Also applies to: 1407-1427


Comment on lines +2922 to +2939
if (!Array.isArray(raceResult)) {
  const effectiveStatus = raceResult;
  getLog().info(
    {
      workflowRunId: workflowRun.id,
      layerIdx,
      totalLayers: layers.length,
      status: effectiveStatus,
    },
    'dag.cancel_detected_mid_layer'
  );
  await safeSendMessage(
    platform,
    conversationId,
    `⚠️ **Workflow ${effectiveStatus}** during layer ${String(layerIdx + 1)}/${String(layers.length)}. Some nodes in this layer may still be finishing.`,
    { workflowId: workflowRun.id }
  );
  break;


⚠️ Potential issue | 🔴 Critical

Latch the observed stop status before finalization.

Once Line 2923 has observed a non-running state and broken the layer, this execution must never finalize the run. Right now skipIfStatusChanged() re-reads the store, so a fast approval/resume can flip the status back to running and let this executor call failWorkflowRun()/completeWorkflowRun() using a partial nodeOutputs snapshot.

Proposed fix
+  let observedStopStatus: string | null = null;
+
   for (let layerIdx = 0; layerIdx < layers.length; layerIdx++) {
     // ...
     if (!Array.isArray(raceResult)) {
-      const effectiveStatus = raceResult;
+      observedStopStatus = raceResult;
+      const effectiveStatus = observedStopStatus;
       getLog().info(
         {
           workflowRunId: workflowRun.id,
@@
   async function skipIfStatusChanged(logEvent: string): Promise<boolean> {
-    const status = await deps.store.getWorkflowRunStatus(workflowRun.id);
+    const status = observedStopStatus ?? (await deps.store.getWorkflowRunStatus(workflowRun.id));
     if (status === 'running') return false;
     getLog().info({ workflowRunId: workflowRun.id, status: status ?? 'deleted' }, logEvent);
     if (status !== 'paused') {
       getWorkflowEventEmitter().unregisterRun(workflowRun.id);
     }

Based on learnings: "When a process cannot reliably distinguish 'actively running elsewhere' from 'orphaned by a crash', it must not autonomously mark work as failed/cancelled/abandoned based on a timer or staleness guess; instead surface the ambiguous state and provide a one-click action".

Also applies to: 3017-3025


@shaun0927 shaun0927 force-pushed the fix/cancel-propagation-bash-script-nodes branch from 951ce5b to c5b539d on April 28, 2026 04:55
@shaun0927
Author

Pushed an update — picks up the items CR flagged on the previous round:

  • dag-executor.ts bash + script pre-exec status probe: now fail-closed. If getWorkflowRunStatus itself throws, the node returns { state: 'completed', output: "... skipped: workflow status probe failed" } instead of swallowing the error and proceeding to spawn a subprocess we can't tear down on cancel. Same change applied symmetrically to executeBashNode and executeScriptNode.
  • dag-executor.ts latched stop status: introduced a latchedStopStatus local in the layer-loop scope. The mid-layer cancel watcher and the between-layer status check both write to it when they see a non-running status. skipIfStatusChanged now consults the latch before re-querying the store, so a concurrent resume / store flake can't race the executor into completing a run that the loop has already decided to abort.
  • dag-executor.test.ts event-shape mock: replaced the inline { event_type: string } with a small named EventTypeOnly interface to match the repo TS rule.

Local: bun run type-check clean, lint clean, full @archon/workflows test suite 7/7 batches pass.
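
The latch consult described above can be sketched as (latchedStopStatus and the live store read are stubbed; names mirror the comment, not the actual executor code):

```typescript
// Once the loop has observed a stop, that observation wins over any fresher
// 'running' read, so a concurrent resume or store flake cannot race the
// executor into finalizing a run it already decided to abort.
async function resolveStatus(
  latchedStopStatus: string | null,
  getLive: () => Promise<string | undefined>
): Promise<string | undefined> {
  return latchedStopStatus ?? (await getLive());
}
```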


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c5b539d894

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread packages/workflows/src/dag-executor.ts Outdated
Comment on lines +1247 to +1249
return {
state: 'completed',
output: `Bash node '${node.id}' skipped: workflow status probe failed`,


P1: Fail node instead of completing on status probe error

This catch path marks the bash node as completed even though execution never started, so a transient getWorkflowRunStatus failure can silently turn skipped work into success; downstream nodes may consume the synthetic output and the workflow can be finalized as successful with missing execution. The same pattern exists in executeScriptNode, so status-probe errors should not be recorded as successful completion.


`⚠️ **Workflow ${effectiveStatus}** during layer ${String(layerIdx + 1)}/${String(layers.length)}. Some nodes in this layer may still be finishing.`,
{ workflowId: workflowRun.id }
);
break;

P1: Wait for layer tasks before breaking on mid-layer stop

Breaking here exits the layer without awaiting the in-flight Promise.allSettled(...), so nodes in this layer keep running after executeDagWorkflow returns. In approval flows, the API auto-resume path dispatches a new run immediately, and resume seeding only includes node outputs already persisted as node_completed, so unfinished nodes from the old executor can be re-executed concurrently and duplicate side effects.


@shaun0927
Author

Pushed an update — picks up Codex's two P1 follow-ups:

  • Status-probe error now fails the node, not completes it. Both executeBashNode and executeScriptNode previously returned state: 'completed' on a getWorkflowRunStatus exception, which would silently turn skipped work into a synthetic success that downstream nodes could consume. Now they return state: 'failed' with an explicit error of <node> skipped: workflow status probe failed (<probeErr.message>) so the workflow's failure aggregation surfaces the real cause.
  • Drain in-flight layer tasks before breaking on mid-layer stop. The previous code did Promise.race([Promise.allSettled(...), cancelWatcher]) and broke as soon as cancelWatcher won — but Promise.allSettled keeps running its inner promises regardless of who wins the race. That meant nodes from the cancelled layer kept executing after executeDagWorkflow returned, so a follow-up resume could re-dispatch the same nodes concurrently (their node_completed events hadn't been written yet). The allSettled is now held in layerSettledPromise; on a stop, we await it before break and persist the drained outputs to nodeOutputs so the run-id's state is consistent before the executor exits. Also adjusted the user-facing message accordingly.

Local: bun run type-check clean, bun run lint clean, @archon/workflows test 198/198 pass.
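
The race-and-drain shape can be sketched as (stubbed tasks and stop watcher; cancelWatcher/layerSettledPromise mirror the names above, not the actual executor code):

```typescript
// Hold the allSettled promise so it can still be awaited after a stop wins
// the race — Promise.allSettled keeps running its inner promises regardless
// of who wins.
async function runLayer<T>(
  tasks: Promise<T>[],
  watchForStop: () => Promise<string> // resolves with a non-running status
): Promise<{ stopped?: string; results: PromiseSettledResult<T>[] }> {
  const layerSettledPromise = Promise.allSettled(tasks);
  const raceResult = await Promise.race([layerSettledPromise, watchForStop()]);
  if (Array.isArray(raceResult)) {
    return { results: raceResult }; // layer finished before any stop
  }
  // Stop detected mid-layer: drain the in-flight tasks before returning so
  // a later resume sees consistent node outputs, not half-finished work.
  const results = await layerSettledPromise;
  return { stopped: raceResult, results };
}
```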

@shaun0927 shaun0927 force-pushed the fix/cancel-propagation-bash-script-nodes branch 2 times, most recently from baa323f to 3929775 on April 28, 2026 05:39

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 39297758a8


Comment thread packages/workflows/src/dag-executor.ts Outdated
Comment on lines +1251 to +1254
return {
state: 'failed',
output: '',
error: `Bash node '${node.id}' skipped: workflow status probe failed (${probeErr.message})`,

P2: Emit node_failed on bash status-probe failure

In executeBashNode, the pre-execution status-probe error path returns a failed NodeOutput immediately but never persists/emits a node_failed event. When getWorkflowRunStatus transiently fails, the workflow can end as failed while step-level telemetry is missing, so consumers that derive DAG state from node_* events cannot attribute the failure to this node. The same early-return pattern exists in executeScriptNode, so both probe-failure paths should emit/persist node failure before returning.


@shaun0927 shaun0927 force-pushed the fix/cancel-propagation-bash-script-nodes branch from 3929775 to 4e2ce7e on April 28, 2026 05:56
@shaun0927
Author

Pushed an update — picks up Codex's latest P2:

  • Status-probe error path now emits node_failed before returning. Both executeBashNode and executeScriptNode now persist a createWorkflowEvent({ event_type: 'node_failed', step_name: node.id, data: { error, type, reason: 'status_probe_failed' } }) and emit a matching event on the in-memory event emitter before the early return. Consumers that derive DAG state from node_started → node_completed/failed transitions can now attribute the workflow failure to the right node instead of seeing it disappear.

Local: bun run type-check clean, lint clean, dag-executor.test.ts 198/198 pass.
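
In sketch form (persist/emit are stubbed callbacks standing in for createWorkflowEvent and the in-memory emitter; the NodeOutput shape is simplified):

```typescript
interface WorkflowEventRecord {
  event_type: string;
  step_name: string;
  data: Record<string, unknown>;
}

// Record the node failure durably and emit it live before the early return,
// so consumers replaying node_* events can attribute the workflow failure
// to this node.
async function failNodeOnProbeError(
  nodeId: string,
  probeErr: Error,
  persist: (e: WorkflowEventRecord) => Promise<void>,
  emit: (e: WorkflowEventRecord) => void
) {
  const event: WorkflowEventRecord = {
    event_type: "node_failed",
    step_name: nodeId,
    data: { error: probeErr.message, reason: "status_probe_failed" },
  };
  await persist(event); // durable record for event-sourced consumers
  emit(event); // live subscribers see the attributed failure
  return {
    state: "failed" as const,
    output: "",
    error: `Node '${nodeId}' skipped: workflow status probe failed (${probeErr.message})`,
  };
}
```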


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 4e2ce7ef41


Comment on lines +1237 to +1239
return {
state: 'completed',
output: `Bash node '${node.id}' skipped: workflow ${effectiveStatus}`,

P1: Return skipped state when paused before node execution

Fresh evidence in this commit: the pre-exec guard now returns state: 'completed' when status is non-running (same pattern exists in both bash and script handlers). In a parallel layer where an approval node pauses the run, sibling bash/script nodes can be marked successful with synthetic "skipped" output; if the run is resumed to running before the 10s layer watcher or between-layer probe observes the stop, downstream all_success dependencies may run against fake output and the real subprocess work is never executed in this pass.


Comment on lines +3112 to 3115
const status = latchedStopStatus ?? (await deps.store.getWorkflowRunStatus(workflowRun.id));
if (status === 'running') return false;
getLog().info({ workflowRunId: workflowRun.id, status: status ?? 'deleted' }, logEvent);
if (status !== 'paused') {

P2: Recheck live status before unregistering a latched stop

Using latchedStopStatus as authoritative here can unregister events for an active resumed run: approval transitions paused runs through failed, and if an older executor latches that state mid-layer while a newer resume invocation has already set the same run back to running, this branch still calls unregisterRun. Since routing is keyed by runId, removing that mapping can suppress subsequent workflow/node events from the resumed execution for conversation subscribers.


@shaun0927 shaun0927 force-pushed the fix/cancel-propagation-bash-script-nodes branch from 4e2ce7e to d2a2d5c on April 28, 2026 06:09
@shaun0927
Author

Pushed an update — picks up Codex's two latest findings:

  • Pre-exec stop now returns state: 'skipped' instead of 'completed' for both bash and script nodes. The previous behavior let a sibling-paused approval flow leak synthetic "completed" outputs to downstream all_success deps. skipped falls through checkTriggerRule correctly, so resumed runs pick the work back up cleanly.
  • skipIfStatusChanged re-queries live status before unregistering the run from the event emitter. The latched stop is still authoritative for the skip-finalization decision, but unregistering events is gated on a fresh getWorkflowRunStatus — if a newer resume invocation already flipped the run back to running, we keep the SSE/event subscription alive for the resumed execution and emit a dag.skip_unregister_run_resumed info line so the race is observable.

Local: bun run type-check clean, lint clean, dag-executor.test.ts 198/198 pass.
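
The final gate shape can be sketched as ('skipped' when the run has stopped, 'failed' when the probe itself errors; the store call is a stubbed callback and names are illustrative):

```typescript
interface GateResult {
  state: "skipped" | "failed";
  output: string;
  error?: string;
}

// Returns null when it is safe to spawn the subprocess; otherwise a result
// that the caller returns without starting any work.
async function preExecGate(
  nodeId: string,
  getStatus: () => Promise<string | undefined>
): Promise<GateResult | null> {
  let status: string | undefined;
  try {
    status = await getStatus();
  } catch (err) {
    // Fail closed, but as a visible failure rather than a synthetic success.
    return {
      state: "failed",
      output: "",
      error: `Node '${nodeId}' skipped: workflow status probe failed (${(err as Error).message})`,
    };
  }
  if (status !== "running") {
    // 'skipped' falls through trigger-rule checks, so a resumed run can
    // re-dispatch this node instead of consuming synthetic output.
    return { state: "skipped", output: `Node '${nodeId}' skipped: workflow ${status ?? "deleted"}` };
  }
  return null;
}
```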


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d2a2d5c594


Comment on lines +3125 to +3129
const liveStatus = await deps.store
  .getWorkflowRunStatus(workflowRun.id)
  .catch(() => undefined);
if (liveStatus !== 'running') {
  getWorkflowEventEmitter().unregisterRun(workflowRun.id);

P2: Do not unregister run mapping on status probe errors

The resumed-run safeguard is bypassed when the live status recheck fails: .catch(() => undefined) followed by liveStatus !== 'running' still calls unregisterRun. In a resume race, a transient DB error here can remove the active run’s conversation mapping and suppress subsequent node_*/workflow_* events for subscribers, which is exactly the race this block is trying to prevent.
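
A sketch of the distinction this finding asks for (the live store read is a stubbed callback; only a positive non-running observation should unregister):

```typescript
// A probe error keeps the run's event mapping alive; only a successful read
// that reports the run stopped (or undefined, i.e. deleted) unregisters it.
async function shouldUnregister(
  getLive: () => Promise<string | undefined>
): Promise<boolean> {
  let live: string | undefined;
  try {
    live = await getLive();
  } catch {
    return false; // transient store error: keep the subscription alive
  }
  return live !== "running"; // undefined ⇒ run deleted ⇒ safe to unregister
}
```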


Comment on lines +3026 to +3030
for (const result of drainedResults) {
  if (result.status === 'fulfilled') {
    const { nodeId, output } = result.value;
    if (output.costUsd !== undefined) totalCostUsd += output.costUsd;
    nodeOutputs.set(nodeId, output);

P2: Process rejected tasks when draining a stopped layer

When the cancel watcher wins, the drain path only records fulfilled entries and drops rejected ones. A node task can still reject (for example, safeSendMessage throws on FATAL platform errors), and unlike the normal layer-processing path those rejections are not converted into failures. That leaves missing entries in nodeOutputs, so a later resume can treat those nodes as never-run and re-dispatch work that already partially executed.
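
The fix this suggests can be sketched as (simplified NodeOutput shape; the parallel nodeIds array is an illustrative way to attribute rejections):

```typescript
interface DrainedOutput {
  state: "completed" | "failed";
  output: string;
  error?: string;
}

// Record rejected tasks as failed outputs instead of dropping them, so a
// later resume does not treat a node that already ran (possibly with side
// effects) as never-run and re-dispatch it.
function collectDrainedResults(
  drained: PromiseSettledResult<{ nodeId: string; output: DrainedOutput }>[],
  nodeIds: string[]
): Map<string, DrainedOutput> {
  const outputs = new Map<string, DrainedOutput>();
  drained.forEach((result, i) => {
    if (result.status === "fulfilled") {
      outputs.set(result.value.nodeId, result.value.output);
    } else {
      outputs.set(nodeIds[i], {
        state: "failed",
        output: "",
        error: result.reason instanceof Error ? result.reason.message : String(result.reason),
      });
    }
  });
  return outputs;
}
```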




Development

Successfully merging this pull request may close these issues.

bug(workflows): DAG executor does not propagate cancellation to bash/script nodes or mid-layer parallel execution

8 participants