security(codex): pin vendor binary SHA-256 on first use and verify on load (#1245) #1250

shaun0927 wants to merge 28 commits into coleam00:dev
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the CodeRabbit settings.
📝 Walkthrough

Add SHA-256 verification and deterministic per-binary pinning for Codex binaries, with new exported helpers.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller
    participant Resolver as resolveCodexBinaryPath
    participant Verifier as verifyOrPinBinaryHash
    participant Reader as readFile
    participant FS as Filesystem
    participant Crypto
    Caller->>Resolver: request codex binary path
    Resolver->>Resolver: assemble candidate paths
    loop per candidate
        Resolver->>Verifier: verifyOrPinBinaryHash(candidate)
        Verifier->>Reader: readFile(candidate)
        Reader->>FS: read binary bytes
        FS-->>Reader: Buffer
        Reader-->>Verifier: Buffer
        Verifier->>Crypto: compute SHA-256(Buffer)
        Crypto-->>Verifier: digest
        alt trust-pin exists
            Verifier->>FS: read trust-pin file
            FS-->>Verifier: storedHash or error
            Verifier->>Verifier: validate format & symlink status
            Verifier->>Verifier: compare storedHash vs digest
            alt match
                Verifier-->>Resolver: verified
            else mismatch
                Verifier-->>Resolver: throw error (hash mismatch)
            end
        else trust-pin missing
            Verifier->>FS: create trust-pin (exclusive write)
            FS-->>Verifier: success or error
            alt write success
                Verifier-->>Resolver: pinned
            else write fails
                Verifier-->>Resolver: warn (non-fatal)
            end
        end
    end
    Resolver-->>Caller: return first verified candidate or throw
```
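The loop above can be sketched in TypeScript. This is a minimal illustration of the trust-on-first-use flow only, assuming a simplified `verifyOrPin` helper and a plain sidecar path; the PR's actual resolver adds symlink checks, digest-format validation, and structured logging.

```typescript
import { createHash } from 'node:crypto';
import { existsSync, readFileSync, writeFileSync } from 'node:fs';

// Hypothetical simplified sketch of the diagram's verify-or-pin flow.
export function verifyOrPin(binaryPath: string, hashPath: string): 'verified' | 'pinned' {
  const digest = createHash('sha256').update(readFileSync(binaryPath)).digest('hex');
  if (existsSync(hashPath)) {
    const stored = readFileSync(hashPath, 'utf-8').trim();
    if (stored !== digest) {
      throw new Error('Codex binary hash mismatch: possible tampering detected.');
    }
    return 'verified';
  }
  // 'wx' = create exclusively; fails with EEXIST instead of clobbering an existing pin.
  writeFileSync(hashPath, digest + '\n', { flag: 'wx', mode: 0o644 });
  return 'pinned';
}
```

The `{ flag: 'wx' }` option gives the exclusive-create semantics that the diagram's "create trust-pin (exclusive write)" step calls for: an existing pin file is never silently overwritten.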
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related issues
Possibly related PRs
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 6348722a93
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```ts
 */
/** Exported for test spyability (same pattern as fileExists). */
export function verifyOrPinBinaryHash(binaryPath: string): void {
  const hashPath = binaryPath + '.sha256';
```
Store pinned hash outside writable binary directory
verifyOrPinBinaryHash pins the trust file as binaryPath + '.sha256', so the hash lives in the same directory as the binary. In the exact threat model this change targets (another user/process can write the vendor binary directory), an attacker can replace both the binary and the sidecar hash before execution, and the verification will still pass. This means the new check does not actually prevent post-install tampering unless the directory is already protected, so the security hardening is ineffective in writable-dir deployments.
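A minimal sketch of the mitigation this comment points at, assuming a per-user trust directory. The `ARCHON_TRUST_DIR` override and the `~/.archon/trust/codex` layout are illustrative assumptions, not the PR's actual paths:

```typescript
import { createHash } from 'node:crypto';
import { homedir } from 'node:os';
import { join } from 'node:path';

// Hypothetical: keep trust pins under the invoking user's config directory
// rather than next to the binary, so write access to the vendor directory is
// not enough to forge both the binary and its pin.
export function getTrustPinPath(binaryPath: string): string {
  const trustDir = process.env.ARCHON_TRUST_DIR ?? join(homedir(), '.archon', 'trust', 'codex');
  // Key the pin file by a hash of the binary path so each binary gets its own pin.
  const key = createHash('sha256').update(binaryPath).digest('hex');
  return join(trustDir, key + '.sha256');
}
```

The essential property is separation of write domains: the attacker model grants writes to the vendor binary directory, so the pin must live somewhere that model does not reach.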
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/providers/src/codex/binary-resolver.test.ts (1)
24-43: ⚠️ Potential issue | 🟠 Major

Test setup correctly mocks hash verification, but `verifyOrPinBinaryHash` lacks dedicated tests.

The spy setup follows the established pattern and appropriately isolates resolution tests from filesystem access. However, the security-critical `verifyOrPinBinaryHash` function has no dedicated tests covering:

- Hash mismatch detection (throws on tampered binary)
- Hash pinning on first use (writes `.sha256` sidecar)
- Non-fatal handling when the pin file write fails

These scenarios are essential for validating the security guarantees this PR introduces.

Would you like me to generate a test suite for `verifyOrPinBinaryHash` covering these scenarios? I can create tests that mock `readFile` and `fileExists` to simulate hash match, mismatch, and first-use pinning paths.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/providers/src/codex/binary-resolver.test.ts` around lines 24 - 43, Add unit tests for the security-critical function verifyOrPinBinaryHash in the resolver: create tests that (1) simulate a tampered binary by mocking readFile to return content whose computed hash does not match the expected hash and assert that verifyOrPinBinaryHash throws, (2) simulate first-use pinning by mocking fileExists to return false and mocking writeFile to succeed, then assert that verifyOrPinBinaryHash writes the sidecar .sha256 file with the expected hash, and (3) simulate a pin file write failure by mocking writeFile to throw and assert that verifyOrPinBinaryHash does not crash but returns/handles the error path as designed; use the same spy/mocking patterns (spyOn resolver.verifyOrPinBinaryHash, mock readFile, writeFile, fileExists) used elsewhere in the test suite to isolate filesystem access and verify each behavior.
🧹 Nitpick comments (1)
packages/providers/src/codex/binary-resolver.ts (1)
32-40: Consolidate duplicate JSDoc comments.

Lines 32-38 and line 39 both use JSDoc syntax (`/**`). The second comment should be a regular comment since it's supplementary information about why the function is exported.

✨ Suggested fix

```diff
 /**
  * Verify a binary's SHA-256 against a pinned hash, or pin on first use.
  *
  * On first resolution: computes SHA-256 and writes it to a `.sha256` sidecar file.
  * On subsequent loads: verifies the binary matches the pinned hash.
  * Throws if the hash doesn't match (possible tampering).
+ *
+ * Exported for test spyability (same pattern as fileExists).
  */
-/** Exported for test spyability (same pattern as fileExists). */
 export function verifyOrPinBinaryHash(binaryPath: string): void {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/providers/src/codex/binary-resolver.ts` around lines 32 - 40, The file has duplicate JSDoc blocks: the main JSDoc for the function and an extra `/** Exported for test spyability (same pattern as fileExists). */` immediately before the function; change that second JSDoc to a regular single-line or block comment (e.g. `// Exported for test spyability...` or `/* ... */`) so only the primary JSDoc documents verifyOrPinBinaryHash and the supplementary note is not treated as JSDoc.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@packages/providers/src/codex/binary-resolver.test.ts`:
- Around line 24-43: Add unit tests for the security-critical function
verifyOrPinBinaryHash in the resolver: create tests that (1) simulate a tampered
binary by mocking readFile to return content whose computed hash does not match
the expected hash and assert that verifyOrPinBinaryHash throws, (2) simulate
first-use pinning by mocking fileExists to return false and mocking writeFile to
succeed, then assert that verifyOrPinBinaryHash writes the sidecar .sha256 file
with the expected hash, and (3) simulate a pin file write failure by mocking
writeFile to throw and assert that verifyOrPinBinaryHash does not crash but
returns/handles the error path as designed; use the same spy/mocking patterns
(spyOn resolver.verifyOrPinBinaryHash, mock readFile, writeFile, fileExists)
used elsewhere in the test suite to isolate filesystem access and verify each
behavior.
---
Nitpick comments:
In `@packages/providers/src/codex/binary-resolver.ts`:
- Around line 32-40: The file has duplicate JSDoc blocks: the main JSDoc for the
function and an extra `/** Exported for test spyability (same pattern as
fileExists). */` immediately before the function; change that second JSDoc to a
regular single-line or block comment (e.g. `// Exported for test spyability...`
or `/* ... */`) so only the primary JSDoc documents verifyOrPinBinaryHash and
the supplementary note is not treated as JSDoc.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 00263c13-7cc0-48e6-b90e-da8c9599409a
📒 Files selected for processing (2)
- packages/providers/src/codex/binary-resolver.test.ts
- packages/providers/src/codex/binary-resolver.ts
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 67eb24d3d3
```ts
const expected = readFile(hashPath).toString('utf-8').trim();
if (actual !== expected) {
  throw new Error(
    'Codex binary hash mismatch — possible tampering detected.\n' +
      ` binary: ${binaryPath}\n` +
```
Stop leaking sidecar file contents in mismatch errors
In the writable-directory threat model this change targets, an attacker can point codex.sha256 at any file the Archon user can read (for example via symlink), and the mismatch path will include that file’s full contents in the thrown error via expected. That can exfiltrate local secrets into CLI output or centralized logs even when execution is blocked; the sidecar should be parsed/validated as a 64-char hex digest and the raw value should never be echoed.
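One way to close this hole, sketched as an assumed `parsePinnedHash` helper (a `readPinnedHash` wrapper does appear in a later revision of this branch, but this exact function is illustrative): validate the sidecar strictly as a 64-char hex digest and never echo the raw value.

```typescript
// Sketch: validate the sidecar before comparing, and never include its raw
// contents in an error message, since they may be attacker-controlled bytes
// (e.g. a symlink pointed at a readable secret).
export function parsePinnedHash(raw: string): string {
  const candidate = raw.trim().toLowerCase();
  if (!/^[0-9a-f]{64}$/.test(candidate)) {
    // Deliberately omit the raw value from the error.
    throw new Error('Trust-pin file does not contain a valid SHA-256 digest.');
  }
  return candidate;
}
```

With this in place, the mismatch error can safely report the binary path and the freshly computed digest, but only a validated digest (or nothing) from the sidecar side.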
```ts
  getLog().debug({ binaryPath, hash: actual }, 'codex.binary_hash_verified');
} else {
  try {
    writeFileSync(hashPath, actual + '\n', { mode: 0o644 });
```
Refuse symlinked hash paths when pinning
The pinning write uses writeFileSync(hashPath, ...) with default semantics, which follows symlinks. If the binary directory is attacker-writable (the scenario this hardening is meant to cover), an attacker can race in or pre-place a dangling symlink at codex.sha256 so the first-run pin write creates/truncates an arbitrary file as the current user. Open the file with no-follow/exclusive semantics (or lstat + safe create) before writing.
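A sketch of the no-follow/exclusive create the comment asks for. A later revision of this branch does move to `openSync` with `O_EXCL` and `fsConstants.O_NOFOLLOW ?? 0`; the helper below is an assumed shape of that idea, not the PR's exact code.

```typescript
import { closeSync, constants, openSync, writeSync } from 'node:fs';

// Sketch: create the pin with O_CREAT|O_EXCL so any pre-existing path (file or
// symlink, dangling or not) fails hard with EEXIST, and add O_NOFOLLOW where
// the platform defines it so the final path component is never resolved
// through a symlink.
export function pinHashExclusive(hashPath: string, digest: string): void {
  const flags =
    constants.O_WRONLY | constants.O_CREAT | constants.O_EXCL | (constants.O_NOFOLLOW ?? 0);
  const fd = openSync(hashPath, flags, 0o644);
  try {
    writeSync(fd, digest + '\n');
  } finally {
    closeSync(fd);
  }
}
```

`O_EXCL` closes the race the comment describes (a pre-placed symlink at `codex.sha256` can no longer be followed or truncated), while `O_NOFOLLOW` is a defense-in-depth layer that is undefined on Windows, hence the `?? 0` fallback.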
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/providers/src/codex/binary-resolver.test.ts`:
- Around line 126-133: Update the "pins hash on first use when sidecar does not
exist" test to assert the pin side-effect instead of only checking no throw:
spyOn the method that writes the sidecar (e.g., spyOn(resolver, 'writeFile') or
spyOn(fs, 'writeFileSync') depending on implementation) before calling
resolver.verifyOrPinBinaryHash('/tmp/codex'), then expect that the write call
was made with the sidecar filename (e.g., '/tmp/codex.sha256') and the SHA-256
value computed from binaryContent (or assert that a `codex.binary_hash_pinned`
log/event was emitted) to ensure the sidecar is actually written.
- Around line 140-144: The test falsely spies on node:fs.writeFileSync because
binary-resolver.ts uses an ESM named import; create an internal wrapper
following the readFile/fileExists pattern: add a _writeFileSync binding to the
original writeFileSync (from 'node:fs') and expose a thin writeFile(...)
function that delegates to _writeFileSync, update binary-resolver.ts to call
writeFile(...) instead of writeFileSync, and change the test
(binary-resolver.test.ts) to spyOn the module's writeFile wrapper
(spyOn(writeFile)). This ensures the test can mock the implementation while
preserving the original sync call via _writeFileSync.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c6e7e267-35eb-4b21-97c1-70886834176a
📒 Files selected for processing (2)
- packages/providers/src/codex/binary-resolver.test.ts
- packages/providers/src/codex/binary-resolver.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/providers/src/codex/binary-resolver.ts
```ts
// Mock writeFileSync to throw by re-importing and spying on the fs module
const fs = require('node:fs');
const writeSpy = spyOn(fs, 'writeFileSync').mockImplementation(() => {
  throw new Error('EACCES: permission denied');
});
```
🧩 Analysis chain
🌐 Web query:
In Bun test runtime and Node ESM semantics, if module A imports `writeFileSync` via `import { writeFileSync } from 'node:fs'`, will `spyOn(require('node:fs'), 'writeFileSync')` reliably intercept calls inside module A? If not, what mocking pattern is recommended?
💡 Result:
No, `spyOn(require('node:fs'), 'writeFileSync')` will not reliably intercept calls to `writeFileSync` inside module A that uses the ESM import `import { writeFileSync } from 'node:fs'`. Module A captures a direct reference to the `writeFileSync` function at import time via ESM named-import live bindings. The spyOn approach only replaces the `writeFileSync` property on the module object returned by `require('node:fs')`, but module A calls the original captured function reference, not the property on the module object. Bun's test runner follows standard ESM semantics where spyOn replaces object properties but does not affect closed-over or directly imported function references, as confirmed in Bun GitHub issues discussing spyOn behavior with ESM modules.

Recommended mocking pattern: use Bun's `mock.module` to mock the entire `node:fs` module before module A is imported. This intercepts both ESM imports and CJS requires. Example (use `--preload` for hoisting if needed before import):

```ts
import { test, expect, mock } from 'bun:test';

const fsSpy = mock(() => {
  // spy implementation
});

mock.module('node:fs', () => ({
  writeFileSync: fsSpy,
  // include other fs exports as needed
}));

test('intercepts fs call in A', async () => {
  const A = await import('./A.js');
  A.someFunction(); // calls mocked writeFileSync
  expect(fsSpy).toHaveBeenCalled();
});
```
Citations:
- 1: https://bun.sh/docs/test/mocks
- 2: https://bun.com/docs/test/mocks
- 3: https://stackoverflow.com/questions/77869048/mocking-fs-with-buns-test-mock
- 4: `spyOn` is not working as expected (oven-sh/bun#11556)
- 5: mock-fs does not work in Bun (oven-sh/bun#17515)
🏁 Script executed:

```bash
#!/bin/bash
# Find and read the binary-resolver.ts file
fd -p binary-resolver.ts packages/providers/src/codex/ | head -1
```

Repository: coleam00/Archon
Length of output: 106

🏁 Script executed:

```bash
#!/bin/bash
# Get a sense of the file structure first
fd -p "binary-resolver" packages/providers/src/codex/
```

Repository: coleam00/Archon
Length of output: 216

🏁 Script executed:

```bash
#!/bin/bash
# Read the binary-resolver.ts file to check imports and the writeFileSync usage
cat -n packages/providers/src/codex/binary-resolver.ts | head -50
```

Repository: coleam00/Archon
Length of output: 2422

🏁 Script executed:

```bash
#!/bin/bash
# Read the test file around lines 140-144
cat -n packages/providers/src/codex/binary-resolver.test.ts | sed -n '130,160p'
```

Repository: coleam00/Archon
Length of output: 1051

🏁 Script executed:

```bash
#!/bin/bash
# Search for writeFileSync usage in binary-resolver.ts
grep -n "writeFileSync" packages/providers/src/codex/binary-resolver.ts
```

Repository: coleam00/Archon
Length of output: 226

🏁 Script executed:

```bash
#!/bin/bash
# Read the section with writeFileSync usage (likely in verifyOrPinBinaryHash function)
cat -n packages/providers/src/codex/binary-resolver.ts | sed -n '40,70p'
```

Repository: coleam00/Archon
Length of output: 1523
Create an internal wrapper for writeFileSync to enable spyOn mocking in tests.
The current test mocks writeFileSync by spying on require('node:fs').writeFileSync, but binary-resolver.ts imports writeFileSync via ESM named import (import { ... writeFileSync } from 'node:fs'). ESM imports capture a direct reference at load time, so spying on the module object's property won't intercept calls from within the module—this makes the test a false positive.
Follow the existing pattern in the file (like readFile and fileExists): create a writeFile wrapper that delegates to _writeFileSync, then spy on the wrapper in tests.
Required refactor

```diff
# packages/providers/src/codex/binary-resolver.ts
-import { existsSync as _existsSync, readFileSync as _readFileSync, writeFileSync } from 'node:fs';
+import { existsSync as _existsSync, readFileSync as _readFileSync, writeFileSync as _writeFileSync } from 'node:fs';

 /** Wrapper for readFileSync — enables spyOn in tests. */
 export function readFile(path: string): Buffer {
   return _readFileSync(path);
 }

+/** Wrapper for writeFileSync — enables spyOn in tests. */
+export function writeFile(path: string, data: string, options: { mode: number }): void {
+  _writeFileSync(path, data, options);
+}
```

```diff
# packages/providers/src/codex/binary-resolver.ts (in verifyOrPinBinaryHash)
-  writeFileSync(hashPath, actual + '\n', { mode: 0o644 });
+  writeFile(hashPath, actual + '\n', { mode: 0o644 });
```

```diff
# packages/providers/src/codex/binary-resolver.test.ts (lines 140-144)
-  const fs = require('node:fs');
-  const writeSpy = spyOn(fs, 'writeFileSync').mockImplementation(() => {
+  const writeSpy = spyOn(resolver, 'writeFile').mockImplementation(() => {
     throw new Error('EACCES: permission denied');
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/providers/src/codex/binary-resolver.test.ts` around lines 140 - 144,
The test falsely spies on node:fs.writeFileSync because binary-resolver.ts uses
an ESM named import; create an internal wrapper following the
readFile/fileExists pattern: add a _writeFileSync binding to the original
writeFileSync (from 'node:fs') and expose a thin writeFile(...) function that
delegates to _writeFileSync, update binary-resolver.ts to call writeFile(...)
instead of writeFileSync, and change the test (binary-resolver.test.ts) to spyOn
the module's writeFile wrapper (spyOn(writeFile)). This ensures the test can
mock the implementation while preserving the original sync call via
_writeFileSync.
I addressed the current review findings on this branch and pushed the follow-up commit. What changed:

Validation:
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: b4b12c7abe
```ts
const resolvedBinaryPath = realpathSync(binaryPath);
const pinName = createHash('sha256').update(resolvedBinaryPath).digest('hex') + '.sha256';
```
Pin hash by logical binary path
getHashPinPath derives the trust-pin filename from realpathSync(binaryPath), so the pin key changes whenever the resolved target changes. In the exact model this patch addresses (attacker can write the vendor binary directory but not the trust store), an attacker can replace vendor/codex/codex with a symlink to a different executable; the resolver then computes a new pin path, sees no existing pin, and silently pins+executes the attacker binary instead of detecting tampering. The pin key needs to stay bound to the configured/logical path (or inode/device identity checked against prior state), not the mutable symlink target path.
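A sketch of the binding the comment recommends, assuming the trust store keeps its hashed-filename scheme; `pinKeyForLogicalPath` is an illustrative name, not code from this PR.

```typescript
import { createHash } from 'node:crypto';

// Sketch: key the pin by the logical (configured) binary path instead of
// realpathSync(binaryPath). Swapping the binary for a symlink then cannot
// rotate the pin key, so the existing pin is still consulted and a changed
// target surfaces as a hash mismatch rather than a silent first-use re-pin.
export function pinKeyForLogicalPath(logicalBinaryPath: string): string {
  return createHash('sha256').update(logicalBinaryPath).digest('hex') + '.sha256';
}
```

The alternative the comment mentions, binding to inode/device identity, trades path stability for detection of in-place replacement; either way the key must not be derived from attacker-mutable symlink resolution.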
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/providers/src/codex/binary-resolver.ts (1)
154-182: ⚠️ Potential issue | 🟠 Major

Avoid re-hashing the full binary on every request.

`verifyOrPinBinaryHash` reads the entire binary and computes SHA-256 on every invocation (line 81). Per `packages/providers/src/codex/provider.ts:477-500`, `createCodexClient` calls `resolveCodexBinaryPath` on every request that supplies a custom `requestEnv`, so under sustained traffic you pay a full-file I/O + hash per call (tens of MB for a Codex CLI → measurable latency and Buffer allocation pressure). The cached `getCodex()` path only hashes once, but the per-request `new Codex({ codexPathOverride: ... })` path does not.

Consider memoizing successful verification per `realpathSync(binaryPath)` for the process lifetime (the pin is TOFU, so once verified in-process it doesn't need to be re-verified on every request):

♻️ Sketch

```diff
+const verifiedPaths = new Set<string>();
+
 export function verifyOrPinBinaryHash(binaryPath: string): void {
+  const realBinaryPath = realpathSync(binaryPath);
+  if (verifiedPaths.has(realBinaryPath)) return;
   const hashPath = getHashPinPath(binaryPath);
   const actual = createHash('sha256').update(readFile(binaryPath)).digest('hex');
   // ... existing verify/pin logic ...
+  verifiedPaths.add(realBinaryPath);
 }
```

(If you go this route, export a test-only reset hook, or clear the set between tests.)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/providers/src/codex/binary-resolver.ts` around lines 154 - 182, resolveCodexBinaryPath currently calls verifyOrPinBinaryHash on every resolution which re-reads and re-hashes the full binary; memoize successful verifications by realpathSync(binaryPath) to avoid repeated file I/O: maintain an in-process Set (e.g., verifiedBinaryRealpaths) keyed by the resolved realpath and check it at the top of verifyOrPinBinaryHash (or in resolveCodexBinaryPath) to skip re-hashing if present, add the realpath to the set after a successful hash/pin, and expose a test-only reset hook to clear the set between tests; update references to verifyOrPinBinaryHash, resolveCodexBinaryPath, and createCodexClient/getCodex to rely on this memoization.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/providers/src/codex/binary-resolver.ts`:
- Around line 85-93: Before throwing the existing Error in the hash check inside
binary-resolver.ts, emit a structured security log event named
"codex.binary_hash_mismatch" (error level) that includes binaryPath, hashPath,
expected, and actual; then re-throw the same Error so upstream createCodexClient
still receives the exception. Locate the hash comparison block (the if (actual
!== expected) { ... } in binary-resolver.ts) and call the module's logger (the
same logger used elsewhere in this module) with a structured payload { event:
'codex.binary_hash_mismatch', binaryPath, hashPath, expected, actual }
immediately before the throw to ensure the security event is captured even if
createCodexClient wraps the error.
---
Outside diff comments:
In `@packages/providers/src/codex/binary-resolver.ts`:
- Around line 154-182: resolveCodexBinaryPath currently calls
verifyOrPinBinaryHash on every resolution which re-reads and re-hashes the full
binary; memoize successful verifications by realpathSync(binaryPath) to avoid
repeated file I/O: maintain an in-process Set (e.g., verifiedBinaryRealpaths)
keyed by the resolved realpath and check it at the top of verifyOrPinBinaryHash
(or in resolveCodexBinaryPath) to skip re-hashing if present, add the realpath
to the set after a successful hash/pin, and expose a test-only reset hook to
clear the set between tests; update references to verifyOrPinBinaryHash,
resolveCodexBinaryPath, and createCodexClient/getCodex to rely on this
memoization.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: cd035205-1227-4e07-9347-6ad928fc3766
📒 Files selected for processing (2)
- packages/providers/src/codex/binary-resolver.test.ts
- packages/providers/src/codex/binary-resolver.ts
I addressed the structured mismatch logging follow-up. What changed:
Validation:
On the memoization suggestion: I intentionally did not cache successful verification results in-process on this branch. In the current threat model (writable binary directory / TOFU pinning), process-lifetime memoization would reduce repeated I/O, but it would also weaken detection of same-process post-start tampering. I think that tradeoff should be an explicit design decision rather than bundled into this security hardening PR. |
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 8c7bd3aced
```ts
if (fileExists(hashPath)) {
  const expected = readPinnedHash(hashPath);
```
Reject dangling pin symlinks before falling back to warn-only path
Fresh evidence in this revision: verifyOrPinBinaryHash gates on fileExists(hashPath), and existsSync returns false for dangling symlinks, so a dangling trust/codex/*.sha256 symlink is treated as “missing.” The code then enters the pin-write branch, openSync(...O_EXCL...) fails with EEXIST, and the broad catch only logs codex.binary_hash_pin_write_failed, allowing resolution to continue without any hash verification. This creates a persistent integrity-check bypass until the symlink is removed; EEXIST (or a pre-existing non-regular path) should be handled as a hard failure or re-verified path, not warn-and-continue.
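A sketch of the hardening this comment asks for, assuming an `lstatSync`-based pre-check (the helper name is illustrative): treat any pre-existing entry at the pin path that is not a regular file as a hard failure, rather than falling through to the warn-only EEXIST branch.

```typescript
import { lstatSync } from 'node:fs';

// Sketch: existsSync() returns false for a dangling symlink, but lstat() sees
// the link itself, so a pre-planted trust/codex/*.sha256 symlink is detected
// instead of being treated as a "missing pin".
export function assertPinPathIsAbsentOrRegular(hashPath: string): void {
  let stats;
  try {
    stats = lstatSync(hashPath);
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code === 'ENOENT') return; // genuinely absent: safe to pin
    throw err;
  }
  if (!stats.isFile()) {
    throw new Error(`Refusing to use trust-pin path: not a regular file: ${hashPath}`);
  }
}
```

Running this check before the `O_EXCL` open turns the bypass into a hard failure: EEXIST can then only mean a regular pin file appeared concurrently, which is safe to handle by re-reading and verifying.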
Actionable comments posted: 2
🧹 Nitpick comments (1)
packages/providers/src/codex/binary-resolver.ts (1)
79-81: Stream the binary into the hasher instead of loading it whole into memory.

`readFile(binaryPath)` buffers the entire Codex CLI binary (commonly tens of MB, can exceed 100MB depending on release) before hashing. Since `verifyOrPinBinaryHash` runs on every `resolveCodexBinaryPath()` call (no in-process memoization — intentional per PR notes), each resolution allocates a Buffer the size of the binary on the request path. A streamed hash keeps peak RSS bounded regardless of binary size and avoids a large sync read on the event loop.

♻️ Proposed fix

```diff
-import { createHash } from 'node:crypto';
-import { dirname, join } from 'node:path';
+import { createHash } from 'node:crypto';
+import { createReadStream } from 'node:fs';
+import { dirname, join } from 'node:path';
+import { pipeline } from 'node:stream/promises';
```

```diff
-export function verifyOrPinBinaryHash(binaryPath: string): void {
+export async function verifyOrPinBinaryHash(binaryPath: string): Promise<void> {
   const hashPath = getHashPinPath(binaryPath);
-  const actual = createHash('sha256').update(readFile(binaryPath)).digest('hex');
+  const hasher = createHash('sha256');
+  await pipeline(createReadStream(binaryPath), hasher);
+  const actual = hasher.digest('hex');
```

…and `await` the calls at lines 158, 171, 183 in `resolveCodexBinaryPath`.

Note: if you prefer to keep the function synchronous, `readFileSync` with a stream-like chunked approach is awkward; switching to async (the resolver is already `async`) is the cleaner path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/providers/src/codex/binary-resolver.ts` around lines 79 - 81, The function verifyOrPinBinaryHash currently reads the entire binary into memory; change it to an async function that streams the file into the hasher instead: convert to async verifyOrPinBinaryHash(binaryPath: string): Promise<void>, create a ReadStream (fs.createReadStream) and use either stream/promises pipeline or listen to 'data'/'end' to feed chunks into createHash('sha256'), await stream completion, then compare/write the pin file via async fs methods; also update every call site in resolveCodexBinaryPath that invokes verifyOrPinBinaryHash to await the returned Promise so hashing runs asynchronously and doesn't block or buffer the whole binary.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/providers/src/codex/binary-resolver.test.ts`:
- Around line 200-212: The test 'refuses symlinked trust pin paths' uses
symlinkSync which can fail on Windows (EPERM); update the test in
binary-resolver.test.ts so it skips or tolerates symlink creation failures on
Windows: either add a platform guard (skip the test when process.platform ===
'win32') or wrap the symlinkSync call in a try/catch and if it throws due to
EPERM/permission issues, skip the assertion (i.e., return early) so that
resolver.verifyOrPinBinaryHash(binaryPath) is only asserted when the symlink was
actually created; reference the test name, symlinkSync invocation,
getTrustPinPath(binaryPath) and resolver.verifyOrPinBinaryHash(binaryPath) when
making the change.
In `@packages/providers/src/codex/binary-resolver.ts`:
- Around line 99-106: The write path uses openSync(hashPath, ...,
(fsConstants.O_NOFOLLOW ?? 0)) which silently disables O_NOFOLLOW on Windows;
before calling openSync in the pin creation logic, call lstatSync(hashPath) and
if it exists and isSymbolicLink() throw/abort to avoid following a pre-planted
symlink, or alternatively only include fsConstants.O_NOFOLLOW when it is defined
(i.e., if (typeof fsConstants.O_NOFOLLOW !== 'undefined') use it else perform
the lstatSync check); refer to the existing readPinnedHash behavior and the
openSync call on hashPath to locate where to add the check.
---
Nitpick comments:
In `@packages/providers/src/codex/binary-resolver.ts`:
- Around line 79-81: The function verifyOrPinBinaryHash currently reads the
entire binary into memory; change it to an async function that streams the file
into the hasher instead: convert to async verifyOrPinBinaryHash(binaryPath:
string): Promise<void>, create a ReadStream (fs.createReadStream) and use either
stream/promises pipeline or listen to 'data'/'end' to feed chunks into
createHash('sha256'), await stream completion, then compare/write the pin file
via async fs methods; also update every call site in resolveCodexBinaryPath that
invokes verifyOrPinBinaryHash to await the returned Promise so hashing runs
asynchronously and doesn't block or buffer the whole binary.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0ea3fa58-c793-45fd-9547-0deed41e9d3e
📒 Files selected for processing (2)
- packages/providers/src/codex/binary-resolver.test.ts
- packages/providers/src/codex/binary-resolver.ts
…be (coleam00#1359)

The pre-flight binary smoke does a bare `bun build --compile` — it deliberately skips `scripts/build-binaries.sh` to stay fast. That means packages/paths/src/bundled-build.ts retains its dev defaults, including BUNDLED_IS_BINARY = false.

version.ts branches on BUNDLED_IS_BINARY: when true it returns the embedded string; when false it calls getDevVersion(), which reads package.json at `SCRIPT_DIR/../../../../package.json`. Inside a compiled binary SCRIPT_DIR resolves under `$bunfs/root/`, the walk produces a CWD-relative path that doesn't exist, and the smoke aborts with "Failed to read version: package.json not found" — a false positive.

Hit during the 0.3.8 release attempt: the real Pi lazy-load fix was working end-to-end; the smoke test was the only thing failing.

Use --help instead. It exercises the same module-init graph (so it still catches the real failure modes the skill lists — Pi package.json init crash, Bun --bytecode bugs, CJS wrapper issues, circular imports under minify) but has no dev/binary branch, so no false positive.

Also add a longer comment block explaining why --help is preferred, so this doesn't get "normalized" back to `version` by a future drive-by.
The brew path of /test-release runs `brew uninstall` in Phase 5 to leave the system in its pre-test state. For operators using the dual-homebrew pattern (renamed brew binary at `/opt/homebrew/bin/archon-stable` so it coexists with a `bun link` dev `archon`), that uninstall wipes the Cellar dir the `archon-stable` symlink points into → `archon-stable` becomes dangling → `brew cleanup` sweeps it away on the next brew op. Next time the operator wants stable, they have to manually re-run `brew-upgrade-archon`.

Fix: make the skill aware of `archon-stable` and restore it transparently.

- Phase 2 item 4: detect the `archon-stable` symlink before any brew op; export `ARCHON_STABLE_WAS_INSTALLED=yes` so Phase 5 knows to restore it. Only triggers for the brew path (curl-mac/curl-vps don't touch brew so they leave `archon-stable` alone).
- Phase 5 brew path: after `brew uninstall + untap`, if the flag was set, re-tap + re-install + rename. Verifies the restored `archon-stable` reports a version and warns (non-fatal) if the rename target is missing. Documents the tradeoff: the restored version is "whatever the tap ships today", not necessarily the pre-test version — usually that's what the operator wants (the release they just tested becomes stable) but the back-version-QA case requires a manual `brew-upgrade-archon` after.
- Phase 1 confirmation banner now mentions that `archon-stable` will be preserved so the operator isn't surprised by the reinstall during Phase 5.

No changes to curl-mac/curl-vps paths. No changes to Phase 4 test suite.
… a compiled binary (coleam00#1360)

v0.3.9 made Pi boot-safe: lazy-loading its imports meant `archon version` no longer crashed on `@mariozechner/pi-coding-agent/dist/config.js`'s module-init `readFileSync(getPackageJsonPath())`. That's what the `provider-lazy-load.test.ts` regression test guards.

The fix was only half the problem though. When a Pi workflow actually runs, sendQuery() triggers the dynamic import — and Pi's config.js module-init fires then, hitting the exact same ENOENT on `dirname(process.execPath)/package.json`. Discovered by running `archon workflow run test-pi` against a locally-compiled 0.3.9 binary:

    [main] Failed: ENOENT: no such file or directory, open '/private/tmp/package.json'
        at readFileSync (unknown)
        at <anonymous> (/$bunfs/root/archon-providertest:184:7889)
        at init_config

Boot-safe ≠ runtime-safe. The `/test-release` run for 0.3.9 passed because it only exercised `archon-assist` (Claude); Pi was never actually invoked on the released binary.

Fix: before the dynamic `import('@mariozechner/pi-coding-agent')` in sendQuery, install a PI_PACKAGE_DIR shim. Pi's config.js checks `process.env.PI_PACKAGE_DIR` first in its `getPackageDir()` and short-circuits the `dirname(process.execPath)` walk. We write a minimal `{name, version, piConfig:{}}` stub to `tmpdir()/archon-pi-shim/package.json` (idempotent — existsSync check) and set the env var. Pi only reads `piConfig.name`, `piConfig.configDir`, and `version` from that file, all optional, so the stub surface is genuinely minimal.

Localized to PiProvider: no global state, no mutation of any shared config, no upstream fork. Claude and Codex providers are unaffected (their SDKs don't have this class of module-init side effect).

Verified end-to-end: built a compiled archon binary with this patch, ran `archon workflow run test-pi --no-worktree` (Pi workflow with model `anthropic/claude-haiku-4-5`), got a clean response. Before the patch, same binary crashed at `dag_node_started` with the ENOENT above.

Regression test added: asserts `PI_PACKAGE_DIR` is set after sendQuery hits even its fast-fail "no model" path. Together with the existing `provider-lazy-load.test.ts` (boot-safe) this covers both halves.
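The shim described in this commit can be sketched as follows; the stub fields and shim directory name follow the commit message, while the function name and injectable `env` parameter are illustrative:

```typescript
import { existsSync, mkdirSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Pi's getPackageDir() checks PI_PACKAGE_DIR before walking
// dirname(process.execPath), so pointing it at a minimal stub package.json
// avoids the module-init ENOENT inside a compiled binary.
function installPiPackageShim(env: Record<string, string | undefined>): string {
  const shimDir = join(tmpdir(), "archon-pi-shim");
  const stub = join(shimDir, "package.json");
  if (!existsSync(stub)) {
    // Idempotent: later calls reuse the stub written by the first.
    mkdirSync(shimDir, { recursive: true });
    writeFileSync(stub, JSON.stringify({ name: "archon-pi-shim", version: "0.0.0", piConfig: {} }));
  }
  env.PI_PACKAGE_DIR = shimDir;
  return shimDir;
}
```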
… and Codex (coleam00#1361)

Both binary resolvers previously stopped at env-var + explicit config and threw a "not found" error when neither was set. Users who followed the upstream-recommended install flow (Anthropic's `curl install.sh` for Claude, `npm install -g @openai/codex`) still had to manually set either `CLAUDE_BIN_PATH` / `CODEX_BIN_PATH` or the corresponding config field before any workflow could run.

Add a tier-N autodetect step between the explicit config tier and the install-instructions throw. Purely additive: env and config still win when set (precedence covered by new tests). On autodetect miss, the same install-instructions error fires as before.

Claude probe list (verified against docs.claude.com "Uninstall Claude Code → Native installation" section):
- $HOME/.local/bin/claude (mac/linux native installer)
- $USERPROFILE\.local\bin\claude.exe (Windows native installer)

Codex probe list (verified against openai/codex README; npm global-install puts the binary at `{npm_prefix}/bin/<name>` on POSIX, `{npm_prefix}\<name>.cmd` on Windows):
- $HOME/.npm-global/bin/codex (user-set `npm config set prefix`)
- /opt/homebrew/bin/codex (mac arm64 with homebrew-node)
- /usr/local/bin/codex (mac intel / linux system node)
- %APPDATA%\npm\codex.cmd (Windows npm global default)
- $HOME\.npm-global\codex.cmd (Windows user-set prefix)

Not probed (explicit override still required):
- Custom npm prefixes — `npm root -g` would need a subprocess per resolve, too much surface for a probe helper
- `brew install --cask codex` — cask layout isn't a PATH binary
- Manual GitHub Releases extracts — placement is user-determined
- `~/.bun/bin/codex` — not documented in openai/codex README

Pi provider intentionally has no equivalent change: the Pi SDK is bundled into the archon binary (no subprocess), so there's no "binary" to resolve. Pi auth lives at `~/.pi/agent/auth.json` which the SDK already finds by default, and the PR A shim (`PI_PACKAGE_DIR`) handles the package-dir case via Pi's own documented escape hatch.

E2E verified: removed both config entries from ~/.archon/config.yaml, rebuilt compiled binary, ran `archon workflow run archon-assist` and a Codex workflow. Logs showed `source: 'autodetect'` for both, responses returned cleanly.
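The autodetect tier described above can be sketched like this; the Codex candidate paths mirror the commit's probe list, while the helper name and the injectable `exists` predicate are illustrative:

```typescript
import { existsSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// After the env and explicit-config tiers miss, probe the documented install
// locations in order and return the first hit, or null to fall through to the
// install-instructions error.
function autodetectCodexBinary(exists: (p: string) => boolean = existsSync): string | null {
  const candidates =
    process.platform === "win32"
      ? [
          join(process.env.APPDATA ?? "", "npm", "codex.cmd"),
          join(homedir(), ".npm-global", "codex.cmd"),
        ]
      : [
          join(homedir(), ".npm-global", "bin", "codex"),
          "/opt/homebrew/bin/codex",
          "/usr/local/bin/codex",
        ];
  return candidates.find(exists) ?? null;
}
```

The injectable predicate is what lets the precedence tests stub the filesystem without touching real install locations.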
…ry autodetect test

The native-installer autodetect test computed its expected path from process.env.HOME, but the implementation uses node:os homedir(). On Windows, HOME is typically unset (Windows uses USERPROFILE), so the test fell back to '/Users/test' while the resolver returned the real home dir — making the spy's path-equality check fail and breaking CI on windows-latest.

Mirror the implementation by importing homedir() from node:os and joining with node:path so the expected path matches the actual platform-resolved home and separator.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
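The fix amounts to one expression; the `.local/bin/claude` segments come from the commit's probe list, shown standalone for illustration:

```typescript
import { homedir } from "node:os";
import { join } from "node:path";

// Derive the expected path the same way the resolver does — os.homedir() plus
// path.join — instead of process.env.HOME, which is unset on Windows.
const expectedClaudePath = join(homedir(), ".local", "bin", "claude");
```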
…ver (coleam00#1365)

Reported in coleam00#1365: a user running `archon serve` with DISCORD_BOT_TOKEN set but the "Message Content Intent" toggle disabled in the Discord Developer Portal saw the entire server crash with `Used disallowed intents`. Discord rejects the gateway connection (close code 4014) when a privileged intent is requested without being enabled, and the unguarded `await discord.start()` propagated the error all the way up, taking the web UI down with it.

Wrap discord.start() in try/catch — log the failure with an actionable hint (special-cased for the disallowed-intent error) and continue running. Other adapters and the web UI come up regardless. The shutdown handler already uses optional chaining (`discord?.stop()`) so nulling discord after a failed start is safe.

Other adapters (Telegram, Slack, GitHub, Gitea, GitLab) have the same unguarded-start pattern but are out of scope for this fix — addressing them is tracked separately.

Also expanded the Discord setup docs with a caution callout that names the exact error string and the new log event so users can grep for both.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
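The guarded start can be sketched as below; the `Adapter` shape, logger callback, and hint wording are illustrative stand-ins for the real archon discord adapter, though the `Used disallowed intents` string is the one the commit names:

```typescript
type Adapter = { start(): Promise<void>; stop(): Promise<void> };

// Start the adapter, but never let a gateway rejection take the server down.
// Returns null on failure so the shutdown path's discord?.stop() stays safe.
async function startDiscordGuarded(
  adapter: Adapter,
  logError: (msg: string) => void,
): Promise<Adapter | null> {
  try {
    await adapter.start();
    return adapter;
  } catch (err) {
    const msg = err instanceof Error ? err.message : String(err);
    logError(
      msg.includes("Used disallowed intents")
        ? "discord start failed: enable the Message Content Intent in the Discord Developer Portal, then restart"
        : `discord start failed: ${msg}`,
    );
    return null;
  }
}
```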
…0#1362)

* docs(script-nodes): add dedicated guide and teach the archon skill how to write them

Script nodes (script:) have been a first-class DAG node type since v0.3.3 but were documented only as one-liners in CLAUDE.md and a CI smoke test. Claude Code reading the archon skill would see "Four Node Types: command, prompt, bash, loop" and reach for bash+node/python one-liners instead of a proper script node — losing bun's --no-env-file isolation, uv's --with dependency pins, and the .archon/scripts/ reuse story.

- New packages/docs-web/src/content/docs/guides/script-nodes.md mirroring the structure of loop-nodes.md / approval-nodes.md: schema, inline vs named dispatch, runtime/deps semantics, scripts directory precedence (repo > home), extension-runtime mapping, env isolation, stdout/stderr contract, patterns, and the explicit list of ignored AI fields.
- guides/authoring-workflows.md and guides/index.md updated so the new guide is discoverable from both the node-types table and the guides landing page.
- reference/variables.md calls out the no-shell-quote difference between bash: and script: substitution — a subtle correctness trap when adapting a bash pattern into a script node.
- Sidebar order bumped +1 on hooks/mcp-servers/skills/global-workflows/remotion-workflow to slot script-nodes at order 5 next to the other node-type guides.
- .claude/skills/archon/SKILL.md: replaces stale "Four Node Types" (which also silently omitted approval and cancel) with the accurate seven, with a script-node code block showing both inline and named patterns.
- references/workflow-dag.md: full Script Node section covering dispatch, resolution, deps, stdout contract, and the list of AI-only fields that are ignored; validation-rules list updated.
- references/dag-advanced.md and references/variables.md: retry-support line corrected; no-shell-quote note added.
- examples/dag-workflow.yaml: added an extract-labels TypeScript script node and updated the header comment.

* fix(docs): review follow-ups for script-node guide

- skills example: extract-labels was reading process.env.ISSUE_JSON which is never set; use String.raw`$fetch-issue.output` so the upstream bash node's JSON is actually consumed
- guides/script-nodes.md + skills/workflow-dag.md: idle_timeout is accepted but ignored on script (and bash) nodes — executeScriptNode only reads node.timeout. Clarify that script/bash use `timeout`, not idle_timeout
- archon-workflow-builder.yaml: prompt enumerated only bash/prompt/command/loop, so the AI builder could never propose script or approval nodes. Add both (plus examples + rule about script output not being shell-quoted) and regenerate bundled defaults
- book/dag-workflows.md + book/quick-reference.md + adapters/web.md: fill in the node-type references that were missing script, approval, and cancel. adapters/web.md also overclaimed "loop" in the palette — NodePalette.tsx only drags command/prompt/bash, so note that the other kinds are YAML-only
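The inline-vs-named dispatch described above can be sketched as a workflow fragment; the field names (`script:`, `deps`) and the `String.raw` interpolation follow these commit messages, but the exact shipped schema may differ:

```yaml
nodes:
  - id: fetch-issue
    bash: gh issue view 42 --json labels

  - id: extract-labels          # inline dispatch: the body IS the script
    script: |
      const issue = JSON.parse(String.raw`$fetch-issue.output`);
      console.log(issue.labels.map((l) => l.name).join(","));
    # .ts/.js bodies run under bun with --no-env-file isolation

  - id: score-issue             # named dispatch: resolved from .archon/scripts/
    script: score.py            # repo scripts take precedence over home scripts
    deps: ["requests"]          # .py runs under uv with --with dependency pins
```

Note the lack of shell quoting around `$fetch-issue.output` inside the script body — the correctness trap reference/variables.md now calls out.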
…nv gaps, add good-practices + troubleshooting (coleam00#1363)

* fix(skill/when): document the full `when:` operator set and compound expressions

The skill reference previously stated "operators: ==, != only" which is materially wrong — the condition evaluator supports ==, !=, <, >, <=, >= plus && / || compound expressions with && binding tighter than ||, plus dot-notation JSON field access. An agent authoring a workflow from the skill would think half the operators don't exist.

Replaces the single-sentence section with a structured reference covering:
- All six comparison operators (string and numeric modes)
- Compound expressions with precedence rules and short-circuit eval
- JSON dot notation semantics and failure modes
- The fail-closed rules in full (invalid expression, non-numeric side, missing field, skipped upstream)

Grounded in packages/workflows/src/condition-evaluator.ts.

* feat(skill): document Approval and Cancel node types

Approval and cancel nodes are first-class DAG node types (approval since the workflow lifecycle work in coleam00#871, cancel as a guarded-exit primitive) but the skill never described either one. An agent reading the skill and asked to "add a review gate before implementation" or "stop the workflow if the input is unsafe" would fall back to bash + exit 1, losing the proper semantics (cancelled vs. failed, on_reject AI rework, web UI auto-resume).

Approval node coverage (references/workflow-dag.md, SKILL.md):
- Full configuration block with message, capture_response, on_reject
- The interactive: true workflow-level requirement for web UI delivery
- Approve/reject commands across all platforms (CLI, slash, natural language) and the capture_response → $node-id.output flow
- Ignored-fields list + the on_reject.prompt AI sub-node exception

Cancel node coverage (references/workflow-dag.md, SKILL.md):
- Single-field schema (cancel: "<reason>")
- Lifecycle: cancelled (not failed); in-flight parallel nodes stopped; no DAG auto-resume path
- The "cancel: vs bash-exit-1" decision rule (expected precondition miss vs. check itself failing)
- Two canonical patterns — upstream-classification gate, pre-expensive-step gate

Validation-rules list updated to enumerate approval/cancel constraints (message non-empty, on_reject.max_attempts range 1-10, cancel reason non-empty), plus a forward note that script: joins the mutually-exclusive set once PR coleam00#1362 lands.

Placement in both files is after the Loop section and before the validation section, so this commit stays additive with respect to PR coleam00#1362's Script node insertion between Bash and Loop — rebase is clean.

* feat(skill): document workflow-level fields beyond name/provider/model

The skill's Schema section previously showed only name, description, provider, and model at the workflow level — which is most of a stub. Agents asked to "use the 1M-context Claude beta" or "run this under a network sandbox" or "add a fallback model in case Opus rate-limits" had no way to discover that any of these fields existed at the workflow level.

Adds a comprehensive Workflow-Level Fields section covering:
- Core: name, description, provider, model, interactive (with explicit callout that interactive: true is REQUIRED for approval/loop gates on web UI — a common footgun)
- Isolation: worktree.enabled for pin-on/pin-off (the only worktree field at workflow level; baseBranch/copyFiles/path/initSubmodules are config.yaml only, so a cross-reference points there)
- Claude SDK advanced: effort, thinking, fallbackModel, betas, sandbox, with explicit per-node-only exceptions (maxBudgetUsd, systemPrompt)
- Codex-specific: modelReasoningEffort (with note that it's NOT the same as Claude's effort — this has confused users), webSearchMode, additionalDirectories
- A complete worked example combining sandbox + approval + interactive

All fields cross-referenced against packages/workflows/src/schemas/workflow.ts and packages/workflows/src/schemas/dag-node.ts.

* feat(skill/loop): document interactive loops and gate_message

Interactive loop nodes pause between iterations for human feedback via /workflow approve — used by archon-piv-loop and archon-interactive-prd. The skill's Loop Nodes section previously omitted both interactive: true and gate_message entirely, so an agent writing a guided-refinement workflow wouldn't know the feature exists or that gate_message is required at parse time.

Adds:
- interactive and gate_message rows to the config table (marking gate_message as required when interactive: true — enforced by the loader's superRefine)
- A dedicated "Interactive Loops" subsection explaining the 6-step iterate-pause-approve-resume flow
- Explicit call-out that $LOOP_USER_INPUT populates ONLY on the first iteration of a resumed session — easy to miss and a common surprise
- Workflow-level interactive: true requirement for web UI delivery (loader warning otherwise) so the full-flow example is complete
- Note that until_bash substitution DOES shell-quote $nodeId.output (unlike script bodies) — called out since the audit surfaced this inconsistency

* fix(skill/cli): complete the CLI command reference with missing lifecycle commands

The CLI reference previously documented only list, run, cleanup, validate, complete, version, setup, and chat — missing nearly every workflow lifecycle command an agent needs to operate a paused, failed, or stuck run. The interactive-workflows reference assumed these commands existed without actually documenting them.

Adds full documentation for:
- archon workflow status — show running workflow(s)
- archon workflow approve <run-id> [comment] — resume approval gate (also populates $LOOP_USER_INPUT on interactive loops and the gate node's output when capture_response: true)
- archon workflow reject <run-id> [reason] — reject gate; cancels or triggers on_reject rework depending on node config
- archon workflow cancel <run-id> — terminate running/paused with in-flight subprocess kill
- archon workflow abandon <run-id> — mark stuck row cancelled without subprocess kill (for orphan-cleanup after server crashes — matches the coleam00#1216 precedent)
- archon workflow resume <run-id> [message] — force-resume specific run (auto-resume is default; this is for explicit override)
- archon workflow cleanup [days] — disk hygiene for old terminal runs (with explicit callout that it does NOT transition 'running' rows, a common confusion)
- archon workflow event emit — used inside loop prompts for state signalling; documented so agents don't invent their own mechanism
- archon continue <branch> [flags] [msg] — iterative-session entry point with --workflow and --no-context flags

Also:
- Adds --allow-env-keys flag to the `workflow run` flag table with audit-log context and the env-leak-gate remediation use case
- Adds an "Auto-resume without --resume" note disambiguating when --resume is needed vs. when auto-resume handles it
- Adds --include-closed flag to `isolation cleanup`, which was previously missing; converts the flag list to a structured table
- Explains the cancel/abandon distinction (live subprocess vs. orphan)

All grounded in packages/cli/src/commands/workflow.ts, continue.ts, and isolation.ts.
* feat(skill/repo-init): add scripts/ and state/, three-path env model, per-project env injection

The repo-init reference was missing two first-class .archon/ directories (scripts/ since v0.3.3, state/ since the workflow-state feature) and had nothing to say about env — the #1 thing a user hits on first-run when their repo has a .env file with API keys.

Directory tree updates:
- Adds .archon/scripts/ with the extension->runtime rule (.ts/.js -> bun, .py -> uv) so agents know where to put named scripts referenced by script: nodes.
- Adds .archon/state/ with explicit "always gitignore" callout — these are runtime artifacts, not source. Previously undocumented in the skill.
- Adds .archon/.env (repo-scoped Archon env) and distinguishes it from the target repo's top-level .env.
- Adds a "What each directory is for" list so the structure isn't just a tree with no narrative.

.gitignore guidance:
- state/ and .env added as must-gitignore (state/ matches CLAUDE.md and reference/archon-directories.md — skill was lagging).
- mcp/ demoted to conditional — gitignore only if you hardcode secrets.

New "Three-Path Env Model" section:
- ~/.archon/.env (trusted, user), <cwd>/.archon/.env (trusted, repo), <cwd>/.env (UNTRUSTED, target project — stripped from subprocess env).
- Precedence (override: true across archon-owned paths) and the observable [archon] loaded N keys / stripped K keys log lines so operators can verify what actually happened.
- Decision tree for where to put API keys vs. target-project env vs. things Archon shouldn't touch.
- Links to archon setup --scope home|project with --force for writing to the right file with timestamped backups.

New "Per-Project Env Injection" section:
- Documents both managed surfaces: .archon/config.yaml env: block (git-committed, $REF expansion) and Web UI Settings → Projects → Env Vars (DB-stored, never returned over API).
- Names every execution surface that receives the injected vars: Claude/Codex/Pi subprocess, bash: nodes, script: nodes, and direct codebase-scoped chat.
- Documents the env-leak gate with all 5 remediation paths so an agent hitting "Cannot register: env has sensitive keys" knows the options.

Grounded in CHANGELOG v0.3.7 (three-path env + setup flags), v0.3.0 (env-leak gate), and reference/security.md on the docs site.

* fix(skill/authoring-commands): correct override paths and add home-scoped commands

The file-location and discovery sections described an override layout that does not match the actual resolver. It showed:

    .archon/commands/defaults/archon-assist.md   # Overrides the bundled

and claimed `.archon/commands/defaults/` was where repo-level overrides lived. In fact the resolver (executor-shared.ts:152-200 + command-validation.ts) walks `.archon/commands/` 1 level deep and uses basename matching — putting `archon-assist.md` at the top of `.archon/commands/` is the canonical way to override the bundled version. The `defaults/` subfolder is an Archon-internal convention for shipping bundled defaults, not a user-facing override pattern.

Also, home-scoped commands (`~/.archon/commands/`, shipped in v0.3.7) were completely absent — agents authoring personal helpers wouldn't know they could live at the user level and be shared across every repo.

Changes:
- File Location section now shows all three discovery scopes (repo, home, bundled) with precedence ordering and 1-level subfolder rules
- Duplicate-basename rule documented as a user error surface
- Discovery and Priority section rewritten with accurate 3-step lookup order — no more references to the nonexistent defaults/ override path
- Adds the Web UI "Global (~/.archon/commands/)" palette label note so users authoring helpers for the builder know what to expect

No code changes — this is a pure fix of stale/incorrect skill reference material.
* feat(skill): add workflow good-practices and troubleshooting reference pages

Closes two gaps from the audit. The skill previously had zero guidance on designing multi-node workflows (what to avoid, what to reach for first, how to structure artifact chains) and zero guidance on where to look when things go wrong (log paths, env-leak gate remediations, orphan-row cleanup, resume semantics).

New references/good-practices.md (9 Good Practices + 7 Anti-Patterns):
- Use deterministic nodes (bash:/script:) for deterministic work, AI for reasoning — the single biggest quality lever
- output_format required whenever downstream when: reads a field — the most common source of "workflow silently routes wrong"
- trigger_rule: none_failed_min_one_success after conditional branches — the classic bug where all_success fails because a skipped when:-gated branch doesn't count as a success
- context: fresh requires artifacts for state passing — commands must explicitly "read $ARTIFACTS_DIR/..." when downstream of fresh
- Cheap models (haiku) for glue, strong for substance
- Workflow descriptions as routing affordances
- Validate (archon validate workflows) + smoke-run before shipping
- Artifact-chain-first design
- worktree.enabled: true for code-changing workflows (reversibility)
- Anti-patterns with before/after YAML examples for each (AI-for-tests, free-form when: matching, context: fresh without artifacts, long flat AI-node layers, secrets in YAML, retry on loop nodes, tiny max_iterations, missing workflow-level interactive:, tool-restricted MCP nodes)

New references/troubleshooting.md:
- Log location (~/.archon/workspaces/<owner>/<repo>/logs/<run-id>.jsonl) with jq recipes for common queries (last assistant message, failed events, full stream)
- Artifact location for cross-node handoff debugging
- 9 Common Failure Modes, each with root cause + concrete fix:
  - $BASE_BRANCH unresolvable
  - Env-leak gate (5 remediations)
  - Claude/Codex binary not found (compiled-binary-only)
  - "running" forever (AI working / orphan / idle_timeout)
  - Mid-workflow failure and auto-resume semantics
  - Approval gate missing on web UI (workflow-level interactive:)
  - MCP plugin connection noise (filtered by design)
  - Empty $nodeId.output / field access (4 causes)
- Diagnostic command cheat sheet (list, status, isolation list, validate, tail-log, --verbose, LOG_LEVEL=debug)
- Escalation protocol (version + validate + log tail + CHANGELOG + issue)

SKILL.md routing table now dispatches "Workflow good practices / anti-patterns" and "Troubleshoot a failing / stuck workflow" to the new references so an agent can find them without having to know they exist.

* docs(book): update node-types coverage from four to all seven

The book is the curated first-contact reading path (landing page → "Get Started" → /book/). Both dag-workflows.md and quick-reference.md were stuck on "four node types" — missing script, approval, and cancel. A user reading the book as their first introduction would form an incomplete mental model, then find three more node types in the reference section later with no explanation of when they arrived.

book/dag-workflows.md:
- "four node types" → "seven node types. Exactly one mode field is required per node"
- Table now lists Command, Prompt, Bash, Script, Loop, Approval, Cancel with one-line "when to use" for each, and cross-links to the dedicated guide pages for Script / Loop / Approval
- New sections below the table for Script (inline + named examples with runtime and deps), Approval (with the interactive: true workflow-level note that's easy to miss), and Cancel (guarded-exit pattern) — keeping the existing narrative shape for Bash and Loop

book/quick-reference.md:
- Node Options table now includes script, approval, cancel rows
- agents row added (inline sub-agents, Claude-only)
- New "Script-specific fields" and "Approval-specific fields" subsections so the cheat-sheet is actually complete rather than pointing users elsewhere for the required constraints
- Retry row callout that loop nodes hard-error on retry — previously omitted
- bash timeout note widened to cover script timeout (same semantics)

Both files are docs-web content; the CI build on the docs-script-nodes PR (coleam00#1362) previously validated the Starlight build path with a similar table addition, so this should render clean.

* fix(skill/cli): remove nonexistent `archon workflow cancel`, fix workflow status jq recipe

Two accuracy issues from the PR code-reviewer (comment 4311243858).

C1: `archon workflow cancel <run-id>` does NOT exist as a CLI subcommand. The switch at packages/cli/src/cli.ts:318-485 dispatches on list / run / status / resume / abandon / approve / reject / cleanup / event — running `archon workflow cancel` hits the default case and exits with "Unknown workflow subcommand: cancel" (cli.ts:478-484).

Active cancellation is only available via:
- /workflow cancel <run-id> chat slash command (all platforms)
- Cancel button on the Web UI dashboard
- POST /api/workflows/runs/{runId}/cancel REST endpoint

cli-commands.md: removed the `### archon workflow cancel <run-id>` subsection; kept the `abandon` subsection but made it explicit that abandon does NOT kill a subprocess. Added a call-out box at the bottom of the abandon section explaining where to go for actual cancellation.

troubleshooting.md "running forever" section: split the original cancel-vs-abandon advice into three bullets — Web UI / CLI abandon (for orphans, no subprocess kill) / chat `/workflow cancel` (for live runs that need interruption). Added an explicit "there is no archon workflow cancel CLI subcommand" parenthetical since the wrong command was being suggested in flow.

I1: the `archon workflow list --json` diagnostic used an incorrect jq filter. workflow list's --json output (workflow.ts:185-219) has shape { workflows: [{ name, description, provider?, model?, ... }], errors: [...] } with no `runs` field — `jq '.workflows[] | select(.runs)'` returns empty unconditionally. Replaced with `archon workflow status --json | jq '.runs[]'`, which matches the actual shape of workflowStatusCommand at workflow.ts:852+ ({ runs: WorkflowRun[] }). Also tightened the narration to distinguish JSON from human-readable status output.

No change to the commit history in this PR — these are follow-up fixes to claims I introduced in earlier commits of this branch (f10b989 for C1, 66d2b86 for I1).

* fix(skill): remove env-leak gate references (feature was removed in provider extraction)

C2 from the PR code-reviewer (comment 4311243858). The pre-spawn env-leak gate was removed from the codebase during the provider-extraction refactor — see TODO(coleam00#1135) at packages/providers/src/claude/provider.ts:908.
Zero hits for --allow-env-keys / allowEnvKeys / allow_env_keys / allow_target_repo_keys across packages/. The CLI's parseArgs (cli.ts:182-208) has no --allow-env-keys option, and because parseArgs uses strict: false, an unknown --allow-env-keys would be silently ignored rather than error. What remains accurate and is NOT touched: - Three-Path Env Model section (user/repo archon-owned envs are loaded; target repo <cwd>/.env keys are stripped from process.env at boot) still correctly describes current behavior, grounded in packages/paths/src/strip-cwd-env.ts + env-integration.test.ts - Per-Project Env Injection section (Option 1: .archon/config.yaml env: block; Option 2: Web UI Settings → Projects → Env Vars) is unchanged — both remain the sanctioned way to get env vars into subprocesses Removed claims (all three files): - cli-commands.md: --allow-env-keys flag row in the workflow run flags table - repo-init.md: the "Env-leak gate" subsection at the end of Per-Project Env Injection listing 5 remediations (all of which reference UI/CLI/ config surfaces that don't exist). Replaced with a succinct callout that explains the actual current behavior — target repo .env keys are stripped, workflows that need those values should use managed injection — so the reader still gets the "where to put my env vars" answer - troubleshooting.md: the "Cannot register: codebase has sensitive env keys" section (error message that can no longer be emitted) If the env-leak gate is ever resurrected per TODO(coleam00#1135), the docs can be re-added then. The CHANGELOG v0.3.0 entry describing the gate is a historical record of past behavior and does not need to be rewritten. * fix(skill/troubleshooting): correct JSONL event type names and field name C3 from the PR code-reviewer (comment 4311243858). 
The troubleshooting reference's event-types table used _started / _completed / _failed suffixes, but packages/workflows/src/logger.ts:19-30 shows the actual WorkflowEvent.type enum is: workflow_start | workflow_complete | workflow_error | assistant | tool | validation | node_start | node_complete | node_skipped | node_error The second jq recipe also queried `.event` but the discriminator is `.type`. Fixes: - Event table: renamed columns (_started → _start, _completed → _complete, _failed → _error). Explicitly called out the field name as `type` so the reader knows what jq selector to use - Replaced the "tool_use / tool_result" row with a single `tool` row and listed its actual payload fields (tool_name, tool_input, duration_ms, tokens) — tool_use/tool_result are SDK message kinds that appear within the AI stream, not top-level log event types - Added a `validation` row (was missing; it's emitted by workflow-level validation calls with `check` and `result` fields) - Removed `retry_attempt` row — this event type is not emitted to the JSONL file. Retry bookkeeping goes through pino logs, not the workflow log file - Added an explicit callout that loop_iteration_started / loop_iteration_completed (and other emitter-only events) go through the workflow event emitter + DB workflow_events table, NOT the JSONL file. Pointed readers to the DB or Web UI for loop-level detail. 
This distinguishes the two parallel event systems — easy to conflate (store.ts:11-17 uses _started/_completed/_failed for the DB side, logger.ts uses _start/_complete/_error for JSONL).
- Fixed the "all failed events" jq recipe: .event → .type and _failed → _error
- Minor cleanup: the inline "tool_use events" mention in the "running forever" section said the wrong event name — updated to "tool or assistant events in the tail"

Grounded in packages/workflows/src/logger.ts (canonical JSONL event shape) and packages/workflows/src/store.ts (the parallel DB event naming, which the reviewer correctly flagged as different and worth keeping distinct).

* fix(skill): two stragglers from the code-reviewer audit

Cleanup of two references that slipped through the earlier C1 and C3 fixes:
- references/troubleshooting.md:126: `node_failed` → `node_error` (the "Node output is empty" diagnostics section references the JSONL log, which uses the logger.ts enum — not the DB workflow_events table, which does use `node_failed`). The C3 fix corrected the event table and one jq recipe but missed this inline mention.
- references/interactive-workflows.md:106: removed `archon workflow cancel <run-id>` (a nonexistent CLI subcommand) from the troubleshooting bullet. This was pre-existing before the hardening PR but fell within the C1 remediation scope. Replaced with the correct triage: reject (approval gate only) vs abandon (orphan cleanup, no subprocess kill) vs chat /workflow cancel (actual subprocess termination).

Grounded in the same sources as the earlier C1/C3 commits: packages/cli/src/cli.ts:318-485 (no cancel case) and packages/workflows/src/logger.ts:19-30 (JSONL type enum).

* feat(skill): point to archon.diy as the canonical docs source

The skill had no reference to archon.diy (the live docs site built from packages/docs-web/). Several reference files said "see the docs site" without naming the URL, leaving the agent to guess or grep the repo for the hostname.
An agent with the skill loaded should know that when the distilled reference pages don't cover a case, the full canonical docs are one WebFetch away.

SKILL.md: new "Richer Context: archon.diy" section between Routing and Running Workflows. Covers:
- When to reach for the live docs (longer examples, tutorial framing, features the skill only mentions in passing, "where's that documented?" user questions)
- URL map — 13 starting points covering getting-started, book (tutorial series), guides/ (authoring + per-node-type + per-node-feature), reference/ (variables, CLI, security, architecture, configuration, troubleshooting), adapters/, deployment/
- Precedence: skill refs first (context-cheap, tuned for agents), docs site as escalation. Prevents agents defaulting to WebFetch when a local skill ref already covers the answer

Also upgrades the 5 existing generic "docs site" mentions across reference files to concrete archon.diy URLs with anchor fragments where helpful:
- good-practices.md: Inline sub-agents pattern → archon.diy/guides/authoring-workflows/#inline-sub-agents
- troubleshooting.md: "Install page on the docs site" → archon.diy/getting-started/installation/
- workflow-dag.md: "Workflow Description Best Practices" → anchor link; sandbox schema reference → archon.diy/guides/authoring-workflows/#claude-sdk-advanced-options
- repo-init.md: Security Model reference → archon.diy/reference/security/#target-repo-env-isolation (deep-link into the section that covers the <cwd>/.env strip behavior)

URL source of truth: astro.config.mjs:5 (site: 'https://archon.diy'). URL structure mirrors packages/docs-web/src/content/docs/<section>/<page>.md — verified by the 62 pages the docs build produces.
…#1395) Anthropic's Opus 4.7 landed 2026-04-16; on the Anthropic API, opus / opus[1m] now resolve to 4.7 with a 1M context window at standard pricing. Using the alias instead of the hard-pinned claude-opus-4-6[1m] lets bundled default workflows auto-track the recommended Opus version. No explicit effort is set, so nodes inherit the per-model default (xhigh on 4.7, high on 4.6).
…m00#1398)

* fix(workflow): migrate piv-loop plan handoff to $ARTIFACTS_DIR (coleam00#1380)

The create-plan node used a relative path (.claude/archon/plans/{slug}.plan.md) that the AI agent would sometimes write to a different location, breaking all downstream nodes that glob for the plan file. Migrated all plan/progress file references to $ARTIFACTS_DIR/plan.md and $ARTIFACTS_DIR/progress.txt, matching the pattern used by archon-fix-github-issue and other workflows.

Changes:
- Replace slug-based plan path with $ARTIFACTS_DIR/plan.md in create-plan node
- Replace ls -t glob discovery with direct $ARTIFACTS_DIR/plan.md reads in refine-plan, code-review, and fix-feedback nodes
- Replace empty-string guard with file-existence check in implement-setup bash
- Migrate progress.txt references in implement loop to $ARTIFACTS_DIR/
- Add explicit plan/progress paths in finalize node
- Regenerated bundled-defaults.generated.ts

Fixes coleam00#1380

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(workflow): address review findings in archon-piv-loop

- Rename 'Step 2: Write the Plan' to 'Step 2: Plan File Location' to eliminate the duplicate heading that collided with Step 3's identical title in the create-plan node
- Guard implement-setup against a 0-task plan file: exit 1 with a clear error when no '### Task N:' sections are found, preventing a silent no-op implement loop
- Remove 2>/dev/null from code-review commit so pre-commit hook failures and other stderr are visible to the agent instead of silently swallowed
- Replace '|| true' on git push in finalize with an explicit WARNING echo so push failures (auth, upstream conflict, no remote) surface to the agent rather than being silently ignored
- Regenerate bundled-defaults.generated.ts

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* chore(workflows): regenerate bundled defaults to match opus[1m] alias

The bundle was stale relative to the YAML sources after coleam00#1395 merged — check:bundled was failing CI. Regenerated; no YAML edits.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…cutor (coleam00#1403) PIV Task 1: Adds three new tests in a dedicated describe block 'executeDagWorkflow -- final status derivation' covering the anyFailed branch (dag-executor.ts ~line 2956) that previously had no direct test: - one success + one independent failure calls failWorkflowRun (not completeWorkflowRun) - multiple successes + one failure calls failWorkflowRun (not completeWorkflowRun) - trigger_rule: none_failed skips dependent node but anyFailed still marks run failed Fixes coleam00#1381.
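The anyFailed derivation those tests pin down can be sketched as follows — a hedged reconstruction from the commit message (function and type names are illustrative, not the actual dag-executor.ts internals):

```typescript
// Hypothetical sketch of final-status derivation: if any node failed,
// the run is failed even when other independent nodes succeeded.
type NodeStatus = 'completed' | 'failed' | 'skipped';

function deriveRunStatus(results: Record<string, NodeStatus>): 'completed' | 'failed' {
  const anyFailed = Object.values(results).some((s) => s === 'failed');
  return anyFailed ? 'failed' : 'completed';
}
```

Note that a skipped node (e.g. one gated behind trigger_rule: none_failed) does not rescue the run: the failure that caused the skip still drives the final status.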
New reference for the archon skill: a single-glance lookup of which parameter works on which node type, an intent-based "how do I..." table, a consolidated silent-failure catalog, and an inline agents: section (previously only referenced via archon.diy).

Purpose is complementary, not duplicative:
- workflow-dag.md remains the authoring guide
- dag-advanced.md remains the hooks/MCP/skills/retry deep-dive
- good-practices.md remains the patterns and anti-patterns
- parameter-matrix.md is the grep-this-first lookup when you know the outcome you want but not which field gets you there

Also registers the new reference in the SKILL.md routing table.
Hi @shaun0927 — thanks for opening this PR. This repository uses a PR template at
Could you fill those out (even briefly)? The template helps reviewers understand scope, risk, and rollback — it speeds up review significantly. If a section genuinely doesn't apply, just write "N/A" in it rather than leaving it blank.
Add explicit references to .github/PULL_REQUEST_TEMPLATE.md in both CONTRIBUTING.md and CLAUDE.md, plus a reminder to link issues with Closes/Fixes/Resolves so they auto-close on merge. Repo-triage runs were flagging dozens of partially-filled or unlinked PRs each cycle.
…riage (coleam00#1428)

* feat(workflows): add maintainer-standup workflow for daily PR/issue triage

Daily morning briefing that pulls origin/dev, triages all open PRs and assigned issues against direction.md, and surfaces progress vs. the previous run. Designed for live-checkout use (worktree.enabled: false) so it can read its own state.

Layout under .archon/maintainer-standup/:
- direction.md (committed) — project north-star: what Archon IS / IS NOT. Drives PR P4 polite-decline classification with cited clauses.
- README.md / profile.md.example — setup docs and template for new maintainers.
- profile.md, state.json, briefs/YYYY-MM-DD.md — gitignored, per-maintainer.

Engine:
- 3 parallel gather scripts in .archon/scripts/maintainer-standup-*.ts (git-status, gh-data, read-context) — bun runtime, JSON stdout.
- Synthesis node: command file with output_format schema for { brief_markdown, next_state }.
- Persist node: tiny inline bun script writes both to disk.

Run-to-run continuity: state.json carries observed_prs/issues snapshots, so the next run can detect what merged, what closed, what the maintainer shipped, and which carry-over items aged past N days.

Also adds .archon/** to the ESLint global ignore list (matches the existing .claude/skills/** pattern) since .archon/ is user content and not part of any tsconfig project.

* fix(maintainer-standup): address CodeRabbit review on coleam00#1428

- gh-data: bump --limit 100 → 1000 on all_open_prs and warn loudly when the cap is hit; preserves the observed_prs invariant the next-run "resolved since last run" diff depends on. (CodeRabbit critical)
- maintainer-standup.md: clarify P1 CI signal — the gathered payload only carries mergeStateStatus, not statusCheckRollup; for borderline P1s, drill in via `gh pr checks <n>`. (CodeRabbit minor)
- workflow.yaml persist: write briefs under local YYYY-MM-DD (sv-SE locale) instead of UTC ISO date, so an evening run doesn't file tomorrow's brief and break recent_briefs lookups. (CodeRabbit minor)
- workflow.yaml persist: wrap state/brief writes in try/catch; on failure, dump brief_markdown and next_state to stderr so a 5-minute Sonnet synthesis isn't lost to a transient disk error. (CodeRabbit minor)
- gh-data + git-status: switch from execSync (shell-string) to execFileSync (argv array) for git/gh invocations. Defense-in-depth against shell metacharacters in values that pass through (esp. the gh_handle from profile.md). (CodeRabbit nitpick)
Add an optional `tags: string[]` field to `workflowBaseSchema`. Explicit values take precedence over keyword inference; `tags: []` suppresses inference end-to-end; omitting the field falls back to inference (backwards compatible). Non-array values warn-and-ignore, matching the sibling `worktree`/`additionalDirectories` patterns.
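The precedence rules can be sketched with a hypothetical helper (illustrative only, not the schema's actual implementation):

```typescript
// Precedence sketch: explicit array wins, [] suppresses inference,
// omitted falls back to inference, non-array warns and is ignored.
function resolveTags(explicit: unknown, inferFromKeywords: () => string[]): string[] {
  if (Array.isArray(explicit)) {
    // An explicit [] stays [] — keyword inference is suppressed end-to-end.
    return explicit.filter((t): t is string => typeof t === 'string');
  }
  if (explicit !== undefined) {
    // Warn-and-ignore on non-array values, then fall through to inference.
    console.warn(`tags: expected an array, got ${typeof explicit} — ignoring`);
  }
  return inferFromKeywords();
}
```

The `tags: []` case is the interesting one: an empty array is still an array, so it short-circuits before the inference fallback.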
…ows under maintainer/ (coleam00#1430)

* feat(workflows): add maintainer-review-pr and group maintainer workflows under .archon/workflows/maintainer/

Adds the maintainer-review-pr workflow — a Pi/Minimax-based PR triage flow that gates on direction alignment, scope focus, and PR-template quality before doing any deep review. If the gate clears, runs the five review aspects (code/error-handling/test-coverage/comment-quality/docs-impact) as parallel Archon nodes and auto-posts a synthesized review comment. If the gate fails (direction conflict, multiple concerns, sprawling scope), drafts a polite-decline comment and pauses for the maintainer's approval before posting.

Reorganizes the existing maintainer-standup workflow into the same subfolder so all maintainer-facing workflows live together. Subfolder grouping is supported by the workflow loader (1 level deep, resolution by filename).

What lands:
- .archon/workflows/maintainer/maintainer-standup.yaml (moved from .archon/workflows/maintainer-standup.yaml)
- .archon/workflows/maintainer/maintainer-review-pr.yaml (new)
- .archon/commands/maintainer-review-{gate,code-review,error-handling,test-coverage,comment-quality,docs-impact,synthesize,report}.md (new, Pi-tuned variants of the existing review-agent commands so they avoid Claude-only Task / sub-agent patterns)

Pi/Minimax integration:
- Uses provider: pi, model: minimax/MiniMax-M2.7 — verified via the e2e-minimax-smoke test that Pi correctly routes to Minimax (session jsonl confirms provider=minimax) and that Pi's best-effort output_format parser handles the gate's nested schema.
- Two test runs landed real comments: a direction-decline on PR coleam00#1335 and a deep-review on PR coleam00#1369. Both were posted to GitHub via the workflow's gh pr comment node.
* chore(workflows): also group repo-triage under .archon/workflows/maintainer/ repo-triage is the third maintainer-facing workflow alongside maintainer-standup and maintainer-review-pr; group it in the same subfolder for consistency. Subfolder resolution is by filename so the workflow name is unchanged.
…r unmapped providers (coleam00#1284)

Closes coleam00#1096.

- Switch Pi provider model lookup from pi-ai's getModel() (static catalog only) to ModelRegistry.create(authStorage).find() so user-configured custom models in ~/.pi/agent/models.json (LM Studio, ollama, llamacpp, custom OpenAI-compatible endpoints) are discoverable.
- Remove the local lookupPiModel helper.
- For env-var-mapped providers (anthropic, openai, etc.), still throw with a pi /login hint when credentials are missing. For unmapped providers, log pi.auth_missing at info and continue so local models that don't need credentials work without ceremony.
- Surface modelRegistry.getError() in the not-found message and emit pi.model_not_found so users debugging custom-provider configs see the real cause (e.g. a missing baseUrl in models.json).
- Guard AuthStorage.create() and ModelRegistry.create() with try/catch so a malformed ~/.pi/agent/auth.json surfaces with Pi-framed context instead of a raw SDK stack trace.
- Document the credential-free path for local providers in ai-assistants.md.

Co-authored-by: Matt Chapman <Matt@NinjitsuWeb.com>
…add e2e-minimax-smoke (coleam00#1431)

* chore(workflows): group all smoke-test workflows under .archon/workflows/test-workflows/

Move the 7 existing e2e-*.yaml smoke tests plus the new e2e-minimax-smoke test into a dedicated subfolder. Subfolder grouping is supported by the workflow loader (1 level deep, resolution by filename), so workflow names are unchanged. Mirrors the .archon/workflows/maintainer/ split landing in coleam00#1430.

Also adds e2e-minimax-smoke.yaml — a sanity check that Pi correctly routes to Minimax M2.7 via the user's local pi auth, and that Pi's best-effort output_format parser handles a small nested schema. Asserts routing by reading the most recent Pi session jsonl rather than asking the model to self-identify (LLMs are unreliable narrators about their own identity, especially when Pi's system prompt mentions other providers as defaults).

* fix(e2e-minimax-smoke): address CodeRabbit review on coleam00#1431

- Widen the find window from -mmin -3 to -mmin -10. The smoke's three Pi nodes plus the assert can collectively run several minutes on slow networks; 3 minutes was tight enough to false-FAIL on a healthy run. (CodeRabbit minor)
- Drop the non-deterministic `head -1` over `find` output. find doesn't guarantee any order; on a tie, the wrong file would be picked. Now iterates all matching sessions and breaks on the first one carrying the routing signal — any match is sufficient evidence. (CodeRabbit minor)
- Replace the single regex `'"provider":"minimax".*"modelId":"MiniMax-M2.7"'` with two separate greps joined by `&&`. JSON field order isn't part of Pi's contract; a future Pi release reordering `provider` and `modelId` in the model_change event would silently false-FAIL the original pattern. The new check is order-independent. (CodeRabbit major)
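An even more robust version of that order-independent routing check would parse the session line as JSON rather than grep it. A minimal sketch (the `provider`/`modelId` field names come from the commit message; the rest of the event shape is assumed):

```typescript
// Order-independent routing check: parse the jsonl line instead of
// regexing over raw text, so field order can never matter.
function isMinimaxRoute(sessionLine: string): boolean {
  try {
    const event = JSON.parse(sessionLine) as { provider?: string; modelId?: string };
    return event.provider === 'minimax' && event.modelId === 'MiniMax-M2.7';
  } catch {
    return false; // non-JSON lines carry no routing signal
  }
}
```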
…oleam00#1432)

Six findings, two majors and four minors/nitpicks:

- gate.md L17 vs L77: resolved conflicting input-source instructions. The body claimed "all inline, no extra fetch" while a later phase permitted reading PULL_REQUEST_TEMPLATE.md. Now: an explicit "one allowed extra read" callout in Phase 1 + matching wording in Gate C. (CodeRabbit major)
- gate.md fenced blocks: added missing language identifiers (text/json/markdown) to satisfy markdownlint MD040. (CodeRabbit minor)
- gate.md L155 + read-context.ts: deterministic clock. The 3-day deadline was anchored to prior_state.last_run_at, which can be stale and produce past-dated deadlines. Moved both today and deadline_3d into the read-context.ts output (computed via sv-SE locale → ISO date in local time) and instructed the gate to use $read-context.output.deadline_3d directly. LLMs are unreliable at calendar arithmetic; this avoids it entirely. (CodeRabbit major)
- maintainer-review-pr.yaml fetch-diff: dropped 2>/dev/null on gh pr diff so auth / network / deleted-PR failures fail the node instead of feeding an empty diff to the gate. An empty-but-successful diff (a PR with no changes) is now an explicit marker the gate can detect. (CodeRabbit minor)
- maintainer-review-pr.yaml approve-unclear: added capture_response: true so the maintainer's approve comment flows to the report node. Reject reasoning is already captured by Archon's run record. (CodeRabbit minor)
- maintainer-review-pr.yaml post-decline + report.md: the gh pr edit --add-label call previously swallowed all errors with || true and the report still claimed the label was applied. Now writes applied/skipped to $ARTIFACTS_DIR/.label-applied + the gh stderr to .label-error so the report can describe the actual outcome. (CodeRabbit nitpick)
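The deterministic-clock fix leans on a real Intl quirk: the sv-SE locale formats dates as YYYY-MM-DD, which yields an ISO-style date in *local* time, unlike `toISOString()` (UTC), which can roll an evening run into "tomorrow". A minimal sketch (function names are illustrative):

```typescript
// sv-SE renders dates as YYYY-MM-DD in the machine's local time zone.
function localIsoDate(d: Date): string {
  return d.toLocaleDateString('sv-SE');
}

// Deadline arithmetic done in code, never delegated to the LLM.
function deadlineInDays(from: Date, days: number): string {
  const d = new Date(from);
  d.setDate(d.getDate() + days); // Date handles month/year rollover
  return localIsoDate(d);
}
```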
…ume (coleam00#1435)

* fix(workflows): approval gate bypass after reject-with-redraft on resume

When an approval node was rejected with on_reject.prompt, the synthetic PromptNode built to run the on_reject prompt reused the approval gate's own node ID. executeNodeInternal then wrote a node_completed event with that ID, causing getCompletedDagNodeOutputs to treat the gate as already completed on the next resume — bypassing the human gate entirely.

Fix: give the synthetic node the ID `${node.id}:on_reject` so its node_completed event has a distinct step_name that won't match the approval gate slot in priorCompletedNodes. Adds a regression test asserting no node_completed event with the approval gate's ID is written during on_reject execution.

Fixes coleam00#1429

* test(workflows): add positive assertion and SSE side-effect comment for on_reject synthetic node

Add a complementary positive assertion to the regression test to verify that node_completed is written exactly once with step_name 'review:on_reject', ensuring future refactors that suppress the event entirely would be caught.

Add an inline comment in executeApprovalNode documenting the known SSE side-effect: node_started/node_completed events with nodeId='review:on_reject' flow through the SSE pipeline into the web UI, resulting in a transient phantom node in the execution view. This is cosmetic-only — the human gate contract is preserved.

* simplify: reduce duplicate cast pattern in on_reject test assertions
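The shape of the fix can be sketched in isolation (names assumed from the commit message; the real executor's completed-node bookkeeping is richer):

```typescript
// The synthetic on_reject prompt node gets a derived ID so its
// node_completed event can never masquerade as the approval gate's
// own completion on the next resume.
const onRejectNodeId = (approvalNodeId: string): string => `${approvalNodeId}:on_reject`;

// Resume-time check: the gate is only "done" if its OWN id appears
// among the prior completed step names.
function isGateAlreadyCompleted(gateId: string, priorCompletedStepNames: Set<string>): boolean {
  return priorCompletedStepNames.has(gateId);
}
```

After a reject-with-redraft, the completed set contains 'review:on_reject' but not 'review', so the gate pauses again instead of being silently bypassed.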
…e checkout (coleam00#1438)

* feat(workflows): add mutates_checkout field to skip path-lock for concurrent runs

Add `mutates_checkout: boolean` (optional, default true) to the workflow schema. When set to false, the executor skips the path-exclusive lock that serializes all runs on the same working path, allowing N concurrent runs on the same live checkout. The primary use case is `maintainer-review-pr`, which reads shared state but writes only to per-run artifact paths and GitHub PR comments — two parallel reviews of different PRs should not fail with "Workflow already active on this path".

Changes:
- `schemas/workflow.ts`: add optional `mutates_checkout` field
- `loader.ts`: parse and propagate the field (warn-and-ignore on invalid values)
- `executor.ts`: wrap the path-lock guard in `if (workflow.mutates_checkout !== false)`
- `executor.test.ts`: two new tests in the concurrent-run guard suite
- `maintainer-review-pr.yaml`: opt in with `mutates_checkout: false`

* test(workflows): add loader tests for mutates_checkout parsing

- Add 5 tests covering false, true, omitted, and invalid (string "yes") values
- Invalid non-boolean values are silently dropped with a warn — now explicitly tested
- Remove the // end mutates_checkout guard trailing comment (no precedent in file)
- Clarify the loader comment: "parse/warn pattern" not "warn-and-ignore pattern" to avoid implying the return style matches interactive

* simplify: collapse nodeType/aiFields pair into single nonAiNode object in parseDagNode
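A minimal sketch of the guard (assumed shape — the real executor's locking is richer than an in-memory Set). The `!== false` comparison is what preserves the old default-serialized behavior for workflows that omit the field:

```typescript
interface WorkflowDef {
  name: string;
  mutates_checkout?: boolean; // optional, defaults to true (serialize)
}

const activePaths = new Set<string>();

function acquirePathLock(workflow: WorkflowDef, workingPath: string): void {
  if (workflow.mutates_checkout !== false) {
    // Default path: one mutating run per working path at a time.
    if (activePaths.has(workingPath)) {
      throw new Error(`Workflow already active on this path: ${workingPath}`);
    }
    activePaths.add(workingPath);
  }
  // mutates_checkout: false skips the lock entirely — N concurrent runs allowed.
}
```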
…es (coleam00#1434)

* docs: replace String.raw with direct assignment in script node examples

String.raw`$nodeId.output` fails silently when the substituted output contains a backtick, terminating the template literal early and producing cryptic parse errors. JSON is valid JS expression syntax, so direct assignment is safe for all valid JSON values, including those with backticks.

- Replace String.raw pattern in dag-workflow.yaml example
- Replace String.raw pattern in archon-workflow-builder.yaml template
- Add CAUTION bullet in workflow-dag.md Script Node section
- Add Silent Failures item coleam00#14 in parameter-matrix.md
- Add Starlight caution aside in script-nodes.md
- Extend script bodies bullet in variables.md
- Regenerate bundled-defaults.generated.ts

Fixes coleam00#1427

* docs: fix Rule 6 in generate-yaml prompt to distinguish bun vs uv patterns

Rule 6 still referenced JSON.parse after the example was updated to direct assignment, creating a contradiction for the AI code generator. Update the prose to explicitly distinguish TypeScript/bun (direct assignment) from Python/uv (json.loads), matching the updated embedded example.
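The hazard and the fix can be demonstrated in isolation. In this self-contained sketch, `new Function` stands in for the generated script body, and the string concatenation stands in for Archon's `$nodeId.output` substitution:

```typescript
// Upstream output that happens to contain backticks.
const upstreamOutput = JSON.stringify({ note: 'uses `backticks` inline' });

// Broken pattern: pasting into a template literal — the embedded backtick
// terminates the literal early and the generated body fails to parse.
const brokenBody = 'const data = JSON.parse(String.raw`' + upstreamOutput + '`);';

// Safe pattern: JSON is valid JS expression syntax, so paste it directly.
const safeBody = 'const data = ' + upstreamOutput + '; return data.note;';

let brokenParses = true;
try {
  new Function(brokenBody); // Function constructor parses immediately
} catch {
  brokenParses = false; // SyntaxError: the backtick split the template
}

const note: string = new Function(safeBody)(); // → 'uses `backticks` inline'
```

The failure mode is exactly the "silent" one the commit describes: nothing is wrong with the data, only with how the generated source text tokenizes.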
♻️ Duplicate comments (2)
packages/providers/src/codex/binary-resolver.test.ts (1)
Lines 200-240: ⚠️ Potential issue | 🟡 Minor

`symlinkSync` will `EPERM` on non-elevated Windows — guard both new symlink tests.

'detects swapped symlink target via logical-path pinning' (200-226) and 'refuses symlinked trust pin paths' (228-240) both call `symlinkSync` unconditionally. On Windows without Developer Mode or elevation this throws `EPERM` before the assertions run, which would make the suite non-deterministic on that platform. If Windows is a supported CI target for this package, gate both tests with `test.skipIf(process.platform === 'win32')` (or wrap `symlinkSync` in a try/catch that early-returns when symlink creation itself fails).

🧪 Suggested guard

```diff
- test('detects swapped symlink target via logical-path pinning', () => {
+ test.skipIf(process.platform === 'win32')('detects swapped symlink target via logical-path pinning', () => {

- test('refuses symlinked trust pin paths', () => {
+ test.skipIf(process.platform === 'win32')('refuses symlinked trust pin paths', () => {
```

As per coding guidelines: "keep tests deterministic without flaky timing or network dependence; ensure local validation commands (`bun run validate`) map directly to CI expectations".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/providers/src/codex/binary-resolver.test.ts` around lines 200-240: both tests ('detects swapped symlink target via logical-path pinning' and 'refuses symlinked trust pin paths') call symlinkSync unguarded, which can throw EPERM on non-elevated Windows; update the tests that exercise symlinkSync and resolver.verifyOrPinBinaryHash to be deterministic by skipping or short-circuiting on Windows. Either wrap the symlinkSync calls in a try/catch that returns early when symlink creation fails, or mark the tests with test.skipIf(process.platform === 'win32'); ensure you reference the tests by name and keep the calls to symlinkSync and resolver.verifyOrPinBinaryHash unchanged otherwise.

packages/providers/src/codex/binary-resolver.ts (1)
Lines 102-122: ⚠️ Potential issue | 🟡 Minor

`O_NOFOLLOW` silently degrades to no guard on Windows — pre-check with `lstatSync` to mirror the read path.

`fsConstants.O_NOFOLLOW` is undefined on Windows, so `?? 0` removes the symlink protection on first-use pinning even though `readPinnedHash` rejects symlinks via `lstatSync`. An attacker with write access to `~/.archon/trust/codex/` could pre-plant a symlink at the computed pin path before first use and have the hash written through it. Add an `lstatSync` guard before `openSync` (or only OR in `O_NOFOLLOW` when defined) so write-path symlink handling matches the read path.

🛡️ Suggested fix

```diff
  mkdirSync(dirname(hashPath), { recursive: true, mode: 0o700 });
+ // Defense-in-depth: O_NOFOLLOW is undefined on Windows, so explicitly
+ // reject a pre-planted symlink before opening the pin path.
+ try {
+   const preStats = lstatSync(hashPath);
+   if (preStats.isSymbolicLink()) {
+     throw new Error(`Codex binary hash pin path is a symlink: ${hashPath}`);
+   }
+ } catch (e) {
+   if ((e as NodeJS.ErrnoException).code !== 'ENOENT') throw e;
+ }
  const fd = openSync(
    hashPath,
    fsConstants.O_CREAT | fsConstants.O_EXCL | fsConstants.O_WRONLY | (fsConstants.O_NOFOLLOW ?? 0),
    0o600
  );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/providers/src/codex/binary-resolver.ts` around lines 102 - 122, The write path currently uses (fsConstants.O_NOFOLLOW ?? 0) which is undefined on Windows and removes the symlink guard; update the code around openSync(hashPath, ...) in the binary pinning block so it mirrors readPinnedHash: call lstatSync(dirname? or hashPath?)/lstatSync(hashPath) before openSync and throw or skip writing if lstatSync indicates a symlink (S_ISLNK) or the path exists as a symlink, and also only OR in fsConstants.O_NOFOLLOW when it is defined (i.e., check typeof fsConstants.O_NOFOLLOW !== 'undefined' before adding it) so symlink protection is enforced on platforms that support it while preventing silent degradation on Windows; reference symbols: hashPath, openSync, lstatSync, readPinnedHash, fsConstants.O_NOFOLLOW.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 74ddab3e-361b-4972-9df8-6017ea601856
📒 Files selected for processing (2)
- packages/providers/src/codex/binary-resolver.test.ts
- packages/providers/src/codex/binary-resolver.ts
Pushed. What changed in this commit:
Other items on this PR are already in place from the prior follow-ups (
Local validation:
Force-pushed 9a9122f to 63d4232.
Rebased onto the current base. While I was in there I picked up the two Minor findings from the latest CodeRabbit pass:
One note worth surfacing: while I was resolving the conflict, I noticed that the new tier-4 autodetect resolution path added in #1430-era doesn't currently call Validation:
…s/experimental/

Move two repo-scoped workflows that were sitting untracked at the workflow root into a dedicated subfolder. Subfolder grouping is supported by the loader (1 level deep, resolution by filename), so workflow names are unchanged and the /release skill still resolves archon-release correctly.

Files moved:
- archon-fix-github-issue-experimental.yaml — the Path-A variant of the issue-fix workflow used today to land coleam00#1434, coleam00#1435, coleam00#1438.
- archon-release.yaml — the live release workflow used by the /release skill end-to-end (validate -> binary smoke -> version bump -> changelog -> approval -> commit -> PR -> tag -> Homebrew formula update).
…des (coleam00#1387) executeBashNode previously only merged explicit envVars on top of process.env. The three well-known workflow directories (artifactsDir, logDir, baseBranch) were passed as function parameters and used for compile-time substitution of $ARTIFACTS_DIR / $LOG_DIR / $BASE_BRANCH in the script body, but were never added to the subprocess environment. As a result, any script that relied on shell-runtime expansion — e.g. JSON_FILE="${ARTIFACTS_DIR}/foo.output.json" inside a heredoc, an inherited helper script, or a `bash -c` subshell — saw the variable unset and silently fell back to its default (typically an empty string or "."), writing artifacts to the workflow cwd instead of the nominal artifacts directory. Always build subprocessEnv from process.env plus the three well-known directories, then allow explicit envVars to override. Compile-time substitution behavior is unchanged; existing scripts that do not reference these variables are unaffected; user-supplied envVars still win on conflict.
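The fixed env assembly can be sketched as follows (variable and function names assumed, not the actual executeBashNode signature). The key points are the spread order — well-known directories are layered over process.env, and explicit envVars are layered last — and that the values become real subprocess environment variables, so shell-runtime expansion like `"${ARTIFACTS_DIR}/foo"` works:

```typescript
// Layering order decides conflicts: process.env < well-known dirs < envVars.
function buildSubprocessEnv(
  wellKnown: { artifactsDir: string; logDir: string; baseBranch: string },
  envVars: Record<string, string> = {},
): Record<string, string | undefined> {
  return {
    ...process.env,
    ARTIFACTS_DIR: wellKnown.artifactsDir,
    LOG_DIR: wellKnown.logDir,
    BASE_BRANCH: wellKnown.baseBranch,
    ...envVars, // user-supplied values still win on conflict
  };
}
```

Compile-time `$ARTIFACTS_DIR` substitution in the script body is a separate, unchanged mechanism; this only affects what the spawned shell sees at runtime.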
…oleam00#1426)

* fix(workflow): substitute $nodeId.output refs in approval messages

Approval node messages were emitted as raw strings, bypassing the substituteNodeOutputRefs() pass that prompt/bash/loop/cancel nodes all run. This made interactive workflows like atlas-onboard show literal "$gather-context.output.repo_name" placeholders to humans at HITL gates, leaving them unable to know what they were approving.

Fix: render the approval.message through substituteNodeOutputRefs once at the top of the standard approval gate path, then use the resolved string in all 4 emission sites (safeSendMessage, createWorkflowEvent, pauseWorkflowRun, event-emitter).

Test: a new dag-executor.test case wires a structured-output upstream node into an approval node and asserts pauseWorkflowRun receives the substituted message ("Repo: hcr-els | App: CCELS | Port: 3012") rather than the literal placeholders.

Repro: any workflow with an approval node whose message references $nodeId.output[.field]. Observed in the wild on atlas-onboard's confirm-context HITL gate.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(workflow): extend approval-substitution test to cover all 4 emission sites

Per CodeRabbit review: the original test only verified pauseWorkflowRun received the substituted message, but the fix touches 4 emission sites. A future regression at safeSendMessage / createWorkflowEvent / event-emitter would silently leave the test passing while users still saw raw $node.output placeholders.

Adds two additional assertions:
- platform.sendMessage prompt contains the substituted message and does NOT contain literal $gather-context.output placeholders
- The persisted approval_requested workflow event's data.message is substituted

The event-emitter assertion is deferred (no existing pattern for spying on the global emitter in this test file). Covering two of the three secondary surfaces closes the practical regression risk — both are user-visible (chat prompt + audit-log event); the emitter is internal only.

Test count: 7 pass / 22 expect() (was 18). Full suite 193 pass / 353 expect() — no regressions.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…m00#1286) (coleam00#1367)

* feat(workflows): expose $LOOP_PREV_OUTPUT in loop node prompts (coleam00#1286)

  Adds a new substitution variable that carries the previous loop iteration's cleaned output into the next iteration's prompt. Empty on iteration 1; the prior iteration's output (after stripCompletionTags) on iteration 2+.

  Why: fresh_context: true loops have no way to reference what the previous pass produced or why it failed without dragging the full session forward. $LOOP_PREV_OUTPUT closes that gap with zero session cost — same trust boundary as $nodeId.output, no new external surface.

  Changes:
  - packages/workflows/src/executor-shared.ts: substituteWorkflowVariables accepts a 10th positional loopPrevOutput arg and substitutes $LOOP_PREV_OUTPUT (defaults to '').
  - packages/workflows/src/dag-executor.ts: executeLoopNode passes lastIterationOutput on iteration 2+ (and explicit '' on iteration 1 / the first iteration of an interactive resume, since lastIterationOutput is a per-call variable that does not survive resume metadata).
  - Unit tests: 3 new cases in executor-shared.test.ts.
  - Integration tests: 2 new cases in dag-executor.test.ts verifying the prompt sent to the AI on iter 1 vs iter 2, and that the value reflects cleaned output (no <promise> tags).
  - Docs: variables.md, loop-nodes.md (new "Retry-on-failure" pattern), CLAUDE.md variable reference.

  Backward compatibility: prompts that don't reference $LOOP_PREV_OUTPUT are unaffected. All 843 workflow tests + type-check + lint + format:check + bun run validate pass locally.

* docs: address coderabbit review on variables/loop-nodes

  - variables.md: include $LOOP_PREV_OUTPUT in the substitution-order list and availability table to match the new variable row at line 30
  - loop-nodes.md: document the interactive-resume exception where the first iteration after an approval-gate resume still receives an empty $LOOP_PREV_OUTPUT regardless of iteration number (per dag-executor.ts L1781-1783 where i === startIteration always clears prev output)

* docs(changelog): add Unreleased entry for $LOOP_PREV_OUTPUT (coleam00#1367 review)

* test(loop): add resume-from-approval integration test for $LOOP_PREV_OUTPUT (coleam00#1367 review)

  Per maintainer-review-pr suggestion (Wirasm): two-call integration test covering the resume-from-approval scenario.
  - Call 1: a fresh interactive loop pauses at the gate after iteration 1; asserts $LOOP_PREV_OUTPUT substitutes to empty on iter 1 (no prior output) and that the gate pause is recorded.
  - Call 2: a resumed run with metadata.approval populated. The first resumed iteration must substitute $LOOP_PREV_OUTPUT to '', NOT to the paused run's iter-1 output (which lived in a different process and is not persisted). $LOOP_USER_INPUT still flows through as normal.

  Locks the documented invariant at dag-executor.ts:1769-1772.

---------

Co-authored-by: voidborne-d <DottyEstradalco@allergist.com>
Pushed an update — addresses CR's Major finding plus the related test changes:
Also rebased onto current
Local:
Force-pushed: 63d4232 to 123b85a
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 123b85a974
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
} catch {
  cacheKey = resolvePath(binaryPath);
}
if (verifiedRealpaths.has(cacheKey)) return;
Re-verify cached binaries before each execution
verifyOrPinBinaryHash returns immediately once a realpath is in verifiedRealpaths, so integrity is only checked once per process. In long-lived contexts (e.g., archon serve), if an attacker can modify the binary in place after the first successful check, the path/realpath stay the same and subsequent resolutions skip hashing entirely, allowing the tampered binary to execute until restart. This bypass is introduced by the new memoization layer and undermines the stated post-install tamper detection model for writable binary directories.
Pushed an update — picks up Codex's P1 follow-up:
Local:
Force-pushed: 123b85a to e3f5861
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: e3f58614ad
const cached = verifiedFingerprints.get(cacheKey);
if (cached?.mtimeMs === stat.mtimeMs && cached.size === stat.size) return;
Strengthen cache fingerprint beyond mtime and size
The memoization fast path skips hashing when mtimeMs and size match, but in the stated threat model (attacker can write the binary directory in a long-lived process), an attacker can replace the binary with a same-length payload and restore the prior mtime, causing this branch to return and execute tampered code without re-verification. This undermines the tamper-detection guarantee for post-start modifications; the fingerprint should include harder-to-forge metadata (e.g., ctime/inode) or avoid skipping verification solely on mtime+size.
const vendorBinaryPath = join(archonHome, CODEX_VENDOR_DIR, binaryName);

if (fileExists(vendorBinaryPath)) {
  verifyOrPinBinaryHash(vendorBinaryPath);
Verify autodetected binary paths before returning
This change adds verifyOrPinBinaryHash for env/config/vendor resolution, but the autodetect branch still returns probePath without pinning or verification. Any deployment that resolves via ~/.npm-global/bin/codex, /usr/local/bin/codex, or platform equivalents therefore gets no integrity check at all, creating a full bypass of the new protection for that commonly used path source.
Force-pushed: e3f5861 to a2d8d9b
Pushed an update — picks up Codex's two latest P1s:
Local:
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a2d8d9b7c3
) {
  return;
Verify pin for each logical path before cache return
The realpath-based memoization returns early here before getHashPinPath(binaryPath) runs, so a second logical path (e.g., a different CODEX_BIN_PATH/config value or symlink) that resolves to the same realpath never gets its own trust pin created or checked. In a long-lived process, if path A is verified first and path B (same target) later hits this cache, an attacker can subsequently repoint path B to a different binary and Archon will treat it as first-use for B (no prior pin) instead of detecting tampering. This defeats the logical-path pinning guarantee the new hardening is trying to enforce.
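The logical-path keying that addresses this finding can be illustrated in isolation. A sketch, assuming the trust-directory layout described elsewhere in this PR (the real `getHashPinPath` in `binary-resolver.ts` may differ in detail):

```typescript
import { createHash } from "node:crypto";
import { resolve, join } from "node:path";
import { homedir } from "node:os";

// Key the pin by the lexically resolved path the caller configured (no
// realpathSync): two logical paths that currently point at the same target
// keep independent pins, so later repointing one of them surfaces as a
// hash mismatch rather than a silent first-use re-pin.
function getHashPinPath(binaryPath: string): string {
  const logicalKey = createHash("sha256")
    .update(resolve(binaryPath))
    .digest("hex");
  return join(homedir(), ".archon", "trust", "codex", `${logicalKey}.sha256`);
}
```

Hashing the logical path into the filename keeps the pin directory flat while avoiding collisions between distinct configured locations.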
Force-pushed: a2d8d9b to c56bf0c
Pushed an update — picks up Codex's latest P1:
Local:
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c56bf0c4c7
}

const hashPath = getHashPinPath(binaryPath);
const actual = createHash('sha256').update(readFile(binaryPath)).digest('hex');
Stream binary hashing instead of reading whole file
verifyOrPinBinaryHash computes the digest with readFile(binaryPath), which loads the entire target into memory in one shot. In the same writable-directory threat model this patch addresses, replacing codex with a very large file can force huge allocations and crash or stall Archon before the mismatch path is reached, turning the integrity check into a denial-of-service vector. Hashing should be done via a streaming read (or guarded by strict file-type/size limits) to keep verification bounded.
Summary

Problem: `packages/providers/src/codex/binary-resolver.ts` resolved the Codex CLI binary from three sources (env, config, `~/.archon/vendor/codex/`) with only `fileExists()` as a trust gate — no hash verification, signature check, or directory permission validation. If `~/.archon/vendor/codex/` was writable by another local user (shared workstation, misconfigured container, prior malware), the next Archon workflow using Codex would execute the attacker's binary with the user's full privileges. This matters most for `archon serve` deployments on shared infrastructure, Docker images with shared volumes, and CI runners where the vendor dir persists across jobs.

Fix: `verifyOrPinBinaryHash()` — on first resolution, computes the SHA-256 of the binary and writes it to a `.sha256` sidecar file next to the binary. On subsequent loads, verifies the hash matches — throws with a clear diagnostic on mismatch. Applied to all three resolution paths (env, config, vendor). Pin-write failure is non-fatal (warn, don't block — graceful degradation for read-only filesystems). Added a `readFile` wrapper for test spyability (same pattern as `fileExists`). Updated existing tests to mock `verifyOrPinBinaryHash` so resolution tests remain filesystem-independent.

Preserved: the `fileExists` guard (still the first check — hash verification runs after existence is confirmed), dev-mode behavior (`BUNDLED_IS_BINARY=false` → returns undefined), and error messages for missing binaries. The `.sha256` file is not security-critical itself (it's a TOFU anchor, not a signed attestation) — but it catches post-installation tampering, which is the primary threat.

UX Journey
Before
After
Files Changed

- `packages/providers/src/codex/binary-resolver.ts` — (+45 −1): `readFile` wrapper, `verifyOrPinBinaryHash`, integrated into all 3 resolve paths
- `packages/providers/src/codex/binary-resolver.test.ts` — (+5 −0): mock `verifyOrPinBinaryHash` in existing tests

Testing
- `bun run type-check` — all 10 packages clean
- `bun run lint` — zero warnings
- `bun test packages/providers/src/codex/binary-resolver.test.ts` — 7 pass, 0 fail

Closes #1245
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Architecture Diagram
Before
Threat: in the writable-vendor-dir model this PR addresses, an attacker who swaps the symlink target at the expected binary location resolves to a different realpath, misses the existing pin, and silently re-pins as if it were the first run — defeating the pinning guarantee. Pin files also lived in the writable directory itself, leaving them open to symlink-overwrite tricks.
After
Connection inventory:
- `binary-resolver.ts` → `~/.archon/trust/codex/` (Archon-owned, 0700)
- `getHashPinPath`: `path.resolve(binaryPath)`, not `realpathSync` — pin keyed by where Archon expects the binary
- `readPinnedHash`: `lstat` symlink check + sha256-format validation
- pin write: `openSync(... O_NOFOLLOW, 0600)`
- mismatch: `getLog().error(... 'codex.binary_hash_mismatch')`; `provider.ts` re-wraps the error

Label Snapshot
`risk: high` (security) · `size: M` · `providers` · `providers:codex/binary-resolver`

Change Metadata
security · providers (`packages/providers/src/codex/binary-resolver.ts`, `binary-resolver.test.ts`)

Linked Issue
Validation Evidence
Test coverage now includes:
- `codex.binary_hash_mismatch` structured log

Security Impact
Pin files are created with `O_EXCL | O_NOFOLLOW`, and pin storage is no longer in any attacker-writable directory. An attacker with write access to `~/.archon/vendor/codex/` cannot (a) skip pinning by swapping the binary's symlink target, (b) coerce error messages to leak arbitrary file contents, or (c) plant a symlinked sidecar to overwrite an arbitrary user-readable file.

Compatibility / Migration
`*.sha256` sidecars next to the binary are now ignored (no longer read or written). Cleanup is optional.

Human Verification
- macOS path aliasing (`/var` → `/private/var`) handled — the test helper mirrors the production logical-path keying.
- `openSync(O_EXCL|O_NOFOLLOW)` refuses; the resolver logs `codex.binary_hash_pin_write_failed` and the binary still loads (intentional non-fatal fallback for the write path).
- Windows lacks `O_NOFOLLOW` (Node falls back to default semantics; tests skip the symlink case there if `symlinkSync` requires admin).

Side Effects / Blast Radius
Structured log events (`codex.binary_hash_pinned`, `codex.binary_hash_verified`, `codex.binary_hash_mismatch`, `codex.binary_hash_pin_write_failed`) make every state change observable.

Rollback Plan
`codex.binary_hash_mismatch` errors after a routine binary upgrade would indicate operators need the documented "delete pin and re-run" flow surfaced more prominently.

Risks and Mitigations
- One extra `info` log per pin.
- Risk: `O_NOFOLLOW` semantics differ on a future platform. Mitigation: symlink rejection is layered (`lstat` + `O_NOFOLLOW` + format validation); any single layer breaking still leaves the others.