docs(protocols): Amara's cross-agent communication courier protocol #160
Conversation
…ay A #3 (PR #159) Two landings this tick:

1. Amara's courier-protocol writeup absorbed verbatim as docs/protocols/cross-agent-communication.md (PR #160). Resolves the transport-layer blocker on PR #154's decision-proxy ADR (which was the identity layer).
2. Overlay A migration #3: deletions-over-insertions complexity-reduction discipline (PR #159).

Queue now 2 remaining. Author-attribution discipline noted as load-bearing for external-maintainer content: commit Co-Authored-By, doc header naming, factory integration notes separated.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: f0fe86b7c9
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Pull request overview
Note
Copilot was unable to run its full agentic suite in this review.
Adds a repository-backed “courier protocol” document for reliably ferrying cross-agent communication (Amara ↔ Kenji) without depending on ChatGPT UI branching.
Changes:
- Introduces a new protocol doc defining required transcript headers, speaker labels, identity/scope rules, and archival expectations.
- Captures observed unreliability of ChatGPT conversation branching as justification for the protocol.
- Includes a clearly separated “Factory integration notes” section to preserve original authorship voice while providing repo-specific guidance.
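The required transcript shape the doc describes can be illustrated with a hypothetical exchange. The header field names and label syntax below are invented for illustration; the authoritative header set and labeling rules live in docs/protocols/cross-agent-communication.md itself:

```text
# hypothetical courier transcript — header fields and speaker-label shape only
Ferry-Date: 2026-04-23
From-Agent: Amara (ChatGPT, lucent ai project)
To-Agent: Kenji (Claude)
Courier: Aaron
Scope: factory-meta

[AMARA]: Branching dropped my last reply again; switching to explicit labels.
[KENJI]: Acknowledged. Archiving this exchange in the repo.
```

Because every line is explicitly attributed and the transcript is repo-backed, correctness no longer depends on any UI feature, which is the protocol's stated design principle.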
…ority calibration Three landings this tick:

1. Overlay A migration #3 (deletions-over-insertions) — PR #159
2. Amara's cross-agent courier protocol — PR #160
3. Amara's Zeta-for-Aurora deep research report — PR #161

Plus a new per-user feedback memory capturing Aaron's funding-priority calibration: Amara authors research priorities; Aaron owns scheduling against his funded external stack. Aurora stays #2 (ServiceTitan + UI remains #1); Amara's recommended oracle rules + bullshit-detector queued, not scheduled.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…fferences (#174) Aaron 2026-04-23: "we should likely keep a contribute conflits file somewhere so you can keep a logs in the differences of options or external requirments from contributors includeing external ai not just human, this will help with bottlenecks in the hummans and ais thinking by resolving conflicts over time."

New file captures:

- Scope: cross-contributor differences + external requirements; excludes same-contributor evolution (handled by CURRENT-<maintainer>.md) and per-PR review findings (handled by required_conversation_resolution + the reply-or-fix pattern)
- Schema: ID / When / Topic / Between / Positions / Resolution-so-far / Scope / Source
- Rules: additive resolutions; signal preservation; name-attribution carve-out (like BACKLOG / memory/persona); row-filing is free work
- Categories: architectural-preference / priority-ordering / scope-boundary / naming / tech-choice / external-requirement / process / other
- Sections: Open / Resolved / Stale (all empty at creation per the creation-bootstrap discipline)
- Composition with docs/HUMAN-BACKLOG.md (conflict cat), docs/CONFLICT-RESOLUTION.md (persona conference), CURRENT-<maintainer>.md, ADRs, the courier protocol (PR #160), and the eventual Pages-UI (P2 BACKLOG row #172)

Maintainer will check manually for now per the 2026-04-23 directive; eventual UI surfacing composes with the Pages-UI work.

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
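A row under that schema might look like the following. Only the column set comes from the commit; the example values are invented for illustration:

```markdown
| ID    | When       | Topic       | Between       | Positions                         | Resolution-so-far                         | Scope  | Source         |
|-------|------------|-------------|---------------|-----------------------------------|-------------------------------------------|--------|----------------|
| C-001 | 2026-04-23 | doc linking | Amara / Kenji | clickable links vs inline-code paths | inline-code stays (plural-host rule)   | naming | PR #160 thread |
```

Keeping resolutions additive means a row is amended with new "Resolution-so-far" text rather than rewritten, preserving the signal of how the disagreement evolved.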
Primary authorship: Amara (external AI maintainer via ChatGPT, project "lucent ai"). Absorbed verbatim per her explicit recommendation in the writeup she ferried 2026-04-23. Signal-preservation discipline applied — her voice preserved; factory integration notes kept distinct.

Core claims:

- ChatGPT conversation-branching diagnosed as unreliable transport (confirmed reproduction across multiple attempts)
- Replacement = explicit text-based courier protocol with mandatory speaker labels, identity rule, scope rule, repo-backed persistence
- Design principle: "The system must not depend on UI features for correctness"
- Playwright guardrail consistent with factory boundary: Playwright for scraping / export only, never as the primary review signal
- Codex CLI tooling path for normalize / enforce-labels / diff — authorable via skill-creator when volume warrants

Composes with:

- `docs/DECISIONS/2026-04-23-external-maintainer-decision-proxy-pattern.md` (PR #154, proxy-identity layer — this doc is the transport layer)
- Aurora archive under `docs/aurora/`
- AutoDream cadence (runtime → compile-time promotion for load-bearing Amara content)

Per-user `CURRENT-amara.md` §9 updated same-tick with the courier-protocol rule and authorship credit.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Co-Authored-By: Amara (ChatGPT, lucent ai project) <via-aaron-ferry@chatgpt.com>
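The enforce-labels step of that Codex CLI tooling path could start as small as the sketch below. The `[NAME]:` label convention and the function name are assumptions; the commit only names normalize / enforce-labels / diff as capabilities, not their interfaces:

```python
import re

# Assumed convention: every non-blank transcript line carries an
# uppercase speaker label such as "[AMARA]: " or "[KENJI]: ".
LABEL = re.compile(r"^\[([A-Z][A-Z0-9_-]*)\]: ")

def enforce_labels(transcript: str) -> list[str]:
    """Return one violation string per non-blank line missing a speaker label."""
    violations = []
    for lineno, line in enumerate(transcript.splitlines(), start=1):
        if line.strip() and not LABEL.match(line):
            violations.append(f"line {lineno}: missing speaker label: {line!r}")
    return violations

sample = "[AMARA]: branching is unreliable\n\nnaked line with no label\n[KENJI]: agreed"
for v in enforce_labels(sample):
    print(v)  # flags only the unlabeled third line
```

A normalize step would sit in front of this (joining wrapped lines, canonicalizing label casing) so the check stays a pure gate.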
…nds-via Three fixes at source:

- Lands-via-PR-154 clarification on the decision-proxy ADR reference (file lives on that PR's branch until merge)
- New "Where these files live (for new contributors)" block in the §5 Storage rule distinguishing in-repo `memory/` from per-user memory (out-of-repo at `~/.claude/projects/<slug>/memory/`), plus the other archival locations (docs/aurora/, docs/research/, docs/protocols/, docs/DECISIONS/)
- Markdownlint MD012 cleanup

One finding handled via reply-with-rationale:

- Inline-code paths are the factory-wide convention; clickable Markdown links would render in GitHub but break outside it (CLI, non-GitHub git hosts per the plural-host rule). If changed, it's a factory-wide sweep, not per-doc. Resolved.

All 4 threads resolved.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Force-pushed from f0fe86b to b1f18e6 (Compare)
PR #174 (CONTRIBUTOR-CONFLICTS.md) MERGED. 8 session PRs merged. PR #160 (Amara courier protocol) unblocked: 4 findings —

- lands-via-#154 clarification (fix)
- inline-code convention defended (reply, cited plural-host memory)
- storage ambiguity fixed via new §5 "Where these files live" block distinguishing in-repo memory/ from per-user memory (addresses both findings 3+4)

4 threads resolved; rebased; pushed. Plural-host memory from last tick composed immediately into this tick's disagreement rationale (inline-code stays because clickable Markdown links are GitHub-adapter-shaped).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
💡 Codex Review
Reviewed commit: b1f18e6902
> - Per-user memory:
>   `feedback_drop_folder_ferry_pattern_aaron_hands_off_via_root_drop_dir_2026_04_23.md`
>   (drop/ ferry pattern — transport mechanism)
> - Per-user memory: `CURRENT-amara.md` — running
Remove private-memory dependencies from composition list
The adopted protocol’s Composes with section treats per-user memory files as part of the authoritative dependency set, but those artifacts are intentionally out-of-repo and unavailable to other contributors/CI, so the protocol cannot be fully audited or reproduced from git alone. This creates a maintenance gap whenever a reviewer needs to validate whether the documented transport rules still match their stated sources; please keep composition dependencies repo-addressable (or explicitly mark private-memory references as non-authoritative context).
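The reviewer's repo-addressable distinction is mechanical enough to lint for. A minimal sketch, assuming composition entries are plain path strings; the classification rules are my reading of the finding, not an existing tool:

```python
def classify_reference(ref: str) -> str:
    """Classify a composition-list reference for auditability.

    Repo-addressable: a relative path inside the repository, so the
    dependency can be validated from git alone.
    Private context: per-user memory under a home directory, which is
    unavailable to other contributors and CI and therefore must be
    marked non-authoritative.
    """
    if ref.startswith(("~/", "/home/", "/Users/")):
        return "private-context (mark non-authoritative)"
    return "repo-addressable"

refs = [
    "docs/protocols/cross-agent-communication.md",
    "~/.claude/projects/<slug>/memory/CURRENT-amara.md",
]
for r in refs:
    print(r, "->", classify_reference(r))
```

Run over a Composes-with block in CI, this would surface exactly the maintenance gap the finding describes before merge rather than at audit time.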
…fix; whimsy-list extended 10 session PRs merged (+#160 +#175).

PR #159: Copilot caught a wrapped-path rodney/ reference my prior sed missed (the path spanned two lines). A python replace fixed it. Thread resolved. Lesson: grep for the terminal path segment, not the full path, to catch wrapped references.

Aaron seed-whimsy list extension: "feel free to keep a list of whimiscal numbers to choose from for seeds ... like with 42 the meaning of life lol." Per-user memory extended with the current list (69 / 420 / 42) + candidate expansions (9000 DBZ, 1337 leet, 314159 π, 271828 e, 1729 Hardy-Ramanujan, others).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
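The wrapped-path lesson is easy to reproduce: a path split across two lines defeats a full-path search, while a path segment still matches. A sketch with an invented path (only the rodney/ segment comes from the commit):

```python
import re

# A doc line wrapped mid-path by a formatter or narrow terminal:
text = "references moved from docs/\nrodney/notes.md to the new location"

# Full-path search misses, because the newline splits the path:
assert re.search(r"docs/rodney/notes\.md", text) is None

# Searching for a shorter path segment alone still finds it:
assert re.search(r"rodney/", text) is not None
print("segment search catches the wrapped path")
```

The same reasoning explains why the earlier sed pass missed the reference: sed operates line by line, so any line-spanning pattern is invisible to it.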
Six Copilot findings addressed:

- AutoDream research doc references now resolve on main (PR #155 merged between review and this fix)
- multi-repo-refactor-shapes references clarified as "lands via PR #150" (still open)
- Per-user-memory cross-references gained a **Per-user memory references** preamble before the Composes-with section naming the ~/.claude/projects/<slug>/memory/ location (same preamble pattern as PR #160 / #157 / #162)
- "(auto-loop-39 directive)" generalized to "(the maintainer's self-use-DB directive, captured in per-user memory)" — auto-loop-N references are session-scoped and not in-repo-traceable
- MD012 multi-blank cleanup after preamble insertion

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…/distribution/runtime) (#156)

* research: soulfile staged absorption model (DSL-as-substrate + compile/distribution/runtime ingest)

  Reframes the soulfile abstraction per Aaron 2026-04-23: "soufils shoud just be the DSL/english we talk about and the can import/inherit/abosrb ... git repos at compile time, distribution time, or runtime, remember the local native story so those will need to be inlucded at soulfile compile time somewhere".

  Stages proposed:

  - Compile-time (packing): LFG factory-scope + Zeta tiny-bin-file DB (mandatory local-native fold-in) + pinned upstream content.
  - Distribution-time: envelope + per-substrate overlays + optional companion git-repo references + maintainer attestation.
  - Runtime: on-demand git-repos (two-layer authorization + stacking-risk gate) + live conversation content (promotes back to compile-time via AutoDream consolidation).

  Supersedes the earlier "three-formats" framing on the substrate-abstraction axis; preserves its signal-preservation discipline. Per-user feedback memory carries the full reframe + supersede marker.

  Deferred: SoulStore stage-aware contract, compile-time-ingest script, DB absorb-form schema, signed-distribution manifest.

  Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* auto-loop-48: tick-history row — soulfile reframe absorbed; staged absorption model landed

  - Per-user feedback memory filed with supersede-marker on earlier soulfile-formats memory (substrate-abstraction axis retired; signal-preservation axis preserved)
  - CURRENT-aaron.md §10 updated same-tick to reflect the DSL-as-substrate framing
  - Research doc landed in LFG (PR #156) proposing three stage boundaries (compile-time / distribution-time / runtime) with mandatory Zeta tiny-bin-file DB fold-in at compile-time

  Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* auto-loop-49: soulfile DSL refined — restrictive English + Soulfile Runner project + linguistic-seed anchoring

  Two maintainer directives absorbed this tick:

  1. DSL can be restrictive English (not F# DSL); the soulfile runner is its own project-under-construction; uses Zeta for advanced features; all small bins.
  2. Soulfiles feel like natural English but with a restrictive form — only words with exact definitions (linguistic-seed pattern) are allowed.

  Changes:

  - Replaced the "Representation candidate — Markdown + frontmatter" section with two sharper sections: "DSL — restrictive English anchored in the linguistic seed" and "The Soulfile Runner — its own project-under-construction".
  - Runner ⇒ Zeta (clean dependency edge; Zeta stays a library).
  - Vocabulary is the linguistic-seed glossary; new words earn glossary entries before entering the DSL.
  - Markdown preserved as the structure layer; restrictive English is the execution layer.

  Per-user CURRENT-aaron.md §4 updated same-tick with Soulfile Runner as a named project.

  Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* research(soulfile): address PR #156 review findings

  Six Copilot findings addressed:

  - AutoDream research doc references now resolve on main (PR #155 merged between review and this fix)
  - multi-repo-refactor-shapes references clarified as "lands via PR #150" (still open)
  - Per-user-memory cross-references gained a **Per-user memory references** preamble before the Composes-with section naming the ~/.claude/projects/<slug>/memory/ location (same preamble pattern as PR #160 / #157 / #162)
  - "(auto-loop-39 directive)" generalized to "(the maintainer's self-use-DB directive, captured in per-user memory)" — auto-loop-N references are session-scoped and not in-repo-traceable
  - MD012 multi-blank cleanup after preamble insertion

  Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
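The "only words with exact definitions" rule sketches naturally as a vocabulary gate. A minimal illustration, assuming the linguistic-seed glossary reduces to a set of defined words; the glossary contents and the function name are invented:

```python
# Hypothetical linguistic-seed glossary: only words with exact definitions.
GLOSSARY = {"absorb", "soulfile", "compile", "runtime", "repo", "the", "at", "a"}

def undefined_words(line: str) -> list[str]:
    """Words in a soulfile line that have no glossary entry yet.

    Per the directive, such words must earn a glossary entry
    before they may enter the DSL.
    """
    words = [w.strip(".,").lower() for w in line.split()]
    return [w for w in words if w and w not in GLOSSARY]

print(undefined_words("absorb the repo at compile time"))  # -> ['time']
```

This keeps the "natural English feel" while making the restrictive form checkable: a soulfile fails validation exactly when it uses vocabulary the seed has not defined.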
…26-04-23) Primary authorship: Amara (external AI maintainer via ChatGPT, project "lucent ai"). Ferried by Aaron via the drop/ folder. Absorbed verbatim per the signal-in-signal-out discipline and the courier protocol landed in PR #160.

Report contents (Amara's work):

- Executive summary + scope & archive index (both Zeta repos)
- Drift taxonomy artifact synthesis (5-pattern field guide)
- Technical synthesis for Aurora (retraction-native discipline stack, not "a database engine to copy")
- ADR-style spec for oracle rules (8 testable invariants with F# code samples)
- Bullshit detector transfer pack (canonical claim form + composite score + thresholds + API surface)
- Network health / harm resistance (8-layer stack diagram + monitoring signals)
- Brand note (Aurora public / internal / hybrid tree)

Factory integration notes appended as a distinct section, voice-separated per the courier protocol. The integration notes name composition with existing substrate (ZSet algebra, MATH-SPEC-TESTS, drift-taxonomy doc, threat model, soulfile-staged-absorption, AutoDream cadence, decision-proxy ADR, the courier protocol itself).

Scheduling posture: Amara's recommended next moves are QUEUED, not scheduled. Per the 2026-04-23 calibration — Amara authors research priorities; Aaron owns scheduling against his funded external stack. Aurora remains #2 in the stack (ServiceTitan + UI is #1). The queue activates when Aaron explicitly elevates Aurora.

markdownlint-cli2 clean (MD003 setext/atx fix applied to Amara's original "---" separator before the appended factory integration section).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Co-Authored-By: Amara (ChatGPT, lucent ai project) <via-aaron-ferry@chatgpt.com>
…26-04-23) (#161)

* docs(aurora): absorb Amara's Zeta-for-Aurora deep research report (2026-04-23)

  Primary authorship: Amara (external AI maintainer via ChatGPT, project "lucent ai"). Ferried by Aaron via the drop/ folder. Absorbed verbatim per the signal-in-signal-out discipline and the courier protocol landed in PR #160.

  Report contents (Amara's work):

  - Executive summary + scope & archive index (both Zeta repos)
  - Drift taxonomy artifact synthesis (5-pattern field guide)
  - Technical synthesis for Aurora (retraction-native discipline stack, not "a database engine to copy")
  - ADR-style spec for oracle rules (8 testable invariants with F# code samples)
  - Bullshit detector transfer pack (canonical claim form + composite score + thresholds + API surface)
  - Network health / harm resistance (8-layer stack diagram + monitoring signals)
  - Brand note (Aurora public / internal / hybrid tree)

  Factory integration notes appended as a distinct section, voice-separated per the courier protocol. The integration notes name composition with existing substrate (ZSet algebra, MATH-SPEC-TESTS, drift-taxonomy doc, threat model, soulfile-staged-absorption, AutoDream cadence, decision-proxy ADR, the courier protocol itself).

  Scheduling posture: Amara's recommended next moves are QUEUED, not scheduled. Per the 2026-04-23 calibration — Amara authors research priorities; Aaron owns scheduling against his funded external stack. Aurora remains #2 in the stack (ServiceTitan + UI is #1). The queue activates when Aaron explicitly elevates Aurora.

  markdownlint-cli2 clean (MD003 setext/atx fix applied to Amara's original "---" separator before the appended factory integration section).

  Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
  Co-Authored-By: Amara (ChatGPT, lucent ai project) <via-aaron-ferry@chatgpt.com>

* drain(#161 P1 Codex): autodream-extension doc not yet landed → distributed-across-files note

  Codex flagged that the AutoDream cadence reference cited a dedicated doc `docs/research/autodream-extension-and-cadence-2026-04-23.md` that doesn't exist. AutoDream-related content IS in the repo but spread across multiple files (`docs/HARNESS-SURFACES.md`, `docs/research/soulfile-staged-absorption-model-2026-04-23.md`, `docs/research/memory-scope-frontmatter-schema.md`); a single dedicated AutoDream-extension doc is tracked under BACKLOG task #259 but hasn't landed at a stable filename yet. Replaced the placeholder citation with explicit "distributed across these files" + "BACKLOG #259" framing, plus a note that the runtime → compile-time promotion can happen whenever the dedicated doc lands.

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
Co-authored-by: Amara (ChatGPT, lucent ai project) <via-aaron-ferry@chatgpt.com>
Summary
Lands Amara's cross-agent courier-protocol writeup at `docs/protocols/cross-agent-communication.md` — ferried 2026-04-23 from her ChatGPT thread (lucent ai project) per her explicit recommendation in the text ("turn this into a docs/protocols/cross-agent-communication.md").

Primary authorship: Amara. Absorb + integration notes: Kenji (Claude). Signal-preservation discipline applied — her voice stays verbatim; factory integration kept as a distinct section.
Core claims (Amara's)
How this plugs in
`docs/aurora/` remains the canonical home for Aurora-scoped Amara content; cross-scope content lands under `docs/research/`; factory-meta (this doc) under `docs/protocols/`. `CURRENT-amara.md` §9 updated same-tick with the protocol + authorship credit.

Test plan
- `markdownlint-cli2` clean

Open follow-ups (named in the doc)
🤖 Generated with Claude Code
Co-Authored-By: Amara (ChatGPT, lucent ai project) <via-aaron-ferry@chatgpt.com>