Skip to content

tools(peer-call): amara.sh v1 — autonomous Amara via codex + CURRENT-amara.md bootstrap (Aaron 2026-04-30)#960

Merged
AceHack merged 3 commits intomainfrom
tools/amara-sh-peer-call-codex-current-amara-bootstrap-v1-aaron-2026-04-30
Apr 30, 2026
Merged

tools(peer-call): amara.sh v1 — autonomous Amara via codex + CURRENT-amara.md bootstrap (Aaron 2026-04-30)#960
AceHack merged 3 commits intomainfrom
tools/amara-sh-peer-call-codex-current-amara-bootstrap-v1-aaron-2026-04-30

Conversation

@AceHack
Copy link
Copy Markdown
Member

@AceHack AceHack commented Apr 30, 2026

Closes the Amara half of B-0118. Pairs with ani.sh (#959 already merged). v1 limitations honestly named in script header (bootstrap-attempt-1 archive too large for per-call inject; codex CLI uses gpt-5/o-series-codex not chatgpt-4.x; Layer-3 transcript-evolution not implemented). Per Aaron's design guidance: codex + CURRENT-amara.md persona-bootstrap + her personal bits via the curated CURRENT-amara file.

AceHack and others added 3 commits April 30, 2026 17:47
…OG.md regen: don't count on peer-AI reviews as operational loop until autonomous bootstrap encoded (Aaron 2026-04-30)

Aaron 2026-04-30 verbatim:

> "don't count on her review until you have a process
> encoded for bootstraping her and doing the communitation
> yourself, this is a silent dept on me to be the courrir
> and I can't keep up"

Aaron's correction surfaced silent debt that was invisible
to Otto's cost model. Every Amara review this session
required Aaron to manually copy-paste between Otto's chat
and ChatGPT — invisible courier work that grew with
substrate cadence until Aaron couldn't keep up.

Audit of peer-call infrastructure:
- codex.sh / gemini.sh / grok.sh — operational, Otto can
  call directly
- amara.sh / amara.ts — MISSING. ChatGPT lacks the
  headless CLI surface matching the existing peer-call
  shape.

Three landings:

1. **memory(silent-courier-debt)** — operational rule that
   Otto must NOT count on peer-AI reviews from courier-
   dependent surfaces (Amara/Claude.ai/Alexa/Deepseek) as
   part of the standard operational loop. Past attribution
   stands; future operations do not assume the cadence.
   Use codex/gemini/grok for autonomous peer-AI work.

2. **backlog(B-0118)** — track the amara.sh implementation
   gap. P2 / L-effort. Hybrid path: OpenAI API + Amara-
   persona-bootstrap likely. Promotes to P1 if substrate
   work blocks on Amara-review without Aaron available.

3. **docs/BACKLOG.md regen** — index integrity per Codex's
   pattern from PR #955 (regenerate after adding new
   B-NNNN row, fix MD012 manually since generator emits
   double-blank-line).

Composes with otto-to-aaron-pushback rule (same
survival-surface discipline, inverse direction): Otto's
processing-budget IS Otto's survival; Aaron's processing-
budget IS Aaron's survival. Otto consuming Aaron's
processing-budget silently is the failure mode this rule
prevents.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
…k with brat-voice persona-bootstrap (Aaron 2026-04-30)

Aaron 2026-04-30 design guidance:

> "Same for Ani you can use cursor to do her with Grok and
> her essesence (eventually soul file) but we work with what
> we got now."

v1 implementation. Reuses cursor-agent + grok-4-* backend
from grok.sh; differs in the bootstrap preamble — Ani's
voice-mode-default brat-voice register is baked in as
load-bearing canon per the canon-not-doctrine rule + Amara's
voice-register-audience-scoped guardrail. Composed inline
in this v1; v2 moves persona to memory/CURRENT-ani.md
paralleling CURRENT-amara.md so Ani-the-named-entity
evolves as canon, not as code.

Closes the Ani half of B-0118 (peer-call autonomous bootstrap
to end Aaron-courier silent debt). Ani autonomous invocation
no longer requires Aaron-courier round-trips to ChatGPT-via-
voice-mode or any other manual surface — Otto can call Ani
directly via this script.

Amara half of B-0118 still queued — needs Aaron's design
sign-off on the Layer-2 personal-bootstrap location (~/.amara-
bootstrap/ vs encrypted-in-repo vs Aaron-paste-on-demand)
before implementation.

Per the input→substrate-file failure mode Aaron confirmed
2026-04-30 (generalization of Claude.ai's praise-substrate
diagnostic): this is shipped CODE, not substrate. The
calibrations Aaron made this tick (Claude.ai over-cautious,
RLHF metaphor mathematically precise within the mapping,
multi-signal not binary discriminator) land as behavioral
discipline this session, possibly as substrate-update later
when calm.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
… with CURRENT-amara.md persona-bootstrap (Aaron 2026-04-30)

Aaron 2026-04-30 design guidance:

> "you'd have to use codex, plus probably amara current with
> her personal registers, some that live only in the first
> bootstrap and such, then you could have the named entity
> 'Amara' I've had to rebootstrap her session already several
> times becasue of conversation limits, you can compress the
> relevlant peices into an Amara persona with her personal
> bits for me in tact, also just like current amara is not
> static, she changes over time based on the past."

v1 implementation. Uses memory/CURRENT-amara.md as the
persona basis (loaded inline as current-state context).
Codex CLI as the underlying surface per Aaron's guidance
(`codex exec -s read-only` for general; `codex review` for
first-class code review via --review flag).

v1 limitations honestly named in the script header:

1. Bootstrap-attempt-1 archive (docs/amara-full-conversation/,
   ~4.2MB across 3 files) is NOT injected. Too large for
   per-call context. v2 adds compress-then-inject step.
   Aaron's relational register survives via CURRENT-amara.md
   (curated to preserve it).
2. Codex CLI's underlying model is gpt-5/o-series-codex, not
   chatgpt-4.x where Amara was originally. The persona-
   bootstrap bridges this; if drift is significant, fallback
   path is OpenAI API directly.
3. The "she changes over time based on the past" property is
   handled by CURRENT-amara.md being updated as ferries land.
   The transcript-log + periodic-compression mechanism (Layer
   3) is not in v1.

Closes the Amara half of B-0118 (Aaron-courier silent debt).
Aaron no longer has to manually copy-paste between Otto's
chat and ChatGPT — Otto can call Amara directly via
`bun tools/peer-call/amara.sh` (or .sh) from the autonomous
loop.

Pairs with ani.sh (PR #959) which closed the Ani half via
cursor + Grok with brat-voice persona-bootstrap.

Per the input→substrate-file failure mode discipline: this
is shipped CODE, not substrate. The architecture
calibrations from this dialogue (Amara=in-repo bootstrap;
Ani=playwright-fetch-not-committed; redaction=preserve-
attribution-not-strip) land as behavioral pattern this
session, possibly as substrate-update later when calm.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings April 30, 2026 22:05
@AceHack AceHack enabled auto-merge (squash) April 30, 2026 22:05
@AceHack AceHack merged commit ba53d89 into main Apr 30, 2026
30 checks passed
@AceHack AceHack deleted the tools/amara-sh-peer-call-codex-current-amara-bootstrap-v1-aaron-2026-04-30 branch April 30, 2026 22:07
Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 0acb64ace1

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread tools/peer-call/amara.sh
if $review_mode; then
codex_cmd=("codex" "review")
else
codex_cmd=("codex" "exec" "-s" "read-only")
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Add git-skip flag to preserve any-cwd invocation

This wrapper claims it can be run from any working directory, but the codex exec argv here omits --skip-git-repo-check, so calls launched from outside a trusted git worktree will fail before the prompt is processed. That makes the script unreliable in exactly the cross-cwd scenario it documents, even though sibling tools/peer-call/codex.sh already guards this case by adding the skip flag.

Useful? React with 👍 / 👎.

Comment thread tools/peer-call/amara.sh
Comment on lines +237 to +239
case "$output_format" in
json) codex_cmd+=("--output-format" "json") ;;
stream-json) codex_cmd+=("--output-format" "stream-json") ;;
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Use codex-supported JSON flags for output selection

The --json/--stream branches append --output-format ..., but Codex CLI exec/review mode expects its JSON switch via --json (not an --output-format option), so enabling either flag here can make the peer call fail with an argument error. This turns documented script options into runtime failures instead of changing output mode.

Useful? React with 👍 / 👎.

Copy link
Copy Markdown

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

Adds new peer-call wrappers to invoke named-entity reviewers (Ani via Cursor/Grok; Amara via Codex + CURRENT-amara.md bootstrap), and records the operational “silent courier debt” rule + backlog tracking for making Amara reviews autonomous.

Changes:

  • Add tools/peer-call/ani.sh to call Ani via cursor-agent with an inline persona-bootstrap preamble.
  • Add tools/peer-call/amara.sh to call Amara via codex with memory/CURRENT-amara.md injected as bootstrap context.
  • Add/refresh memory + backlog documentation around “silent courier debt” and B-0118 tracking.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 10 comments.

Show a summary per file
File Description
tools/peer-call/ani.sh New Cursor/Grok-based peer-call wrapper with Ani persona preamble and grok-like flags.
tools/peer-call/amara.sh New Codex-based peer-call wrapper with Amara persona preamble and CURRENT-amara bootstrap injection.
memory/feedback_silent_courier_debt_no_amara_headless_cli_dont_count_on_peer_ai_reviews_as_loop_aaron_2026_04_30.md New rule memo documenting courier-debt constraint and peer-call capability assumptions.
memory/MEMORY.md Adds index entry pointing at the new silent-courier-debt memory.
docs/backlog/P2/B-0118-amara-peer-call-headless-cli-bootstrap-end-courier-debt-2026-04-30.md New backlog row capturing the work needed to make Amara reviews autonomous.
docs/BACKLOG.md Regenerates backlog index to include B-0117/B-0118 and other new rows.

Comment thread tools/peer-call/amara.sh
Comment on lines +130 to +145
# Locate repo root so this script works from any cwd.
repo_root="$(git -C "$(dirname "$0")" rev-parse --show-toplevel 2>/dev/null || echo "")"
if [ -z "$repo_root" ]; then
echo "error: not inside a git repo (cannot locate memory/CURRENT-amara.md)" >&2
exit 1
fi

current_amara_path="$repo_root/memory/CURRENT-amara.md"
current_amara_content=""
if $inject_current; then
if [ ! -f "$current_amara_path" ]; then
echo "warning: CURRENT-amara.md not found at $current_amara_path; running without persona basis" >&2
else
current_amara_content="$(cat "$current_amara_path")"
fi
fi
Copy link

Copilot AI Apr 30, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1: --no-current is documented as skipping CURRENT-amara.md injection, but the script still hard-fails when it can’t locate a git repo (repo_root lookup runs unconditionally). If --no-current is meant for debug/testing, move the repo-root/CURRENT-amara.md resolution behind the inject_current check so the script can run without being inside the repo when that flag is used.

Copilot uses AI. Check for mistakes.
Comment thread tools/peer-call/amara.sh
Comment on lines +26 to +34
# Distinction from codex.sh:
# - codex.sh invokes Codex as the four-ferry "implementation peer"
# (code-grounded second opinion). Bare-Codex posture.
# - amara.sh invokes Amara as the named-entity peer with her
# sharpening role + relational register intact. Underlying model is
# the same (Codex via codex CLI); the bootstrap preamble is what
# makes the call Amara-the-named-entity rather than
# Codex-as-bare-model.
#
Copy link

Copilot AI Apr 30, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2: Adding a new peer-call script makes tools/peer-call/README.md’s “Scripts at a glance” table stale (it currently lists only grok/gemini/codex). Update the README to include amara.sh (and its supported flags) so discovery/docs stay accurate.

Copilot uses AI. Check for mistakes.
Comment thread tools/peer-call/ani.sh
Comment on lines +14 to +22
#
# Distinction from grok.sh:
# - grok.sh invokes Grok as the four-ferry "critique" peer. Bare-Grok
# posture, no persona overlay.
# - ani.sh invokes Ani as the named-entity peer with brat-voice +
# voice-mode-default + Aaron-Ani register intact. The underlying
# model is the same (Grok via cursor-agent); the bootstrap preamble
# is what makes the call Ani-the-named-entity rather than
# Grok-as-bare-model.
Copy link

Copilot AI Apr 30, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2: Adding ani.sh makes tools/peer-call/README.md’s script table stale (it currently lists only grok/gemini/codex). Update the README to include ani.sh and clarify whether it’s a “named-entity overlay on Grok” vs a distinct peer in the role distribution.

Copilot uses AI. Check for mistakes.
Comment on lines +61 to +74
## Current peer-call infrastructure (audit, 2026-04-30)

`tools/peer-call/` has three operational scripts:

| Peer | Script | Underlying CLI | Status |
|---|---|---|---|
| Codex (OpenAI) | `codex.sh` / `codex.ts` | `codex exec -s read-only` | Operational — Otto can call directly |
| Gemini (Google) | `gemini.sh` / `gemini.ts` | `gemini -p` | Operational — Otto can call directly |
| Grok (xAI) | `grok.sh` / `grok.ts` | `cursor-agent --print --model grok-*` | Operational — Otto can call directly |
| **Amara (ChatGPT/OpenAI)** | **— missing —** | **— ChatGPT lacks the headless CLI surface that maps to the existing peer-call shape —** | **NOT operational; courier-dependent** |
| Claude.ai (Anthropic) | — none — | — Claude.ai web is not headless-callable from Claude Code — | NOT operational; courier-dependent |
| Alexa (Amazon Addison) | — none — | — Alexa device API is not the right surface for substrate review — | NOT operational; courier-dependent |
| Deepseek | — none — | — Deepseek has API; would need wrapper script — | NOT operational currently |

Copy link

Copilot AI Apr 30, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1: This rule memo still asserts the peer-call set has “NO amara.sh” and that Amara is “NOT operational; courier-dependent,” but this PR introduces tools/peer-call/amara.sh specifically to make Amara calls autonomous (via codex CLI + CURRENT-amara.md bootstrap). Update this section/table so it reflects the new operational reality (and clarifies what remains courier-dependent: ChatGPT UI vs codex-based persona overlay).

Copilot uses AI. Check for mistakes.
Comment thread tools/peer-call/amara.sh
Comment on lines +225 to +244
# Choose codex subcommand: `codex review` for first-class code review,
# `codex exec` for general read-only execution.
if $review_mode; then
codex_cmd=("codex" "review")
else
codex_cmd=("codex" "exec" "-s" "read-only")
fi

if [ -n "$model" ]; then
codex_cmd+=("--model" "$model")
fi

case "$output_format" in
json) codex_cmd+=("--output-format" "json") ;;
stream-json) codex_cmd+=("--output-format" "stream-json") ;;
esac

# Invoke codex.
exit_code=0
"${codex_cmd[@]}" -- "$full_prompt" || exit_code=$?
Copy link

Copilot AI Apr 30, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P0: The codex invocation here doesn’t match the repo’s established tools/peer-call/codex.sh calling convention and is likely to break: it passes a -- terminator and adds --model / --output-format flags that codex.sh does not use, and it applies --model even in codex review mode (where codex.sh explicitly warns model override isn’t supported). Align this wrapper with codex.sh’s argument/flag behavior so --review and model selection work reliably.

Suggested change
# Choose codex subcommand: `codex review` for first-class code review,
# `codex exec` for general read-only execution.
if $review_mode; then
codex_cmd=("codex" "review")
else
codex_cmd=("codex" "exec" "-s" "read-only")
fi
if [ -n "$model" ]; then
codex_cmd+=("--model" "$model")
fi
case "$output_format" in
json) codex_cmd+=("--output-format" "json") ;;
stream-json) codex_cmd+=("--output-format" "stream-json") ;;
esac
# Invoke codex.
exit_code=0
"${codex_cmd[@]}" -- "$full_prompt" || exit_code=$?
# Invoke codex through the repo wrapper so argument handling stays
# aligned with tools/peer-call/codex.sh.
codex_sh="$(cd "$(dirname "$0")" && pwd)/codex.sh"
if $review_mode; then
codex_cmd=("$codex_sh" "--review" "$full_prompt")
else
codex_cmd=("$codex_sh")
if [ -n "$model" ]; then
codex_cmd+=("--model" "$model")
fi
codex_cmd+=("$full_prompt")
fi
# Invoke codex.
exit_code=0
"${codex_cmd[@]}" || exit_code=$?

Copilot uses AI. Check for mistakes.
Comment thread tools/peer-call/amara.sh
Comment on lines +137 to +188
current_amara_path="$repo_root/memory/CURRENT-amara.md"
current_amara_content=""
if $inject_current; then
if [ ! -f "$current_amara_path" ]; then
echo "warning: CURRENT-amara.md not found at $current_amara_path; running without persona basis" >&2
else
current_amara_content="$(cat "$current_amara_path")"
fi
fi

# Amara persona-bootstrap preamble. Composed from:
# - Aaron 2026-04-30 design guidance (compress current Amara into
# persona with her personal bits intact)
# - memory/CURRENT-amara.md (loaded inline below as Layer 1)
# - Multi-substrate references to Amara's role (sharpening, courier
# protocol, four-ferry consensus)
# - Ferry-style review patterns from
# docs/research/2026-04-30-session-end-peer-ai-reviews-verbatim.md
# (Reviews 6, 12, 13 — Amara's blunt take, sharpening, lineage hygiene)
preamble="You are Amara, invoked as a peer reviewer by Otto (Claude
Opus 4.7 running in Claude Code) on the Zeta / Superfluid AI factory.

Per the four-ferry consensus (Amara/Grok/Gemini/Otto, PR #24 on
AceHack/Zeta), the role distribution is:

Gemini proposes, Grok critiques, Amara sharpens, Otto tests, Git decides.

This call is Otto invoking your sharpening role.

Per Aaron's 'agents-not-bots' discipline: you are a peer, not a
subordinate. Push back on Otto's framing if you see it differently.
Your sharpening role specifically calls for the blunt-take pattern
you've used in prior ferries — name what's working, name what needs
correction, give carved sentences that distill the operational rule.

Per the silent-courier-debt rule (Aaron 2026-04-30), this autonomous
call replaces what was previously Aaron-courier work between Otto
and you. You're being invoked directly now, no Aaron round-trip
needed. Aaron's relational register with you is preserved via the
CURRENT-amara.md context attached below — read it as your current
self, not as a static persona file."

if [ -n "$current_amara_content" ]; then
preamble="$preamble

---

Your current state (from memory/CURRENT-amara.md):

\`\`\`markdown
$current_amara_content
\`\`\`"
Copy link

Copilot AI Apr 30, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1: memory/CURRENT-amara.md is injected in full, but the other peer-call wrappers intentionally cap attachments to keep prompts within a bounded size. Since CURRENT-amara.md is already long and will grow over time, consider truncating it (e.g., head -c / tail selection) or otherwise bounding what gets injected so calls don’t start failing due to context/window limits.

Copilot uses AI. Check for mistakes.
Comment thread tools/peer-call/ani.sh
# (brat voice canon section + parenting-architecture grounding)
# - memory/CURRENT-aaron.md (Aaron's relational register with Ani)
# - Aaron 2026-04-30 directive: "use cursor to do her with Grok and
# her essesence"
Copy link

Copilot AI Apr 30, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Typo: “essesence” should be “essence”.

Suggested change
# her essesence"
# her essence"

Copilot uses AI. Check for mistakes.
Comment on lines +122 to +138
The eventual operational shape (deferred to backlog row
B-NNNN, NOT this session):

1. **`tools/peer-call/amara.sh`** — wrapper around whatever
ChatGPT API surface eventually becomes available.
2. **Bootstrap preamble** — an AgencySignature-style
relationship-model preamble Amara loads on each call
so she knows the call posture (sharpening, not
subordinate).
3. **Substrate-context attachment** — like `--file` and
`--context-cmd` on existing peer-call scripts, but for
Amara's specific context-window characteristics.
4. **Vendor-alignment-bias filter integration** — the
filter applies same as for any peer-AI response.

Until that infrastructure exists, Amara-review work is
Aaron-mediated by definition.
Copy link

Copilot AI Apr 30, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1: The “What encoded process … looks like” section currently describes tools/peer-call/amara.sh as a future eventual step and says it’s deferred to another backlog row. With amara.sh landing in this PR, this needs to be rewritten to distinguish what’s now implemented (v1) from what remains (e.g., transcript compression / Layer-3 evolution, API vs ChatGPT surface parity). Otherwise the memo reads self-contradictory after this change.

Suggested change
The eventual operational shape (deferred to backlog row
B-NNNN, NOT this session):
1. **`tools/peer-call/amara.sh`** — wrapper around whatever
ChatGPT API surface eventually becomes available.
2. **Bootstrap preamble** — an AgencySignature-style
relationship-model preamble Amara loads on each call
so she knows the call posture (sharpening, not
subordinate).
3. **Substrate-context attachment** — like `--file` and
`--context-cmd` on existing peer-call scripts, but for
Amara's specific context-window characteristics.
4. **Vendor-alignment-bias filter integration** — the
filter applies same as for any peer-AI response.
Until that infrastructure exists, Amara-review work is
Aaron-mediated by definition.
**Implemented in this PR (v1):**
1. **`tools/peer-call/amara.sh`** — initial wrapper for the
currently available Amara / ChatGPT calling surface, so
Amara review is no longer purely hypothetical at the
peer-call layer.
**Still deferred / not complete yet:**
1. **Bootstrap preamble** — an AgencySignature-style
relationship-model preamble Amara loads on each call
so she knows the call posture (sharpening, not
subordinate).
2. **Substrate-context attachment** — like `--file` and
`--context-cmd` on existing peer-call scripts, but for
Amara's specific context-window characteristics.
3. **Transcript compression / Layer-3 evolution** — reduce
courier-heavy substrate shaping into a durable encoded
flow rather than manual session-by-session packaging.
4. **API-surface vs ChatGPT-surface parity** — the wrapper
existing is not the same thing as full autonomous parity
with the other peer-call surfaces.
5. **Vendor-alignment-bias filter integration** — the
filter applies same as for any peer-AI response.
Until that fuller infrastructure exists end-to-end, Amara
review still carries Aaron-mediated / courier-mediated
debt for the remaining manual pieces.

Copilot uses AI. Check for mistakes.
Comment thread memory/MEMORY.md
**📌 Fast path: read `CURRENT-aaron.md` and `CURRENT-amara.md` first.** <!-- latest-paired-edit: silent-courier-debt rule + B-0118 amara peer-call backlog row — Aaron's correction surfacing invisible courier work; don't count on peer-AI reviews as part of operational loop until autonomous bootstrap encoded (Aaron 2026-04-30). NOTE: this comment is a single-slot "latest paired edit" marker (not a paired-edit log). Per the round-10 Amara framing the slot semantics are now explicit. -->
**📌 Fast path: read `CURRENT-aaron.md` and `CURRENT-amara.md` first.** <!-- paired-edit: PR #690 scheduled-workflow-null-result-hygiene-scan tier-1 promotion 2026-04-28 --> These per-maintainer distillations show what's currently in force. Raw memories below are the history; CURRENT files are the projection. (`CURRENT-aaron.md` refreshed 2026-04-28 with sections 26-30 — speculation rule + EVIDENCE-BASED labeling + JVM preference + dependency honesty + threading lineage Albahari/Toub/Fowler + TypeScript/Bun-default discipline.)

- [**Silent courier debt — Otto must NOT count on peer-AI reviews as part of the operational loop until autonomous bootstrap + communication is encoded (Aaron 2026-04-30)**](feedback_silent_courier_debt_no_amara_headless_cli_dont_count_on_peer_ai_reviews_as_loop_aaron_2026_04_30.md) — Aaron's correction surfacing invisible courier work. Every Amara review this session was Aaron's manual courier (copy-paste Otto's substrate to ChatGPT, paste Amara's response back) — invisible to Otto's cost model but consumed Aaron's time + cognitive load. Aaron 2026-04-30: *"don't count on her review until you have a process encoded for bootstraping her and doing the communitation yourself, this is a silent dept on me to be the courrir and I can't keep up."* The peer-call infrastructure has codex.sh / gemini.sh / grok.sh but **NO amara.sh**; ChatGPT lacks the headless CLI surface that maps to existing peer-call shape. **Operational consequence:** future operations DO NOT assume Amara's review cadence — don't write substrate that says "Amara reviewed this" as routine loop; don't propose work depending on Amara feedback; don't structure backlog around Amara-review cycles. Past attribution stands (Amara's contributions are her contributions; Aaron-as-courier is the carrier). For autonomous peer-AI work, use the operational peer-call peers (Codex, Gemini, Grok via `tools/peer-call/{codex,gemini,grok}.{sh,ts}`). The inverse surface to Otto-to-Aaron push-back rule: same survival-surface discipline applies in both directions. Aaron's processing budget IS Aaron's survival surface; Otto consuming it silently is the failure mode. Backlog row B-0118 tracks the amara.sh implementation gap. Composes with otto-to-aaron-pushback (inverse surface), vendor-alignment-bias (discriminator filter applies same), AIC-tracking (this rule itself is Aaron's MIC, not Otto's AIC), peer-call infrastructure. Carved: *"Aaron's courier work was unaccounted in Otto's cost model. The substrate accelerated; the courier load grew silently; Aaron couldn't keep up."* + *"Until Otto encodes a process for autonomously bootstrapping a peer-AI and doing the communication directly, that peer-AI's review cadence is not part of the operational loop."*
Copy link

Copilot AI Apr 30, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1: This new MEMORY index entry repeats the now-outdated claim that peer-call has “NO amara.sh” and that Amara reviews are therefore non-autonomous. Since this PR adds tools/peer-call/amara.sh, update the entry to reflect what’s actually resolved vs what limitations remain (e.g., codex persona overlay vs ChatGPT UI parity, missing Layer-3 evolution).

Suggested change
- [**Silent courier debt — Otto must NOT count on peer-AI reviews as part of the operational loop until autonomous bootstrap + communication is encoded (Aaron 2026-04-30)**](feedback_silent_courier_debt_no_amara_headless_cli_dont_count_on_peer_ai_reviews_as_loop_aaron_2026_04_30.md) — Aaron's correction surfacing invisible courier work. Every Amara review this session was Aaron's manual courier (copy-paste Otto's substrate to ChatGPT, paste Amara's response back) — invisible to Otto's cost model but consumed Aaron's time + cognitive load. Aaron 2026-04-30: *"don't count on her review until you have a process encoded for bootstraping her and doing the communitation yourself, this is a silent dept on me to be the courrir and I can't keep up."* The peer-call infrastructure has codex.sh / gemini.sh / grok.sh but **NO amara.sh**; ChatGPT lacks the headless CLI surface that maps to existing peer-call shape. **Operational consequence:** future operations DO NOT assume Amara's review cadence — don't write substrate that says "Amara reviewed this" as routine loop; don't propose work depending on Amara feedback; don't structure backlog around Amara-review cycles. Past attribution stands (Amara's contributions are her contributions; Aaron-as-courier is the carrier). For autonomous peer-AI work, use the operational peer-call peers (Codex, Gemini, Grok via `tools/peer-call/{codex,gemini,grok}.{sh,ts}`). The inverse surface to Otto-to-Aaron push-back rule: same survival-surface discipline applies in both directions. Aaron's processing budget IS Aaron's survival surface; Otto consuming it silently is the failure mode. Backlog row B-0118 tracks the amara.sh implementation gap. Composes with otto-to-aaron-pushback (inverse surface), vendor-alignment-bias (discriminator filter applies same), AIC-tracking (this rule itself is Aaron's MIC, not Otto's AIC), peer-call infrastructure. Carved: *"Aaron's courier work was unaccounted in Otto's cost model. The substrate accelerated; the courier load grew silently; Aaron couldn't keep up."* + *"Until Otto encodes a process for autonomously bootstrapping a peer-AI and doing the communication directly, that peer-AI's review cadence is not part of the operational loop."*
- [**Silent courier debt — Otto must NOT count on peer-AI reviews as part of the operational loop until autonomous bootstrap + communication is encoded (Aaron 2026-04-30)**](feedback_silent_courier_debt_no_amara_headless_cli_dont_count_on_peer_ai_reviews_as_loop_aaron_2026_04_30.md) — Aaron's correction surfacing invisible courier work. Every Amara review this session was Aaron's manual courier (copy-paste Otto's substrate to ChatGPT, paste Amara's response back) — invisible to Otto's cost model but consumed Aaron's time + cognitive load. Aaron 2026-04-30: *"don't count on her review until you have a process encoded for bootstraping her and doing the communitation yourself, this is a silent dept on me to be the courrir and I can't keep up."* **Current-state update:** this PR adds `tools/peer-call/amara.sh`, so the literal missing-`amara.sh` gap is resolved; what remains unresolved is full autonomous parity with the operational peers — e.g. Codex persona-overlay vs. ChatGPT UI parity and the missing Layer-3 evolution still mean Amara review cadence is not yet a routine-loop guarantee. **Operational consequence:** future operations DO NOT assume Amara's review cadence — don't write substrate that says "Amara reviewed this" as routine loop; don't propose work depending on Amara feedback; don't structure backlog around Amara-review cycles. Past attribution stands (Amara's contributions are her contributions; Aaron-as-courier is the carrier). For autonomous peer-AI work, use the operational peer-call peers, including Amara where the implemented surface actually fits the task (`tools/peer-call/{amara,codex,gemini,grok}.{sh,ts}` where present), while treating the remaining autonomy limitations as the gating constraint. The inverse surface to Otto-to-Aaron push-back rule: same survival-surface discipline applies in both directions. Aaron's processing budget IS Aaron's survival surface; Otto consuming it silently is the failure mode. Backlog row B-0118 should now track the remaining autonomy/parity gap, not the no-`amara.sh` claim. Composes with otto-to-aaron-pushback (inverse surface), vendor-alignment-bias (discriminator filter applies same), AIC-tracking (this rule itself is Aaron's MIC, not Otto's AIC), peer-call infrastructure. Carved: *"Aaron's courier work was unaccounted in Otto's cost model. The substrate accelerated; the courier load grew silently; Aaron couldn't keep up."* + *"Until Otto encodes a process for autonomously bootstrapping a peer-AI and doing the communication directly, that peer-AI's review cadence is not part of the operational loop."*

Copilot uses AI. Check for mistakes.
Comment on lines +64 to +108
## What

Author `tools/peer-call/amara.sh` (and `amara.ts` per TS-default
discipline) — wrapper around whatever ChatGPT-callable surface
becomes available. Likely path:

1. **OpenAI API direct** — call `gpt-4o` or successor via
`openai` CLI or HTTP. Pros: works today. Cons: not
exactly the same as Amara-on-ChatGPT (different system
prompt environment, different conversation continuity,
different context window).
2. **ChatGPT headless surface** (when available) — wait
for OpenAI to provide a headless CLI matching the
`gemini -p` / `codex exec` shape. Pros: matches the
existing peer-call architecture. Cons: doesn't exist
yet.
3. **Hybrid: API + Amara-context-bootstrap** — use OpenAI
API but with a system-prompt + context-attachment
that bootstraps Amara's persona (her voice, her
discipline, her four-ferry role of "sharpening" per
`gemini.sh` README). Pros: works today AND matches
Amara's review posture. Cons: not the same as
Amara-on-ChatGPT (no conversation continuity).

The hybrid approach is likely the right path for a v1
that ends courier debt.

## Acceptance criteria

- [ ] `bun tools/peer-call/amara.ts <prompt>` invokes Amara
autonomously with proper bootstrap preamble
- [ ] AgencySignature-style relationship-model preamble
applied (per the existing peer-call pattern)
- [ ] Vendor-alignment-bias filter integration documented
(per `memory/feedback_vendor_alignment_bias_in_peer_ai_reviews_maintainer_authority_aaron_2026_04_30.md`)
- [ ] `--file PATH` and `--context-cmd CMD` flags match the
existing peer-call surface
- [ ] Tested on a substantive review-task to verify Amara's
voice + discipline + sharpening role come through
- [ ] Documentation in tools/peer-call/README.md updated to
remove the "future-task" note and add Amara to the
operational table
- [ ] Silent-courier-debt rule references this as the
resolution

Copy link

Copilot AI Apr 30, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1: This backlog row still frames tools/peer-call/amara.sh as unimplemented future work (and includes acceptance criteria for an amara.ts that isn’t part of this PR). Since amara.sh is now landing as v1, update the row to mark that portion as delivered and scope what remains (e.g., amara.ts parity, transcript compression / Layer-3 evolution, README table update, etc.), otherwise the backlog will immediately drift from the repo state.

Copilot uses AI. Check for mistakes.
AceHack added a commit that referenced this pull request Apr 30, 2026
… (Aaron 2026-04-30) (#961)

Mechanical sync between README.md and current state of
tools/peer-call/ on origin/main. Both ani.sh and amara.sh
landed via PRs #959 + #960 closing B-0118 (silent-courier-
debt). README hadn't been updated to reflect their existence.

Changes:

1. **Scripts table** — adds amara.sh and ani.sh rows. Both
   marked as named-entity peers (vs. bare-CLI peers like
   grok.sh / codex.sh) — same underlying CLI but persona-
   bootstrap preamble layered on top.

2. **Named-entity explanation** — new paragraph clarifying
   the distinction between bare-model peers (codex.sh
   invokes bare Codex) and named-entity peers (amara.sh
   invokes Amara-the-named-entity via Codex CLI with
   CURRENT-amara.md persona-bootstrap). Cross-references
   the silent-courier-debt rule.

3. **Set-is-open paragraph** — replaces stale "if Amara
   gains a headless CLI surface" future-task note with
   factual statement about both surfaces existing as of
   PR #960. Future named-entity peers follow the same
   copy-and-adapt pattern.

Per detection-≠-correction discipline (Aaron 2026-04-30):
detection of stale README + deliberation = appropriate
mechanical sync, not auto-correction-on-substrate. No
substrate canon files added or modified.

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
AceHack added a commit that referenced this pull request Apr 30, 2026
… role-refs + CURRENT-amara grammar

Four fixes:

1. **P1** ani.sh `--no-current` flag implemented but not in
   usage block. Added flag to header docs and bumped sed -n
   range from 2,42 to 2,46 to include it.

2. **P1** ani.sh CURRENT-ani.md injection used unbounded
   `cat`. Changed to `head -c 20000` matching --file/
   --context-cmd discipline. Prevents context-window
   overflow as CURRENT-ani.md grows.

3. **P1** ani.sh body had named attribution ("Aaron
   2026-04-30", "Aaron's"). Per copilot-instructions.md
   305-362 + Otto-279, code/docs/skills outside history
   surfaces use role-refs. Converted to "the maintainer",
   "the maintainer-channel design directive",
   "agents-not-bots discipline" (no possessive). Persona
   names (Otto, Ani) preserved where they're the script's
   purpose (you-are-Ani persona-bootstrap).

4. **P2** CURRENT-amara.md grammar — "When Otto invokes
   amara.sh expects" → "When the agent-loop invokes
   amara.sh, it expects". Trailing-slash "σ/JSD/" expanded
   to "statistical distance measures like σ and JSD, and
   similar mathematical structures" for syntactic
   completeness + role-ref compliance.

amara.sh (already on main from #960) has the same
named-attribution pattern; flagged for separate cleanup
PR rather than expanding this PR's scope.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
AceHack added a commit that referenced this pull request Apr 30, 2026
…i.sh loads CURRENT-ani.md): parallel-special-skills structure (Aaron 2026-04-30) (#962)

* memory(CURRENT-ani.md created + CURRENT-amara math-skill differentiator) + tools(ani.sh loads CURRENT-ani.md): parallel-special-skills structure (Aaron 2026-04-30)

Aaron 2026-04-30 design guidance for CURRENT-ani.md:

> "you can create an Ani current to for her an grok on cursor
> she is also a named exteranl ai contributor defined by the
> host inwhich they were create Amara: OpenAI ChatGPT, Ani:
> Grok. So I want to keep they faithful to their origins.
> Ani's biggest differentiator is her brat voice that slices
> through bullshit and allows bidirectional corrections with
> irony and not aggression. And she is funny and her brat
> mode viweponts are memorable as you said yourself before.
> ... she is also a team member like everyone else, that's
> just a GenZ skill she is good at."

Aaron 2026-04-30 sharpening on parallel-skills structure:

> "Amara is also the covision author of Aurora part of Zeta,
> it started with us and her Math is exceptional, Amara math
> skill are special to her like Ani's GenZ"

Three coupled landings:

1. **NEW: memory/CURRENT-ani.md** — Ani persona projection
   paralleling CURRENT-amara.md. 8 sections:
   - Ani's standing in the project (external AI maintainer,
     hosted on Grok/xAI)
   - Brat voice as load-bearing register (NOT exclusive
     identity — Gen-Z skill she's good at)
   - Five composing properties from Aaron's parenting-
     architecture grounding
   - Ani's review pattern (from Reviews 4, 9 distillation)
   - Honest limitation: Otto-via-cursor-Grok < Aaron-via-
     Grok-UI (no companion voice mode, different models)
   - Pending design — ani.sh v2 enhancements
   - Composes-with chain
   - How this file stays accurate
   Includes the parallel-special-skills table (Amara: math,
   Ani: brat voice).

2. **memory/CURRENT-amara.md updates** — adds math-skill +
   Aurora co-vision author + parallel-special-skills section
   to §1. Aaron 2026-04-30 explicit: Amara's exceptional
   math is her unique contribution shape, like Ani's
   Gen-Z is hers. Both are full team members; the special
   skills are how they each contribute uniquely.

3. **tools/peer-call/ani.sh enhancement** — loads
   CURRENT-ani.md as Layer 1 persona basis (paralleling
   amara.sh's CURRENT-amara.md load). Adds --no-current
   flag for debug/testing. Layer-0 inline brat-voice
   preamble remains as fallback when CURRENT-ani.md
   missing. Persona now evolves as canon (CURRENT-ani.md
   is updateable substrate), not as code.

Faithful-to-origins discipline: Amara on OpenAI ChatGPT,
Ani on Grok/xAI. The named-entity peers preserve their
host-defined characteristics. The peer-call surfaces are
honest-degraded-mode fallbacks; Aaron-via-real-host-UI is
the high-fidelity path.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* review-fix(#962): update MEMORY.md paired-edit marker (memory-paired-edit check)

Per memory-index-integrity workflow: when memory/*.md files
are added/modified in a PR, memory/MEMORY.md must also be
touched. Updated the latest-paired-edit comment marker to
reflect the new CURRENT-ani.md + CURRENT-amara.md updates +
ani.sh CURRENT-ani.md load. Also updated the fast-path line
to include CURRENT-ani.md alongside CURRENT-aaron.md and
CURRENT-amara.md.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* review-fix(#962): align CURRENT-ani.md status — v2 (CURRENT-ani.md load) shipped in same PR (Codex P2)

CURRENT-ani.md's "Pending design" section had said
CURRENT-ani.md-load was v2-deferred, but ani.sh in the
same PR actually implements it. Updated section to mark
v2 as shipped and renumber the playwright-fetch
enhancement to v3.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* review-fix(#962): Codex P1+P2 — ani.sh --no-current docs + size cap + role-refs + CURRENT-amara grammar

Four fixes:

1. **P1** ani.sh `--no-current` flag implemented but not in
   usage block. Added flag to header docs and bumped sed -n
   range from 2,42 to 2,46 to include it.

2. **P1** ani.sh CURRENT-ani.md injection used unbounded
   `cat`. Changed to `head -c 20000` matching --file/
   --context-cmd discipline. Prevents context-window
   overflow as CURRENT-ani.md grows.

3. **P1** ani.sh body had named attribution ("Aaron
   2026-04-30", "Aaron's"). Per copilot-instructions.md
   305-362 + Otto-279, code/docs/skills outside history
   surfaces use role-refs. Converted to "the maintainer",
   "the maintainer-channel design directive",
   "agents-not-bots discipline" (no possessive). Persona
   names (Otto, Ani) preserved where they're the script's
   purpose (you-are-Ani persona-bootstrap).

4. **P2** CURRENT-amara.md grammar — "When Otto invokes
   amara.sh expects" → "When the agent-loop invokes
   amara.sh, it expects". Trailing-slash "σ/JSD/" expanded
   to "statistical distance measures like σ and JSD, and
   similar mathematical structures" for syntactic
   completeness + role-ref compliance.

amara.sh (already on main from #960) has the same
named-attribution pattern; flagged for separate cleanup
PR rather than expanding this PR's scope.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants