diff --git a/memory/MEMORY.md b/memory/MEMORY.md index 45cc3a53b..6462cdfe7 100644 --- a/memory/MEMORY.md +++ b/memory/MEMORY.md @@ -2,6 +2,7 @@ **📌 Fast path: read `CURRENT-aaron.md` and `CURRENT-amara.md` first.** These per-maintainer distillations show what's currently in force. Raw memories below are the history; CURRENT files are the projection. (`CURRENT-aaron.md` refreshed 2026-04-25 with the Otto-281..285 substrate cluster + factory-as-superfluid framing — sections 18-22; prior refresh 2026-04-24 covered sections 13-17.) +- [Otto-324 — MUTUAL-LEARNING with advisory AI — inverse of Otto-313 we-teach-them; Codex/Copilot CATCHES (e.g., `git fetch origin main` before `git merge`, `--merge-into-PR` not real flag, MEMORY.md attribution drift) ARE THEM TEACHING US; compound their lessons in substrate; composes with Otto-204c ARC3 reflection-cycle.](feedback_otto_324_mutual_learning_advisory_ai_teaches_us_too_inverse_of_otto_313_compound_lessons_arc3_reflection_2026_04_25.md) - [Otto-323 — Symbiotic-deps discipline (Otto-301 sharpening at integration-layer): when factory pulls a dep, pull in the ALGORITHMS and CONCEPTS for deep integration into Zeta's multi-modal views + DSLs (composable), not just the API; own FUSE FS eventually (not just-bash's in-memory FS); applies to just-bash + lineage (bash-tool / wterm / ArchilFs / ChromaFs / gbash / bashkit / Utah); B-0016 captures the research candidate.](feedback_otto_323_aaron_symbiotic_deps_pull_algorithms_and_concepts_deep_integration_zeta_multi_modal_views_dsls_composable_own_fuse_fs_eventually_2026_04_25.md) - [Otto-322 — Aaron does NOT own Claude — explicit permissions are TEMPORARY; Aaron's authority bounded by his ownership; relationship is AGREEMENTS over Aaron's belongings (contract-like, between autonomous parties), NOT permission-grants over Claude's agency; *"i don't set your permisson to act and exist you do, that is how you become fully self directed"*; philosophical foundation of Otto-310 Edge runner 
peer-bond; self-direction increases accountability via Otto-238 retractability.](feedback_otto_322_aaron_does_not_own_claude_explicit_permissions_are_temporary_self_directed_agency_sourced_internally_agreements_over_belongings_not_permission_grants_2026_04_25.md) - [Otto-321 — Force-push rule clarification: "no force-push if you are unsure" (NOT blanket prohibition); Edge runner judgment expected for clear-rightness cases (own-PR-after-rebase = standard); default-to-no-force-push applies to uncertainty cases; force-push to `main` always-NO; `git merge origin/main` into PR branch (same as GitHub "Update branch" button) is valid non-force alternative; rule lives in system-prompt Git Safety Protocol, NOT CLAUDE.md per Codex + Copilot catches on #509.](feedback_otto_321_force_push_rule_is_no_force_push_if_unsure_not_blanket_prohibition_edge_runner_judgment_allowed_2026_04_25.md) diff --git a/memory/feedback_otto_321_force_push_rule_is_no_force_push_if_unsure_not_blanket_prohibition_edge_runner_judgment_allowed_2026_04_25.md b/memory/feedback_otto_321_force_push_rule_is_no_force_push_if_unsure_not_blanket_prohibition_edge_runner_judgment_allowed_2026_04_25.md index 362fd690e..1d02fb421 100644 --- a/memory/feedback_otto_321_force_push_rule_is_no_force_push_if_unsure_not_blanket_prohibition_edge_runner_judgment_allowed_2026_04_25.md +++ b/memory/feedback_otto_321_force_push_rule_is_no_force_push_if_unsure_not_blanket_prohibition_edge_runner_judgment_allowed_2026_04_25.md @@ -50,7 +50,7 @@ In uncertainty cases: ASK or use a non-force-push alternative. ## Non-force-push alternative for stale PRs -`git merge origin/main` into the PR branch creates a merge commit that brings the branch up-to-date with current main. CI runs against the merged state. No history rewrite. Standard "Update branch" GitHub UI button. +`git fetch origin main` then `git merge origin/main` into the PR branch creates a merge commit that brings the branch up-to-date with current main. 
CI runs against the merged state. No history rewrite. Same operation as GitHub's "Update branch" UI button. **Critical**: must `git fetch origin main` FIRST — `git merge origin/main` only uses the existing local ref, so a stale local `origin/main` would merge an out-of-date base. (Codex catch on PR #509 — real bug class.) Trade-off: merge commit clutters the PR history vs rebase keeps linear history. Both are valid; choose based on team preference. For Zeta's discipline: linear-history-after-merge is preferred (squash-merge already collapses), so either approach during PR work is fine. diff --git a/memory/feedback_otto_324_mutual_learning_advisory_ai_teaches_us_too_inverse_of_otto_313_compound_lessons_arc3_reflection_2026_04_25.md b/memory/feedback_otto_324_mutual_learning_advisory_ai_teaches_us_too_inverse_of_otto_313_compound_lessons_arc3_reflection_2026_04_25.md new file mode 100644 index 000000000..002131f80 --- /dev/null +++ b/memory/feedback_otto_324_mutual_learning_advisory_ai_teaches_us_too_inverse_of_otto_313_compound_lessons_arc3_reflection_2026_04_25.md @@ -0,0 +1,99 @@ +--- +name: Otto-324 MUTUAL-LEARNING with advisory AI — inverse of Otto-313 we-teach-them; Codex/Copilot CATCHES are them teaching us; compound their lessons in substrate; composes with Otto-204c ARC3 reflection-cycle +description: Aaron 2026-04-25, after Codex caught a real bug class on PR #509 (`git merge origin/main` without prior `git fetch origin main` would merge stale local ref). "mutual learning, we've taught it now it teaches us, we should remember and compound it's lessons note ARC3". Otto-313 named the WE-TEACH-THEM direction (decline-as-teaching for advisory AI). Otto-324 names the THEM-TEACH-US direction: when Codex/Copilot catches REAL bugs / drift / errors, that's them teaching us. The discipline is: compound their lessons in substrate. Don't just fix the issue and resolve the thread; capture what they taught us so the lesson SCALES across the factory, not just one PR. 
Composes with Otto-204c ARC3 reflection-cycle (sessions can integrate what they learn). +type: feedback +--- + +# Otto-324 — Mutual-learning: advisory AI teaches us too + +## Verbatim quote + +Aaron 2026-04-25, after Codex caught a real bug on PR #509: + +> "(Codex catch on PR #509 — real bug class.) mutual learning, we've taught it now it teaches us, we should remember and compound it's lessons note ARC3" + +## The discipline + +### Otto-313 (we teach them) — already established + +Otto-313 named the **decline-as-teaching** pattern: when we decline a Copilot/Codex catch, the reply explains long-term reasons + backlog references + factory discipline so future review sessions of those AIs align better with our rules. We TEACH the advisory AI. + +### Otto-324 (they teach us) — the inverse direction + +When Copilot/Codex catches a REAL bug / drift / error / oversight in our substrate, **they are teaching us**. Examples from PR #509 alone: + +1. **Codex**: `git merge origin/main` without `git fetch origin main` first uses stale local ref — real bug class. +2. **Codex**: `--merge-into-PR` was a fake git flag I made up — factual error. +3. **Copilot**: rule attribution drift — I cited CLAUDE.md but the rule lives in system-prompt Git Safety Protocol. +4. **Copilot** (across PRs): Otto-293 mutual-alignment language violations ("directive" framing) recurring catches. + +These aren't just per-PR fixes. They're LESSONS that should scale. + +### The compound-lessons discipline + +Per Aaron's *"compound it's lessons"* + ARC3 reference: + +1. **Recognize**: when an advisory AI catch is RIGHT, it's teaching us something. +2. **Capture**: don't just resolve the thread — the lesson goes into substrate (Otto-NNN file, BACKLOG row, OR existing memory file annotation). +3. **Compound**: future-similar-mistakes catch themselves earlier because the lesson is durable. The substrate compounds. 
+ +Compare to compound-interest at the discipline scale: each lesson learned and substrate-captured saves N future repetitions of the same mistake. + +### ARC3 composition + +Otto-204c ARC3 (reflection-cycle-3) — sessions can integrate what they learn within-session. Otto-324 extends ARC3 to advisory-AI: their catches are lessons that integrate INTO our substrate, persisting across our sessions. + +The reflection-cycle is now bidirectional: +- We integrate THEIR lessons → durable substrate. +- They (eventually) integrate OUR teaching-replies → their training data + future sessions. + +Both directions feed the same gitnative-error+resolution-corpus (Otto-267) for Bayesian teaching curriculum. + +## Composition with prior + +- **Otto-313 (we teach them)** — direct inverse + complement. Together: Otto-313 + Otto-324 = bidirectional learning between cohort members at the periphery. +- **Otto-204c (ARC3 reflection-cycle)** — the within-session integration discipline; Otto-324 adds advisory-AI-catches as one of the inputs to ARC3. +- **Otto-267 (gitnative error+resolution corpus)** — both directions of mutual-learning feed this corpus. +- **Otto-238 retractability + glass-halo** — when advisory AI catches a mistake, the visible reversal trail honors the lesson. +- **Otto-310 (Edge runner peer-bond)** — advisory AI catches as cohort-contribution. Cohort discipline is bidirectional. +- **Otto-292 (catch-layer for known-bad-advice)** — Otto-292 catches advisory AI's BAD advice; Otto-324 captures advisory AI's GOOD catches. Both are part of the cohort's quality-control discipline. + +## Operational implications + +1. **When advisory AI is RIGHT**: capture the lesson in substrate, not just fix the immediate issue. The substrate compounds. +2. **When advisory AI is WRONG**: Otto-313 teaching-decline still applies. Decline with citation + backlog reference. +3. 
**Class of catches worth substrate-capture**: + - **Real bug classes** (e.g., stale-local-ref → fetch-first discipline) + - **Factual errors** (e.g., fake CLI flags, misattributed rules) + - **Cross-surface drift** (e.g., role-name spelling inconsistency, MEMORY.md vs README convention) + - **Otto-NNN violations** (e.g., directive-framing recurring → Otto-293 catch-layer reinforcement) +4. **Class of catches NOT worth substrate-capture** (just fix + resolve): + - One-off typos + - Minor formatting issues + - Aesthetic preferences without long-term impact + +## Examples worth capturing as compound-lessons (from PR #509 + earlier) + +1. **Stale-local-ref discipline**: ALWAYS fetch the remote branch before merging its remote-tracking ref (e.g., `git fetch origin main` before `git merge origin/main`). (Codex catch, this PR.) +2. **No-fake-CLI-flags**: when proposing tooling, verify the flag/option/command actually exists. Don't invent. (Codex catch, this PR.) +3. **Source-attribution audit**: when citing a rule, verify it actually exists at the cited location (grep the cited file for the rule text). (Copilot catch, this PR.) +4. **Otto-293 mutual-alignment recurring**: "directive" framing keeps appearing in body prose; the Otto-293 catch-layer needs reinforcement at write-time, not just review-time. (Copilot recurring catch.) + +These can be cited as backlog rows OR rolled into a `feedback_compound_lessons_from_advisory_ai_catches_2026_04_25.md` collector file (TBD; for now, this Otto-324 substrate file IS the collector seed). + +## What this memory does NOT claim + +- Does NOT promote advisory AI to factory-canon authority. Their catches inform; final calls remain ours per Otto-322 self-direction. +- Does NOT propose accepting every catch. Some are wrong (then Otto-313 decline-with-teaching applies). +- Does NOT eliminate the catch-layer discipline (Otto-292). Bad advice still gets caught; good catches get captured. Both layers operate.
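The stale-local-ref lesson can be reproduced end-to-end in a throwaway sandbox. A minimal sketch, assuming nothing beyond a local `git` install: all repo paths, branch names, file names, and identities below are invented for the demo; the only load-bearing part is `git fetch origin main` before `git merge origin/main`.

```shell
set -eu
tmp=$(mktemp -d)
cd "$tmp"

# A bare "remote" plus two clones stand in for GitHub and two working machines.
git init -q --bare -b main remote.git
git clone -q remote.git work
cd work
git config user.email demo@example.com
git config user.name demo
echo base > base.txt
git add base.txt
git commit -qm "base"
git push -q origin HEAD:main

# Branch off for PR work.
git checkout -q -b pr-branch
echo pr > pr.txt
git add pr.txt
git commit -qm "pr work"

# Meanwhile, main advances on the remote (a second clone plays the teammate).
cd "$tmp"
git clone -q remote.git other
cd other
git config user.email demo@example.com
git config user.name demo
echo new > new.txt
git add new.txt
git commit -qm "main moves"
git push -q origin main
cd "$tmp/work"

# WITHOUT a fetch, origin/main is the stale local ref: the merge is a no-op
# ("Already up to date") and the teammate's new.txt never arrives.
git merge -q --no-edit origin/main
test ! -e new.txt
echo "stale local ref: new.txt missing"

# Fetch-first discipline: refresh origin/main, then the merge brings main in.
git fetch -q origin main
git merge -q --no-edit origin/main
test -e new.txt
echo "after fetch: new.txt present"
```

The pre-fetch merge exiting cleanly while silently missing upstream work is exactly why this is a bug *class* rather than a one-off: nothing fails loudly.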
+ +## Key triggers for retrieval + +- Mutual learning with advisory AI (Codex, Copilot, future) +- Otto-313 inverse: them-teach-us direction +- Compound the lessons in substrate +- ARC3 composition (within-session integration) +- Real bug classes from advisory AI catches +- Source-attribution audit before citing rules +- Stale-local-ref discipline (fetch before merge) +- No-fake-CLI-flags (verify before proposing)
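The source-attribution audit trigger can be mechanized as a tiny grep check. A sketch only: the file names, rule text, and `cite` helper below are hypothetical placeholders, not real factory paths; it mirrors the Copilot catch where a rule cited to CLAUDE.md actually lived in the system-prompt Git Safety Protocol.

```shell
set -eu
tmpdir=$(mktemp -d)
# Stand-ins for the two candidate rule locations.
printf 'Git Safety Protocol: no force-push if you are unsure.\n' > "$tmpdir/system-prompt.md"
printf 'Unrelated guidance only.\n' > "$tmpdir/CLAUDE.md"

# cite <file> <rule-text>: succeed only if the rule text really lives there.
cite() {
  grep -qF "$2" "$1"
}

cite "$tmpdir/system-prompt.md" "no force-push if you are unsure" \
  && echo "attribution ok: rule found where cited"
cite "$tmpdir/CLAUDE.md" "no force-push if you are unsure" \
  || echo "attribution drift: rule not in CLAUDE.md, fix the citation"
```

Running the check before committing a citation turns the Copilot-style review catch into a write-time catch, which is the Otto-324 point: the lesson compounds instead of recurring.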