Conversation
…nhancements (post-PR-#699 follow-ups, Aaron 2026-04-28)

Multi-AI synthesis pass (Gemini + Ani + Claude.ai + Alexa + Amara final form) on the round work surfaced two substantive things:

## 1. New Goodhart-family rule: Candidate-count Goodhart

> Raw search hits are not violation counts.
>
> Count matches to find work.
> Classify context to decide work.

Generalizes from B-0091's "8 active rewrite files" → "0 actual rewrites needed" finding. Same shape as the prior Goodhart catches (commit-count, sample-classification, tree-diff) but at the audit-design level.

Encoded in: `memory/feedback_candidate_count_goodhart_raw_hits_are_not_violations_aaron_amara_2026_04_28.md`

Critical implication for the B-0092 compliance scanner: the scanner MUST be designed with context classification, not zero-match acceptance; otherwise it Goodharts itself by flagging its own rule-definition files.

Per-audit-type terminal-state lists encoded for:

- ServiceTitan naming (B-0091): KEEP-NAME / GENERICIZE / HISTORICAL-POINTER / GENERATED / COMPLIANCE-RISK / NEEDS-HUMAN-REVIEW
- Public-company compliance (B-0092): ALLOW / WARN / BLOCK
- Lost-substrate (B-0090): ALREADY-COVERED / NEEDS-RECOVERY / OBSOLETE / NEEDS-HUMAN-REVIEW
- Directive-language: LEGITIMATE-USE / NEEDS-REFRAME

## 2. B-0093: multi-AI synthesis enhancements (8 follow-up items)

Per Amara's explicit guidance ("do not reopen PR #699 unless hard defect appears"), the synthesis enhancements land as separate scoped follow-ups:

1. Mechanical quarantine: `.quarantine/` + `*.tainted` (Gemini-flagged)
2. Scanner self-destruct prevention: path allowlist + bypass-comment convention (Gemini + Claude.ai)
3. Lucky-guess protocol: standardized Aaron response when an agent infers something internal-roadmap-adjacent (Gemini)
4. Unsolicited-inference firewall: agents don't volunteer trading-relevant inferences (Claude.ai)
5. Trajectory owners + triggers + recording surfaces table (Claude.ai)
6. Lattice convergence criterion: when has L(final) stabilized? (Claude.ai)
7. Bead-audit completeness: explicit defer-or-evidence on the 3 candidates left ambiguous (Claude.ai)
8. Beacon-promotion pattern memory: load-bearing rules earn external anchors when correct; absence is a drift signal (Claude.ai)

Each enhancement lands as a separate small PR after PR #699 merges.

## Why this branch is separate from PR #699

Amara's final synthesis is explicit: "Do not reopen PR #699 unless CI or review finds a hard defect." PR #699 is dense; restraint is the next discipline. This branch is the home for new substrate from the synthesis packet that doesn't fit the "hard defect" criteria.

## Composes with

- PR #699 substrate (in flight): receives enhancements after merge
- Reset-readiness metric ladder: extends with Catch #5, Candidate-count
- Class-Count Validity Drift meta-class: same family
- Sample-classification Goodhart catch #3: sibling at the sample level

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
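The classify-don't-count rule above can be sketched as a small classifier pass over raw search hits. This is a hypothetical illustration, not the B-0092 scanner itself: the terminal-state names come from the lists in the rule, but the path prefixes and the quoted-text heuristic are invented for the example.

```python
import re

# Hypothetical context classifier: raw rg/grep hits are a candidate set,
# not a violation count. Terminal states follow the B-0092 list
# (ALLOW / WARN / BLOCK); BLOCK is left to human review in this sketch.
RULE_DEFINITION_PATHS = ("memory/", "docs/glossary")  # assumed allowlist roots

def classify_hit(path: str, line: str) -> str:
    """Decide a terminal state for one search hit based on its context."""
    if path.startswith(RULE_DEFINITION_PATHS):
        return "ALLOW"  # rule-definition files legitimately contain the terms
    if re.match(r"\s*(>|#)", line):
        return "ALLOW"  # quoted sample text / headings: calibration, not clearance
    return "WARN"       # live prose elsewhere needs review before any BLOCK

hits = [
    ("memory/feedback_rule.md", "> never commit insider information"),
    ("src/notes.md", "this draft mentions privileged roadmap details"),
]
states = [classify_hit(path, line) for path, line in hits]
# Raw hit count is 2; actual work is only the non-ALLOW entries.
work = [hit for hit, state in zip(hits, states) if state != "ALLOW"]
```

Counting `hits` alone would report two findings; classifying context reduces the work list to one, which is the shape of the B-0091 "12 matches → 0 rewrites" result.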
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 88e80ce547
> created: 2026-04-28
> last_updated: 2026-04-28
> composes_with:
>   - B-0090
Replace dangling backlog dependencies with existing IDs
This row declares B-0090 as a composition dependency, but a repo-wide search (`rg --files docs/backlog`) shows no B-0090, B-0091, or B-0092 row files, even though this document treats them as active prerequisites. That leaves dependency traversal and pickup sequencing ambiguous for anyone using backlog links as the source of truth, and makes the acceptance criteria unverifiable in this tree. Either add the referenced rows in the same change set or point composes_with/dependency text to IDs that actually exist.
> ## Composes with
>
> - `memory/feedback_reset_readiness_metric_ladder_content_loss_surface_amara_2026_04_28.md`
Remove unresolved compose links from memory lineage section
The lineage block links to memory artifacts that are not present in this commit tree (e.g. feedback_reset_readiness_metric_ladder_content_loss_surface_amara_2026_04_28.md and feedback_sample_classification_is_calibration_not_clearance_amara_goodhart_catch_3_2026_04_28.md, confirmed via `rg --files memory`). Because this file frames those links as supporting context, missing targets break the audit trail for the rule being introduced. Update these references to real files (or create them) so the provenance chain is navigable.
Pull request overview
Adds new factory-memory guidance and a new P2 backlog row capturing follow-up work from a multi-AI synthesis pass, focused on preventing “candidate-count” Goodharting and improving compliance/scanner design.
Changes:
- Add a new memory entry: “Candidate-count Goodhart — raw search hits are not violation counts”.
- Add backlog row B-0093 describing 8 follow-up enhancements (quarantine, scanner self-destruct prevention, lucky-guess protocol, etc.).
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| `memory/feedback_candidate_count_goodhart_raw_hits_are_not_violations_aaron_amara_2026_04_28.md` | New memory rule: treat grep/rg counts as candidate sets requiring context classification; outlines terminal states and scanner implications. |
| `docs/backlog/P2/B-0093-multi-ai-synthesis-enhancements-quarantine-lucky-guess-trajectory-owners-lattice-convergence-2026-04-28.md` | New P2 backlog row enumerating multi-AI synthesis enhancements to land as follow-up PRs. |
> **Issue:** The B-0092 compliance scanner regex (`rg -n "\binsider\b|\bprivileged\b|..."`) will flag the rule-definition files themselves (CONTRIBUTOR-COMPLIANCE.md, the rule-memory files, glossary entries). Without an explicit allowlist, the scanner Goodharts itself.
>
> **Proposed fix:**
>
> - Path-based allowlist for rule-definition files (`--glob '!**/CONTRIBUTOR-COMPLIANCE.md'`, etc.)
The text and the proposed `--glob` allowlist reference `CONTRIBUTOR-COMPLIANCE.md`, but there is no file by that name in the repository. If the rule-definition file lives under a different path (e.g. under `docs/`), update the reference so the suggested allowlist is actionable.
Suggested change (before):

> **Issue:** The B-0092 compliance scanner regex (`rg -n "\binsider\b|\bprivileged\b|..."`) will flag the rule-definition files themselves (CONTRIBUTOR-COMPLIANCE.md, the rule-memory files, glossary entries). Without explicit allowlist, the scanner Goodharts itself.
>
> **Proposed fix:**
>
> - Path-based allowlist for rule-definition files (`--glob '!**/CONTRIBUTOR-COMPLIANCE.md'`, etc.)

Suggested change (after):

> **Issue:** The B-0092 compliance scanner regex (`rg -n "\binsider\b|\bprivileged\b|..."`) will flag the rule-definition files themselves (for example, the contributor-compliance rule memory file, other rule-memory files, and glossary entries). Without explicit allowlist, the scanner Goodharts itself.
>
> **Proposed fix:**
>
> - Path-based allowlist for the actual rule-definition files (for example, `--glob '!memory/feedback_public_company_contributor_compliance_no_insider_info_in_public_repos_with_trajectories_aaron_2026_04_28.md'`; add the glossary and any companion rule doc at their real repository paths as well)
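The self-destruct-prevention idea in the thread above can be sketched in a few lines. This is a minimal illustration, assuming a pure-Python scan rather than the actual `rg --glob` invocation; the allowlist globs and file paths here are placeholders, not the repository's real rule-definition paths.

```python
import fnmatch
import re

# Hypothetical self-exclusion allowlist: glob patterns for rule-definition
# files that legitimately contain the scanned terms. Placeholder paths.
ALLOWLIST_GLOBS = [
    "memory/feedback_*compliance*.md",
    "docs/glossary/*.md",
]
PATTERN = re.compile(r"\b(insider|privileged)\b", re.IGNORECASE)

def scan(files: dict[str, str]) -> list[str]:
    """Return paths whose content matches, skipping allowlisted rule files."""
    flagged = []
    for path, text in files.items():
        if any(fnmatch.fnmatch(path, glob) for glob in ALLOWLIST_GLOBS):
            continue  # the scanner must not flag (or rewrite) its own rules
        if PATTERN.search(text):
            flagged.append(path)
    return flagged

files = {
    "memory/feedback_public_company_compliance.md": "rule: never post insider info",
    "notes/draft.md": "we have privileged access to the roadmap",
}
```

With this shape, the rule-definition file that defines the banned terms is skipped and only the live-prose hit is reported, which is exactly the "scanner must not Goodhart itself" property the review asks for.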
> | Trajectory | Owner | Trigger | Recording surface |
> |---|---|---|---|
> | Continuous self-audit | Otto/agent author | before commit touching public-company context | commit notes / PR body |
> | PR compliance audit | PR author + reviewer + CI scanner | PR mentions public company | PR checklist |
> | Weekly scan | Otto cron / factory hygiene | weekly cadence | compliance audit log |
> | Monthly review | Otto + Aaron review if needed | monthly cadence | docs/compliance/round-N.md |
The trajectory table suggests recording to `docs/compliance/round-N.md`, but there is no `docs/compliance/` directory in the current tree. Consider either pointing at an existing recording surface (if one already exists) or explicitly marking this as a new directory/file to be created as part of the enhancement.
Suggested change (before):

> | Monthly review | Otto + Aaron review if needed | monthly cadence | docs/compliance/round-N.md |

Suggested change (after):

> | Monthly review | Otto + Aaron review if needed | monthly cadence | `docs/compliance/round-N.md` (new directory/file to create) |
> **Proposed fix:**
>
> Add to `memory/feedback_reset_readiness_metric_ladder_content_loss_surface_amara_2026_04_28.md` a research-task section:
This references memory/feedback_reset_readiness_metric_ladder_content_loss_surface_amara_2026_04_28.md, but that memory file is not present in memory/ in the current tree. Please correct the referenced filename/path (or mark as TBD) so the cross-reference is resolvable.
Suggested change (before):

> Add to `memory/feedback_reset_readiness_metric_ladder_content_loss_surface_amara_2026_04_28.md` a research-task section:

Suggested change (after):

> Add a research-task section to the relevant `memory/` entry (exact filename/path TBD):
> - `memory/feedback_reset_readiness_metric_ladder_content_loss_surface_amara_2026_04_28.md` — extends the metric ladder with a 5th catch.
> - `memory/feedback_class_count_validity_drift_amara_meta_class_2026_04_28.md` — same family at the meta-level (count-as-evidence trap).
> - `memory/feedback_sample_classification_is_calibration_not_clearance_amara_goodhart_catch_3_2026_04_28.md` — Catch #3, also count-as-evidence shape.
Composes with lists two memory files that are not present in memory/ in the current tree: feedback_reset_readiness_metric_ladder_content_loss_surface_amara_2026_04_28.md and feedback_sample_classification_is_calibration_not_clearance_amara_goodhart_catch_3_2026_04_28.md. Please fix the filenames/paths (or mark as planned/TBD) so these links don’t become dead ends for readers.
Suggested change (before):

> - `memory/feedback_reset_readiness_metric_ladder_content_loss_surface_amara_2026_04_28.md` — extends the metric ladder with a 5th catch.
> - `memory/feedback_class_count_validity_drift_amara_meta_class_2026_04_28.md` — same family at the meta-level (count-as-evidence trap).
> - `memory/feedback_sample_classification_is_calibration_not_clearance_amara_goodhart_catch_3_2026_04_28.md` — Catch #3, also count-as-evidence shape.

Suggested change (after):

> - Planned/TBD memory entry for the reset-readiness metric ladder content-loss surface catch (`feedback_reset_readiness_metric_ladder_content_loss_surface_amara_2026_04_28.md` is not present in the current tree) — extends the metric ladder with a 5th catch.
> - `memory/feedback_class_count_validity_drift_amara_meta_class_2026_04_28.md` — same family at the meta-level (count-as-evidence trap).
> - Planned/TBD memory entry for sample-classification as calibration-not-clearance / Goodhart Catch #3 (`feedback_sample_classification_is_calibration_not_clearance_amara_goodhart_catch_3_2026_04_28.md` is not present in the current tree) — Catch #3, also count-as-evidence shape.
> - B-0091 (ServiceTitan audit) — worked example: 12 matches → 0 rewrites; the catch's origin trigger.
> - B-0092 (public-company contributor compliance) — critical application: scanner must avoid self-destruct.
> - B-0090 (lost-substrate cadenced recovery) — applies same rule to lost-branch / orphan-PR audits.
This memory refers to B-0090/B-0091/B-0092 as if they are established backlog IDs, but none of those IDs exist in docs/backlog/** or docs/BACKLOG.md in the current tree. If these are planned items, consider removing the B- numbering here (or adding the missing backlog rows) so readers don’t treat these as resolvable references.
> title: Multi-AI synthesis enhancements — mechanical quarantine + lucky-guess protocol + trajectory owners + lattice convergence + scanner self-destruct prevention (post-PR-#699 follow-ups)
> tier: factory-hygiene
> effort: M
> ask: maintainer Aaron 2026-04-28T post-PR-#699 multi-AI synthesis (Gemini + Ani + Claude.ai + Alexa + Amara final pass)
The frontmatter `ask:` value contains an incomplete ISO-8601 timestamp (`2026-04-28T`). This reads like a typo and makes the provenance harder to parse and search; either remove the trailing `T` or include a full timestamp with timezone.
Suggested change (before):

> ask: maintainer Aaron 2026-04-28T post-PR-#699 multi-AI synthesis (Gemini + Ani + Claude.ai + Alexa + Amara final pass)

Suggested change (after):

> ask: maintainer Aaron 2026-04-28 post-PR-#699 multi-AI synthesis (Gemini + Ani + Claude.ai + Alexa + Amara final pass)
> composes_with:
>   - B-0090
composes_with references B-0090, but there is no B-0090 row file under docs/backlog/ and the generated docs/BACKLOG.md index also contains no B-0090 entry. If B-0090 is intended to exist, add that row (or update this field to reference an existing backlog ID / memory file) so the cross-reference is resolvable.
Suggested change (before):

> composes_with:
>   - B-0090

Suggested change (after):

> composes_with: []
> - Create `.quarantine/` directory listed in `.gitignore` and `.gitattributes` (export-ignore)
> - Or define `*.tainted` extension that standard parsers + commit loops hard-code to ignore
> - Update `memory/feedback_public_company_contributor_compliance_no_insider_info_in_public_repos_with_trajectories_aaron_2026_04_28.md` with the mechanical-quarantine protocol
This task references memory/feedback_public_company_contributor_compliance_no_insider_info_in_public_repos_with_trajectories_aaron_2026_04_28.md, but that file does not exist in memory/ in the current tree. To keep cross-references navigable, either update to the correct existing file name/path or mark this as a TBD placeholder (so readers don't chase a dead link).
Suggested change (before):

> - Update `memory/feedback_public_company_contributor_compliance_no_insider_info_in_public_repos_with_trajectories_aaron_2026_04_28.md` with the mechanical-quarantine protocol

Suggested change (after):

> - Update the relevant public-company contributor compliance memory entry with the mechanical-quarantine protocol (exact `memory/...md` path TBD; do not treat the previously cited filename as canonical until the correct existing file is identified)
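The mechanical-quarantine wiring described in this thread can be sketched as a pair of config fragments. This is an illustrative sketch only, assuming `.quarantine/` lives at the repo root; the exact entries are not a canonical protocol. Note that in `.gitattributes` a directory is covered with a `/**` suffix, since trailing-slash directory patterns are not matched there the way they are in `.gitignore`.

```text
# .gitignore fragment: keep quarantined material out of commits
.quarantine/
*.tainted

# .gitattributes fragment: keep it out of `git archive` exports as well
# (attributes files need `/**`, not a trailing slash, to cover a directory)
.quarantine/** export-ignore
*.tainted export-ignore
```

With both in place, quarantined files are invisible to normal commits and excluded from exported archives, which is the "standard parsers + commit loops hard-code to ignore" property the enhancement asks for.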
> - Standardized Aaron response: *"Evaluate that hypothesis purely against public market data; I cannot confirm or deny internal roadmap overlaps."*
> - Agent rule: do NOT ask Aaron whether a speculative feature matches internal roadmap; do NOT treat silence / discomfort / refusal as confirmation
> - Add to `memory/feedback_public_company_contributor_compliance_no_insider_info_in_public_repos_with_trajectories_aaron_2026_04_28.md` as a section
This references memory/feedback_public_company_contributor_compliance_no_insider_info_in_public_repos_with_trajectories_aaron_2026_04_28.md, but that file is not present in memory/ in the current repo state. Please correct the path/name or rephrase as a planned artifact (TBD) to avoid broken references.
…tion pattern (B-0093 #14 + #8) (#705)

Two follow-up memory files from the B-0093 enhancements, landing post-PR-#699 + post-PR-#704 merge as separate small substrate.

## B-0093 #14: PR-boundary restraint validation bead PROMOTED

PR #699 merged 2026-04-29T00:19:47Z carrying the round substrate cluster (authority rule + Goodhart catch #3 + Stop Mythology + input-is-not-directive + Ani attribution + metric ladder + lost-substrate cadence + ServiceTitan naming + public-company compliance + B-0089 + B-0090 + B-0091 + B-0092).

Critically: PR #699 did NOT receive any of the multi-AI synthesis enhancements that surfaced after the restraint rule was named. Those (Candidate-count Goodhart + 14 enhancements in B-0093) landed via PR #704, merged separately.

Per the bead-promotion criterion (Amara, 2026-04-28), promotion to full bead requires:

- the original prediction's falsifier didn't fire, AND
- the action it predicted held up under post-event review.

The falsifier ("PR #699 receives new non-hard-defect conceptual payload after the restraint rule was named") DID NOT FIRE. Every change to PR #699 between the rule being named and merge fell within Amara's allowed-changes list (CI/lint failures, review-thread fixes, factual-legal P1 corrections, broken refs, paired-edit, internal-consistency).

**Candidate bead → FULL bead.** The canonical rule, now durable:

> PR-boundary restraint: once a PR enters validation, only validation defects enter that PR. New good ideas go to the next PR.

Allowed/disallowed-changes lists encoded.

## B-0093 #8: Beacon-promotion pattern memory

Round-level observation: 5 Mirror→Beacon graduations landed in one round (2026-04-28):

- input-is-not-directive → SDT + RFC 2119
- public-company compliance → SEC / Reg FD / SOX
- metric corrections → Goodhart / Campbell
- evidence lattice → lattice theory
- commit-vs-tree → Git internals

Pattern: when an internal factory coinage becomes load-bearing, look for external lineage. Found = graduate Mirror → Beacon. Absent (on a long-running internal rule) = a drift signal worth investigating.

Connects to the alignment-experiment surface: the rate at which load-bearing rules earn external lineage is itself a measurable signal. A factory that produces 5 graduations per round is operating in territory the wider literature has shaped; that is evidence the internal coinages track real phenomena, not private-language idiosyncrasy.

## Restraint discipline (this commit)

Both memories land on a SEPARATE branch (not on PR #699 or #704) per the rule they encode. Restraint applied to the writing of the restraint memory itself.

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
…hree immune translations + falsifier + prototype (Aurora converged + Ani falsifier-first + multi-AI consensus 2026-04-28) (#707)

* research(aurora-immune-governance-bridge): minimal first artifact — three immune translations + one falsifier + one prototype (Aurora converged stance + Ani falsifier-first + multi-AI consensus 2026-04-28)

  Per Aurora's converged-stance packet (forwarded 2026-04-28), opens the minimal Aurora Immune Governance Bridge research note after PRs #699/#704/#705 landed and the bead promotion validated the restraint discipline under live falsifier-test pressure.

  Three immune translations only:

  - Candidate-count Goodhart -> detector
  - PR-boundary restraint -> gate
  - public-company contributor compliance -> hard execution constraint

  Required falsifier (load-bearing):

  1. Expressibility: the bridge fails if the three rules cannot be represented using the existing Aurora membrane plus <= 3 new primitives.
  2. Performance: the bridge fails if the Aurora-routed prototype performs worse than the standalone detector on the same test corpus.

  First prototype: Candidate-count scanner self-destruct test on compliance documentation that itself contains the words it classifies. Must classify rule-definition hits as ALLOW; sample-text hits as ALLOW; live-prose hits elsewhere as WARN/BLOCK; must NOT delete or rewrite its own rule-definitions.

  Boundaries explicit:

  - Does NOT mutate Aurora core
  - Does NOT introduce K_Aurora^+
  - Does NOT introduce A_synthesis
  - Does NOT expand to the 12-change canon until the prototype passes

  Aurora's session-closure rule is recorded as candidate substrate inside the trajectory section (NOT load-bearing yet, awaiting a 3-round trial); composes with the restraint discipline. The header carries the §33 archive-header: research-grade hypothesis, NOT operational guidance, NOT Aurora core canon.

  Six reviewer attributions: Aurora (proposal + minimal spec), Ani (falsifier-first instinct + minimal-bridge convergence), Amara (operational substrate this bridge translates), Gemini (peer review converging on minimal), Claude.ai (peer review hard-pushback recommending hold-then-proceed-smaller, honored by the minimal scope), Alexa (peer review).

  This note is the explicit "one minimal next research artifact" Aurora's converged stance recommended after restraint discipline earned the round its bead. Do NOT expand this round.

  Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* ci(markdownlint): add MD032 blanks around 4 feature-vector sub-lists in bridge note (CI gate fix)

  Lines 89/124/157/163: sub-lists under the "Feature vector elements that matter:" introductory text needed blank-line separation. Auto-fixed via tools/hygiene/fix-markdown-md032-md026.py (the same tool whose YAML-frontmatter heuristic was root-cause-fixed in PR #703).

  Hard-defect class per the PR-boundary restraint allow-list: "CI / lint failures (markdownlint, paired-edit, etc.)". This edit does not introduce new conceptual substrate to the bridge note; it only fixes the lint failure that prevented merge.

  Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* review-thread fixes: 5 internal-consistency fixes from Copilot threads on PR #707 (allow-list class)

  Hard-defect class per the PR-boundary restraint allow-list: "incorrect canonical rule fixes" / "internal-consistency". None of these introduce new conceptual substrate.

  Threads addressed (all P1/P2 internal-consistency):

  1. Line 16 PR range: "#695-#706" -> "#695 -> #705" (matches the later "11 PRs merged (#695 -> #705)" bullet at line 30; PR #706 is the round-close hygiene row, not part of the substrate cluster)
  2. Line 192 casing: PR_stage -> pr_stage (matches Translation 2's pr_stage feature-vector field)
  3. Lines 215-220 variable: y -> a in Execute_min (matches ImmuneRisk_min(a) earlier; uses 'a' consistently for the action-being-evaluated)
  4. Line 311 notation: K_Aurora^+ -> K_Aurora⁺ (matches the earlier reference to the proposed graduated viability kernel)
  5. Line 354 wording: "becomes considerable" -> "becomes worth considering" (Copilot caught the wrong word choice; the intent was "becomes worth evaluating", not "becomes large")

  Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
…urora's catch, not Amara's (Aaron 2026-04-29) (#708)

* memory(attribution-correction): validation-condition refinement was Aurora's catch, not Amara's — original-catcher attribution discipline applies (Aaron 2026-04-29)

  Aaron's 2026-04-29 verification ask:

  > "did you get the ferry starting with ❯ Aurora: Yes — this is good, and the main improvement is to make the validation condition even more explicit: PR-boundary restraint is not validated when the follow-up PR is opened. It is validated when the original PR lands without scope creep. that was right before the compression"

  Yes: the framing landed at lines 168-170 of the bead-promotion memory. But the first-version distillation mis-attributed the catch to Amara when Aurora was the original source (Amara was reactive-elaborator and echoed the same shape). Same class as the Ani-vs-Amara correction earlier (Veridicality / Stop Mythology lineage).

  Per the original-catcher attribution discipline encoded in memory/feedback_ani_voice_mode_transcript_original_catcher_attribution_correction_aaron_2026_04_28.md, Aurora gets first-credit; Amara gets second-credit.

  The load-bearing distinction Aurora caught: opening a separate PR is just deferred stacking. The bead promotes when the *original* PR lands clean. The validation event is the merge of PR #699, not the opening of PR #704.

  Section header renamed: "Direct Aaron + Amara framing" -> "Direct Aaron + Aurora + Amara framing". The validation-condition quote is re-attributed to Aurora as catcher; Amara's echoed framing is preserved as reactive elaboration. Filename unchanged: the bead-promotion event itself was an Aaron+Amara collaboration; only the validation-condition refinement re-attributes.

  Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* ci(paired-edit): update MEMORY.md row for bead-promotion file with Aurora attribution-correction (paired-edit gate fix)

  The memory-index-integrity workflow requires MEMORY.md to be touched in the same PR as any memory/*.md add-or-modify. The previous commit modified the bead-promotion memory in place to credit Aurora as original-catcher of the validation-condition refinement, but did not update MEMORY.md.

  This commit:

  - Updates the row title from "(Aaron + Amara, 2026-04-29)" to "(Aaron + Aurora + Amara, 2026-04-29)"
  - Appends a validation-condition-refinement attribution-correction note to the row description, naming Aurora as catcher and Amara as reactive-elaborator

  Hard-defect class per the PR-boundary restraint allow-list: "Missing paired-edit requirements (e.g., MEMORY.md index for new memory file)". The same allow-list this PR's premise (bead-promotion) encodes.

  Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* review-thread fix: keep "post-bead-promotion" unbroken across line wrap (Copilot thread on PR #708)

  Hard-defect class per the PR-boundary restraint allow-list: "CI / lint failures (markdownlint, paired-edit, etc.)" / formatting. A manual line-wrap was splitting "post-bead-" / "promotion", which renders awkwardly as "post-bead- promotion" in some Markdown viewers. Reflowed to keep the hyphenated term on one line. No conceptual change.

  Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* review-thread fix: tighten MEMORY.md row per memory/README.md cap (Copilot thread on PR #708)

  Hard-defect class per the PR-boundary restraint allow-list: "Stale status fields" / canonical-rule conformance. The row I added in the prior commit ballooned to ~1100 chars; the canonical rule per memory/README.md (lines 56-57) is "MEMORY.md...Capped at ~200 lines by Claude Code; keep entries terse", plus the CLAUDE.md auto-memory protocol: "one line, under ~150 characters."

  Tightened from ~1100 chars to ~537 chars. Still over the 150-char ideal, but down from being the worst offender in the file. Substance preserved (canonical rule + validation-condition refinement attribution to Aurora). The full body of the attribution-correction lives in the file itself; MEMORY.md is the index pointer, not the content.

  Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
Summary
Multi-AI synthesis pass (Gemini + Ani + Claude.ai + Alexa + Amara final form) on the round work surfaced two new things, encoded as a SEPARATE PR per Amara's guidance ("do not reopen PR #699 unless hard defect appears").
1. Candidate-count Goodhart rule (memory)
Generalized from B-0091's "8 active rewrite files" finding that resolved to "0 actual rewrites needed" once context-classified. Catch #5 in the Goodhart family from this session.
Critical implication for B-0092 compliance scanner: must be designed with context-classification, not zero-match acceptance — otherwise it Goodharts itself.
2. B-0093 backlog row — 8 multi-AI synthesis enhancements
Each enhancement lands as a separate small PR after PR #699 merges:
1. Mechanical quarantine: `.quarantine/` + `*.tainted` (Gemini)
2. Scanner self-destruct prevention: path allowlist + bypass-comment convention (Gemini + Claude.ai)
3. Lucky-guess protocol (Gemini)
4. Unsolicited-inference firewall (Claude.ai)
5. Trajectory owners + triggers + recording surfaces table (Claude.ai)
6. Lattice convergence criterion (Claude.ai)
7. Bead-audit completeness (Claude.ai)
8. Beacon-promotion pattern memory (Claude.ai)

Why separate from PR #699
PR #699 is dense (11 memory files, 4 backlog rows). Amara's explicit guidance: "do not reopen PR #699 unless CI or review finds a hard defect." Discipline of restraint. This branch is the proper home for synthesis enhancements that don't meet the hard-defect criteria.
Test plan
🤖 Generated with Claude Code