From 8630c28e57ffc98d2e556c3bff407f83fdfdc5ad Mon Sep 17 00:00:00 2001 From: Aaron Stainback Date: Wed, 29 Apr 2026 10:13:08 -0400 Subject: [PATCH 1/3] =?UTF-8?q?ops(0-0-0):=20post-reset=20cleanup=20?= =?UTF-8?q?=E2=80=94=200/0/0=20achieved=2014:04:50Z,=20stale-prose=20fixed?= =?UTF-8?q?,=20protection-config=20memory=20landed?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Completes Amara's prescribed post-reset cleanup PR: stale-prose drift in active-trajectory.md fixed + protection-config finding documented. ## 0/0/0 ACHIEVED (2026-04-29T14:04:50Z) - AceHack/main = LFG/main = 621aae082d70fcbf36931718ecf1b6d9e149295f - Topology: 0 ahead, 0 behind, 0 file content diff - Old AceHack tip preserved at archive/acehack-main-pre-000-reset-2026-04-29 → 6755081... - Legacy branch protection DELETED per Aaron; rulesets canonical going forward ## Stale-prose fixes (active-trajectory.md) Two paragraphs flipped from pre-reset state to in-force post-reset state: - Line 221: "Currently NOT signoff-eligible" → "0/0/0 ACHIEVED 2026-04-29T14:04:50Z..." - Line 413: "Hard-reset is NOT YET signoff-eligible" → "Hard-reset complete (2026-04-29T14:04:50Z)..." Per Amara's substrate-pass catch (2026-04-29 buddy review): residual prose contradicted the 273/0/0 ledger state. This is Derived-Rollup Drift class. 
## Protection-config memory file `memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md` documents: - AceHack/Zeta had BOTH legacy branch protection AND repository rulesets on main - Both layers enforced independently; GitHub UI doesn't surface dual-layer state - Aaron's confirmation: "I knew there were two but I was confused why" - Maintainer call: legacy DELETED; rulesets canonical - Error-code mapping: GH013 = rulesets surface; GH006 = legacy surface - Diagnostic script (gh api commands) for future protection-config audits - Future-protocol note: rulesets non_fast_forward rule still doesn't match CLAUDE.md "force-push to AceHack main is part of protocol" — task #305 home for that decision MEMORY.md index updated with one-line pointer. ## Tick shard 1410Z Records the entire 0/0/0 hard-reset arc: - Triple-check buddy review (Amara approved meaningful-content-loss-free) - Verify-only gate packet (5/5 PASS at 13:39Z) - Aaron's explicit EXECUTE at 13:58Z - Step A (archive ref preservation) succeeded - Step B (force-push) failed twice — GH013 then GH006 — discovered dual-layer protection - Aaron's "leave legacy off" decision - Path 1 v3 succeeded at 14:04:50Z - Trap-restore re-enabled rulesets only ## Per Amara post-reset framework This PR is the small in-lane cleanup. 
After it merges: - Recovery lane starts in INVENTORY-ONLY mode (per Amara's 7-bucket framework: ALREADY_REACHABLE / OBSOLETE_SUPERSEDED / PRESERVE_REF_ONLY / OPEN_PR_CANDIDATE / EXTRACT_MEMORY_OR_DOC / NEEDS_AARON_DECISION / CORRUPT_OR_UNREADABLE) - 918 branches + 58 worktrees + 7 stashes inventoried at /tmp/recovery-inventory-2026-04-29.tsv - No mutation until classification summary reviewed - Deferred queue P1 consolidation (8 rule candidates) waits per Amara's "trigger after hard-reset success" rule — now satisfied, but lane discipline keeps it in P1 not P0 ## Authority boundary going forward - Reversible + in-lane + PR-reviewed → proceed autonomously - Irreversible / deletion / force-push / authority config / identity canon → ask Aaron - Unclear → stop, report exact uncertainty, propose one safe action --- docs/active-trajectory.md | 4 +- .../hygiene-history/ticks/2026/04/29/1410Z.md | 1 + memory/MEMORY.md | 2 + ...y_deleted_rulesets_canonical_2026_04_29.md | 83 +++++++++++++++++++ 4 files changed, 88 insertions(+), 2 deletions(-) create mode 100644 docs/hygiene-history/ticks/2026/04/29/1410Z.md create mode 100644 memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md diff --git a/docs/active-trajectory.md b/docs/active-trajectory.md index 434a1d93..365c28fe 100644 --- a/docs/active-trajectory.md +++ b/docs/active-trajectory.md @@ -218,7 +218,7 @@ Per multi-AI review 2026-04-29T10:35Z: dry-run push shape verification is added Lease rejection on the real push is NOT a retry condition. It means the remote moved between observation and push — restart the safety gate from the top (re-fetch, recompute content-drift ledger, re-classify if anything moved). -**Currently NOT signoff-eligible**: see the live ledger above (`unclassified_lines`, `HEURISTIC_LFG_DOMINATES` row count). 
The four-bucket ledger is the single source of truth for classification progress; downstream prose paragraphs are no longer hand-maintained synonyms of the ledger. +**0/0/0 ACHIEVED 2026-04-29T14:04:50Z**: AceHack/main = LFG/main = `621aae082d70fcbf36931718ecf1b6d9e149295f`. Topology: 0 ahead, 0 behind, 0 file content diff. Old AceHack tip `675508187a5e80bd0a8c14a74a9ae80d5346e722` preserved at `archive/acehack-main-pre-000-reset-2026-04-29` on AceHack (named ref, indefinite reachability). Strict gate's classification + operational conditions all satisfied; maintainer signoff received and executed. The four-bucket ledger remains source of truth for any future drift detection. ### 9 infra files (verified 2026-04-29T09:50Z against current git state, NOT against the 16h-old plan) @@ -410,7 +410,7 @@ A peer-call to Grok this session reported the inverse claim ("AceHack has the se ## Next action -**Hard-reset is NOT YET signoff-eligible.** The strict gate above requires `unclassified_lines = 0`. The live four-bucket ledger above is the source of truth for the current count; the remaining files are listed in the `unclassified_lines` composition block. The next agent-owned work is per-file semantic inspection of each remaining file to either promote each to SAFE_TO_RESET_LFG_SUPERSEDES (with named evidence) or downgrade to NEEDS_FORWARD_SYNC. +**Hard-reset complete (2026-04-29T14:04:50Z).** All gate conditions cleared; force-with-lease push executed; 0 ahead / 0 behind / 0 file diff verified. Old AceHack tip preserved at `archive/acehack-main-pre-000-reset-2026-04-29`. Next-action lane is now post-reset cleanup + INVENTORY-ONLY recovery classification of local branches/worktrees/stashes (per Amara's 7-bucket framework). Mutation gates remain on Aaron's authority for irreversible operations. 
### Deferred follow-ups (NOT blocking 0/0/0 progress, captured for visibility) diff --git a/docs/hygiene-history/ticks/2026/04/29/1410Z.md b/docs/hygiene-history/ticks/2026/04/29/1410Z.md new file mode 100644 index 00000000..39eeacf9 --- /dev/null +++ b/docs/hygiene-history/ticks/2026/04/29/1410Z.md @@ -0,0 +1 @@ +| 2026-04-29T14:10:00Z | claude-opus-4-7 | b9ccd753 | **🎯 0/0/0 ACHIEVED + post-reset cleanup PR opened.** Sequence: (1) PR #843 (final ledger-flip 235→273 / 38→0) merged 2026-04-29T13:32:27Z; classification phase complete with 273 = 273 + 0 + 0 ledger satisfied. (2) Triple-check buddy review packet ferried to Amara with 6-bucket decomposition of all 273 lines (9 infra files / 5 calibration / Batch 1-3a-3b / option-c migration); Amara approved meaningful-content-loss-free. (3) Verify-only gate packet ran 2026-04-29T13:39Z: fsck/fetch/SHA-match/content-drift/dry-run all 5/5 PASS. (4) Aaron explicit EXECUTE 13:58Z. (5) Step A succeeded — `archive/acehack-main-pre-000-reset-2026-04-29` ref preserves old AceHack tip `6755081...` indefinitely. (6) Step B FAILED first try with GH013 (repository-rules layer); after disabling ruleset, FAILED again with GH006 (legacy branch-protection layer) — discovered AceHack/Zeta had BOTH protection surfaces enforcing independently. (7) Aaron decision: DELETE legacy, leave off, restore only rulesets. (8) Path 1 v3 succeeded 2026-04-29T14:04:50Z: AceHack/main = LFG/main = `621aae082d70fcbf36931718ecf1b6d9e149295f`, 0 ahead, 0 behind, 0 file content diff. Trap-restored ruleset enforcement to active. **The pre-v1 starting line is reached.** This tick: opens post-reset cleanup PR with stale-prose fixes in active-trajectory.md (flip "Currently NOT signoff-eligible" + "Hard-reset is NOT YET signoff-eligible" to in-force 0/0/0-achieved language) + protection-config memory file documenting GH013/GH006 error mapping + legacy-deleted decision. 
Recovery inventory parked at `/tmp/recovery-inventory-2026-04-29.tsv` (918 branches: 123 ALREADY_REACHABLE / 795 NOT_REACHABLE; 58 worktrees all clean; 7 stashes). Awaiting Amara's recovery-classification framework before any branch/worktree mutation. Authority boundary now: reversible+in-lane → proceed; irreversible/loss/identity → ask Aaron. Cron `b9ccd753` alive. | [PR #843 merged](https://github.com/Lucent-Financial-Group/Zeta/pull/843) → [post-reset cleanup PR (next)](https://github.com/Lucent-Financial-Group/Zeta/pulls) | **Best blade across the session (Amara)**: *"The last file was not easy; it was just well-evidenced."* + *"Cross first; archaeology after."* + *"Buddies review the crossing. Claude walks the lane. Aaron decides irreversible loss."* Six rule candidates earned for post-hard-reset consolidation: Residual-Set Drift, Decision-Resolution Drift, Diff-Direction Identity Drift, Migration Preflight Ledger, Derived-Rollup Drift, Evidence-Tense Discipline + Second-Agent Design Review Gate framework + Aurora Immune Governance Extension (P2 research). Plus newly-validated authority boundary post-0/0/0: Reversible + in-lane + PR-reviewed = proceed autonomously. Irreversible loss / deletion / force-push / authority config / identity canon = ask Aaron. Inventory + provisional classification = proceed. Mutation = wait. **Aaron's quote that anchored the whole post-reset stance**: *"yeah you can relax branch prtection or tell me if you need me to and turn it back on afterwards on AceHack"* — explicit delegation of reversible config-toggle authority. 
| diff --git a/memory/MEMORY.md b/memory/MEMORY.md index 1ea9fd7b..af84d0c9 100644 --- a/memory/MEMORY.md +++ b/memory/MEMORY.md @@ -2,6 +2,8 @@ **📌 Fast path: read `CURRENT-aaron.md` and `CURRENT-amara.md` first.** +- [**0/0/0 ACHIEVED + AceHack/Zeta protection-config dual-layer surprise — legacy deleted, rulesets canonical (Aaron decision, 2026-04-29T14:04:50Z)**](feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md) — Hard-reset of `acehack/main` to LFG `621aae0...` succeeded after dual-layer protection surprise: AceHack/Zeta had BOTH legacy branch protection AND repository rulesets on `main`; both enforced independently; GitHub UI doesn't surface the dual-layer state. Aaron: *"I knew there were two but I was confused why."* Maintainer call: legacy DELETED, rulesets canonical going forward. Error-code mapping: GH013 = rulesets surface, GH006 = legacy surface. Old AceHack tip preserved at `archive/acehack-main-pre-000-reset-2026-04-29`. + - [**Bare `main` is ambiguous — automation uses explicit refs (Amara, 2026-04-29)**](feedback_bare_main_ambiguity_automation_discipline_explicit_refs_required_amara_2026_04_29.md) — Generic multi-remote-repo automation rule: scripts use `refs/remotes/<remote>/<branch>` (or `refs/heads/<branch>`); bare branch names only for interactive humans. Hard-stop on fatal base-ref errors. Caught when bare `git checkout main` was hitting `fatal: matched multiple (2) remote tracking branches` and the loop continued past the failure with wrong downstream state. - [**Cold-readability addendum to Confucius-unfolding pattern (Aaron, 2026-04-29 addendum on 2026-04-25 file)**](feedback_confucius_unfolding_pattern_aaron_compresses_terse_rich_with_implication_claude_unfolds_into_operational_substrate_2026_04_25.md) — Operational addendum 2026-04-29 lands on the existing Confucius-unfolding canonical home (originally a 2026-04-25 file describing the Aaron-compresses + Claude-unfolds dynamic).
New angle: when writing durable substrate, expand demonstrative pronouns / in-flight nicknames / implicit time-and-person references / recently-coined jargon inline — future-Claude reads on cold-start with zero shared context. Aaron's correction *"Confucius-unfold you have some existing skill or something for this — it has confucius in the name"* caught the over-eager substrate-creation failure mode (drafted a separate file under a longer name; consolidated into the existing canonical home). Composes with `agent-experience-engineer` skill (audit side) and the verbatim-preservation rule. diff --git a/memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md b/memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md new file mode 100644 index 00000000..abcad371 --- /dev/null +++ b/memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md @@ -0,0 +1,83 @@ +--- +name: AceHack/Zeta protection config — legacy branch protection deleted, repository rulesets canonical (2026-04-29) +description: During 0/0/0 hard-reset on 2026-04-29, AceHack/Zeta was discovered to have BOTH legacy branch protection (`/repos/{owner}/{repo}/branches/main/protection`) AND repository rulesets (`/repos/{owner}/{repo}/rulesets`) configured on `main`. The two layers enforce independently, and GitHub's UI does not surface that they're separate. Per maintainer call 2026-04-29 (Aaron), legacy protection was DELETED and rulesets are now the canonical protection surface for AceHack/Zeta. Future protection-config changes go through rulesets only. 
+type: feedback +--- + +# AceHack/Zeta protection config — dual-layer surprise + decommission decision + +## What happened (2026-04-29T14:00–14:05Z) + +During the 0/0/0 hard-reset, `git push --force-with-lease` against AceHack/main was rejected with: + +``` +remote: error: GH013: Repository rule violations found for refs/heads/main. +remote: - Cannot force-push to this branch +``` + +After disabling the only rulesets ruleset (id=15524390 "Default", `enforcement: disabled`) and retrying, the push was rejected AGAIN with a **different error code**: + +``` +remote: error: GH006: Protected branch update failed for refs/heads/main. +remote: - Cannot force-push to this branch +``` + +That second rejection came from the **legacy branch protection layer** at `/repos/{owner}/{repo}/branches/main/protection` (with `allow_force_pushes: {enabled: false}`), which is a separate enforcement surface from the rulesets system. + +## Aaron's confirmation + +> *"GH006 (legacy branch protection). i might have had them both turned on"* +> *"I knew there were two but I was confused why, the UI does not make it clear one is legacy, their UI is confusing but I do remember setting it twice."* + +So both layers had been configured at different times, both enforced together, and GitHub's UI does not visually surface that they coexist. + +## Maintainer decision (2026-04-29) + +> *"you could turn off both and leave the legacy off — when you turn back on, just turn back on the rulesets"* + +Executed: +- `gh api -X DELETE repos/AceHack/Zeta/branches/main/protection` → "Branch not protected" (404) +- `gh api -X PUT repos/AceHack/Zeta/rulesets/15524390 --input '{"enforcement": "disabled"}'` (briefly disabled for the push) +- `git push --force-with-lease=...` → succeeded +- `gh api -X PUT repos/AceHack/Zeta/rulesets/15524390 --input '{"enforcement": "active"}'` (re-enabled rulesets) + +Final config: rulesets active, legacy gone. Single source of truth for AceHack/Zeta branch policy. 
+ +## Error-code mapping (load-bearing for future debugging) + +| GitHub error code | Source | Surface | +|---|---|---| +| `GH013` | Rulesets ("Repository rules") | `/repos/{owner}/{repo}/rulesets` | +| `GH006` | Classic / legacy branch protection | `/repos/{owner}/{repo}/branches/{branch}/protection` | + +If a push gets rejected with one error code, disabling that layer alone does NOT guarantee the push will succeed — the OTHER layer may also be enforcing. Always check both surfaces when diagnosing protection-related rejection. + +## How to detect both layers exist on a repo (script) + +```bash +# Legacy branch protection +gh api repos/{owner}/{repo}/branches/{branch}/protection 2>&1 | head -3 +# Returns full config OR "Branch not protected" (404) + +# Repository rulesets +gh api repos/{owner}/{repo}/rulesets --jq '.[] | {id, name, enforcement, target}' +# Returns array of rulesets with enforcement state + +# Status flag (high-level) +gh api repos/{owner}/{repo}/branches/{branch} --jq '.protected' +# true if EITHER layer is active; doesn't tell you which one +``` + +## Why this matters going forward + +1. **Operational diagnosis**: future force-push or branch-policy issues should check BOTH surfaces. Don't trust `branch.protected` flag alone. +2. **Config drift**: future config changes must go through rulesets only; never re-create legacy branch protection on AceHack/Zeta. +3. **Cross-org applicability**: this is a GitHub-wide UI confusion (not specific to AceHack). Other repos in Lucent-Financial-Group / etc. might have the same dual-layer config. Worth checking on cadence. +4. **CLAUDE.md protocol verification**: CLAUDE.md says *"Force-push to AceHack main is part of the protocol"*. The rulesets `non_fast_forward` rule blocks this, which means **the rulesets config still doesn't match the documented protocol**. 
Either the protocol gets revised (no force-push, only sync via PR) or the ruleset's `non_fast_forward` rule needs a bypass-actor allowlist for the maintainer credential. Task #305-adjacent ("Set up acehack-first development workflow") is the home for that decision. + +## Composes with + +- `memory/feedback_destructive_git_op_5_pre_flight_disciplines_codex_gemini_2026_04_28.md` — pre-flight disciplines for destructive git ops (force-push needs `--force-with-lease=ref:exact-old-sha`) +- `docs/active-trajectory.md` — 0/0/0 hard-reset gate spec + post-reset state +- Task #305 (BACKLOG, pending) — set up acehack-first development workflow; protection-config protocol-vs-ruleset alignment goes here +- `memory/feedback_aaron_visibility_constraint_no_changes_he_cant_see_2026_04_28.md` — Aaron's visibility constraint; this case satisfied it because Aaron was repo admin on AceHack/Zeta and could see the toggles in UI (even if confused by the dual-layer surface) From f6d6a94b6175d2c4bb0e0061d78317b4998a6c71 Mon Sep 17 00:00:00 2001 From: Aaron Stainback Date: Wed, 29 Apr 2026 10:17:29 -0400 Subject: [PATCH 2/3] =?UTF-8?q?ops(0-0-0):=20address=20#844=20Codex=20P2?= =?UTF-8?q?=20=E2=80=94=20fix=20gh=20api=20--input=20syntax=20in=20protect?= =?UTF-8?q?ion-config=20memory?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Codex caught a real defect in the memory file: documented `gh api --input '{json}'` but `--input` takes a FILE PATH, not inline JSON. Future readers copy-pasting it would hit a failure. Fix: rewrite the Executed section to show the actual heredoc-from-stdin pattern that was used during the live operation: gh api -X PUT ... --input - <<'EOF' {"enforcement": "disabled"} EOF Plus added a clarifying note explaining the gh CLI flag semantics (--input <file>, --input - for stdin, -f/-F for typed inline fields). Memory file is now copy-paste-correct.
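As a triage aid, the GH013/GH006 error-code mapping documented in the memory file above is mechanical enough to sketch as a tiny classifier. This is a hypothetical helper (`classify_rejection` is not a script in the repo), a minimal POSIX-shell sketch of the mapping table, not an operational tool:

```shell
# Hypothetical triage helper: map a push-rejection message to the
# protection surface it came from, per the GH013/GH006 table above.
classify_rejection() {
  case "$1" in
    *GH013*) echo "rulesets surface: /repos/{owner}/{repo}/rulesets" ;;
    *GH006*) echo "legacy surface: /repos/{owner}/{repo}/branches/{branch}/protection" ;;
    *)       echo "unknown: audit BOTH surfaces" ;;
  esac
}

classify_rejection "remote: error: GH013: Repository rule violations found for refs/heads/main."
classify_rejection "remote: error: GH006: Protected branch update failed for refs/heads/main."
```

Note the fallback deliberately says to audit both surfaces: per the memory file, disabling the layer named by one error code does not guarantee the other layer isn't also enforcing.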
--- ...y_deleted_rulesets_canonical_2026_04_29.md | 26 ++++++++++++++++--- 1 file changed, 22 insertions(+), 4 deletions(-) diff --git a/memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md b/memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md index abcad371..a9450a99 100644 --- a/memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md +++ b/memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md @@ -36,10 +36,28 @@ So both layers had been configured at different times, both enforced together, a > *"you could turn off both and leave the legacy off — when you turn back on, just turn back on the rulesets"* Executed: -- `gh api -X DELETE repos/AceHack/Zeta/branches/main/protection` → "Branch not protected" (404) -- `gh api -X PUT repos/AceHack/Zeta/rulesets/15524390 --input '{"enforcement": "disabled"}'` (briefly disabled for the push) -- `git push --force-with-lease=...` → succeeded -- `gh api -X PUT repos/AceHack/Zeta/rulesets/15524390 --input '{"enforcement": "active"}'` (re-enabled rulesets) + +```bash +# Delete legacy branch protection +gh api -X DELETE repos/AceHack/Zeta/branches/main/protection +# → "Branch not protected" (404) + +# Disable rulesets enforcement (--input - reads JSON body from stdin via heredoc) +gh api -X PUT repos/AceHack/Zeta/rulesets/15524390 --input - <<'EOF' +{"enforcement": "disabled"} +EOF + +# The destructive force-push (now permitted) +git push --force-with-lease=refs/heads/main:OLD_SHA \ + acehack refs/remotes/origin/main:refs/heads/main + +# Re-enable rulesets enforcement (same heredoc pattern) +gh api -X PUT repos/AceHack/Zeta/rulesets/15524390 --input - <<'EOF' +{"enforcement": "active"} +EOF +``` + +Note on `gh api --input`: it takes a FILE PATH, not inline JSON. 
Use `--input -` to read from stdin (then pipe / heredoc the JSON body in), or `--input path/to/file.json` for a file. Inline JSON via `--input '{...}'` is not supported syntax — `gh` would treat the JSON string as a filename and fail. Alternative: `-f key=value` for individual fields, or `-F` for typed fields. Final config: rulesets active, legacy gone. Single source of truth for AceHack/Zeta branch policy. From 19f8f0b11cfd18f4227f32a6045a31564321a717 Mon Sep 17 00:00:00 2001 From: Aaron Stainback Date: Wed, 29 Apr 2026 10:25:06 -0400 Subject: [PATCH 3/3] ops(0-0-0): address LFG #844 Copilot threads (3 fixes + 1 wording) + add pr-preservation drain-logs for #844 + #101 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Carry-forward fixes for the 4 unresolved Copilot threads from LFG #844 (closed in favor of canonical AceHack-first reopening as AceHack #101 + this LFG forward-sync). Plus pr-preservation discipline going forward (Aaron 2026-04-29): every PR closed/merged → drain-log on LFG. ## Copilot thread fixes (memory file) 1. **Internal consistency on legacy DELETE response** (Copilot Thread 3) — the 404 came from my POST-DELETE verification GET, not from DELETE itself. DELETE returned rc=0 (success / 204 No Content); subsequent GET returned 404 "Branch not protected". Memory file now reflects the two-step accurately. 2. **"Task #305" wrong reference** (Copilot Thread 4) — should be **task #275** ("Set up acehack-first development workflow") in the in-session TaskList. Updated. Plus added clarifying parenthetical noting in-session-TaskList vs PR-numbers vs backlog-B-#### are distinct namespaces. 3. **Wording nit "the only rulesets ruleset"** (Copilot Thread 5) — adopted suggested rephrasing to "the only ruleset". 4. **`gh api --input` syntax** (Codex Thread 1, already RESOLVED in commit f6d6a94; Copilot Thread 2 is a duplicate finding addressed by the same fix). 
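For cold-start readers, the `--input` semantics behind the Codex fix above can be demonstrated without `gh` installed or any network access. This is a minimal sketch: `fake_gh_api` is a hypothetical stand-in that mimics only the documented file/stdin body handling of `gh api --input`, not the real CLI:

```shell
# Hypothetical stand-in for `gh api --input` body handling:
# --input -      reads the request body from stdin
# --input <file> reads the request body from a file path
# inline JSON is treated as a filename and fails
fake_gh_api() {
  if [ "$1" = "--input" ]; then
    if [ "$2" = "-" ]; then
      cat -                                      # body from stdin (heredoc/pipe)
    elif [ -f "$2" ]; then
      cat "$2"                                   # body from a file path
    else
      echo "error: open $2: no such file" >&2    # inline JSON lands here
      return 1
    fi
  fi
}

# Supported: heredoc body via stdin
fake_gh_api --input - <<'EOF'
{"enforcement": "disabled"}
EOF

# Unsupported: inline JSON is parsed as a filename and rejected
fake_gh_api --input '{"enforcement": "disabled"}' || echo "rejected, as the Codex thread predicted"
```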
## PR-preservation drain-logs (Aaron 2026-04-29 directive: every PR → drain-log on LFG) `docs/pr-preservation/lfg-844-drain-log.md`: - LFG #844 closed not merged (in favor of canonical AceHack-first) - 5 threads total: 1 Codex P2 (RESOLVED-AS-FIXED) + 4 Copilot (UNRESOLVED-CARRIED-FORWARD-AS-FIX, addressed by this commit) - Verbatim reviewer text + my response per thread - Outcome class summary + lessons-for-future `docs/pr-preservation/acehack-101-drain-log.md`: - AceHack #101 merged 14:19:41Z (squash → 5485772) - 0 review threads (AceHack has no Codex/Copilot reviewers + weaker required-status-checks rule) - Notes the **double-hop training-data observation**: AceHack's review surface is sparser than LFG's; the double-hop captures both, including the silence on AceHack as signal about the review-coverage asymmetry ## Going forward Per Aaron 2026-04-29: "we need to go through every PR review thread and make sure all there comments and our responses are saved and backed up git native too, and we should just be doing this everytime form now on going forwoard without fail just like resolving them. lfg should have a home already for this data from forks so it can collection fork specific data for those forks who want to send it, lfg also has a connoncial spot for the repo." Discipline: every PR (closed or merged, AceHack or LFG side) → drain-log file at `docs/pr-preservation/{fork}-{number}-drain-log.md` on LFG. Verbatim reviewer text + responses + outcome class. This collects high-signal training data for the alignment-experiment evaluation surface. Fork-specific naming (`lfg-`/`acehack-`/etc.) disambiguates per-fork numbering collisions. 
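The `docs/pr-preservation/{fork}-{number}-drain-log.md` naming discipline above is simple enough to sketch as a one-line helper; `drain_log_path` is hypothetical, not an existing repo script:

```shell
# Hypothetical helper implementing the drain-log naming discipline:
# every PR (closed or merged, any fork) gets one file on LFG at
# docs/pr-preservation/{fork}-{number}-drain-log.md
drain_log_path() {
  printf 'docs/pr-preservation/%s-%s-drain-log.md\n' "$1" "$2"
}

drain_log_path lfg 844      # prints docs/pr-preservation/lfg-844-drain-log.md
drain_log_path acehack 101  # prints docs/pr-preservation/acehack-101-drain-log.md
```

The fork prefix (`lfg-`/`acehack-`) is what disambiguates per-fork numbering collisions, so it belongs in the filename rather than only in the file body.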
--- docs/pr-preservation/acehack-101-drain-log.md | 32 ++++++ docs/pr-preservation/lfg-844-drain-log.md | 104 ++++++++++++++++++ ...y_deleted_rulesets_canonical_2026_04_29.md | 13 ++- 3 files changed, 144 insertions(+), 5 deletions(-) create mode 100644 docs/pr-preservation/acehack-101-drain-log.md create mode 100644 docs/pr-preservation/lfg-844-drain-log.md diff --git a/docs/pr-preservation/acehack-101-drain-log.md b/docs/pr-preservation/acehack-101-drain-log.md new file mode 100644 index 00000000..f97293af --- /dev/null +++ b/docs/pr-preservation/acehack-101-drain-log.md @@ -0,0 +1,32 @@ +# PR-preservation drain-log — AceHack #101 + +**PR:** AceHack/Zeta#101 +**Title:** ops(0-0-0): post-reset cleanup — stale-prose fixes + protection-config memory +**Opened:** 2026-04-29T14:18Z +**Merged:** 2026-04-29T14:19:41Z (squash; merge commit `5485772e87d74f3b96cdac4f39063cb0e82d7839`) +**Branch:** post-0-0-0-cleanup-2026-04-29 → main +**Status checks:** 17 ran (most short-running), no required-status-checks rule on AceHack so auto-merge fired ~2 min after open + +## Threads (0 review threads, 0 issue-level comments) + +AceHack/Zeta does NOT have Codex or Copilot installed as PR reviewers (or they didn't have time to file before auto-merge fired). Result: zero review-agent feedback was collected on this AceHack PR. The same content went through the LFG side as #844 first, where Codex + Copilot DID review and produce 5 substantive threads (preserved in `lfg-844-drain-log.md`). + +This is the **double-hop training-data observation in practice**: AceHack and LFG produce different review-agent feedback per identical content. AceHack has weaker/no review coverage; LFG has the full Codex + Copilot pass. The double-hop value here is the LFG side, not the AceHack side. Per Aaron's framing, BOTH are valuable training signal — silence on AceHack is also signal (telling us AceHack's review surface is sparser). 
+ +## Outcome class summary + +- 0 threads filed +- 0 issue-level comments +- Outcome class: AUTO-MERGED-NO-REVIEW + +## Lessons for future PRs + +1. **Review-agent coverage asymmetry between forks** is real and worth tracking. AceHack's review surface (Codex/Copilot bot configuration) is weaker than LFG's. The double-hop pattern compensates by routing every PR through LFG's review surface either before (AceHack-first → LFG forward-sync) or after (LFG-first → AceHack mirror). + +2. **AceHack rulesets has no required-status-checks rule.** PRs auto-merge without lint/test gates passing. This is acceptable for the dev-mirror role (rapid iteration) but means AceHack-side PR quality depends on the human author + the eventual LFG forward-sync gate catching anything. + +3. **Documenting the asymmetry**: AceHack's role is "where work lands first for fast iteration"; LFG's role is "where review rigor lands and becomes durable substrate." The training-data corpus collects from BOTH, with the understanding that they capture different things. + +## Relationship to LFG #844 + +This PR is the canonical-direction reopening of LFG #844 (which was opened LFG-first by mistake and closed per Aaron's correction: *"without the double-hop in a few hours we'll be right back to where we started — that's load-bearing to get right"*). The 5 review threads from #844 are preserved at `lfg-844-drain-log.md`; the corresponding fixes are committed to the same branch (`post-0-0-0-cleanup-2026-04-29`) and carried into both this AceHack PR and the upcoming LFG forward-sync PR. 
diff --git a/docs/pr-preservation/lfg-844-drain-log.md b/docs/pr-preservation/lfg-844-drain-log.md new file mode 100644 index 00000000..0ca9b666 --- /dev/null +++ b/docs/pr-preservation/lfg-844-drain-log.md @@ -0,0 +1,104 @@ +# PR-preservation drain-log — LFG #844 + +**PR:** Lucent-Financial-Group/Zeta#844 +**Title:** ops(0-0-0): post-reset cleanup — stale-prose fixes + protection-config memory +**Opened:** 2026-04-29T14:14Z +**Closed:** 2026-04-29T14:18:52Z (NOT MERGED — closed in favor of canonical AceHack-first reopening) +**Reason for close:** Aaron correction — *"without the double-hop in a few hours we'll be right back to where we started — that's load-bearing to get right"*. PR went LFG-first; canonical pattern is AceHack-first → LFG forward-sync → AceHack absorbs LFG squash-SHA. Reopened on AceHack as #101. + +## Threads (5 total: 1 resolved, 4 unresolved at close) + +### Thread 1 — Codex P2 — `gh api --input` syntax (RESOLVED via commit f6d6a94) + +**Author:** chatgpt-codex-connector +**Path:** memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md +**Filed:** 2026-04-29T14:16:33Z +**Outcome class:** FIX + +> **![P2 Badge](https://img.shields.io/badge/P2-yellow?style=flat) Use supported gh api request-body syntax** +> +> The `Executed` commands document `gh api ... --input '{"enforcement": "disabled"}'`, but `gh api` treats `--input` as a file argument (manual: `--input `, use `-` for stdin), so this form is interpreted as a filename and is not reproducible as written. Because this memory file is positioned as an operational reference for future protection toggles, the current command text can cause failed retries during incident handling; use a real file/stdin body or field flags instead. + +**My response (commit f6d6a94, 2026-04-29T14:17:34Z):** Fixed memory-file documentation. Replaced `gh api --input '{json}'` with the actual heredoc-from-stdin pattern that was used live: `gh api ... 
--input - <<'EOF'` + JSON body + `EOF`. Added clarifying note about gh CLI flag semantics (`--input <file>`, `--input -` for stdin, `-f`/`-F` for typed inline fields). +### Thread 2 — Copilot — `gh api --input` (UNRESOLVED at close, addressed in carry-forward to #101 + LFG forward-sync) + +**Author:** copilot-pull-request-reviewer +**Path:** memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md (line 42) +**Filed:** 2026-04-29T14:19:27Z +**Outcome class:** FIX (duplicate finding, same root cause as Codex Thread 1) + +> The `gh api ... --input '{"enforcement": ...}'` invocations won't work as written: `gh api --input` expects a file path (or stdin via `--input -`), not an inline JSON string. Use `-f enforcement=...`/`-F enforcement=...`, or pass JSON via stdin (e.g., `--input -`) so this is copy/paste-correct for future audits. + +**Resolution path:** Already addressed by the same f6d6a94 commit (which Copilot didn't see at filing time). The memory file now shows the correct heredoc-stdin pattern AND mentions `-f`/`-F` as alternatives. Carry-forward to AceHack #101 means the corrected text is already on the branch. + +### Thread 3 — Copilot — internal consistency on legacy DELETE response (UNRESOLVED, FIX in carry-forward) + +**Author:** copilot-pull-request-reviewer +**Path:** memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md (line 43) +**Filed:** 2026-04-29T14:19:28Z +**Outcome class:** FIX + +> The step list appears internally inconsistent: the earlier GH006 rejection indicates legacy branch protection was active, but the subsequent `gh api -X DELETE .../branches/main/protection` line claims a 404 "Branch not protected" response (which typically means there was no classic protection to delete).
To keep this memory reliable, please reconcile this (e.g., clarify an intermediate UI/API change, confirm the endpoint/repo/branch used, or update the recorded responses). + +> ```suggestion +> # Delete legacy branch protection (this removed the GH006 blocker) +> gh api -X DELETE repos/AceHack/Zeta/branches/main/protection +> # → 204 No Content +> ``` + +**Resolution path (commit forthcoming on LFG forward-sync):** A real bug — the 404 came from my post-DELETE verification GET, not from the DELETE itself. The DELETE returned rc=0 (success / 204 No Content); a subsequent `gh api repos/AceHack/Zeta/branches/main/protection` (GET) returned `{"message":"Branch not protected","status":"404"}` because the branch was no longer protected. The memory file now records this two-step sequence accurately: +> ```bash +> # Delete legacy branch protection (this removed the GH006 blocker) +> gh api -X DELETE repos/AceHack/Zeta/branches/main/protection +> # → success (rc=0, no body printed; GitHub returns 204 No Content) +> # Subsequent verification GET returns 404 "Branch not protected": +> # gh api repos/AceHack/Zeta/branches/main/protection +> # → {"message":"Branch not protected", "status":"404"} +> ``` + +### Thread 4 — Copilot — "Task #305" ambiguous reference (UNRESOLVED, FIX in carry-forward) + +**Author:** copilot-pull-request-reviewer +**Path:** memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md (line 82) +**Filed:** 2026-04-29T14:19:28Z +**Outcome class:** FIX (real ambiguity — wrong reference) + +> References to "Task #305" here are ambiguous in-repo (this repo already has a PR #305, and backlog rows use `B-####` IDs under `docs/backlog/**`). Consider replacing "Task #305"/"#305-adjacent" with the actual backlog-row ID (if any) or a direct URL so readers don't chase the wrong artifact. + +**Resolution path (commit forthcoming on LFG forward-sync):** Two-part real fix: +1. 
The reference was actually wrong — it should have been **Task #275** ("Set up acehack-first development workflow") in the in-session TaskList tracker, not #305. Updated. +2. Added a clarifying parenthetical noting the distinction between the in-session TaskList numbers vs PR numbers vs backlog `B-####` row IDs — different namespaces, easy to confuse. + +### Thread 5 — Copilot — wording nit "the only rulesets ruleset" (UNRESOLVED, FIX in carry-forward) + +**Author:** copilot-pull-request-reviewer +**Path:** memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md (line 18) +**Filed:** 2026-04-29T14:19:28Z +**Outcome class:** FIX (small) + +> Minor wording nit: "the only rulesets ruleset" reads like a duplication. Consider rephrasing to "the only ruleset" / "the only rulesets entry" for clarity. + +**Resolution path (commit forthcoming on LFG forward-sync):** Adopted the suggested rephrasing — "the only ruleset (id=15524390 "Default"...)". + +## Issue-level comments + +- **Otto/AceHack 14:17:34Z** — fix-note for Codex P2 thread (commit f6d6a94) +- **Otto/AceHack 14:18:51Z** — close-note explaining double-hop pivot per Aaron's correction + +## Outcome class summary + +- 5 threads total: 1 RESOLVED-AS-FIXED + 4 UNRESOLVED-CARRIED-FORWARD-AS-FIX +- 0 threads classified STALE-RESOLVED-BY-REALITY / OTTO-279-SURFACE-CLASS / DEFERRED-TO-MAINTAINER / VERBATIM-PRESERVATION-DECLINED +- All threads are operational-correctness / documentation-accuracy class; no policy or attribution disputes + +## Lessons for future PRs + +1. **Documented commands need to be runnable as-written.** Memory files act as an operational reference at cold-start; bad command syntax becomes a muscle-memory hazard. (Echoes the _patterns.md FIX class.) +2. **Two-step API operations need explicit step recording** — DELETE vs verification GET responses look different; conflating them creates false internal-inconsistency findings. +3. 
**Cross-namespace reference ambiguity** — the repo has PR numbers, backlog `B-####` rows, in-session TaskList numbers, and hygiene-history shards, all using different conventions. Always disambiguate when referencing a numbered artifact. +4. **"The only X X"** — duplication slips like this creep into prose that aims for precision. The Codex/Copilot review pass catches them reliably; a self-pass before commit is worth it when writing dense technical docs. + +## Carry-forward to AceHack #101 + LFG forward-sync PR + +All four UNRESOLVED Copilot findings + the one already-RESOLVED Codex finding produced corrections that are now committed to the `post-0-0-0-cleanup-2026-04-29` branch. The branch is on AceHack as #101 (merged 14:19:41Z) and is being repushed to LFG as the forward-sync PR. diff --git a/memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md b/memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md index a9450a99..21397a11 100644 --- a/memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md +++ b/memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md @@ -15,7 +15,7 @@ remote: error: GH013: Repository rule violations found for refs/heads/main. remote: - Cannot force-push to this branch ``` -After disabling the only rulesets ruleset (id=15524390 "Default", `enforcement: disabled`) and retrying, the push was rejected AGAIN with a **different error code**: +After disabling the only ruleset (id=15524390 "Default", `enforcement: disabled`) and retrying, the push was rejected AGAIN with a **different error code**: ``` remote: error: GH006: Protected branch update failed for refs/heads/main. 
@@ -38,9 +38,12 @@ So both layers had been configured at different times, both enforced together, a Executed: ```bash -# Delete legacy branch protection +# Delete legacy branch protection (this removed the GH006 blocker) gh api -X DELETE repos/AceHack/Zeta/branches/main/protection -# → "Branch not protected" (404) +# → success (rc=0, no body printed; GitHub returns 204 No Content) +# Subsequent verification GET returns 404 "Branch not protected": +# gh api repos/AceHack/Zeta/branches/main/protection +# → {"message":"Branch not protected", "status":"404"} # Disable rulesets enforcement (--input - reads JSON body from stdin via heredoc) gh api -X PUT repos/AceHack/Zeta/rulesets/15524390 --input - <<'EOF' @@ -91,11 +94,11 @@ gh api repos/{owner}/{repo}/branches/{branch} --jq '.protected' 1. **Operational diagnosis**: future force-push or branch-policy issues should check BOTH surfaces. Don't trust `branch.protected` flag alone. 2. **Config drift**: future config changes must go through rulesets only; never re-create legacy branch protection on AceHack/Zeta. 3. **Cross-org applicability**: this is a GitHub-wide UI confusion (not specific to AceHack). Other repos in Lucent-Financial-Group / etc. might have the same dual-layer config. Worth checking on cadence. -4. **CLAUDE.md protocol verification**: CLAUDE.md says *"Force-push to AceHack main is part of the protocol"*. The rulesets `non_fast_forward` rule blocks this, which means **the rulesets config still doesn't match the documented protocol**. Either the protocol gets revised (no force-push, only sync via PR) or the ruleset's `non_fast_forward` rule needs a bypass-actor allowlist for the maintainer credential. Task #305-adjacent ("Set up acehack-first development workflow") is the home for that decision. +4. **CLAUDE.md protocol verification**: CLAUDE.md says *"Force-push to AceHack main is part of the protocol"*. 
The rulesets `non_fast_forward` rule blocks this, which means **the rulesets config still doesn't match the documented protocol**. Either the protocol gets revised (no force-push, only sync via PR) or the ruleset's `non_fast_forward` rule needs a bypass-actor allowlist for the maintainer credential. Task #275-adjacent ("Set up acehack-first development workflow") is the home for that decision. ## Composes with - `memory/feedback_destructive_git_op_5_pre_flight_disciplines_codex_gemini_2026_04_28.md` — pre-flight disciplines for destructive git ops (force-push needs `--force-with-lease=ref:exact-old-sha`) - `docs/active-trajectory.md` — 0/0/0 hard-reset gate spec + post-reset state -- Task #305 (BACKLOG, pending) — set up acehack-first development workflow; protection-config protocol-vs-ruleset alignment goes here +- Task #275 (TaskList, pending: "Set up acehack-first development workflow") — protection-config protocol-vs-ruleset alignment goes under that lane. Note: distinct from PR numbers such as #305, which are unrelated artifacts; this is the in-session TaskCreate/TaskList tracker. - `memory/feedback_aaron_visibility_constraint_no_changes_he_cant_see_2026_04_28.md` — Aaron's visibility constraint; this case satisfied it because Aaron was repo admin on AceHack/Zeta and could see the toggles in UI (even if confused by the dual-layer surface)
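
---

Editor's addendum (outside the patch body): the flag semantics that Threads 1 and 2 turn on can be demonstrated without touching GitHub. A minimal runnable sketch of the heredoc-to-stdin pattern, with `cat -` standing in for `gh api ... --input -` since the real calls need maintainer auth against AceHack/Zeta:

```shell
# `gh api --input` takes a file path, and "-" means "read the request body
# from stdin" -- an inline JSON string would be treated as a filename and fail.
# `cat -` stands in for `gh api ... --input -` so this runs without credentials;
# the quoted heredoc delivers the body on stdin exactly as gh would receive it.
cat - <<'EOF'
{"enforcement": "disabled"}
EOF
# prints: {"enforcement": "disabled"}

# The live shape recorded in the memory file (requires auth; reference only):
#   gh api -X PUT repos/AceHack/Zeta/rulesets/15524390 --input - <<'EOF'
#   {"enforcement": "disabled"}
#   EOF
```

The typed-field alternative Copilot suggested (`-f enforcement=disabled`) avoids the stdin dance entirely for single-field bodies.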