
ops(0-0-0): post-reset cleanup — stale-prose fixes + protection-config memory#844

Closed
AceHack wants to merge 2 commits into main from post-0-0-0-cleanup-2026-04-29

Conversation

@AceHack AceHack commented Apr 29, 2026

Summary

This is the small in-lane cleanup PR Amara prescribed after 0/0/0 was reached. Two scoped changes:

  1. Stale-prose fixes in docs/active-trajectory.md — flip post-reset contradicting language to in-force 0/0/0-achieved language
  2. Protection-config memory — document the dual-layer surprise + Aaron's "delete legacy, rulesets canonical" decision

🎯 0/0/0 ACHIEVED 2026-04-29T14:04:50Z

AceHack/main = LFG/main = 621aae082d70fcbf36931718ecf1b6d9e149295f
Topology:    0 ahead, 0 behind, 0 file content diff
Archive ref: archive/acehack-main-pre-000-reset-2026-04-29 → 6755081... (preserved)
Layers:      legacy DELETED (per Aaron); rulesets re-enabled

Stale-prose fixes (Amara substrate-pass catch)

Two paragraphs flipped from pre-reset state to in-force post-reset state:

  • Line 221: "Currently NOT signoff-eligible" → "0/0/0 ACHIEVED 2026-04-29T14:04:50Z..."
  • Line 413: "Hard-reset is NOT YET signoff-eligible" → "Hard-reset complete (2026-04-29T14:04:50Z)..."

This is Derived-Rollup Drift class — primary state changed, downstream prose still claims old state. Caught pre-commit by Amara's substrate pass; not a Codex/Copilot retry.
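A guard for this drift class is just a scan for the known stale phrases. A minimal, hypothetical sketch (not an existing repo script; the phrases are the two fixed in this PR, exercised here against a temp file so it runs anywhere):

```shell
# Simulate the drift check: write a doc containing stale pre-reset prose,
# then scan for the phrases this PR flipped. Hypothetical guard, not a
# repo script; real usage would point grep at docs/active-trajectory.md.
tmp=$(mktemp)
printf 'Currently NOT signoff-eligible\n' > "$tmp"
if grep -qE 'NOT (YET )?signoff-eligible' "$tmp"; then
  echo stale-prose-found
fi
rm -f "$tmp"
```

A CI step built on this pattern would exit nonzero on a match instead of echoing.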

Protection-config memory

memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md:

  • AceHack/Zeta had BOTH legacy branch protection AND repository rulesets on main
  • Both layers enforced independently; GitHub UI doesn't surface dual-layer state
  • Aaron: "I knew there were two but I was confused why."
  • Maintainer call: legacy DELETED, rulesets canonical going forward
  • Error-code mapping: GH013 = rulesets surface, GH006 = legacy surface
  • Diagnostic script (gh api commands) for future audits
  • Future-protocol note: rulesets non_fast_forward rule still doesn't match CLAUDE.md's "force-push to AceHack main is part of protocol" — task #305 is the home for that decision
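The error-code mapping above lends itself to a tiny triage helper. A minimal sketch, assuming GH013 and GH006 are the only codes seen in this incident (the function name is hypothetical, not a repo script):

```shell
# Map a rejected-push message to the enforcing protection layer, per the
# incident's error-code mapping: GH013 = rulesets, GH006 = legacy branch
# protection. Anything else falls through to "unknown". Hypothetical helper.
classify_push_error() {
  case "$1" in
    *GH013*) echo rulesets ;;
    *GH006*) echo legacy ;;
    *)       echo unknown ;;
  esac
}

classify_push_error 'remote: error GH013: Cannot force-push to this branch'
```

Feeding it the live rejection text from a failed push is enough to know which surface to audit first.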

MEMORY.md index updated with one-line pointer.

Tick shard 1410Z

Records the entire 0/0/0 hard-reset arc:

  • Triple-check buddy review (Amara approved meaningful-content-loss-free)
  • Verify-only gate packet (5/5 PASS at 13:39Z)
  • Aaron's explicit EXECUTE at 13:58Z
  • Steps A/B/C with the dual-layer surprise + recovery
  • Path 1 v3 success at 14:04:50Z

What this PR does NOT do (per Amara's lane discipline)

  • ❌ Does NOT start the recovery lane (inventory parked at /tmp/recovery-inventory-2026-04-29.tsv, awaits Amara's classification framework which just landed)
  • ❌ Does NOT consolidate the 8 deferred-queue rule candidates (P1 work, post-0/0/0-success-trigger satisfied but lane discipline says one cleanup PR first)
  • ❌ Does NOT touch Aurora extension (P2)
  • ❌ Does NOT mutate any branches/worktrees/stashes (Aaron's authority for irreversible)

Authority boundary going forward (per Amara post-reset packet)

Reversible + in-lane + PR-reviewed → proceed autonomously
Irreversible / deletion / force-push / authority config / identity canon → ask Aaron
Unclear → stop, report exact uncertainty, propose one safe action

Test plan

  • Stale "Currently NOT signoff-eligible" → in-force 0/0/0-achieved language
  • Stale "Hard-reset is NOT YET signoff-eligible" → "Hard-reset complete" language
  • Memory file written + MEMORY.md index updated
  • Tick shard 1410Z appended
  • No new ledger headline introduced (273/0/0 doesn't need to flip — it's the final state)
  • CI green
  • Codex / Copilot reviews resolved if any threads land

🤖 Generated with Claude Code

…e fixed, protection-config memory landed

Completes Amara's prescribed post-reset cleanup PR: stale-prose drift in active-trajectory.md fixed + protection-config finding documented.

## 0/0/0 ACHIEVED (2026-04-29T14:04:50Z)

- AceHack/main = LFG/main = 621aae0
- Topology: 0 ahead, 0 behind, 0 file content diff
- Old AceHack tip preserved at archive/acehack-main-pre-000-reset-2026-04-29 → 6755081...
- Legacy branch protection DELETED per Aaron; rulesets canonical going forward

## Stale-prose fixes (active-trajectory.md)

Two paragraphs flipped from pre-reset state to in-force post-reset state:
- Line 221: "Currently NOT signoff-eligible" → "0/0/0 ACHIEVED 2026-04-29T14:04:50Z..."
- Line 413: "Hard-reset is NOT YET signoff-eligible" → "Hard-reset complete (2026-04-29T14:04:50Z)..."

Per Amara's substrate-pass catch (2026-04-29 buddy review): residual prose contradicted the 273/0/0 ledger state. This is Derived-Rollup Drift class.

## Protection-config memory file

`memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md` documents:
- AceHack/Zeta had BOTH legacy branch protection AND repository rulesets on main
- Both layers enforced independently; GitHub UI doesn't surface dual-layer state
- Aaron's confirmation: "I knew there were two but I was confused why"
- Maintainer call: legacy DELETED; rulesets canonical
- Error-code mapping: GH013 = rulesets surface; GH006 = legacy surface
- Diagnostic script (gh api commands) for future protection-config audits
- Future-protocol note: rulesets non_fast_forward rule still doesn't match CLAUDE.md "force-push to AceHack main is part of protocol" — task #305 home for that decision

MEMORY.md index updated with one-line pointer.

## Tick shard 1410Z

Records the entire 0/0/0 hard-reset arc:
- Triple-check buddy review (Amara approved meaningful-content-loss-free)
- Verify-only gate packet (5/5 PASS at 13:39Z)
- Aaron's explicit EXECUTE at 13:58Z
- Step A (archive ref preservation) succeeded
- Step B (force-push) failed twice — GH013 then GH006 — discovered dual-layer protection
- Aaron's "leave legacy off" decision
- Path 1 v3 succeeded at 14:04:50Z
- Trap-restore re-enabled rulesets only

## Per Amara post-reset framework

This PR is the small in-lane cleanup. After it merges:
- Recovery lane starts in INVENTORY-ONLY mode (per Amara's 7-bucket framework: ALREADY_REACHABLE / OBSOLETE_SUPERSEDED / PRESERVE_REF_ONLY / OPEN_PR_CANDIDATE / EXTRACT_MEMORY_OR_DOC / NEEDS_AARON_DECISION / CORRUPT_OR_UNREADABLE)
- 918 branches + 58 worktrees + 7 stashes inventoried at /tmp/recovery-inventory-2026-04-29.tsv
- No mutation until classification summary reviewed
- Deferred queue P1 consolidation (8 rule candidates) waits per Amara's "trigger after hard-reset success" rule — now satisfied, but lane discipline keeps it in P1 not P0

## Authority boundary going forward

- Reversible + in-lane + PR-reviewed → proceed autonomously
- Irreversible / deletion / force-push / authority config / identity canon → ask Aaron
- Unclear → stop, report exact uncertainty, propose one safe action
Copilot AI review requested due to automatic review settings April 29, 2026 14:14
@AceHack AceHack enabled auto-merge (squash) April 29, 2026 14:14

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 8630c28e57


…ection-config memory

Codex caught a real defect in the memory file: documented `gh api --input '{json}'` but `--input` takes a FILE PATH, not inline JSON. Future readers copy-pasting would hit failure.

Fix: rewrite the Executed section to show the actual heredoc-from-stdin pattern that was used during the live operation:
  gh api -X PUT ... --input - <<'EOF'
  {"enforcement": "disabled"}
  EOF

Plus added a clarifying note explaining the gh CLI flag semantics (--input <file>, --input - for stdin, -f/-F for typed inline fields). Memory file is now copy-paste-correct.
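The stdin pattern can be exercised offline by substituting `cat` for `gh api` (same `--input -` body shape; `cat` is a stand-in so the sketch runs without credentials or network):

```shell
# gh api ... --input - reads the JSON request body from stdin; the heredoc
# below is the body shape used live. cat stands in for gh api here so the
# plumbing can be verified without hitting GitHub.
cat <<'EOF'
{"enforcement": "disabled"}
EOF
```

Swapping `cat` back to `gh api -X PUT repos/{owner}/{repo}/rulesets/{id} --input -` gives the live invocation; `-f enforcement=disabled` is the typed-inline-field alternative.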

AceHack commented Apr 29, 2026

Codex P2 (14:16Z) addressed in f6d6a94. Fixed memory-file documentation: gh api --input '{json}' was wrong syntax (gh treats --input value as a file path). Replaced with the actual heredoc-from-stdin pattern that was used live: gh api ... --input - <<'EOF' + JSON body + EOF. Added clarifying note about gh CLI flag semantics for future readers.


AceHack commented Apr 29, 2026

Closing per Aaron's correction: this PR went LFG-first, but the canonical pattern is AceHack-first → LFG forward-sync → AceHack absorbs LFG squash-SHA. "Without the double-hop in a few hours we'll be right back to where we started — that's load-bearing to get right." Branch post-0-0-0-cleanup-2026-04-29 is being repushed to AceHack remote and opened there as the canonical first PR. Codex P2 review feedback (gh api --input syntax fix) is preserved as commit f6d6a94 on the branch + carried into the AceHack PR.

@AceHack AceHack closed this Apr 29, 2026
auto-merge was automatically disabled April 29, 2026 14:18

Pull request was closed


Copilot AI left a comment


Pull request overview

Post-0/0/0 cleanup to align durable docs/memory with the now-in-force hard-reset state and to capture the branch-protection dual-layer incident for future debugging.

Changes:

  • Updates docs/active-trajectory.md to reflect that the 0/0/0 hard-reset is complete (replacing stale “not signoff-eligible” prose).
  • Adds a new memory entry documenting the dual-layer (rulesets + legacy branch protection) enforcement surprise and the chosen canonical surface going forward.
  • Updates the memory index and appends a tick shard recording the reset arc and this cleanup PR.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.

| File | Description |
| --- | --- |
| memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md | New memory capturing the GH013/GH006 dual-layer behavior, decision, and diagnostics. |
| memory/MEMORY.md | Adds an index entry pointing to the new protection-config memory. |
| docs/hygiene-history/ticks/2026/04/29/1410Z.md | New tick shard recording the 0/0/0 achievement and follow-on cleanup PR. |
| docs/active-trajectory.md | Flips two stale paragraphs to reflect the achieved post-reset state. |

Comment on lines +39 to +42

```bash
# Delete legacy branch protection
gh api -X DELETE repos/AceHack/Zeta/branches/main/protection
```

Copilot AI Apr 29, 2026


The gh api ... --input '{"enforcement": ...}' invocations won’t work as written: gh api --input expects a file path (or stdin via --input -), not an inline JSON string. Use -f enforcement=.../-F enforcement=..., or pass JSON via stdin (e.g., --input -) so this is copy/paste-correct for future audits.

Comment on lines +41 to +43
```bash
# Delete legacy branch protection
gh api -X DELETE repos/AceHack/Zeta/branches/main/protection
# → "Branch not protected" (404)
```

Copilot AI Apr 29, 2026


The step list appears internally inconsistent: the earlier GH006 rejection indicates legacy branch protection was active, but the subsequent gh api -X DELETE .../branches/main/protection line claims a 404 "Branch not protected" response (which typically means there was no classic protection to delete). To keep this memory reliable, please reconcile this (e.g., clarify an intermediate UI/API change, confirm the endpoint/repo/branch used, or update the recorded responses).

Suggested change

```diff
-# Delete legacy branch protection
-gh api -X DELETE repos/AceHack/Zeta/branches/main/protection
-# → "Branch not protected" (404)
+# Delete legacy branch protection (this removed the GH006 blocker)
+gh api -X DELETE repos/AceHack/Zeta/branches/main/protection
+# 204 No Content
```

Comment on lines +76 to +82
```bash
# Legacy branch protection
gh api repos/{owner}/{repo}/branches/{branch}/protection 2>&1 | head -3
# Returns full config OR "Branch not protected" (404)

# Repository rulesets
gh api repos/{owner}/{repo}/rulesets --jq '.[] | {id, name, enforcement, target}'
# Returns array of rulesets with enforcement state
```

Copilot AI Apr 29, 2026


References to “Task #305” here are ambiguous in-repo (this repo already has a PR #305, and backlog rows use B-#### IDs under docs/backlog/**). Consider replacing “Task #305”/“#305-adjacent” with the actual backlog-row ID (if any) or a direct URL so readers don’t chase the wrong artifact.

```
remote: - Cannot force-push to this branch
```

After disabling the only rulesets ruleset (id=15524390 "Default", `enforcement: disabled`) and retrying, the push was rejected AGAIN with a **different error code**:

Copilot AI Apr 29, 2026


Minor wording nit: “the only rulesets ruleset” reads like a duplication. Consider rephrasing to “the only ruleset” / “the only rulesets entry” for clarity.

Suggested change

```diff
-After disabling the only rulesets ruleset (id=15524390 "Default", `enforcement: disabled`) and retrying, the push was rejected AGAIN with a **different error code**:
+After disabling the only ruleset (id=15524390 "Default", `enforcement: disabled`) and retrying, the push was rejected AGAIN with a **different error code**:
```

AceHack added a commit to AceHack/Zeta that referenced this pull request Apr 29, 2026
…g memory (#101)

* ops(0-0-0): post-reset cleanup — 0/0/0 achieved 14:04:50Z, stale-prose fixed, protection-config memory landed

* ops(0-0-0): address Lucent-Financial-Group#844 Codex P2 — fix gh api --input syntax in protection-config memory

(Squashed commit body duplicates the PR description and Codex-fix commit message above.)
AceHack added a commit that referenced this pull request Apr 29, 2026
…add pr-preservation drain-logs for #844 + #101

Carry-forward fixes for the 4 unresolved Copilot threads from LFG #844 (closed in favor of canonical AceHack-first reopening as AceHack #101 + this LFG forward-sync). Plus pr-preservation discipline going forward (Aaron 2026-04-29): every PR closed/merged → drain-log on LFG.

## Copilot thread fixes (memory file)

1. **Internal consistency on legacy DELETE response** (Copilot Thread 3) — the 404 came from my POST-DELETE verification GET, not from DELETE itself. DELETE returned rc=0 (success / 204 No Content); subsequent GET returned 404 "Branch not protected". Memory file now reflects the two-step accurately.

2. **"Task #305" wrong reference** (Copilot Thread 4) — should be **task #275** ("Set up acehack-first development workflow") in the in-session TaskList. Updated. Plus added clarifying parenthetical noting in-session-TaskList vs PR-numbers vs backlog-B-#### are distinct namespaces.

3. **Wording nit "the only rulesets ruleset"** (Copilot Thread 5) — adopted suggested rephrasing to "the only ruleset".

4. **`gh api --input` syntax** (Codex Thread 1, already RESOLVED in commit f6d6a94; Copilot Thread 2 is a duplicate finding addressed by the same fix).

## PR-preservation drain-logs (Aaron 2026-04-29 directive: every PR → drain-log on LFG)

`docs/pr-preservation/lfg-844-drain-log.md`:
- LFG #844 closed not merged (in favor of canonical AceHack-first)
- 5 threads total: 1 Codex P2 (RESOLVED-AS-FIXED) + 4 Copilot (UNRESOLVED-CARRIED-FORWARD-AS-FIX, addressed by this commit)
- Verbatim reviewer text + my response per thread
- Outcome class summary + lessons-for-future

`docs/pr-preservation/acehack-101-drain-log.md`:
- AceHack #101 merged 14:19:41Z (squash → 5485772)
- 0 review threads (AceHack has no Codex/Copilot reviewers + weaker required-status-checks rule)
- Notes the **double-hop training-data observation**: AceHack's review surface is sparser than LFG's; the double-hop captures both, including the silence on AceHack as signal about the review-coverage asymmetry

## Going forward

Per Aaron 2026-04-29: "we need to go through every PR review thread and make sure all there comments and our responses are saved and backed up git native too, and we should just be doing this everytime form now on going forwoard without fail just like resolving them. lfg should have a home already for this data from forks so it can collection fork specific data for those forks who want to send it, lfg also has a connoncial spot for the repo."

Discipline: every PR (closed or merged, AceHack or LFG side) → drain-log file at `docs/pr-preservation/{fork}-{number}-drain-log.md` on LFG. Verbatim reviewer text + responses + outcome class. This collects high-signal training data for the alignment-experiment evaluation surface. Fork-specific naming (`lfg-`/`acehack-`/etc.) disambiguates per-fork numbering collisions.
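The naming convention pins down a deterministic path per PR. A minimal sketch (the helper name is hypothetical; the convention is the `docs/pr-preservation/{fork}-{number}-drain-log.md` one stated above):

```shell
# Build the drain-log path for a PR from its fork label and number,
# following the docs/pr-preservation/{fork}-{number}-drain-log.md
# convention. Hypothetical helper, not a repo script.
drain_log_path() {
  printf 'docs/pr-preservation/%s-%s-drain-log.md\n' "$1" "$2"
}

drain_log_path lfg 844      # docs/pr-preservation/lfg-844-drain-log.md
drain_log_path acehack 101  # docs/pr-preservation/acehack-101-drain-log.md
```

Because the fork label is part of the filename, AceHack #101 and LFG #101 can never collide.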
AceHack added a commit that referenced this pull request Apr 29, 2026
…r-preservation drain-logs (mirror of AceHack #101) (#845)

* ops(0-0-0): post-reset cleanup — 0/0/0 achieved 14:04:50Z, stale-prose fixed, protection-config memory landed

* ops(0-0-0): address #844 Codex P2 — fix gh api --input syntax in protection-config memory

* ops(0-0-0): address LFG #844 Copilot threads (3 fixes + 1 wording) + add pr-preservation drain-logs for #844 + #101

(Squashed commit body duplicates the PR description and the two commit messages above.)
AceHack added a commit to AceHack/Zeta that referenced this pull request Apr 29, 2026
…PO + fork-naming rename (clean-base) (#103)

* ops(0-0-0): address LFG Lucent-Financial-Group#844 Copilot threads (3 fixes + 1 wording) + add pr-preservation drain-logs for Lucent-Financial-Group#844 + #101

(Commit body duplicates the drain-log commit message above.)

* ops(0-0-0): post-#101 follow-up — fix Copilot threads + add full PR archives + correct cross-fork drain-log

Addresses 4 Copilot threads from AceHack #101 (filed 14:24:11Z, after auto-merge fired at 14:19:41Z), plus 4 Copilot threads from LFG Lucent-Financial-Group#844 (already addressed in Lucent-Financial-Group#845; Copilot also reviewed AceHack #101 and re-flagged 2 of them, since the AceHack side hadn't yet received the carry-forward), plus Amara's post-Lucent-Financial-Group#845 substantive correction on PR-preservation tool usage.

## Copilot thread fixes (memory file)

1. **P1 broken xref** to `memory/feedback_aaron_visibility_constraint_no_changes_he_cant_see_2026_04_28.md` (the file lives in user-scope memory only, not in-repo; cross-reference was therefore broken). Fixed: replaced with prose pointer to the underlying principle + note flagging the same issue exists in MEMORY.md index.

2. **P1 internal consistency on legacy DELETE response** — same finding as LFG #844 Thread 3, addressed by carry-forward in LFG #845. Now reflected on AceHack via this commit.

3. **P2 wording "the only rulesets ruleset"** — same finding as LFG #844 Thread 5, addressed by carry-forward in LFG #845. Now reflected on AceHack via this commit.

4. **P2 MEMORY.md index entry too long** — trimmed from a 4-line dense paragraph to a single concise line per `memory/README.md` discipline. Detail stays in the linked memory file.

## PR archives (Amara post-#845 directive: use existing `tools/pr-preservation/archive-pr.sh`)

Three full-archive files added under `docs/pr-discussions/`:

- `PR-0844-...md` — closed LFG #844 (5 threads, 2 reviews, 2 issue comments)
- `PR-0845-...md` — merged LFG #845 (0 threads, 1 review, 0 comments — clean forward-sync)
- `PR-acehack-0101-...md` — merged AceHack #101 (4 threads, 1 review, 0 comments). **Fork-prefixed filename** to disambiguate from LFG #101 (which is a different unrelated PR from 2026-04-22 about auto-loop-10 tick-history). The existing tool's `gh repo view --json owner,name` call resolves to the current-clone origin; for cross-fork archives, run `gh repo set-default <fork>/<repo>` first, then the script, then reset the default. Captured as a tool-improvement candidate (the script could accept a `--repo` arg to make cross-fork archives one-shot).

## Drain-log correction for AceHack #101

The earlier drain-log claimed 0 threads (because I queried before Copilot's review landed at 14:24:11Z, ~5 min after auto-merge). Updated to reflect the actual 4 unresolved threads + their carry-forward resolution paths.

## Lesson captured (drain-log, lessons section)

**AceHack auto-merge races Copilot review.** Without required-conversation-resolution + required-status-checks on AceHack, auto-merge fires before reviewers land threads. Threads still apply to merged content; just need a follow-up cycle to land fixes. This is exactly what this PR is — the follow-up.

**The double-hop captures BOTH waves of review.** When AceHack auto-merges fast, the LFG forward-sync PR re-runs review and catches the same findings. Double-hop is also a *redundancy mechanism* against fast-merge-on-AceHack.

## Lane discipline

This PR opens AceHack-first per canonical double-hop. After merge → forward-sync to LFG. After both merge → AceHack absorbs LFG squash-SHA (gates on Aaron's EXECUTE). Then the post-cleanup-cleanup-cleanup is FINALLY done, and we can pivot to recovery-lane classification.

Per Amara: "Double-hop close is the active lane. Do not start branch/worktree recovery until: PR full archives are committed, LFG #845 artifacts are preserved, AceHack absorption completes, 0/0/0 is re-verified."

* tools/pr-preservation: archive-pr.sh — add GH_REPO env var override for cross-fork archives (Aaron 2026-04-29)

Per Aaron 2026-04-29: "respect GH_REPO we should fix"

The script previously hard-resolved the target repo via `gh repo view --json nameWithOwner`, which always returns the current-clone's repo (typically `Lucent-Financial-Group/Zeta`). For cross-fork archives — e.g., archiving an AceHack PR from a clone tracking LFG — this returned the wrong repo and the script either:
- Archived the WRONG PR (a same-numbered PR in the default repo), or
- Failed silently / produced misleading filename slugs.

Workaround: `gh repo set-default <fork>/<repo>` before running, then reset after. Awkward and error-prone.

Fix: the script now respects a `GH_REPO=<owner>/<name>` env var before falling back to `gh repo view`. Resolution order:

1. `GH_REPO` env var → use as `<owner>/<name>` (cross-fork archives)
2. `gh repo view --json nameWithOwner` → fall back to default-repo resolution

Also added an `<owner>/<name>` shape validator so a malformed GH_REPO value (no slash) hard-fails early instead of generating bogus output.
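The resolution order can be sketched roughly as follows — function name and error wording are illustrative, not the script's actual code:

```shell
# Resolve the target repo: GH_REPO env var wins; otherwise fall back to the
# clone's default repo via `gh repo view`. A no-slash value fails loud.
resolve_repo() {
  if [ -n "${GH_REPO:-}" ]; then
    case "$GH_REPO" in
      */*) printf '%s\n' "$GH_REPO" ;;   # <owner>/<name> shape accepted
      *)   echo "error: GH_REPO must be <owner>/<name>, got '$GH_REPO'" >&2
           return 1 ;;
    esac
  else
    # 2. default-repo fallback (requires an authenticated gh in a clone)
    gh repo view --json nameWithOwner --jq .nameWithOwner
  fi
}
```

Invoked as `GH_REPO=AceHack/Zeta resolve_repo` for cross-fork archives; unset, it falls through to `gh`.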

Verification: re-ran `GH_REPO=AceHack/Zeta tools/pr-preservation/archive-pr.sh 101` — script now correctly resolves to AceHack#101 and writes the archive with the right title/slug, instead of grabbing LFG/Zeta#101 (an unrelated 2026-04-22 PR with completely different content).

Cross-fork filename-collision discipline (separate convention, applied manually for now): when archiving cross-fork PRs that may have number collisions with the default repo, use a fork-prefixed filename like `PR-acehack-<NNNN>-<slug>.md`. This isn't yet in the tool — future enhancement candidate would auto-prefix when GH_REPO ≠ default-repo.

* ops: rename memory file to drop fork-prefix per Aaron's naming rule

Aaron 2026-04-29: "AceHack/Zeta we should not use a forks name in the main repo except for the special section for forks data that is unique to them like pr reviews, budgets, settings, maybe more."

Memory directory is general substrate, NOT a fork-specific section like docs/pr-discussions/ or docs/pr-preservation/. The fork-prefix in `feedback_acehack_zeta_*` filename was therefore misplaced — the file's content describes AceHack/Zeta-specific config but the filename shouldn't repeat that.

Renamed:
  memory/feedback_acehack_zeta_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md
→ memory/feedback_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md

Updated cross-references:
- memory/MEMORY.md (index pointer)
- docs/pr-discussions/PR-acehack-0101-...md (archive)
- docs/pr-discussions/PR-0844-...md (archive)
- docs/pr-discussions/PR-0845-...md (archive)
- docs/pr-preservation/lfg-844-drain-log.md
- docs/pr-preservation/acehack-101-drain-log.md
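One way to do a rename-plus-xref-update like this mechanically (scratch-tree demo with shortened illustrative filenames; the real commit's exact commands aren't shown in this log, and `sed -i` here assumes GNU sed):

```shell
# Rename a memory file and rewrite every cross-reference to it, then
# verify no stale references remain. In a real repo, use `git mv` so the
# rename is tracked.
set -e
old=feedback_acehack_zeta_protection_config_2026_04_29.md
new=feedback_protection_config_2026_04_29.md
work=$(mktemp -d)
mkdir -p "$work/memory"
echo "content" > "$work/memory/$old"
echo "see memory/$old" > "$work/memory/MEMORY.md"
mv "$work/memory/$old" "$work/memory/$new"       # git mv in a real repo
grep -rl "$old" "$work" | while read -r f; do
  sed -i "s|$old|$new|g" "$f"                    # rewrite each xref in place
done
grep -q "memory/$new" "$work/memory/MEMORY.md"   # xref now points at new name
! grep -rq "$old" "$work"                        # no stale references remain
```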

Fork-prefix discipline going forward (per Aaron):
- USE fork-prefix in: docs/pr-discussions/, docs/pr-preservation/, settings dirs, budget dirs (any fork-specific section)
- DO NOT USE fork-prefix in: memory/, src/, docs/ general areas, tools/, anywhere not explicitly fork-scoped

The file's CONTENT still references AceHack/Zeta as the specific repo it documents — that's substantive and correct. Just the filename doesn't repeat the fork name.
AceHack added a commit that referenced this pull request Apr 29, 2026
… archives + GH_REPO + fork-naming rename (#846)

* ops: address LFG #846 Codex P2 — handle GH_REPO host-qualified form [HOST/]OWNER/REPO

Codex P2 (14:46Z): the GH_REPO override path I just added validates "contains a slash" but parses as if the value is always `OWNER/REPO`. Per the gh CLI docs, GH_REPO accepts `[HOST/]OWNER/REPO` (host prefix optional, used by GitHub Enterprise). With a host-qualified value the previous parsing produced the wrong owner+name.

Fix: case-statement parses both forms — 3-segment `HOST/OWNER/REPO` (take last two segments) and 2-segment `OWNER/REPO` (existing behavior). Added local sanity test confirming both parse correctly.

Edge cases now handled:
- `GH_REPO=AceHack/Zeta` → owner=AceHack, name=Zeta
- `GH_REPO=github.com/AceHack/Zeta` → owner=AceHack, name=Zeta (host stripped)
- `GH_REPO=enterprise.example.com/AceHack/Zeta` → owner=AceHack, name=Zeta (host stripped)
- `GH_REPO=Zeta` (no slash) → empty owner+name → fail loud with helpful error

This is a tiny CI/review correction per Amara's "don't expand #846 unless CI/review explicitly requires a tiny correction" guidance — Codex's catch is a real edge case, fix is small + isolated.
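The two-form parse can be sketched as a case statement like this (identifiers illustrative; the script's actual variable names may differ):

```shell
# Split [HOST/]OWNER/REPO into "OWNER REPO", stripping an optional host
# segment; a value with no slash fails loud.
parse_gh_repo() {
  case "$1" in
    */*/*)                         # HOST/OWNER/REPO: keep last two segments
      rest=${1#*/}
      printf '%s %s\n' "${rest%%/*}" "${rest#*/}" ;;
    */*)                           # OWNER/REPO: existing behavior
      printf '%s %s\n' "${1%%/*}" "${1#*/}" ;;
    *)                             # no slash: empty owner+name, fail loud
      echo "error: GH_REPO must be [HOST/]OWNER/REPO, got '$1'" >&2
      return 1 ;;
  esac
}

parse_gh_repo AceHack/Zeta                         # -> AceHack Zeta
parse_gh_repo enterprise.example.com/AceHack/Zeta  # -> AceHack Zeta (host stripped)
```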

* ops(0-0-0): #846 review wave — strict GH_REPO validation + host propagation + drain-log corrections

Addresses 4 unresolved review threads on PR #846 that landed
2026-04-29T14:50:51Z..14:52:00Z, after auto-merge was armed.

tools/pr-preservation/archive-pr.sh:
- (Codex P2) Propagate parsed REPO_HOST to `gh api --hostname`.
  Previous parser captured the host segment from
  `GH_REPO=HOST/OWNER/REPO` then discarded it, so cross-fork
  archive runs against GitHub Enterprise repos silently
  targeted github.com. Now both `gh api graphql` calls in the
  Python child receive `--hostname HOST` when REPO_HOST is set.
- (Copilot P1) Strict GH_REPO validation. Previous parser
  accepted malformed values like `/repo`, `owner/`, and
  `owner/repo/extra` (the last would be parsed as
  host=owner / owner=repo / repo=extra). New rules:
  4+ segments rejected outright; 3-segment HOST/OWNER/REPO
  requires HOST to look like a hostname (contain a dot);
  embedded slashes inside owner/repo rejected as defence in
  depth against path-injection into docs/pr-discussions/.
  Verified locally against 10 edge cases (3 valid + 7 invalid).
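A minimal sketch of the strict rules above (function and variable names illustrative, not the script's actual code):

```shell
# Strict [HOST/]OWNER/REPO validation: reject 4+ segments, require a
# dotted HOST in the 3-segment form, reject empty segments. Emits
# "HOST OWNER REPO" (HOST empty for the two-segment form) so a caller
# can forward a non-empty HOST to `gh api --hostname`.
validate_gh_repo() {
  case "$1" in
    */*/*/*|"") return 1 ;;                        # 4+ segments, or empty
    */*/*)                                         # HOST/OWNER/REPO
      host=${1%%/*}; rest=${1#*/}
      owner=${rest%%/*}; repo=${rest#*/}
      case "$host" in *.*) ;; *) return 1 ;; esac  # host must contain a dot
      ;;
    */*)                                           # OWNER/REPO
      host=; owner=${1%%/*}; repo=${1#*/} ;;
    *) return 1 ;;                                 # no slash at all
  esac
  [ -n "$owner" ] && [ -n "$repo" ] || return 1    # reject empty segments
  printf '%s %s %s\n' "$host" "$owner" "$repo"
}
```

Segment-splitting before validation is also what blocks embedded-slash values from reaching filename construction, which is the path-injection defence mentioned above.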

docs/pr-preservation/acehack-101-drain-log.md:
- (Copilot P1) Thread 1 resolution: corrected the false
  claim that in-repo `memory/MEMORY.md` has a matching
  broken pointer. The user-scope MEMORY.md has the index
  entry; in-repo MEMORY.md does not. Backfill tracked
  under task #291.
- (Copilot P1) Lesson 3: rewritten as "pre-fix behavior"
  documenting how operators worked around the lack of
  GH_REPO support before this PR, with the post-fix
  command shape (`GH_REPO=fork/repo archive-pr.sh N`)
  and a forward pointer to task #314 for the fuller
  fork-routing patch.

memory/feedback_protection_config_dual_layer_legacy_deleted_rulesets_canonical_2026_04_29.md:
- Same false-pointer correction as the drain-log: the
  visibility-constraint memory exists in user-scope only;
  in-repo MEMORY.md does not index it. Pointer corrected
  to the user-scope path with task #291 forward link.

Test:
- `bash -n tools/pr-preservation/archive-pr.sh` passes.
- 10-case parser test (3 valid: owner/repo, github.com/owner/repo,
  github.example.com/owner/repo; 7 invalid: /repo, owner/,
  owner/repo/extra, host.com/owner/repo/extra, empty,
  owner-only, host.com//repo) all return expected rc + output.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* ops(0-0-0): retrigger CI on #846 — Code Quality CodeQL hit transient auth-service 401 at 15:00:42Z

The dynamic Code Scanning check (`Code Quality: PR #846`) failed
its SARIF upload step at 2026-04-29T15:00:42Z with:

  ##[warning]Requires authentication - https://docs.github.com/rest
  ##[error]Please check that your token is valid and has the
  required permissions: contents: read, security-events: write

Same window saw `gh api graphql` 401s on the maintainer laptop;
both cleared a few minutes later (gh works again with -X POST
flag). The Code Quality run is the dynamic-event variety that
cannot be retried via `gh run rerun --failed`. Empty commit is
the only way to retrigger.
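The mechanism can be demonstrated in a scratch repo (paths and messages illustrative): an `--allow-empty` commit changes the head SHA while leaving the tree identical, which is what re-fires checks keyed to new commits.

```shell
# Show that an empty commit yields a new SHA over an unchanged tree.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=ci -c user.email=ci@example.invalid \
  commit -q --allow-empty -m "init"
before=$(git -C "$repo" rev-parse HEAD)
git -C "$repo" -c user.name=ci -c user.email=ci@example.invalid \
  commit -q --allow-empty -m "ops: retrigger CI (empty commit)"
after=$(git -C "$repo" rev-parse HEAD)
[ "$before" != "$after" ]   # new commit SHA — CI sees a new head
[ "$(git -C "$repo" rev-parse "$before^{tree}")" = \
  "$(git -C "$repo" rev-parse "$after^{tree}")" ]   # tree unchanged
```

On the real PR branch the equivalent is `git commit --allow-empty -m "…"` followed by a normal push.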

Required-checks rollup is 7/7 SUCCESS; the Code Quality check is
a non-required gate that auto-merge waits on regardless. Diagnostic
runbook for this failure mode lives at memory/reference_gh_cli_*
(target home: docs/ops/runbooks/gh-cli-auth-401.md per Amara, follow-up).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>