ferry: Amara 19th absorb — DST Audit + 5.5 Corrections (10 tracked; 4 aligned with shipped; 7 queued)#344

Merged
AceHack merged 1 commit into main from ferry/amara-19th-dst-audit-absorb on Apr 24, 2026

Conversation

@AceHack
Member

@AceHack AceHack commented Apr 24, 2026

Summary

Dedicated absorb of Amara's 19th courier ferry per CC-002 close-on-existing discipline: a 12-section deep-research DST audit plus a 7-correction pass.

4 of 12 sections already align with shipped substrate

7 queued corrections / 6 revised-roadmap PRs

  1. DST scanner + accepted-boundary registry (new)
  2. Seed protocol + CI artifacts
  3. Sharder reproduction (NOT widen — reinforces 18th Round 33 followup — bash is Unix-only; cross-platform automation = TypeScript/Bun #10)
  4. ISimulationDriver + VTS promotion to core
  5. Simulated filesystem (DiskBackingStore rewrite)
  6. Cartel-Lab DST calibration (aligns with docs: calibration-harness Stage-2 design — Amara 18th-ferry §B/§F + corrections #2/#7/#9 #342)
    Plus: tools/git/push-with-retry.sh retry-audit finding; DST-held + FDB-grade criteria lock.
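Roadmap item 1's scanner idea can be sketched (in TypeScript, the ferry's own cross-platform automation preference): flag entropy-class hits in source text unless the exact site is registered as an accepted boundary. The pattern list and the `"file:line"` registry-key shape below are illustrative assumptions, not the shipped design.

```typescript
// Hypothetical scanner core: pattern names track the audit's entropy
// classes; a hit is suppressed only if its exact site is in the registry.
const ENTROPY_PATTERNS: { cls: string; re: RegExp }[] = [
  { cls: "wall-clock", re: /DateTime\.(Now|UtcNow)/ },
  { cls: "ambient-threadpool", re: /Task\.Run|Task\.Factory\.StartNew/ },
  { cls: "real-time-wait", re: /Task\.Delay|Thread\.Sleep/ },
  { cls: "unseeded-rng", re: /Random\(\)/ },
];

interface Hit { file: string; line: number; cls: string }

function scanEntropy(
  files: Record<string, string>,   // path -> file contents
  acceptedBoundaries: Set<string>, // registry entries keyed "file:line"
): Hit[] {
  const hits: Hit[] = [];
  for (const [file, text] of Object.entries(files)) {
    text.split("\n").forEach((lineText, i) => {
      for (const { cls, re } of ENTROPY_PATTERNS) {
        if (re.test(lineText) && !acceptedBoundaries.has(`${file}:${i + 1}`)) {
          hits.push({ file, line: i + 1, cls });
        }
      }
    });
  }
  return hits;
}
```

A real scanner would also need comment/string exclusion and per-class policy; a CI workflow can then fail on any hit absent from the registry.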

DST grade (Amara's assessment, not factory-certified)

B- — strong architectural intent, partial implementation. DST philosophy A-, filesystem simulation D (blocker), task scheduling C-, network D/NA (future).

Amara verdict: "strong draft / not canonical yet."

Test plan

  • GOVERNANCE §33 four-field header.
  • Both parts verbatim preserved.
  • Markdownlint clean locally.

🤖 Generated with Claude Code

…rections

Dedicated absorb of Amara's 19th courier ferry per CC-002
close-on-existing discipline. Scheduled Otto-164 → executed
Otto-165, following 7-ferry precedent (PRs #196 / #211 /
#219 / #221 / #235 / #245 / #259 / #330 / #337).

Two-part ferry: Part 1 deep-research DST audit (12
sections: rulebook, 12-row entropy scan, dependency audit,
7-row simulation-surface coverage, retry audit, CI
determinism, seed discipline, Cartel-Lab DST readiness,
KSK/Aurora DST readiness, state-of-the-art comparison,
10-row PR roadmap, what-not-to-claim caveats; Mermaid CI
diagram + Gantt timeline). Part 2 Amara's own 5.5-Thinking
correction pass (7 required corrections, per-area grade
table with B- overall, revised 6-PR roadmap with titles
locked, DST-held + FoundationDB-grade acceptance criteria,
copy-paste Kenji summary).

Key findings:
- DST grade: B- (strong architecture, partial impl)
- Blockers: DiskBackingStore bypasses simulation (D-grade
  filesystem simulation), no ISimulationDriver, Task.Run
  ambient ThreadPool risk, no seed artifacts / no swarm
  harness
- 4 of 12 Part-1 sections already align with shipped
  substrate:
  - §6 test classification → PR #339
  - §7 artifact layout → PR #342 design
  - §8 Cartel-Lab stage discipline → PRs #330/#337/#342
  - §9 KSK advisory-only → PR #336 + Otto-140..145 memory

6-PR revised roadmap queued as graduation candidates:
1. DST scanner + accepted-boundary registry (new tool +
   policy docs + workflow)
2. Seed protocol + CI artifacts
3. Sharder reproduction (NOT widen) — reinforces 18th #10
4. ISimulationDriver + VTS promotion to core
5. Simulated filesystem (DiskBackingStore rewrite)
6. Cartel-Lab DST calibration (aligns with #342 design)

Plus: push-with-retry.sh retry-audit finding; DST-held +
FDB-grade criteria lock.

GOVERNANCE §33 four-field header (Scope / Attribution /
Operational status / Non-fusion disclaimer). Amara verdict
preserved: "strong draft / not canonical yet."

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings April 24, 2026 09:24
@AceHack AceHack enabled auto-merge (squash) April 24, 2026 09:24
@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.

@AceHack AceHack merged commit ffea3d0 into main Apr 24, 2026
12 checks passed
@AceHack AceHack deleted the ferry/amara-19th-dst-audit-absorb branch April 24, 2026 09:26

Copilot AI left a comment


Pull request overview

Adds a new Aurora research/absorb document capturing Amara’s 19th-ferry DST audit (Part 1) plus her 5.5 correction pass (Part 2), with Otto’s operationalization notes and cross-references for follow-on work.

Changes:

  • Introduces a new docs/aurora/ ferry absorb covering DST audit findings and a revised remediation roadmap.
  • Documents queued “graduation” items (scanner/artifacts/scheduler/fs simulation) and reiterates non-authorization boundaries.

Comment on lines +102 to +105
than paraphrase (GOVERNANCE §33); corrections are tracked
in Part 2 below. Where the text cites sources (e.g.
"【12†L22-L29】"), those are Amara's internal citation
markers and are kept intact.

Copilot AI Apr 24, 2026


References to “GOVERNANCE §33” appear to be incorrect: GOVERNANCE.md currently has no §33, and other docs note the 4-field archive-header convention is not yet codified as a numbered governance section. Update this doc to cite the current source of the convention (or explicitly say it’s an uncodified convention) so readers aren’t sent to a nonexistent governance rule.

Suggested change
than paraphrase (GOVERNANCE §33); corrections are tracked
in Part 2 below. Where the text cites sources (e.g.
"【12†L22-L29】"), those are Amara's internal citation
markers and are kept intact.
than paraphrase; this is a current factory convention and
is not yet codified as a numbered section in
`GOVERNANCE.md`. Corrections are tracked in Part 2 below.
Where the text cites sources (e.g. "【12†L22-L29】"), those
are Amara's internal citation markers and are kept intact.

Comment on lines +26 to +46
- **Aaron** — origination of the DST directive as a
factory-wide discipline (rulebook in `.claude/skills`,
Otto-56 break→do-no-permanent-harm framing,
Otto-73 retractability-by-design); courier for both
parts concatenated in one message with explicit
framing *"i asked her to research our dst"* (direct
quote). Aaron is both the consumer of the research and
the source of the DST-rulebook axioms the research
audits against. Data-not-directives per BP-11.
- **Amara** — authored both parts. Deep-research Part 1
is the audit proper; Part 2 is self-review via model
composition (same two-part pattern as 17th and 18th
ferries). Verdict on Part 1 (verbatim): *"strong
draft / not canonical yet."*
- **Otto** — absorb surface + correction-pass tracker;
this doc is the archive, not operational discipline.
The 7 corrections graduate across subsequent ticks per
Otto-105 cadence. 4 of Part 1's 12 sections already
align with shipped substrate (see Otto notes below).
- **Max** — not a direct participant in this ferry;
KSK attribution preserved per Otto-77 + Otto-140.

Copilot AI Apr 24, 2026


This attribution block uses direct contributor names throughout, but docs/AGENT-BEST-PRACTICES.md defines an operational standing rule: no name attribution in code/docs/skills (names only in memory/persona/** and optionally docs/BACKLOG.md). Replace direct names here with role references (e.g., “human maintainer”, “external collaborator”, “courier/absorber”) to match repo policy.

Suggested change
- **Aaron** — origination of the DST directive as a
factory-wide discipline (rulebook in `.claude/skills`,
Otto-56 break→do-no-permanent-harm framing,
Otto-73 retractability-by-design); courier for both
parts concatenated in one message with explicit
framing *"i asked her to research our dst"* (direct
quote). Aaron is both the consumer of the research and
the source of the DST-rulebook axioms the research
audits against. Data-not-directives per BP-11.
- **Amara** — authored both parts. Deep-research Part 1
is the audit proper; Part 2 is self-review via model
composition (same two-part pattern as 17th and 18th
ferries). Verdict on Part 1 (verbatim): *"strong
draft / not canonical yet."*
- **Otto** — absorb surface + correction-pass tracker;
this doc is the archive, not operational discipline.
The 7 corrections graduate across subsequent ticks per
Otto-105 cadence. 4 of Part 1's 12 sections already
align with shipped substrate (see Otto notes below).
- **Max** — not a direct participant in this ferry;
KSK attribution preserved per Otto-77 + Otto-140.
- **Human maintainer / courier** — originated the DST
directive as a factory-wide discipline (rulebook in
`.claude/skills`, prior break→do-no-permanent-harm
framing, prior retractability-by-design guidance);
courier for both parts concatenated in one message
with explicit framing *"i asked her to research our
dst"* (direct quote). The human maintainer is both
the consumer of the research and the source of the
DST-rulebook axioms the research audits against.
Data-not-directives per BP-11.
- **Research author** — authored both parts. Deep-
research Part 1 is the audit proper; Part 2 is self-
review via model composition (same two-part pattern
as 17th and 18th ferries). Verdict on Part 1
(verbatim): *"strong draft / not canonical yet."*
- **Archive / correction-pass tracker** — absorb
surface plus correction-pass tracker; this doc is the
archive, not operational discipline. The 7
corrections graduate across subsequent ticks per the
established cadence. 4 of Part 1's 12 sections
already align with shipped substrate (see tracker
notes below).
- **Non-participant historical reference** — not a
direct participant in this ferry; KSK attribution is
preserved per prior historical notes.

Comment on lines +208 to +212
> `tests/ConcurrencyHarness.fs`), consistent with FDB's
> approach. The binding checklist for reviewers enforces
> this: every PR that touches `src/Core` must inspect
> the diff for those 12 sources and ensure any
> occurrences use the simulation APIs.

Copilot AI Apr 24, 2026


Cross-reference appears to point to a file that doesn’t exist in-repo: tests/ConcurrencyHarness.fs. The VirtualTimeScheduler lives at tests/Tests.FSharp/_Support/ConcurrencyHarness.fs today, so these references should be updated for navigability.

Suggested change
> `tests/ConcurrencyHarness.fs`), consistent with FDB's
> approach. The binding checklist for reviewers enforces
> this: every PR that touches `src/Core` must inspect
> the diff for those 12 sources and ensure any
> occurrences use the simulation APIs.
> `tests/Tests.FSharp/_Support/ConcurrencyHarness.fs`),
> consistent with FDB's approach. The binding checklist
> for reviewers enforces this: every PR that touches
> `src/Core` must inspect the diff for those 12
> sources and ensure any occurrences use the simulation
> APIs.

> | `RandomNumberGenerator` (crypto) | *Not used in core* | Real crypto RNG | MEDIUM | Avoid; prefer `env.Rng` | Determinism of crypto ops |
> | `Task.Run` / `Task.Factory.StartNew` | *Used only in tests/tools if at all* | Bypasses VT scheduler | HIGH (core) | `env.RunAsync` or scheduler; boundary-accepted | New tasks schedule deterministically |
> | `Task.Delay` / `Thread.Sleep` | *Not in core logic; possibly integration tests* | Real-time wait | HIGH (core) | `env.Delay` or `VirtualTimeScheduler.Sleep` | Replay of delay-based workflows |
> | `File.*`, `FileStream` | **`DiskBackingStore` (spine/disk)**, e.g. `src/Core/DiskBackingStore.fs` | Bypasses simulation | **BLOCKER (core)** | Route through `ISimulatedFs` | E2E: random disk faults + rollback |

Copilot AI Apr 24, 2026


The doc references src/Core/DiskBackingStore.fs, but DiskBackingStore is currently defined in src/Core/DiskSpine.fs (and referenced by src/Core/Durability.fs). Update the file path(s) so readers land on the right implementation.

Suggested change
> | `File.*`, `FileStream` | **`DiskBackingStore` (spine/disk)**, e.g. `src/Core/DiskBackingStore.fs` | Bypasses simulation | **BLOCKER (core)** | Route through `ISimulatedFs` | E2E: random disk faults + rollback |
> | `File.*`, `FileStream` | **`DiskBackingStore` (spine/disk)**, e.g. `src/Core/DiskSpine.fs` | Bypasses simulation | **BLOCKER (core)** | Route through `ISimulatedFs` | E2E: random disk faults + rollback |

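The `Task.Delay` / `Thread.Sleep` rows in the entropy table above all point at the same remedy: schedule delays against a virtual clock that tests advance explicitly. A minimal sketch of that pattern, assuming a callback-style API rather than the repo's actual `VirtualTimeScheduler` surface:

```typescript
// Virtual clock: delays never touch wall-clock time. Tests call advance()
// and due callbacks fire in a deterministic (due-time, registration) order.
class VirtualClock {
  private now = 0;
  private seq = 0;
  private pending: { due: number; seq: number; cb: () => void }[] = [];

  delay(ms: number, cb: () => void): void {
    this.pending.push({ due: this.now + ms, seq: this.seq++, cb });
  }

  advance(ms: number): void {
    this.now += ms;
    const due = this.pending
      .filter((p) => p.due <= this.now)
      .sort((a, b) => a.due - b.due || a.seq - b.seq);
    this.pending = this.pending.filter((p) => p.due > this.now);
    for (const p of due) p.cb();
  }
}
```

Replaying the same registrations through `advance()` always yields the same firing order, which is exactly the property `Task.Delay` on the ambient ThreadPool cannot give.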
> threshold guessing.
>
> ```text
> tests/Tests.FSharp/Formal/SharderInfoTheoreticTests.fs

Copilot AI Apr 24, 2026


The roadmap references tests/Tests.FSharp/Formal/SharderInfoTheoreticTests.fs, but the current test file is tests/Tests.FSharp/Formal/Sharder.InfoTheoretic.Tests.fs (module Zeta.Tests.Formal.SharderInfoTheoreticTests). Adjust the filename/path in the doc so the cross-reference is accurate.

Suggested change
> tests/Tests.FSharp/Formal/SharderInfoTheoreticTests.fs
> tests/Tests.FSharp/Formal/Sharder.InfoTheoretic.Tests.fs

Comment on lines +1081 to +1084
- **`memory/feedback_ksk_naming_unblocked_aaron_
directed_rewrite_authority_max_initial_starting
_point_2026_04_24.md`** (Otto-140..145) — KSK
canonical expansion (Kinetic Safeguard Kernel).

Copilot AI Apr 24, 2026


This cross-reference points to a memory/feedback_ksk_naming_unblocked_... file that does not exist under memory/ in the repo. Either fix the filename/path to the actual memory entry or remove the reference to avoid a broken link.

Suggested change
- **`memory/feedback_ksk_naming_unblocked_aaron_
directed_rewrite_authority_max_initial_starting
_point_2026_04_24.md`** (Otto-140..145) — KSK
canonical expansion (Kinetic Safeguard Kernel).
- **Otto-140..145** — KSK canonical expansion
(Kinetic Safeguard Kernel).

Comment on lines +1097 to +1098
- **`tests/ConcurrencyHarness.fs`** — VirtualTimeScheduler
test-side; Part 1 §4 status "P1 — promote."

Copilot AI Apr 24, 2026


Cross-reference appears to point to a non-existent path: tests/ConcurrencyHarness.fs. The correct location is tests/Tests.FSharp/_Support/ConcurrencyHarness.fs (where VirtualTimeScheduler is defined). Update this entry so it resolves.

Suggested change
- **`tests/ConcurrencyHarness.fs`** — VirtualTimeScheduler
test-side; Part 1 §4 status "P1 — promote."
- **`tests/Tests.FSharp/_Support/ConcurrencyHarness.fs`**
VirtualTimeScheduler test-side; Part 1 §4 status
"P1 — promote."

implementation; Part 1 §4 status "P0 — exists."
- **`tests/ConcurrencyHarness.fs`** — VirtualTimeScheduler
test-side; Part 1 §4 status "P1 — promote."
- **`src/Core/DiskBackingStore.fs`** — Part 1 §2's

Copilot AI Apr 24, 2026


Cross-reference appears to point to a non-existent file: src/Core/DiskBackingStore.fs. The DiskBackingStore implementation currently lives in src/Core/DiskSpine.fs; update this link so it resolves.

Suggested change
- **`src/Core/DiskBackingStore.fs`** — Part 1 §2's
- **`src/Core/DiskSpine.fs`** — Part 1 §2's

AceHack added a commit that referenced this pull request Apr 24, 2026
…mara 19th-ferry correction #6) (#346)

Research-grade criteria doc locking two acceptance bars:

1. DST-held — minimum: 6 items (seeds committed, failing
   tests emit seed+params, bit-for-bit local-vs-CI
   reproducibility, broad sweeps nightly-not-gating,
   zero unreviewed entropy hits in main-path, boundaries
   either simulated or explicitly accepted).
2. FoundationDB-grade DST candidate — aspirational: 8
   surfaces (simulated FS, simulated network,
   deterministic task scheduler, fault injection/buggify,
   swarm runner, replay artifact storage, failure
   minimization/shrinking, end-to-end scenario from one
   seed).

Maps 19th-ferry revised-roadmap PRs to which criteria
items each addresses. Captures Amara's per-area grade
table (overall B-) as "Amara's assessment, not factory-
certified."

Explicit promotion path: doc stays research-grade until
PR 1 of the 19th-ferry revised roadmap lands an ADR
promoting the DST-held bar to factory discipline; at
that point criteria migrate to docs/DST-COMPLIANCE.md
top-level.

No graduation claims DST-held today; graduations reference
this doc as target without self-certification.

Composes with test-classification.md (PR #339; supports
items 1+2+4), calibration-harness-stage2-design.md (PR
#342; artifact schema supports item 2), Amara 19th ferry
(PR #344 absorb; source of criteria).

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
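The DST-held items above (seeds committed, failing tests emit seed+params, bit-for-bit reproducibility) reduce to a seed-carrying test wrapper. A sketch under stated assumptions: mulberry32 as the PRNG and the `runSeeded` wrapper shape are illustrative choices, not the factory's protocol:

```typescript
// mulberry32: a small deterministic PRNG — same seed, same sequence.
// (Illustrative choice; any seedable PRNG satisfies the discipline.)
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Run a randomized test body from one recorded seed; on failure, the
// seed is emitted with the error so the exact run can be replayed.
function runSeeded<T>(seed: number, body: (rng: () => number) => T): T {
  try {
    return body(mulberry32(seed));
  } catch (e) {
    throw new Error(`seed=${seed}: ${(e as Error).message}`);
  }
}
```

CI would record the seed as an artifact; replaying `runSeeded` with the reported seed reproduces the failing sequence bit-for-bit on any machine.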
AceHack added a commit that referenced this pull request Apr 24, 2026
…sification (Amara 19th #3)

Amara 19th-ferry correction #3 asked for push-with-retry.sh
to either (a) document as external-boundary exception with
rationale, or (b) convert to investigation-wrapper.

Audit finding: the script already implements (a) with high
discipline — root-cause investigation block in the header,
targeted-5xx-only retries (not blind), exponential backoff,
max-attempts cap, per-attempt logging to stderr, full
error-text preservation via tee, distinct exit codes for
transient-retry-exhausted vs non-transient vs env-error.
Amara's concern was based on the doc-level visibility gap,
not an implementation gap.

This PR closes the gap by:

1. Creating docs/research/dst-accepted-boundaries.md — the
   accepted-boundaries registry Amara's correction #2 + the
   DST-compliance-criteria doc (PR #346) both require.
   Schema for entries locked. First entry: push-with-retry.sh
   with full rationale, investigation summary, retry-
   discipline breakdown, and revisit triggers.
2. Adding a classification pointer in the script's header
   comment block: cites the registry + "First classified
   2026-04-23, formally registered Otto-168 2026-04-24."

Registry research-grade today; promotes to docs/DST-ACCEPTED-
BOUNDARIES.md top-level when PR 1 of the 19th-ferry revised
roadmap lands (entropy-scanner + ADR).

Next migration candidates named: DiskBackingStore stays
BLOCKER until PR 5 simulates it (not accepted-boundary —
simulation is correct answer there). Future multi-node
network I/O stays BLOCKER until PR 8.

Composes with PR #346 dst-compliance-criteria doc, PR #344
19th-ferry absorb, .claude/skills DST guide, existing
push-with-retry.sh implementation.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
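The retry discipline credited to push-with-retry.sh above (targeted transient-only retries, exponential backoff, max-attempts cap, distinct outcomes) generalizes to a small combinator. The `isTransient` predicate standing in for targeted-5xx-only and the injectable `sleep` are assumptions for testability, not the script's actual interface:

```typescript
// Distinct Outcome variants mirror the script's distinct exit codes for
// transient-retry-exhausted vs non-transient failures.
type Outcome<T> =
  | { kind: "ok"; value: T; attempts: number }
  | { kind: "transient-exhausted"; attempts: number }
  | { kind: "non-transient"; error: Error };

async function retryTransient<T>(
  op: () => Promise<T>,
  isTransient: (e: Error) => boolean, // e.g. "HTTP 5xx only" — never blind
  maxAttempts: number,
  sleep: (ms: number) => Promise<void>,
): Promise<Outcome<T>> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return { kind: "ok", value: await op(), attempts: attempt };
    } catch (e) {
      const err = e as Error;
      if (!isTransient(err)) return { kind: "non-transient", error: err };
      if (attempt < maxAttempts) await sleep(2 ** attempt * 100); // backoff
    }
  }
  return { kind: "transient-exhausted", attempts: maxAttempts };
}
```

Non-transient errors return immediately with the full error preserved, matching the "not blind retries" finding above.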
AceHack added a commit that referenced this pull request Apr 24, 2026
…ation (Amara 19th #3) (#347)

* fix(#347): 4 review threads — schema clarify + planned-file xref softening

- thread PRRT_kwDOSF9kNM59Wql1 (line 12, P1): self-resolving;
  docs/research/dst-compliance-criteria.md now exists on main
  via a later merge; cross-reference is valid, no edit needed.
- thread PRRT_kwDOSF9kNM59Wqmv (line 42, P2): schema relaxed
  from "one of the 12 DST entropy classes" to "one or more of
  the 12 DST entropy classes, comma- or `+`-separated when a
  site genuinely crosses multiple (e.g. a network boundary
  whose only retry policy is itself a distinct entropy source)".
  Keeps the first entry's combined "external network I/O +
  retry-on-failure" value accurate — retry-on-failure is a
  policy IN the network-I/O boundary, not a separate class.
- thread PRRT_kwDOSF9kNM59Wqm8 (line 172, P1): same self-resolve
  as thread 1; docs/research/dst-compliance-criteria.md exists.
- thread PRRT_kwDOSF9kNM59WqnP (line 125, P1): DiskBackingStore.fs
  and ISimulationFs are not in src/; rephrased as forward-looking
  with explicit "(planned, not yet landed)" parentheticals plus
  a sentence clarifying this row is a placeholder the scan can
  compare against once PR 5 of the 19th-ferry roadmap arrives.

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
AceHack added a commit that referenced this pull request Apr 24, 2026
…ll (10 PRs)

Otto-207: maintainer "are we saving these yet gitnative and
have we backfilled them yet?" Honest answer was NO — the
PR-preservation BACKLOG row (Otto-150..154, PR #335 in queue)
specifies the discipline but never shipped the capture
tooling. This PR ships the minimal viable implementation
+ backfills 10 PRs from this session.

New tool:

- tools/pr-preservation/archive-pr.sh — one-shot bash
  script that fetches a PR's review threads, reviews, and
  comments via `gh api graphql` and writes them to
  docs/pr-discussions/PR-<N>-<slug>.md with YAML
  frontmatter (pr_number / title / author / state / dates
  / refs / archived_at / archive_tool).
- tools/pr-preservation/README.md — scope (Phase 0
  minimal vs Phase 1-4 longer plan), usage, output
  schema, backfill status, dependencies (bash + python3
  + gh; no external packages), cross-references to
  Otto-171 / Otto-204 / Otto-204c / PR #335.
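The archive output contract described above (YAML frontmatter plus markdown body, written to docs/pr-discussions/PR-&lt;N&gt;-&lt;slug&gt;.md) can be sketched as a renderer. Only a subset of the README's frontmatter fields is shown, and the slug rule and JSON-quoted YAML values are assumptions; the actual formatting lives in archive-pr.sh:

```typescript
// Subset of the frontmatter fields the README lists (illustrative shape).
interface PrMeta {
  pr_number: number;
  title: string;
  author: string;
  state: string;
  archived_at: string; // ISO timestamp
  archive_tool: string;
}

// Render "---\n<yaml>\n---\n\n<body>\n"; JSON-quoting strings keeps the
// values valid YAML scalars without a YAML library.
function renderArchive(meta: PrMeta, body: string): string {
  const front = Object.entries(meta)
    .map(([k, v]) => `${k}: ${JSON.stringify(v)}`)
    .join("\n");
  return `---\n${front}\n---\n\n${body}\n`;
}

// Hypothetical slug rule: lowercase, non-alphanumerics collapsed to "-".
function archiveFilename(meta: PrMeta): string {
  const slug = meta.title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "");
  return `docs/pr-discussions/PR-${meta.pr_number}-${slug}.md`;
}
```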

Backfill (10 PRs archived this tick):

- PR #354 backlog-split Phase 1a
- PR #352 Server Meshing + SpacetimeDB research
- PR #336 KSK naming definition doc
- PR #342 calibration-harness Stage-2 design (merged)
- PR #344 Amara 19th ferry absorb (merged)
- PR #346 DST compliance criteria (merged)
- PR #350 Frontier rename pass-2 (merged)
- PR #353 BACKLOG split Phase 0 design (merged)
- PR #355 Codex first peer-agent deep-review absorb
  (merged)
- PR #356 PR-resolve-loop skill row (merged)

Total: 72 review threads + 40 reviews + 6 general comments
captured across ~97KB of archive markdown.

Long-term plan deliberately kept in BACKLOG row (Otto-150
..154 / PR #335 queue elevation) rather than expanded in
this commit's docs. Phase 0 shipping now; Phase 1 GHA
workflow + Phase 2 historical backfill + Phase 3
reconciliation + Phase 4 redaction layer remain queued
tickets. Per maintainer directive "make sure you backlog
then to a proper long term solution" — the phased plan
is already in PR #335 and covers the remaining work.

Discipline applied: active-management on the preservation
gap itself. Previous tick's "ship and pray" pattern is the
exact failure mode this tool begins to close (operator-
initiated archive instead of silent reliance on GitHub-
side conversation storage). Composes with Otto-204c
livelock-diagnosis memory + Otto-204 PR-resolve-loop
skill (this script is step 4 of that cycle's
conversation-preservation hook).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
AceHack added a commit that referenced this pull request Apr 24, 2026
…ll (10 PRs)

Otto-207: maintainer "are we saving these yet gitnative and
have we backfilled them yet?" Honest answer was NO — the
PR-preservation BACKLOG row (Otto-150..154, PR #335 in queue)
specifies the discipline but never shipped the capture
tooling. This PR ships the minimal viable implementation
+ backfills 10 PRs from this session.

New tool:

- tools/pr-preservation/archive-pr.sh — one-shot bash
  script that fetches a PR's review threads, reviews, and
  comments via `gh api graphql` and writes them to
  docs/pr-discussions/PR-<N>-<slug>.md with YAML
  frontmatter (pr_number / title / author / state / dates
  / refs / archived_at / archive_tool).
- tools/pr-preservation/README.md — scope (Phase 0
  minimal vs Phase 1-4 longer plan), usage, output
  schema, backfill status, dependencies (bash + python3
  + gh; no external packages), cross-references to
  Otto-171 / Otto-204 / Otto-204c / PR #335.

Backfill (10 PRs archived this tick):

- PR #354 backlog-split Phase 1a
- PR #352 Server Meshing + SpacetimeDB research
- PR #336 KSK naming definition doc
- PR #342 calibration-harness Stage-2 design (merged)
- PR #344 Amara 19th ferry absorb (merged)
- PR #346 DST compliance criteria (merged)
- PR #350 Frontier rename pass-2 (merged)
- PR #353 BACKLOG split Phase 0 design (merged)
- PR #355 Codex first peer-agent deep-review absorb
  (merged)
- PR #356 PR-resolve-loop skill row (merged)

Total: 72 review threads + 40 reviews + 6 general comments
captured across ~97KB of archive markdown.

Long-term plan deliberately kept in BACKLOG row (Otto-150
..154 / PR #335 queue elevation) rather than expanded in
this commit's docs. Phase 0 shipping now; Phase 1 GHA
workflow + Phase 2 historical backfill + Phase 3
reconciliation + Phase 4 redaction layer remain queued
tickets. Per maintainer directive "make sure you backlog
then to a proper long term solution" — the phased plan
is already in PR #335 and covers the remaining work.

Discipline applied: active-management on the preservation
gap itself. Previous tick's "ship and pray" pattern is the
exact failure mode this tool begins to close (operator-
initiated archive instead of silent reliance on GitHub-
side conversation storage). Composes with Otto-204c
livelock-diagnosis memory + Otto-204 PR-resolve-loop
skill (this script is step 4 of that cycle's
conversation-preservation hook).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
AceHack added a commit that referenced this pull request Apr 24, 2026
…ll (10 PRs)

Otto-207: maintainer "are we saving these yet gitnative and
have we backfilled them yet?" Honest answer was NO — the
PR-preservation BACKLOG row (Otto-150..154, PR #335 in queue)
specifies the discipline but never shipped the capture
tooling. This PR ships the minimal viable implementation
+ backfills 10 PRs from this session.

New tool:

- tools/pr-preservation/archive-pr.sh — one-shot bash
  script that fetches a PR's review threads, reviews, and
  comments via `gh api graphql` and writes them to
  docs/pr-discussions/PR-<N>-<slug>.md with YAML
  frontmatter (pr_number / title / author / state / dates
  / refs / archived_at / archive_tool).
- tools/pr-preservation/README.md — scope (Phase 0
  minimal vs Phase 1-4 longer plan), usage, output
  schema, backfill status, dependencies (bash + python3
  + gh; no external packages), cross-references to
  Otto-171 / Otto-204 / Otto-204c / PR #335.

Backfill (10 PRs archived this tick):

- PR #354 backlog-split Phase 1a
- PR #352 Server Meshing + SpacetimeDB research
- PR #336 KSK naming definition doc
- PR #342 calibration-harness Stage-2 design (merged)
- PR #344 Amara 19th ferry absorb (merged)
- PR #346 DST compliance criteria (merged)
- PR #350 Frontier rename pass-2 (merged)
- PR #353 BACKLOG split Phase 0 design (merged)
- PR #355 Codex first peer-agent deep-review absorb
  (merged)
- PR #356 PR-resolve-loop skill row (merged)

Total: 72 review threads + 40 reviews + 6 general comments
captured across ~97KB of archive markdown.

Long-term plan deliberately kept in BACKLOG row (Otto-150
..154 / PR #335 queue elevation) rather than expanded in
this commit's docs. Phase 0 shipping now; Phase 1 GHA
workflow + Phase 2 historical backfill + Phase 3
reconciliation + Phase 4 redaction layer remain queued
tickets. Per maintainer directive "make sure you backlog
then to a proper long term solution" — the phased plan
is already in PR #335 and covers the remaining work.

Discipline applied: active-management on the preservation
gap itself. Previous tick's "ship and pray" pattern is the
exact failure mode this tool begins to close (operator-
initiated archive instead of silent reliance on GitHub-
side conversation storage). Composes with Otto-204c
livelock-diagnosis memory + Otto-204 PR-resolve-loop
skill (this script is step 4 of that cycle's
conversation-preservation hook).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
AceHack added a commit that referenced this pull request Apr 24, 2026
…ll (10 PRs)

Otto-207: maintainer "are we saving these yet gitnative and
have we backfilled them yet?" Honest answer was NO — the
PR-preservation BACKLOG row (Otto-150..154, PR #335 in queue)
specifies the discipline but never shipped the capture
tooling. This PR ships the minimal viable implementation
+ backfills 10 PRs from this session.

New tool:

- tools/pr-preservation/archive-pr.sh — one-shot bash
  script that fetches a PR's review threads, reviews, and
  comments via `gh api graphql` and writes them to
  docs/pr-discussions/PR-<N>-<slug>.md with YAML
  frontmatter (pr_number / title / author / state / dates
  / refs / archived_at / archive_tool).
- tools/pr-preservation/README.md — scope (Phase 0
  minimal vs Phase 1-4 longer plan), usage, output
  schema, backfill status, dependencies (bash + python3
  + gh; no external packages), cross-references to
  Otto-171 / Otto-204 / Otto-204c / PR #335.

Backfill (10 PRs archived this tick):

- PR #354 backlog-split Phase 1a
- PR #352 Server Meshing + SpacetimeDB research
- PR #336 KSK naming definition doc
- PR #342 calibration-harness Stage-2 design (merged)
- PR #344 Amara 19th ferry absorb (merged)
- PR #346 DST compliance criteria (merged)
- PR #350 Frontier rename pass-2 (merged)
- PR #353 BACKLOG split Phase 0 design (merged)
- PR #355 Codex first peer-agent deep-review absorb
  (merged)
- PR #356 PR-resolve-loop skill row (merged)

Total: 72 review threads + 40 reviews + 6 general comments
captured across ~97KB of archive markdown.

Long-term plan deliberately kept in BACKLOG row (Otto-150
..154 / PR #335 queue elevation) rather than expanded in
this commit's docs. Phase 0 shipping now; Phase 1 GHA
workflow + Phase 2 historical backfill + Phase 3
reconciliation + Phase 4 redaction layer remain queued
tickets. Per maintainer directive "make sure you backlog
then to a proper long term solution" — the phased plan
is already in PR #335 and covers the remaining work.

Discipline applied: active-management on the preservation
gap itself. Previous tick's "ship and pray" pattern is the
exact failure mode this tool begins to close (operator-
initiated archive instead of silent reliance on GitHub-
side conversation storage). Composes with Otto-204c
livelock-diagnosis memory + Otto-204 PR-resolve-loop
skill (this script is step 4 of that cycle's
conversation-preservation hook).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>