6 changes: 1 addition & 5 deletions docs/BACKLOG.md
@@ -5266,11 +5266,7 @@ systems. This track claims the space.

## P2 — research-grade

- [ ] **KSK naming definition doc — `docs/definitions/KSK.md` leading with canonical expansion `KSK = Kinetic Safeguard Kernel`.** **Authority: Aaron Otto-140 rewrite approved; Max attribution preserved as initial-starting-point contributor (Otto-77).** Amara 2026-04-24 (16th courier ferry, GPT-5.5 Thinking) flagged the naming ambiguity: *"'KSK' has multiple possible meanings: DNSSEC-style Key Signing Key, your emerging Kinetic Safeguard Kernel / trust-anchor idea, maybe broader 'ceremony + root-of-trust + governance key' structure."* Aaron Otto-142..145 (self-correcting Otto-141 typo "SDK") canonicalized: *"kinetic safeguare Kernel, i did the wrong name / it is what amara said / kinetic safeguard kernel"* — matches Amara's 5th and 16th ferry phrasing. Doc scope: (1) lead sentence *"KSK = Kinetic Safeguard Kernel. 'Kernel' here is safety-kernel / security-kernel sense (Anderson 1972, Saltzer-Schroeder reference-monitor, aviation safety-kernel) — a small trusted enforcement core, **NOT OS-kernel-mode** (not ring 0, not Linux/Windows kernel)"*; (2) "Inspired by..." DNSSEC KSK / DNSCrypt / threshold-sig ceremonies / security-kernel lineage; (3) "NOT identical to..." OS kernel, DNSSEC KSK (signs zone keys); (4) cross-refs to 5 ferries elaborating architecture; (5) Max attribution: *"Initial starting point committed by Max under Aaron's direction in LFG/lucent-ksk; substrate is Aaron+Amara's concept, completely rewritable."* (Otto-140 lifted the Max-coordination gate; Otto-77 attribution stands.) Priority P2 research-grade (elevated from P3); effort S (doc) — coordination overhead removed. Composes with Amara 17th-ferry correction #7 (now resolved), Otto-77 Max attribution, Otto-90 Aaron+Max-not-gates, Otto-140..145 Aaron canonical expansion + gate-lift, Otto-108 single-team-until-interfaces-harden.

- [ ] **SharderInfoTheoreticTests "Uniform traffic" flake — process-hash-randomization root cause + deterministic-threshold fix.** Aaron 2026-04-24 Otto-132 flag: *"SharderInfoTheoreticTests.Uniform (not seed locked, falkey, DST?)"*. Test lives at `tests/Tests.FSharp/Formal/Sharder.InfoTheoretic.Tests.fs` (module `Zeta.Tests.Formal.SharderInfoTheoreticTests`). **Corrected root-cause hypothesis (PR #327 review):** the test is an xUnit `[<Fact>]`, not an FsCheck property — it already uses `Random 42` for key generation, so the flake is *not* lack of FsCheck seed-lock. Actual suspected causes, in priority order: (1) **`HashCode.Combine` is process-randomized** — .NET's `HashCode` uses a per-process random seed (hash-randomization mitigation), so `uint64 (HashCode.Combine k)` differs across runs even with identical keys; the downstream `JumpConsistentHash.Pick` then yields a different shard distribution each run, and the `max/avg < 1.2` assertion trips on unlucky seeds; (2) tolerance is overly tight for a 100k-sample, 16-shard Zipf baseline — 1.2× is ~2σ territory on finite samples; (3) nondeterministic iteration order in `Array.init` closures if `rng.Next` is ever called concurrently (not the current shape, but a future-drift risk). Scope: (1) replace `HashCode.Combine` with a seeded deterministic hash (`XxHash64` / FNV with explicit seed) or a fixed-salt wrapper; (2) widen tolerance with justification (Wilson interval or measured 99th-percentile across N CI runs) rather than arbitrary 1.2; (3) sweep sibling Sharder tests for the same `HashCode.Combine` pattern; (4) document the "no process-randomized hashes in deterministic tests" rule as a hygiene row; (5) quantify flake rate from historical CI. Priority P2 research-grade (blocks other PRs' auto-merge when it trips). Effort S + S. **Does NOT authorize:** blanket `HashCode.Combine` ban outside tests (prod paths may legitimately want hash-randomization); widening tolerance without empirical justification.
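  The deterministic-hash fix in scope item (1) can be sketched outside F#. A minimal Python sketch, assuming nothing about the real test beyond what's described above: `fnv1a_64` is an illustrative seeded replacement for the process-randomized `HashCode.Combine`, and `jump_hash` is the published Lamping–Veach jump-consistent-hash loop that `JumpConsistentHash.Pick` presumably mirrors (function names here are hypothetical, not the repo's).

  ```python
  # Sketch: seeded FNV-1a + jump consistent hash, so shard assignment is
  # identical in every process (unlike .NET HashCode.Combine, which mixes
  # in a per-process random seed as a hash-randomization mitigation).

  FNV_OFFSET = 0xCBF29CE484222325
  FNV_PRIME = 0x100000001B3
  MASK64 = (1 << 64) - 1

  def fnv1a_64(data: bytes, seed: int = FNV_OFFSET) -> int:
      """Deterministic 64-bit FNV-1a; `seed` makes the salt explicit."""
      h = seed
      for b in data:
          h = ((h ^ b) * FNV_PRIME) & MASK64
      return h

  def jump_hash(key: int, num_buckets: int) -> int:
      """Lamping & Veach jump consistent hash (2014)."""
      b, j = -1, 0
      while j < num_buckets:
          b = j
          key = (key * 2862933555777941757 + 1) & MASK64
          j = int((b + 1) * (1 << 31) / ((key >> 33) + 1))
      return b

  def pick_shard(key: str, shards: int = 16) -> int:
      return jump_hash(fnv1a_64(key.encode()), shards)

  # The flaky assertion, now over a distribution that cannot vary per run:
  counts = [0] * 16
  for i in range(100_000):
      counts[pick_shard(f"key-{i}")] += 1
  skew = max(counts) / (sum(counts) / len(counts))
  ```

  Because the hash is fully seeded, `skew` is the same number on every run, so whatever tolerance step (2) lands on can be validated once against measured data instead of re-rolled per process.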

- [ ] **Schema-as-Graph — entire database schema as first-class typed graph.** Aaron 2026-04-24 Otto-127: *"would it be possible to have a graph view of the entire table and relations so the whole schema is first class i the graph plus we could have special edge node whatever else we need entities if needed for more fidelity than reguarl sql table structrue and relationships allow. backlog"*. Natural extension of Graph substrate (PR #317 + #324). Scope: (1) schema-node types — Table/Column/Index/Constraint/View/StoredProcedure/Trigger; (2) schema-edge types — ForeignKey/Contains/References/DependsOn/InheritsFrom; (3) custom entity types beyond SQL — Domain/Aggregate/EventStream/Retraction (first-class "removed" with timestamp)/Provenance; (4) round-trip SQL↔Graph invariants; (5) bidirectional — Graph mutations emit DDL; DDL mutates Graph. Schema-change-over-time = Graph event stream w/ retraction-native. Aminata BP-11 threat-pass. Priority P2; effort M+M+L.
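  Scope items (1)–(3) can be sketched as a minimal typed graph. A Python sketch with hypothetical names (the real substrate is the Graph work in PR #317/#324, not this): schema objects become typed nodes, relationships become typed edges, and a retraction is a first-class timestamp on the edge rather than a deletion.

  ```python
  from dataclasses import dataclass
  from enum import Enum, auto

  class NodeKind(Enum):
      TABLE = auto(); COLUMN = auto(); INDEX = auto(); CONSTRAINT = auto()
      VIEW = auto(); DOMAIN = auto(); EVENT_STREAM = auto()

  class EdgeKind(Enum):
      CONTAINS = auto(); FOREIGN_KEY = auto(); DEPENDS_ON = auto()

  @dataclass(frozen=True)
  class Node:
      kind: NodeKind
      name: str

  @dataclass(frozen=True)
  class Edge:
      kind: EdgeKind
      src: Node
      dst: Node
      retracted_at: str | None = None  # first-class "removed", never erased

  users = Node(NodeKind.TABLE, "users")
  orders = Node(NodeKind.TABLE, "orders")
  user_id = Node(NodeKind.COLUMN, "orders.user_id")

  schema = [
      Edge(EdgeKind.CONTAINS, users, Node(NodeKind.COLUMN, "users.id")),
      Edge(EdgeKind.CONTAINS, orders, user_id),
      Edge(EdgeKind.FOREIGN_KEY, user_id, users),
      # A dropped FK stays in the graph, marked retracted rather than deleted:
      Edge(EdgeKind.FOREIGN_KEY, user_id, Node(NodeKind.TABLE, "accounts"),
           retracted_at="2026-04-24T00:00:00Z"),
  ]

  def live_fks(edges):
      """Foreign keys still in force (retraction-aware query)."""
      return [e for e in edges
              if e.kind is EdgeKind.FOREIGN_KEY and e.retracted_at is None]
  ```

  The retraction-native shape is what makes schema-change-over-time fall out as an event stream: replaying edges in timestamp order reconstructs the schema at any point.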
- [ ] **Research + map the `claude --agent <agent_name>` harness flag.** Aaron 2026-04-24 Otto-120 directive: *"FYI your harness just popped up this tip you should reserach and map Tip: Use --agent <agent_name> to directly start a conversation with a subagent backlog"*. Claude Code CLI apparently supports direct subagent invocation via `--agent <name>`, bypassing normal Task-tool orchestration. Factory has ~25+ personas (Kenji / Aminata / Naledi / Rune / Aarav / Ilyana / Viktor / Sova / Kira / etc.) reachable this way. Scope: (1) locate official doc for the flag; (2) test with 2-3 factory personas end-to-end (does it use `.claude/agents/*.md` definitions? same permissions? memory access?); (3) map use-cases — direct-invoke Aminata for a one-shot threat-pass on a file without spinning up full Otto loop; direct-invoke Ilyana for a public-API review; direct-invoke Aarav for skill-tune-up without scheduling a full tick; (4) document in `docs/references/claude-cli-agent-flag.md` with examples; (5) integrate into factory workflows where direct invocation beats Task-tool dispatch (lightweight review-only invocations without main-agent context pollution). Priority P3 convenience. Effort S (research + doc). Composes with Otto-108 team-autonomy memory.

- [ ] **Git-native PR-conversation preservation — extract PR review threads + comments to git-tracked files on merge.** Aaron 2026-04-24 Otto-113 directive: *"you probably need to resolve and save the conversations on the PRs to git for gitnative presevration"*. Currently PR review threads (Copilot / Codex connector / human reviewer comments) live GitHub-side only; if repo is mirrored / forked / GitHub has an outage / repo is migrated, the conversation substrate is lost. For glass-halo transparency + retractability, PR discussions belong in-repo. Proposed mechanism: workflow (or post-merge skill) that fetches all review threads + general PR comments for a merged PR, serialises them as markdown at `docs/pr-discussions/PR-<number>-<slug>.md` with attribution (reviewer id/bot), timestamps, thread structure, and resolution status. Scope: (1) design PR-discussion schema + file shape; (2) fetch-on-merge mechanism (GHA workflow using `gh api graphql`); (3) privacy pass (strip anything sensitive from reviewer comments); (4) backfill historical PRs or declare cutover-forward. Priority P2 research-grade; effort S (mechanism) + M (backfill if chosen). Composes with Otto-113 bootstrap-attempt-1 memory + docs-lint/memory-no-lint policy (discussions go in docs/) + the ChatGPT-download-skill (PR #300) pattern.
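  The serialization half of the proposed mechanism is easy to sketch; everything below (type names, field names, file shape) is illustrative, not a committed schema, and the fetch side would use `gh api graphql` as described above.

  ```python
  from dataclasses import dataclass

  @dataclass
  class Comment:
      author: str      # reviewer id or bot name, per the attribution requirement
      timestamp: str
      body: str

  @dataclass
  class Thread:
      path: str        # file the review thread is anchored to
      resolved: bool
      comments: list[Comment]

  def render_pr_discussion(number: int, slug: str, threads: list[Thread]) -> str:
      """Serialize review threads as markdown, targeting the proposed
      docs/pr-discussions/PR-<number>-<slug>.md file shape."""
      lines = [f"# PR #{number}: {slug}", ""]
      for t in threads:
          status = "resolved" if t.resolved else "open"
          lines.append(f"## {t.path} ({status})")
          for c in t.comments:
              lines.append(f"- **{c.author}** ({c.timestamp}):")
              lines.append(f"  {c.body}")
          lines.append("")
      return "\n".join(lines)

  # Example with made-up thread content (PR #327 is the Sharder-flake review):
  doc = render_pr_discussion(327, "sharder-flake", [
      Thread("tests/Tests.FSharp/Formal/Sharder.InfoTheoretic.Tests.fs", True, [
          Comment("copilot", "2026-04-24T12:00:00Z",
                  "HashCode.Combine is process-randomized; seed the hash."),
      ]),
  ])
  ```

  Keeping the renderer pure (threads in, markdown out) means the privacy pass in step (3) can sit between fetch and render as a plain filter over `Comment` values.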
