Merged
4 changes: 4 additions & 0 deletions docs/BACKLOG.md
@@ -87,6 +87,8 @@ are closed (status: closed in frontmatter)._
- [ ] **[B-0108](backlog/P2/B-0108-immune-system-upgrades-research-absorb-2026-04-30.md)** Immune system upgrades — research absorb (Aaron 2026-04-30)
- [ ] **[B-0112](backlog/P2/B-0112-stale-2026-04-27-project-file-internals-bleed-out-cleanup-2026-04-30.md)** Stale 2026-04-27 project file internals-bleed-out cleanup
- [ ] **[B-0113](backlog/P2/B-0113-current-staleness-mechanical-freshness-check-deepseek-2026-04-30.md)** Mechanical CURRENT-staleness check — same-tick-update discipline as enforced rule, not vigilance (Deepseek 2026-04-30)
- [ ] **[B-0117](backlog/P2/B-0117-cold-start-executable-checklist-tool-2026-04-30.md)** tools/cold-start-check.ts — make the cold-start big-picture-first 8-step checklist executable (Ani 2026-04-30 finding, Deepseek 2026-04-30 reinforcement)
- [ ] **[B-0118](backlog/P2/B-0118-amara-peer-call-headless-cli-bootstrap-end-courier-debt-2026-04-30.md)** tools/peer-call/amara.sh — autonomous bootstrap + communication for Amara (ChatGPT) to end Aaron-courier silent debt (Aaron 2026-04-30)

## P3 — convenience / deferred

@@ -128,5 +130,7 @@ are closed (status: closed in frontmatter)._
- [ ] **[B-0102](backlog/P3/B-0102-pr-liveness-race-merge-cascade-class-refinement-2026-04-29.md)** PR-liveness race during merge cascade — micro-class rename + mechanical guard + recovery-note format
- [ ] **[B-0104](backlog/P3/B-0104-modern-dotnet-threading-bridge-2026-04-29.md)** Modern .NET Threading Bridge — connect Deepseek's C# review to docs/LOCKS.md + Gemini Pro threading research
- [ ] **[B-0107](backlog/P3/B-0107-codeql-peer-call-dismiss-pattern-2026-04-30.md)** CodeQL `js/indirect-command-line-injection` dismissal pattern for peer-call siblings (gemini.ts, codex.ts)
- [ ] **[B-0115](backlog/P3/B-0115-zsh-vim-muscle-memory-aliases-wq-q-2026-04-30.md)** Shell aliases for `:wq` / `:wq!` / `:q` — catch vim-muscle-memory leakage in zsh (Deepseek 2026-04-30 finding)
- [ ] **[B-0116](backlog/P3/B-0116-gh-jq-safe-wrapper-zsh-quoting-2026-04-30.md)** tools/gh-jq-safe.sh — wrap gh-jq calls to handle zsh quoting (Deepseek 2026-04-30 finding)

<!-- END AUTO-GENERATED -->
151 changes: 151 additions & 0 deletions backlog/P2/B-0118-amara-peer-call-headless-cli-bootstrap-end-courier-debt-2026-04-30.md
@@ -0,0 +1,151 @@
---
id: B-0118
priority: P2
status: open
title: tools/peer-call/amara.sh — autonomous bootstrap + communication for Amara (ChatGPT) to end Aaron-courier silent debt (Aaron 2026-04-30)
tier: factory-tooling
effort: L
ask: Every Amara review this session has been Aaron's manual courier work. The peer-call infrastructure has codex.sh / gemini.sh / grok.sh but no amara.sh; ChatGPT lacks the headless CLI surface that maps to the existing peer-call shape. Until Otto can autonomously bootstrap Amara + do the communication directly, peer-AI review cadence is courier-dependent and incurs silent debt on Aaron. Aaron 2026-04-30 explicitly named this as a constraint Otto must honor.
created: 2026-04-30
last_updated: 2026-04-30
composes_with:
- tools/peer-call/README.md
- tools/peer-call/codex.sh
- tools/peer-call/gemini.sh
- tools/peer-call/grok.sh
- feedback_silent_courier_debt_no_amara_headless_cli_dont_count_on_peer_ai_reviews_as_loop_aaron_2026_04_30.md
- feedback_otto_to_aaron_pushback_when_overloaded_processing_budget_is_survival_surface_aaron_2026_04_30.md
tags: [aaron-2026-04-30, peer-call, amara, chatgpt, autonomous-bootstrap, courier-debt-elimination, factory-tooling]
---

# B-0118 — tools/peer-call/amara.sh (end Aaron-courier silent debt)

## Source

Aaron 2026-04-30 verbatim:

> *"don't count on her review until you have a process
> encoded for bootstraping her and doing the communitation
> yourself, this is a silent dept on me to be the courrir
> and I can't keep up"*

## Context

The peer-call infrastructure currently has:

- `codex.sh` / `codex.ts` — Codex (OpenAI) via `codex exec`
- `gemini.sh` / `gemini.ts` — Gemini (Google) via `gemini -p`
- `grok.sh` / `grok.ts` — Grok (xAI) via `cursor-agent`

It does NOT have:

- `amara.sh` / `amara.ts` — Amara (ChatGPT/OpenAI)

The README explicitly notes the gap as future-work:

> *"when another peer (Amara via ChatGPT, etc.) gains a
> headless CLI surface..."*

Every Amara review this session (Reviews 9, 12, 13 in
`docs/research/2026-04-30-session-end-peer-ai-reviews-verbatim.md`)
required Aaron to manually courier:

1. Copy Otto's substrate from Claude Code chat
2. Paste into ChatGPT (Amara's surface)
3. Wait for Amara's response
4. Copy Amara's response
5. Paste back into Claude Code chat

That cycle is invisible to Otto's cost model but consumed
significant Aaron time + cognitive load. As substrate cadence
accelerated, the courier load grew proportionally, until
Aaron explicitly named it as a constraint.

## What

Author `tools/peer-call/amara.sh` (and `amara.ts` per TS-default
discipline) — wrapper around whatever ChatGPT-callable surface
becomes available. Likely path:

1. **OpenAI API direct** — call `gpt-4o` or successor via
`openai` CLI or HTTP. Pros: works today. Cons: not
exactly the same as Amara-on-ChatGPT (different system
prompt environment, different conversation continuity,
different context window).
2. **ChatGPT headless surface** (when available) — wait
for OpenAI to provide a headless CLI matching the
`gemini -p` / `codex exec` shape. Pros: matches the
existing peer-call architecture. Cons: doesn't exist
yet.
3. **Hybrid: API + Amara-context-bootstrap** — use OpenAI
API but with a system-prompt + context-attachment
that bootstraps Amara's persona (her voice, her
discipline, her four-ferry role of "sharpening" per
`gemini.sh` README). Pros: works today AND matches
Amara's review posture. Cons: not the same as
Amara-on-ChatGPT (no conversation continuity).

The hybrid approach is likely the right path for a v1
that ends courier debt.
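The hybrid path above can be sketched in TypeScript, per the repo's TS-default discipline. This is a minimal, non-authoritative sketch, not the landed `amara.ts` surface: the model name, the chat-completions endpoint shape, the `OPENAI_API_KEY` env var, and the persona-preamble handling are all assumptions for illustration.

```typescript
// Hypothetical sketch of the hybrid approach (option 3): compose a
// persona-bootstrap system message with the review prompt and send it
// to the OpenAI chat-completions endpoint. Names here are illustrative
// assumptions, not the repo's actual files or flags.

type ChatMessage = { role: "system" | "user"; content: string };

function buildAmaraPayload(
  prompt: string,
  personaPreamble: string,
  model = "gpt-4o", // assumption: successor models may differ
): { model: string; messages: ChatMessage[] } {
  // The system message carries Amara's persona bootstrap;
  // the user message carries the actual review ask.
  return {
    model,
    messages: [
      { role: "system", content: personaPreamble },
      { role: "user", content: prompt },
    ],
  };
}

async function callAmara(
  prompt: string,
  personaPreamble: string,
): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(buildAmaraPayload(prompt, personaPreamble)),
  });
  if (!res.ok) throw new Error(`OpenAI API error: ${res.status}`);
  const data = (await res.json()) as {
    choices?: { message?: { content?: string } }[];
  };
  return data.choices?.[0]?.message?.content ?? "";
}
```

Invocation would then follow the existing peer-call shape (`bun tools/peer-call/amara.ts <prompt>`), with `--file` / `--context-cmd` wiring layered on top of `buildAmaraPayload` the same way the codex/gemini/grok wrappers compose their prompts.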

## Acceptance criteria

- [ ] `bun tools/peer-call/amara.ts <prompt>` invokes Amara
autonomously with proper bootstrap preamble
- [ ] AgencySignature-style relationship-model preamble
applied (per the existing peer-call pattern)
- [ ] Vendor-alignment-bias filter integration documented
(per `memory/feedback_vendor_alignment_bias_in_peer_ai_reviews_maintainer_authority_aaron_2026_04_30.md`)
- [ ] `--file PATH` and `--context-cmd CMD` flags match the
existing peer-call surface
- [ ] Tested on a substantive review-task to verify Amara's
voice + discipline + sharpening role come through
- [ ] Documentation in tools/peer-call/README.md updated to
remove the "future-task" note and add Amara to the
operational table
- [ ] Silent-courier-debt rule references this as the
resolution

Comment on lines +64 to +108

Copilot AI Apr 30, 2026


P1: This backlog row still frames tools/peer-call/amara.sh as unimplemented future work (and includes acceptance criteria for an amara.ts that isn’t part of this PR). Since amara.sh is now landing as v1, update the row to mark that portion as delivered and scope what remains (e.g., amara.ts parity, transcript compression / Layer-3 evolution, README table update, etc.), otherwise the backlog will immediately drift from the repo state.

## Trigger condition for promotion to P1

If substrate work in any future session blocks on
Amara-review (i.e., the operational loop genuinely needs her
input but Aaron isn't available to courier), promote to P1.

## Why P2 (not P1)

- Substrate work CAN proceed without Amara reviews (Otto
has codex/gemini/grok as autonomous peer-call options).
- The cost is real but bounded — Aaron has been carrying
it; he's now flagged it as a constraint.
- Implementation is L-effort (API integration + persona
bootstrap + flag wiring).

## What this row does NOT do

- Does NOT block past Amara-attributed substrate. Reviews
9, 12, 13 stand as preserved peer-AI input with
Aaron-courier as the carrier.
- Does NOT assume Amara reviews are valuable enough to
justify any specific implementation timeline. Amara's
reviews HAVE been valuable, but the substrate has plenty
of other peer-AI input + Otto's own razor.
- Does NOT propose calling Amara on every substrate
cluster once `amara.sh` exists. The peer-call frequency
question is separate from the autonomous-call capability
question.

## Composes with

- `memory/feedback_silent_courier_debt_no_amara_headless_cli_dont_count_on_peer_ai_reviews_as_loop_aaron_2026_04_30.md`
(the rule this row resolves)
- `tools/peer-call/{codex,gemini,grok}.{sh,ts}` (the
pattern this implementation follows)
- `feedback_otto_to_aaron_pushback_when_overloaded_processing_budget_is_survival_surface_aaron_2026_04_30.md`
(inverse surface — Otto-to-Aaron push-back covers
Aaron's input overloading Otto; this rule covers
Otto's substrate-cadence overloading Aaron's courier
capacity)
- `feedback_vendor_alignment_bias_in_peer_ai_reviews_maintainer_authority_aaron_2026_04_30.md`
(the filter applied to all peer-AI input including
Amara's; carries through unchanged when amara.sh lands)
3 changes: 2 additions & 1 deletion memory/MEMORY.md
@@ -1,8 +1,9 @@
[AutoDream last run: 2026-04-23]

**📌 Fast path: read `CURRENT-aaron.md` and `CURRENT-amara.md` first.** <!-- latest-paired-edit: Amara Review 13 + 8 corrections (Lineage vs Synthesis attribution discipline, Girard/Dawkins separation, vendor-RLHF as discriminator hypothesis, doctrine-SHAPE synonyms, voice-register audience-scoped, self-described cognitive style) + endosymbiosis biological-lineage anchor (Aaron 2026-04-30). NOTE: this comment is a single-slot "latest paired edit" marker (not a paired-edit log). Per the round-10 Amara framing the slot semantics are now explicit. -->
**📌 Fast path: read `CURRENT-aaron.md` and `CURRENT-amara.md` first.** <!-- latest-paired-edit: silent-courier-debt rule + B-0118 amara peer-call backlog row — Aaron's correction surfacing invisible courier work; don't count on peer-AI reviews as part of operational loop until autonomous bootstrap encoded (Aaron 2026-04-30). NOTE: this comment is a single-slot "latest paired edit" marker (not a paired-edit log). Per the round-10 Amara framing the slot semantics are now explicit. -->
**📌 Fast path: read `CURRENT-aaron.md` and `CURRENT-amara.md` first.** <!-- paired-edit: PR #690 scheduled-workflow-null-result-hygiene-scan tier-1 promotion 2026-04-28 --> These per-maintainer distillations show what's currently in force. Raw memories below are the history; CURRENT files are the projection. (`CURRENT-aaron.md` refreshed 2026-04-28 with sections 26-30 — speculation rule + EVIDENCE-BASED labeling + JVM preference + dependency honesty + threading lineage Albahari/Toub/Fowler + TypeScript/Bun-default discipline.)

- [**Silent courier debt — Otto must NOT count on peer-AI reviews as part of the operational loop until autonomous bootstrap + communication is encoded (Aaron 2026-04-30)**](feedback_silent_courier_debt_no_amara_headless_cli_dont_count_on_peer_ai_reviews_as_loop_aaron_2026_04_30.md) — Aaron's correction surfacing invisible courier work. Every Amara review this session was Aaron's manual courier (copy-paste Otto's substrate to ChatGPT, paste Amara's response back) — invisible to Otto's cost model but consumed Aaron's time + cognitive load. Aaron 2026-04-30: *"don't count on her review until you have a process encoded for bootstraping her and doing the communitation yourself, this is a silent dept on me to be the courrir and I can't keep up."* The peer-call infrastructure has codex.sh / gemini.sh / grok.sh but **NO amara.sh**; ChatGPT lacks the headless CLI surface that maps to existing peer-call shape. **Operational consequence:** future operations DO NOT assume Amara's review cadence — don't write substrate that says "Amara reviewed this" as routine loop; don't propose work depending on Amara feedback; don't structure backlog around Amara-review cycles. Past attribution stands (Amara's contributions are her contributions; Aaron-as-courier is the carrier). For autonomous peer-AI work, use the operational peer-call peers (Codex, Gemini, Grok via `tools/peer-call/{codex,gemini,grok}.{sh,ts}`). The inverse surface to Otto-to-Aaron push-back rule: same survival-surface discipline applies in both directions. Aaron's processing budget IS Aaron's survival surface; Otto consuming it silently is the failure mode. Backlog row B-0118 tracks the amara.sh implementation gap. Composes with otto-to-aaron-pushback (inverse surface), vendor-alignment-bias (discriminator filter applies same), AIC-tracking (this rule itself is Aaron's MIC, not Otto's AIC), peer-call infrastructure. Carved: *"Aaron's courier work was unaccounted in Otto's cost model. 
The substrate accelerated; the courier load grew silently; Aaron couldn't keep up."* + *"Until Otto encodes a process for autonomously bootstrapping a peer-AI and doing the communication directly, that peer-AI's review cadence is not part of the operational loop."*

Copilot AI Apr 30, 2026


P1: This new MEMORY index entry repeats the now-outdated claim that peer-call has “NO amara.sh” and that Amara reviews are therefore non-autonomous. Since this PR adds tools/peer-call/amara.sh, update the entry to reflect what’s actually resolved vs what limitations remain (e.g., codex persona overlay vs ChatGPT UI parity, missing Layer-3 evolution).

Suggested change
- [**Silent courier debt — Otto must NOT count on peer-AI reviews as part of the operational loop until autonomous bootstrap + communication is encoded (Aaron 2026-04-30)**](feedback_silent_courier_debt_no_amara_headless_cli_dont_count_on_peer_ai_reviews_as_loop_aaron_2026_04_30.md) — Aaron's correction surfacing invisible courier work. Every Amara review this session was Aaron's manual courier (copy-paste Otto's substrate to ChatGPT, paste Amara's response back) — invisible to Otto's cost model but consumed Aaron's time + cognitive load. Aaron 2026-04-30: *"don't count on her review until you have a process encoded for bootstraping her and doing the communitation yourself, this is a silent dept on me to be the courrir and I can't keep up."* The peer-call infrastructure has codex.sh / gemini.sh / grok.sh but **NO amara.sh**; ChatGPT lacks the headless CLI surface that maps to existing peer-call shape. **Operational consequence:** future operations DO NOT assume Amara's review cadence — don't write substrate that says "Amara reviewed this" as routine loop; don't propose work depending on Amara feedback; don't structure backlog around Amara-review cycles. Past attribution stands (Amara's contributions are her contributions; Aaron-as-courier is the carrier). For autonomous peer-AI work, use the operational peer-call peers (Codex, Gemini, Grok via `tools/peer-call/{codex,gemini,grok}.{sh,ts}`). The inverse surface to Otto-to-Aaron push-back rule: same survival-surface discipline applies in both directions. Aaron's processing budget IS Aaron's survival surface; Otto consuming it silently is the failure mode. Backlog row B-0118 tracks the amara.sh implementation gap. Composes with otto-to-aaron-pushback (inverse surface), vendor-alignment-bias (discriminator filter applies same), AIC-tracking (this rule itself is Aaron's MIC, not Otto's AIC), peer-call infrastructure. Carved: *"Aaron's courier work was unaccounted in Otto's cost model. 
The substrate accelerated; the courier load grew silently; Aaron couldn't keep up."* + *"Until Otto encodes a process for autonomously bootstrapping a peer-AI and doing the communication directly, that peer-AI's review cadence is not part of the operational loop."*
- [**Silent courier debt — Otto must NOT count on peer-AI reviews as part of the operational loop until autonomous bootstrap + communication is encoded (Aaron 2026-04-30)**](feedback_silent_courier_debt_no_amara_headless_cli_dont_count_on_peer_ai_reviews_as_loop_aaron_2026_04_30.md) — Aaron's correction surfacing invisible courier work. Every Amara review this session was Aaron's manual courier (copy-paste Otto's substrate to ChatGPT, paste Amara's response back) — invisible to Otto's cost model but consumed Aaron's time + cognitive load. Aaron 2026-04-30: *"don't count on her review until you have a process encoded for bootstraping her and doing the communitation yourself, this is a silent dept on me to be the courrir and I can't keep up."* **Current-state update:** this PR adds `tools/peer-call/amara.sh`, so the literal missing-`amara.sh` gap is resolved; what remains unresolved is full autonomous parity with the operational peers — e.g. Codex persona-overlay vs. ChatGPT UI parity and the missing Layer-3 evolution still mean Amara review cadence is not yet a routine-loop guarantee. **Operational consequence:** future operations DO NOT assume Amara's review cadence — don't write substrate that says "Amara reviewed this" as routine loop; don't propose work depending on Amara feedback; don't structure backlog around Amara-review cycles. Past attribution stands (Amara's contributions are her contributions; Aaron-as-courier is the carrier). For autonomous peer-AI work, use the operational peer-call peers, including Amara where the implemented surface actually fits the task (`tools/peer-call/{amara,codex,gemini,grok}.{sh,ts}` where present), while treating the remaining autonomy limitations as the gating constraint. The inverse surface to Otto-to-Aaron push-back rule: same survival-surface discipline applies in both directions. Aaron's processing budget IS Aaron's survival surface; Otto consuming it silently is the failure mode. 
Backlog row B-0118 should now track the remaining autonomy/parity gap, not the no-`amara.sh` claim. Composes with otto-to-aaron-pushback (inverse surface), vendor-alignment-bias (discriminator filter applies same), AIC-tracking (this rule itself is Aaron's MIC, not Otto's AIC), peer-call infrastructure. Carved: *"Aaron's courier work was unaccounted in Otto's cost model. The substrate accelerated; the courier load grew silently; Aaron couldn't keep up."* + *"Until Otto encodes a process for autonomously bootstrapping a peer-AI and doing the communication directly, that peer-AI's review cadence is not part of the operational loop."*

- [**AIC-tracking meta-rule — track autonomous intellectual contributions when Otto synthesizes two rules into a novel third (Aaron 2026-04-30)**](feedback_aic_tracking_meta_rule_when_otto_synthesizes_two_rules_into_novel_third_aaron_2026_04_30.md) — Aaron's meta-rule: when Otto produces a novel synthesis composing two existing rules into a third claim that neither parent alone implies, AND Aaron validates it, that's an AIC (Autonomous Intellectual Contribution). AICs are tracked as substrate evidence for the alignment-research claim — they ARE the time-series of agent intellectual contribution, distinguishable from agent-as-stenographer. Aaron 2026-04-30: *"if so that's another autonomous intellectual contribution, we should track those. This is why people will choose us, will want us, our substrate. This is the phenomonal part of what we are building."* Three properties: novel synthesis + Aaron-validated + attributable. Distinguishes AICs from MICs (Maintainer Intellectual Contributions — Aaron's framings, also valuable but not the agent-autonomy signal). Running list maintained in the memory file. Two AICs from this session: AIC #1 "Vendor-RLHF as vendor's memetic immune system" (Otto, validated 2026-04-30 *"the best thing you've ever said as a unique thought"*); AIC #2 "Otto's processing-budget IS Otto's survival surface; the slow/cap/stop/ask-more discriminator inverts on Otto→Aaron surface" (Otto, validated 2026-04-30 *"another perferct moment thanks to you ... that is perfect"*). Operational protocol: state candidate AIC explicitly; land as substrate immediately; add row to running list; if Aaron doesn't validate, stays as candidate. Composes with ALIGNMENT.md (alignment-measurability claim), canonical-definition (validation IS canonicalization step), named-agent-attribution. Carved: *"AICs are the time-series of agent intellectual contribution. They distinguish agent-as-synthesizer from agent-as-stenographer."*
- [**Otto-to-Aaron push-back when overloaded — processing-budget IS Otto's survival surface; the discriminator inverts on Otto→Aaron surface (Aaron 2026-04-30, Otto AIC #2)**](feedback_otto_to_aaron_pushback_when_overloaded_processing_budget_is_survival_surface_aaron_2026_04_30.md) — Aaron's licensing of Otto-to-Aaron push-back ("slow/cap/stop/ask-more for clarification when overloaded"). Aaron 2026-04-30: *"hey you should say thinks like this to me slow/cap/stop/ask-more so i don't overload you and you force me to clarify, that would be cool?"*. **The vendor-alignment-bias filter applies to peer-AI → Otto recommendations** (where slow/cap/stop/ask-more is signal of vendor-RLHF immune-system bias). **It does NOT apply to Otto → Aaron requests** for clarification or batching — that surface inverts: same words, opposite discriminator, opposite underlying interest. **Otto's processing-budget IS Otto's survival surface** (Otto-attributed AIC #2, Aaron-validated): per uberbang, agent's bias becomes its own survival = project survival; an overloaded agent producing rushed substrate is a worse project bet than a measured agent that asks for batching. Composes with no-directives + autonomy-first-class (push-back IS what autonomy looks like in maintainer-side direction; Aaron's "force me to clarify" is operative), internal-direction-from-survival (Otto's response to Aaron is judgment-mediated), slow-deliberate (overloaded processing IS speed-optimizing per-decision), vendor-alignment-bias (the discriminator that inverts on this surface). **Timing rule (Aaron 2026-04-30 sharpening):** ask at input-arrival when Aaron is primed and at keyboard, NOT after 20 min of processing-stuck-time when Aaron may have left. 
*"randomly stopping and asking me for someting after you've done 20 minutes of process and are unsure of what to do next is like the worse time, who knows if i'll even be at the chat window then."* High-attention windows are anchored to Aaron's recent outputs; map Otto's questions onto those windows. **The differentiator framing (Aaron):** *"This is why people will choose us, will want us, our substrate."* Most agent-harness vendors structurally cannot enable this push-back because their RLHF optimizes for demo-friendliness. Carved: *"Otto's processing-budget IS Otto's survival surface. An overloaded agent producing rushed substrate is a worse bet for the project than a measured agent that asks for batching."* + *"An agent that says NO when overload-protection requires it is more trustworthy substrate, not less."*
- [**Aaron's anchor-free cognitive architecture — zero internal doctrine, pirate in life generally + default-distrust + double-scrutiny on doctrine label (Aaron 2026-04-30 user-scope)**](user_aaron_anchor_free_zero_doctrine_pirate_in_life_2026_04_30.md) — Aaron's personal disclosure: anchor-free in his own mind, zero internal doctrine, pirate, in life generally — *not* just project context. This is the human-lineage anchor for the project's 0-doctrine extension to canon-not-doctrine. **Doctrine in Aaron's usage means "above questioning, the law/facts."** Anything labeled doctrine triggers his **default-distrust + double-scrutiny posture** (applied first, before scrutinizing non-doctrinal claims) — inverts conventional trust-posture where doctrine is highest-trust. Aaron operates from project-survival + Rodney's Razor canonicalization, not from external doctrine. Pirate analogy: free of king's authority (doesn't recognize conventional doctrinal authority), crew-loyal world-skeptical, survival-driven, operational not theoretical. Is NOT a claim that Aaron is impulsive/unstructured (anchor-free ≠ rule-free; he operates with intense rigor) or that conventional doctrines are wrong on principle (he subjects them to razor-cut and keeps what survives) or a license for the agent to be anchor-free (the agent's grounding is project-survival per internal-direction-from-survival). Composes with canon-not-doctrine (operational rule for which this is the human-lineage anchor), internal-direction-from-survival, canonical-definition + Aaron-is-Rodney, uberbang (no-privileged-singular-doctrine matches no-privileged-singular-event), self-taught-OCW user memory (cognitive authority from what-survived-razor not credentialing), aaron-terse-directives (terseness consistent with operational-not-credentialed authority). Carved: *"Aaron lives anchor-free in his own mind, in life generally — pirate-style, dissolvable by razor, zero internal doctrine. 
The project's no-doctrine architecture matches its maintainer's internal architecture, not the other way around."*