Conversation
…direct integration (Aaron 2026-04-28) /btw aside during PR drain — durable backlog landing per the /btw skill's directive-queued cross-session escalation rule. Three parallel paths sequenced by leverage:
1. Forge CLI/harness — Aaron already has accounts; Forge accesses Ollama natively → fastest entry point. Add to the agent / CLI roster alongside Claude-Code, Codex, Cursor, Grok-CLI, Kiro-CLI.
2. Local-model install + smoke-test — hardware-aware per Aaron's explicit "search for best latest for the hardware we are on" + "take into account the resources on the machine." Otto-247 version-currency applies HARD: model release cadence is weeks.
3. Direct local-model integration — Aaron's alternative to "going through forge or ollama". Research llama.cpp / MLX (Apple Silicon native) / vLLM / SGLang / direct GGUF loading on .NET 10 (TorchSharp / ONNX Runtime).
Aaron's explicit "this is just the start" — umbrella row; sub-rows spawn as paths clarify. Per Otto-275 log-but-don't-implement-yet + Aaron's "this will be a later tasks": no implementation this tick. Composes with Otto-247, Otto-235 4-shell portability, task Lucent-Financial-Group#287 cost monitoring, task Lucent-Financial-Group#303 peer-call sibling scripts, kiro-cli roster row.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Pull request overview
Adds a new P2 backlog umbrella row (B-0068) documenting a “local AI trajectory” direction (Forge CLI/harness + Ollama + direct local-model integration), without implementing any of the underlying work yet.
Changes:
- Introduces B-0068 backlog row capturing the rationale, non-goals, and three parallel exploration paths.
- Adds cross-references to related governance rules, tasks, and (intended) memory entries.
Referenced frontmatter lines: `status: backlog`, `created: 2026-04-28`.
Frontmatter doesn’t match the documented per-row schema in tools/backlog/README.md: status is expected to be open/closed/superseded-by-*/deferred (etc.), and last_updated is marked required. Consider aligning this row’s frontmatter to the standard fields so future tooling/lints don’t treat it as malformed.
Suggested change: keep `created: 2026-04-28`, replace `status: backlog` with `status: open`, and add `last_updated: 2026-04-28`.
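As a sketch of the kind of lint this alignment enables (a hypothetical script, not part of the repo today; the accepted status values are taken from this review comment, with `superseded-by-*` matched as a prefix):

```shell
#!/bin/sh
# Hypothetical lint: flag backlog rows whose frontmatter status is outside
# the documented set, or which lack the required last_updated field.
for row in docs/backlog/P*/B-*.md; do
  [ -f "$row" ] || continue
  status=$(sed -n 's/^status: *//p' "$row" | head -n 1)
  case "$status" in
    open|closed|deferred|superseded-by-*) ;;   # documented values pass
    *) echo "$row: unexpected status '$status'" ;;
  esac
  grep -q '^last_updated:' "$row" || echo "$row: missing last_updated"
done
```

Run from the repo root, this would print one line per schema violation and stay silent on conforming rows.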
Referenced frontmatter lines:
`---`
`id: B-0068`
`priority: P2`
`slug: local-ai-trajectory-forge-ollama-direct-integration`
This row filename includes a maintainer + date suffix, but the backlog tooling schema documents row filenames as docs/backlog/P<tier>/B-<NNNN>-<slug>.md (no additional suffix). Renaming this file to the standard pattern will keep it consistent with existing rows and any future automation (e.g., new-row.sh).
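A one-off check in this direction could be (hypothetical, not an existing tool; the regex assumes the deviation is specifically a trailing `YYYY-MM-DD` date suffix):

```shell
# Hypothetical check: list backlog rows whose filename ends in a YYYY-MM-DD
# tail, which the documented B-<NNNN>-<slug>.md pattern does not allow.
find docs/backlog -name 'B-*.md' 2>/dev/null \
  | grep -E -- '-[0-9]{4}-[0-9]{2}-[0-9]{2}\.md$' || true
```

Empty output would mean every row already matches the documented pattern.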
- Add Forge to the agent / CLI roster alongside Claude-Code, Codex, Cursor, Grok-CLI, Kiro-CLI (per `memory/feedback_kiro_cli_added_to_agent_roster_aaron_2026_04_28.md`)
The referenced memory file memory/feedback_kiro_cli_added_to_agent_roster_aaron_2026_04_28.md doesn’t exist in memory/. Either add it in the same PR, or update this reference to an existing canonical source (e.g., the relevant section in docs/HARNESS-SURFACES.md / an existing memory entry).
Suggested change: replace `memory/feedback_kiro_cli_added_to_agent_roster_aaron_2026_04_28.md` with `docs/HARNESS-SURFACES.md`.
- `feedback_announce_non_default_harness_dependencies_plugins_mcp_skills_2026_04_28.md` — Forge / Ollama / local model name is named at point of use
The reference to memory/feedback_announce_non_default_harness_dependencies_plugins_mcp_skills_2026_04_28.md appears to be a broken xref (no such file under memory/). Please either add the referenced memory entry or replace the link with an existing source for this rule.
Suggested change: replace the `feedback_announce_non_default_harness_dependencies_plugins_mcp_skills_2026_04_28.md` bullet with:

- Non-default harness dependencies, plugins, MCP servers, skills, and local-model choices are named at point of use
- expands the peer-call roster (task #303 sibling scripts: `tools/peer-call/{gemini,codex,grok}.sh`) by adding a local sibling `tools/peer-call/local.sh` trajectory
This bullet refers to “task Lucent-Financial-Group#303” and to existing tools/peer-call/{gemini,codex,grok}.sh, but (a) task #303 doesn’t appear to be defined/referenced anywhere else in docs/, and (b) tools/peer-call/ currently only contains grok.sh. Suggest either (1) linking to the canonical place where Lucent-Financial-Group#303 is tracked, and rewording the script list as planned work, or (2) dropping the task number and only referencing scripts that exist today.
Suggested change: reword the bullet to:

- expands the peer-call roster from today's `tools/peer-call/grok.sh` toward a planned local sibling `tools/peer-call/local.sh`
Summary
llama.cpp / MLX / vLLM / .NET-native GGUF)

Why now
Aaron sent a /btw aside during the AceHack PR drain naming this trajectory. Per the /btw skill's directive-queued cross-session escalation rule, durable-backlog landing is mandatory. The aside contains 8 distinct pieces (Forge accounts, Ollama access, local model install, hardware-aware selection, resource accounting, direct integration alternative, "whole local AI trajectory", explicit "backlog") — landing one umbrella row preserves them all without forcing premature commitment to a specific stack.
Composition
`tools/peer-call/local.sh` future sibling

Test plan
`B-NNNN-<slug>-<maintainer>-<YYYY-MM-DD>.md` per `tools/backlog/README.md` schema

🤖 Generated with Claude Code