
upstream merge 2026-04-27 PR-C: chat + plans + version bump (4 commits)#447

Merged
MocA-Love merged 5 commits into main from upstream/batch-2026-04-27-pr-c
Apr 27, 2026

Conversation

@MocA-Love
Owner

Summary

The third and final 2026-04-27 batch import from upstream (superset-sh/superset) into the fork (MocA-Love/superset). The 22 commits were split across 3 PRs; this is PR-C: chat / plans / version bump (4 commits + 1 fix commit).

Dependency: this PR sits on top of #446 (PR-B); its base branch is upstream/batch-2026-04-27-pr-b. Merge in the order PR-A → PR-B → PR-C.

Imported commits

SHA | PR# | Summary
2d41885e9 | upstream superset-sh#3716 | v2 chat architecture proposals + reference research + interim fix. 25 files / +2685 lines. Broad rework of chat / TiptapPromptEditor / the host-service chat runtime / WorkspaceChatInterface, plus research docs under plans/*.md on third-party chat architectures (Open-Inspect / OpenCode / T3 Code)
dab3225e3 | upstream superset-sh#3782 | Adds plans/local/ to gitignore and removes the chat architecture reference docs introduced in 6c2a7719a (background-agents-chat-architecture-reference.md, opencode-electron-chat-architecture-reference.md, t3code-chat-architecture-reference.md)
3f6c39c60 | upstream superset-sh#3761 | desktop version 1.5.10 → 1.6.1
971cf80eb | upstream superset-sh#3789 | desktop version 1.6.1 → 1.6.2

In addition, this PR includes the fork-specific resolution commit fix(merge): clean up chat trpc + unused imports after #3716.

Fork-side policy changes

Tracking desktop version 1.6.2

The fork previously pinned 1.5.10 and released as v1.5.10-fork.N, but now tracks upstream's major/minor progression (the 1.6 line). The next release tag starts at v1.6.2-fork.1; the v1.5.10-fork.N series ends here.

Fork policy for .gitignore

Upstream dab3225e3 (superset-sh#3782) adds three ignore entries: plans/local/, .mcp.json, and .cursor/mcp.json. Per the AGENTS.md rules the fork keeps .mcp.json tracked in the repo, so the .mcp.json and .cursor/mcp.json entries are not taken; only plans/local/ is added.

Cleaning up the chat trpc paths

In 6c2a771, ModelPicker's Anthropic / OpenAI status lookups moved from chatServiceTrpc.auth.* to workspaceTrpc.auth.* (via host-service). The fork adopts the same approach (host-service route). chatServiceTrpc is still used elsewhere (useNextEditCompletion and friends), so its import is kept.

ChatPaneInterface.tsx needs both chatServiceTrpc and AppRouter (host-service): the former for the auth-state queries, the latter to derive the inferRouterOutputs<AppRouter>["chat"]["getSlashCommands"] type. Both imports are retained.

Fork-specific feature health check

Compared against the baseline; all items confirmed intact:

  • 19 fork-specific tRPC procedures intact
  • fork-only dependencies: ansi_up, @vscode/ripgrep, @xyflow/react
  • fork markers: all intact
  • electron-builder.ts dmg.size = "4g" retained
  • desktop version: 1.6.2 (fork release scheme updated)
  • packages/db/drizzle/ max idx 0039 retained (from PR-B)
  • packages/local-db/drizzle/ max idx 0072 retained

Test plan

  • bun install consistency OK
  • bun run typecheck: all 28 tasks green
  • bun run lint: biome green
  • (local) bun run --filter @superset/desktop build succeeds
  • (manual) in the v2 chat pane, the model picker and the Anthropic / OpenAI login-status lookups work
  • (manual) chat slash commands (via host-service runtime/chat/chat.ts) work
  • (manual) confirmed the next fork release can cut the v1.6.2-fork.1 tag

Batch import complete

All 22 upstream commits have now been imported.

AviPeltz and others added 5 commits April 28, 2026 00:46
…rim fix to v2 (superset-sh#3716)

* docs(chat): v2 chat architecture proposals + reference research

Plans-only PR. Adds the chat-architecture research and proposals for
the v2 chat rearchitect, plus a pragmatic v1->host-service migration
plan that unblocks the greenfield work.

- v2-chat-greenfield-architecture.md: tRPC subscriptions over WS,
  EventLog interface, P0-P4 race-fix + LocalEventLog (SQLite),
  P5a (Postgres) / P5b (Durable Objects), P6 (handoff via git
  courier), P7 (mid-turn handoff, deferred).

- v1-to-v2-fast-migration.md: ship-this-month plan. Move
  ChatRuntimeService into host-service, collapse two polling
  queries into one atomic getSnapshot, gate behind per-workspace
  flag, canary, GA, delete legacy paths.

- t3code/opencode/background-agents reference docs: research
  notes informing the design.

* docs(chat): expand fast-migration plan with audit + verified bugs

Adds an Implementation Audit section (file inventory, what's solid,
stubs, gaps, verified bugs), a Fixes-at-a-glance priority table,
and restructures Phased Migration so P0 captures the two HIGH
defects concretely:

- Runtime leak on session delete (no disposeRuntime / endSession)
- Cross-workspace sessionId race in runtimeCreations map

P1 folds in the dual-poll race fix via a single getSnapshot query,
the fps:60 -> default 4fps polling cadence drop, and the missing
cloud lastActiveAt update on host send (selector ordering).

P4 enumerates parity gaps with v1 file references; Stop and
Notification hook events explicitly deferred per scope decision.

* fix(chat): runtime disposal, cross-workspace race, snapshot collapse

Lands the P0 + P1 fixes from plans/v1-to-v2-fast-migration.md.

Host-service runtime (packages/host-service/src/runtime/chat/chat.ts):
- runtimeCreations map now stores { workspaceId, promise } and
  validates the workspace at the in-flight path. Concurrent requests
  with the same sessionId across different workspaceIds no longer
  silently share the wrong-workspace runtime.
- Adds disposeRuntime(sessionId) - aborts harness, drops from map,
  idempotent. Closes the runtime leak on session delete.
- Adds getSnapshot(input) returning { displayState, messages,
  observedAt } from a single observation. Reuses the existing
  display-state shaping logic.
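The in-flight validation and disposal semantics above can be sketched roughly as follows. This is an illustrative model, not the real host-service code: the class and field names (RuntimeManager, runtimes, creations) are assumptions based on the commit message.

```typescript
// Hypothetical sketch of the runtimeCreations guard and disposeRuntime
// described in this commit; shapes are illustrative.
type Runtime = { workspaceId: string; disposed: boolean };

class RuntimeManager {
  private runtimes = new Map<string, Runtime>();
  private creations = new Map<
    string,
    { workspaceId: string; promise: Promise<Runtime> }
  >();

  async getOrCreate(sessionId: string, workspaceId: string): Promise<Runtime> {
    const existing = this.runtimes.get(sessionId);
    if (existing) {
      if (existing.workspaceId !== workspaceId)
        throw new Error("workspace mismatch");
      return existing;
    }
    const inFlight = this.creations.get(sessionId);
    if (inFlight) {
      // Validate at the in-flight path too, so concurrent requests with the
      // same sessionId but different workspaceIds can't share one runtime.
      if (inFlight.workspaceId !== workspaceId)
        throw new Error("workspace mismatch");
      return inFlight.promise;
    }
    const promise = (async () => {
      await Promise.resolve(); // stand-in for real async runtime construction
      const rt: Runtime = { workspaceId, disposed: false };
      this.runtimes.set(sessionId, rt);
      return rt;
    })();
    this.creations.set(sessionId, { workspaceId, promise });
    try {
      return await promise;
    } finally {
      this.creations.delete(sessionId);
    }
  }

  async dispose(sessionId: string, workspaceId: string): Promise<void> {
    // Await any in-flight creation first, so a just-created runtime cannot
    // be inserted after we delete it. Idempotent: missing entries are no-ops.
    const inFlight = this.creations.get(sessionId);
    if (inFlight) await inFlight.promise.catch(() => {});
    const rt = this.runtimes.get(sessionId);
    if (!rt || rt.workspaceId !== workspaceId) return;
    rt.disposed = true; // stand-in for aborting the harness
    this.runtimes.delete(sessionId);
  }
}
```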

Host-service router (packages/host-service/src/trpc/router/chat/chat.ts):
- New chat.getSnapshot query.
- New chat.endSession mutation.
- chat.sendMessage fires-and-forgets ctx.api.chat.updateSession with
  lastActiveAt: new Date() after success so the v2 session selector
  keeps reordering after activity.

Cloud router (packages/trpc/src/router/chat/chat.ts):
- updateSession now accepts an optional lastActiveAt: z.date().

V2-workspace ChatPane:
- useWorkspaceChatDisplay: replaces dual getDisplayState +
  listMessages queries with a single getSnapshot query. Derives
  state from snapshot.displayState / snapshot.messages.
- ChatPaneInterface: drops fps: 60 override (~120 RPCs/sec/pane)
  in favor of the hook's default fps: 4 (matches v1).
- useWorkspaceChatController: handleDeleteSession now also calls
  workspaceTrpc.chat.endSession after the cloud delete to tear
  down the host-service runtime.

P4 parity gaps (slash commands, file search, hooks, title
generation, MCP, model auth state) are deferred per scope.

* feat(chat): enable v2 chat pane

Drops the last v1-IPC coupling from TiptapPromptEditor by switching the
file-search and slash-command-preview calls to prop injection. v1's
ChatInputFooter wires them to chatServiceTrpc (legacy IPC, unchanged);
v2's wires searchFiles to workspaceTrpc.filesystem.searchFiles so the
@-mention popup works for both local and remote host-service workspaces.

The chip-anchored SlashCommandPreviewPopover becomes opt-in via prop and
stays off in v2, which already mounts its own SlashCommandPreview using
workspaceTrpc.chat.previewSlashCommand.

usePaneRegistry now mounts the real <ChatPane> instead of the
"temporarily disabled" placeholder, threading sessionId / launchConfig
through ctx.actions.updateData.

* feat(chat): wire v2 ModelPicker auth status to host-service

Adds host-service auth surface and connects the v2 ModelPicker to it,
so the picker can show real Anthropic/OpenAI authentication state
instead of hardcoded `true`.

Host-service:
- types.ts: HostServiceRuntime gains an `auth: ChatService` field.
- app.ts: instantiate a long-lived ChatService singleton (auth is
  per-machine, not per-workspace) and expose it via runtime.auth.
- New trpc/router/auth/auth.ts router with 17 procedures mirroring
  v1's chatServiceTrpc.auth.* surface (getAnthropicStatus,
  startAnthropicOAuth, completeAnthropicOAuth, etc.).
- Wired into appRouter as `auth`.

ChatService delegates to mastra's createAuthStorage() under the hood,
so this is a thin proxy — no auth logic is duplicated. Mastra's storage
files are the same paths host-service already reads from when resolving
credentials at runtime, so v1 settings/models writes and v2 reads stay
in sync via disk.

Renderer:
- v2 ModelPicker now queries workspaceTrpc.auth.getAnthropicStatus and
  getOpenAIStatus; previously hardcoded both to authenticated. Refetches
  on selector open. The "open settings" action keeps navigating to
  /settings/models which is the existing v1 page (still works because
  it writes via Electron-main IPC to the same on-disk storage).

Remote-host auth UI is a follow-on; for remote workspaces the status
correctly reflects the remote machine's credentials, but the
/settings/models flow only authenticates the local machine.

* fix(chat): pass observer/reflector model to mastra runtime

Mastra defaults observational-memory observer + reflector models to
google/gemini-2.5-flash. Without GOOGLE_GENERATIVE_AI_API_KEY in the
host-service process env, the post-turn reflection step fails with
"Could not find API key process.env.GOOGLE_GENERATIVE_AI_API_KEY".

v1's ChatRuntimeService already handles this in
packages/chat/src/server/trpc/service.ts:88 — derive the OM model
from whichever provider the user is authenticated with (Anthropic ->
Haiku 4.5, OpenAI -> GPT-4.1 nano, Google env -> Gemini). Mirror that
helper in host-service and pass the result as initialState
{observerModelId, reflectorModelId} to createMastraCode.
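The provider-fallback helper described above might look roughly like this. The function name and the exact model-id strings are illustrative assumptions; only the fallback order (Anthropic → Haiku 4.5, OpenAI → GPT-4.1 nano, Google env → Gemini) comes from the commit message.

```typescript
// Hedged sketch of deriving the observational-memory model from the
// provider the user is authenticated with; model ids are illustrative.
type AuthState = { anthropic: boolean; openai: boolean; googleEnv: boolean };

function deriveObservationalModel(auth: AuthState): string {
  // Prefer an authenticated provider over mastra's Gemini default, which
  // fails without GOOGLE_GENERATIVE_AI_API_KEY in the process env.
  if (auth.anthropic) return "anthropic/claude-haiku-4.5";
  if (auth.openai) return "openai/gpt-4.1-nano";
  if (auth.googleEnv) return "google/gemini-2.5-flash";
  throw new Error("no authenticated provider for observational memory");
}
```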

* feat(chat): wire v2 slash commands to host-service

Replaces the host-service slash-command stubs with v1's actual logic
(scans .claude/commands and .agents/commands in the workspace cwd plus
the global ~/.claude/commands). v1's getSlashCommands and
resolveSlashCommand are pure cwd-scoped functions — no Electron deps —
so they import cleanly into host-service.

@superset/chat:
- server/desktop now re-exports getSlashCommands, resolveSlashCommand,
  and SlashCommand from the slash-commands subpath so host-service can
  consume them via the package's public surface.

Host-service:
- runtime/chat/chat.ts: getSlashCommands/resolveSlashCommand/preview-
  SlashCommand now look up the workspace's worktreePath and delegate
  to the v1 helpers (via a small resolveWorkspaceCwd helper). Returned
  command kinds are "builtin" | "custom" matching v1.

v2 client:
- useSlashCommandExecutor's default switch case now calls
  workspaceTrpc.chat.resolveSlashCommand for non-builtin commands.
  Successful resolutions substitute the rendered prompt as nextText
  via resolveSlashPromptResult; unhandled commands fall through with
  the original text. Built-in commands (/new /clear /stop /model /mcp)
  still resolve client-side without a roundtrip.

* fix(chat): getSnapshot regression + disposeRuntime hardening

Addresses bugs introduced in c183954 (getSnapshot collapse + runtime
disposal). Pre-existing v1 issues (authStorage.reload guarding, OpenAI
provider ID compat) are intentionally left out — they exist in v1's
service.ts too and should be fixed there separately.

getSnapshot regression
- getSnapshot duplicated displayState shaping inline and diverged from
  getDisplayState in two ways: it skipped the answeredQuestionIds
  filter (so answered ask_user questions could shadow sandbox questions
  the same turn — exactly the bug the existing filter was added to
  prevent), and the sandbox question shape differed (no top-level
  description; reason baked into the "Yes" option label).
- Extract a private buildDisplayState(runtime) helper used by both
  getDisplayState and getSnapshot so they cannot drift again. Both now
  apply the answeredQuestionIds filter and emit the same sandbox shape.
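The shared-helper fix can be sketched as below. Field names and shapes are assumptions for illustration, not the real host-service types; the point is that both queries go through one shaping function.

```typescript
// Sketch of a single buildDisplayState helper used by both queries so
// their output cannot drift; types are illustrative.
interface Question { id: string; kind: "ask_user" | "sandbox"; description: string }
interface RuntimeView { questions: Question[]; answeredQuestionIds: string[] }

function buildDisplayState(view: RuntimeView) {
  const answered = new Set(view.answeredQuestionIds);
  const pendingQuestions = view.questions
    // The answeredQuestionIds filter both queries must apply, so an
    // answered ask_user question can't shadow a sandbox question.
    .filter((q) => !answered.has(q.id))
    // Keep description at the top level rather than baking it into an
    // option label, so both queries emit the same sandbox shape.
    .map((q) => ({ id: q.id, kind: q.kind, description: q.description }));
  return { pendingQuestions };
}

const getDisplayState = (view: RuntimeView) => buildDisplayState(view);
const getSnapshot = (view: RuntimeView, messages: string[]) => ({
  displayState: buildDisplayState(view),
  messages,
});
```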

disposeRuntime hardening
- Accept and validate workspaceId. Mirrors getOrCreateRuntime; prevents
  cross-workspace teardown if a sessionId leaks. Router updated to pass
  it through.
- If a creation is in-flight for this sessionId, await it before
  cleanup. Otherwise dispose returns no-op while createRuntime is
  mid-way and the just-created runtime gets inserted into this.runtimes
  after we delete from it — leak.
- Disconnect the MCP manager (await runtime.mcpManager?.disconnect()).
  Mastra's McpManager owns child processes / server connections that
  leak until process exit otherwise. HookManager has no teardown API,
  so leaving it untouched is correct.

All cleanup steps are best-effort (try/catch) so a single failure
doesn't block the rest of the teardown.

* fix(chat): slash commands work without an active session

Slash commands are workspace-scoped (they only need cwd to scan
.claude/commands and .agents/commands), but the host-service procedures
required a sessionId via z.uuid() and the v2 client gated everything on
having a session. Result: in a fresh chat with no session yet (the
common case), the autocomplete was empty, the preview popover never
fired, and pressing Enter on a custom slash command sent the literal
"/foo args" text instead of resolving it.

Host-service:
- New workspaceSlashInput schema (workspaceId only) replaces sessionInput
  on getSlashCommands / resolveSlashCommand / previewSlashCommand.
- ChatRuntimeManager methods drop the sessionId parameter (it was unused
  anyway — they only consume workspaceId to resolve cwd).

v2 client:
- ChatPaneInterface.tsx: getSlashCommands.useQuery no longer gated on
  sessionId; passes only workspaceId. Autocomplete now fires on
  fresh sessions.
- SlashCommandPreview: drops sessionId prop and the !sessionId early
  return; previewSlashCommand mutation passes only workspaceId.
- useSlashCommandExecutor: removes the !sessionId early return that
  blocked all non-builtin slash commands when no session exists. The
  switch's per-case guards handle session-dependent built-ins (/mcp
  still self-guards on sessionId since getMcpOverview is unchanged).
  Default case (custom commands) calls resolveSlashCommand with
  workspaceId only.
- ChatInputFooter: drops the now-unused sessionId prop.

* fix(chat): optimistic message lost on first send in v2

When v2 chat migrated from chat.listMessages to chat.getSnapshot in
c183954, sendMessageToSession kept writing optimistic user messages
to the listMessages cache via setData. The display now reads from
snapshot.messages, so the optimistic message landed in a dead cache
and the first user message disappeared until the next snapshot poll
fetched it back from the harness (~250ms+ at 4fps polling).

Most visible on the first message of a fresh chat (page is empty
before it), but every send had the gap.

Update sendMessageToSession to write to chat.getSnapshot.setData. Two
cases:

- Existing snapshot in cache: spread, append optimistic message.
- Fresh session, no snapshot yet: seed with a minimal displayState
  (the next poll fills it in) and an array containing just the
  optimistic message so it renders immediately.

Rollback path mirrors: filter the optimistic id out of snapshot.messages
on send failure.
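The two cache-update cases and the rollback can be modeled as pure merge functions, roughly as below. This is a sketch under assumed shapes; the real code performs these merges through React Query's setData.

```typescript
// Illustrative model of the optimistic getSnapshot cache writes described
// in this commit; Snapshot/Message shapes are assumptions.
interface Message { id: string; role: "user" | "assistant"; text: string }
interface Snapshot { displayState: { status: string }; messages: Message[] }

function appendOptimistic(prev: Snapshot | undefined, msg: Message): Snapshot {
  if (prev) {
    // Existing snapshot in cache: spread and append the optimistic message.
    return { ...prev, messages: [...prev.messages, msg] };
  }
  // Fresh session, no snapshot yet: seed a minimal displayState (the next
  // poll fills it in) plus just the optimistic message.
  return { displayState: { status: "idle" }, messages: [msg] };
}

function rollbackOptimistic(
  prev: Snapshot | undefined,
  id: string,
): Snapshot | undefined {
  if (!prev) return prev;
  // Rollback mirrors the append: filter the optimistic id back out.
  return { ...prev, messages: prev.messages.filter((m) => m.id !== id) };
}
```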

* perf(chat): cut idle-poll rerender churn in v2 ChatPane

Two small changes that eliminate forced rerenders when nothing has
actually changed in the snapshot.

1. Drop observedAt from chat.getSnapshot

The server was stamping every snapshot response with observedAt:
Date.now(). Nothing on the client read it. But because the timestamp
differs every poll, React Query's structuralSharing couldn't preserve
the response object's identity — every 250ms (4fps polling) produced
a new reference, forcing every component reading `snapshot` to rerender
even when displayState and messages were byte-identical.

Removing observedAt lets structural sharing kick in: idle polls
now produce identical objects and downstream consumers don't rerender.

Also drops the field from the optimistic-seed object in
sendMessageToSession so it stays in sync with the new shape.
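A toy model makes the mechanism concrete. React Query's real structural sharing (replaceEqualDeep) is more granular than this JSON-based sketch, but the identity-preservation behavior it illustrates is the same.

```typescript
// Toy model of structural sharing: keep the previous reference when the
// new data is structurally identical. A per-poll observedAt timestamp
// makes otherwise-identical snapshots compare unequal, so the previous
// reference is never reused and every consumer rerenders.
function deepEqual(a: unknown, b: unknown): boolean {
  return JSON.stringify(a) === JSON.stringify(b); // fine for plain JSON data
}

function share<T>(prev: T | undefined, next: T): T {
  return prev !== undefined && deepEqual(prev, next) ? prev : next;
}

const base = { displayState: { status: "idle" }, messages: [] as string[] };

// With observedAt: every poll yields a new reference.
const a1 = { ...base, observedAt: 1000 };
const a2 = share(a1, { ...base, observedAt: 1250 });

// Without observedAt: an identical idle poll keeps the old reference.
const b1 = { ...base };
const b2 = share(b1, { ...base });
```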

2. Memoize the slashCommands select mapper

ChatPaneInterface.tsx had an inline `select: (commands) => commands.map(...)`
in the getSlashCommands.useQuery options. The arrow function was
recreated every render → mapper re-ran → new array reference → every
consumer of `slashCommands` (TiptapPromptEditor, executor, etc.)
rerendered even on identical input.

useCallback the mapper and pass the stable reference instead.

Together these two cuts close the cascade: poll resolves with
structurally-equal data → React Query preserves identity → consumers
see same reference → no rerender. Subsequent message-arrival polls
still rerender (because data actually changed) — that's correct.

* fix(chat): first-message flicker — keep optimistic in local state, not RQ cache

User saw the message flicker in/out on the first send: thinking, then
the message, then nothing, then thinking again, then the full result.
Cause was a stale poll clobbering the optimistic update.

The existing-session send path already does this correctly — useChatDisplay
holds an `optimisticUserMessage` in component state and merges it into
the messages output, outside React Query's cache. Stale polls can't
overwrite local component state, so the message stays visible until
the harness's response includes it.

The new-session path (sendMessageToSession, only fired on first send)
took a different route: write the optimistic message into the
chat.getSnapshot React Query cache. With 4fps polling running
continuously, a poll often resolved during the mutation with the
harness's pre-message state and clobbered the optimistic write —
message vanished, then reappeared on the next poll once the harness
had processed it.

Switch the new-session path to the same pattern: set pendingUserTurn
local state in handleSend before calling sendMessageToSession.
getVisibleMessagesWithPendingUserTurn already merges that into the
displayed list and clears it once the server's messages array
contains a matching entry. sendMessageToSession reduces to a thin
mutation wrapper — no cache writes, no rollback.

The restart flow already used pendingUserTurn for the same reason; this
just brings the new-session path into parity.
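The merge-and-clear behavior described above can be sketched like this. The function name mirrors the commit message, but the message shape and the matching rule (same role and text) are assumptions for illustration.

```typescript
// Sketch of merging a locally held pending user turn into the polled
// message list; shapes and matching logic are illustrative.
interface ChatMessage { role: "user" | "assistant"; text: string }

function getVisibleMessagesWithPendingUserTurn(
  serverMessages: ChatMessage[],
  pendingUserTurn: ChatMessage | null,
): ChatMessage[] {
  if (!pendingUserTurn) return serverMessages;
  // Once the server's list contains a matching entry the pending turn is
  // redundant; until then it stays merged in, so a stale poll resolving
  // mid-send cannot make the user's message flicker out.
  const landed = serverMessages.some(
    (m) => m.role === "user" && m.text === pendingUserTurn.text,
  );
  return landed ? serverMessages : [...serverMessages, pendingUserTurn];
}
```

Because the pending turn lives in component state rather than the React Query cache, no poll can overwrite it; the display simply stops appending it once the server catches up.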
Move ad-hoc reference notes out of the tracked plans/ tree and into
plans/local/, which is gitignored.
While importing PR-C, resolved the auto-merge leftovers from
6c2a7719a (v2 chat architecture proposals):

- ChatPaneInterface.tsx: both the chatServiceTrpc and AppRouter
  imports are needed, so both stay (the former for auth-state
  queries, the latter to derive the
  inferRouterOutputs<AppRouter>["chat"]["getSlashCommands"] type)
- ModelPicker.tsx: adopt upstream's move of getAnthropicStatus /
  getOpenAIStatus to workspaceTrpc.auth (the chatServiceTrpc side
  remains in use elsewhere, e.g. useNextEditCompletion)
- chat.ts: remove the unused node:fs / node:os / node:path imports
  that slipped in via auto-merge (no fs operations are needed on
  the slash-command path)

The bun.lock update follows the workspace version bump 1.5.10 → 1.6.2
(result of importing c885eb2).
@coderabbitai

coderabbitai Bot commented Apr 27, 2026

Review skipped — auto reviews are disabled on base/target branches other than the default branch. To trigger a single review, invoke the @coderabbitai review command.

@github-actions

github-actions Bot commented Apr 27, 2026

🧹 Preview Cleanup Complete

The following preview resources have been cleaned up:

  • ⚠️ Neon database branch
  • ⚠️ Electric Fly.io app

Thank you for your contribution! 🎉

@MocA-Love MocA-Love changed the base branch from upstream/batch-2026-04-27-pr-b to main April 27, 2026 23:10
@MocA-Love MocA-Love merged commit 041f93c into main Apr 27, 2026
12 of 13 checks passed
