Bump react from 18.3.1 to 19.2.0 #3
Closed
dependabot[bot] wants to merge 1 commit into main from
Conversation
dependabot[bot] force-pushed from 3440db1 to 60183d0
dependabot[bot] force-pushed from 60183d0 to 5aa9b58
dependabot[bot] force-pushed from 5aa9b58 to eba6293
dependabot[bot] force-pushed from eba6293 to 8fe2589
dependabot[bot] force-pushed from 8fe2589 to 0731e39
dependabot[bot] force-pushed from 0731e39 to 2ae60b9
dependabot[bot] force-pushed from 2ae60b9 to eef3c1e
Bumps [react](https://github.com/facebook/react/tree/HEAD/packages/react) from 18.3.1 to 19.2.0.
- [Release notes](https://github.com/facebook/react/releases)
- [Changelog](https://github.com/facebook/react/blob/main/CHANGELOG.md)
- [Commits](https://github.com/facebook/react/commits/v19.2.0/packages/react)

---
updated-dependencies:
- dependency-name: react
  dependency-version: 19.2.0
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] force-pushed from eef3c1e to e0b214e
Contributor
Author
Looks like react is no longer updatable, so this is no longer needed.
andreasasprou
referenced
this pull request
in andreasasprou/superset
Dec 27, 2025
…persistence

Fixes 3 bugs in terminal session restoration:
- Bug #1: Terminal blank after attach failure - auto-reconnect with retry backoff
- Bug #2: Wrong dimensions after restore - pass actual cols/rows to attachSession
- Bug #3: Cannot type after restore - lifecycle routes writes through connected PTY

Changes:
- Add SessionLifecycle class with states: disconnected→connecting→connected→reconnecting→failed→closed
- Add TmuxError classification for intelligent retry decisions
- Add -d flag to detach other clients before attaching
- Integrate lifecycle into TerminalManager (write, resize, kill, detach, cleanup)
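The disconnected→connecting→connected→reconnecting→failed→closed lifecycle described in the commit above can be sketched as a small transition table. This is a hypothetical illustration of the idea (the names `SessionLifecycleSketch`, `transitions`, and `canWrite` are assumptions, not the actual SessionLifecycle class); it also shows how routing writes through the lifecycle fixes Bug #3 — writes only reach the PTY while connected.

```typescript
// States named in the commit message above.
type LifecycleState =
  | "disconnected" | "connecting" | "connected"
  | "reconnecting" | "failed" | "closed";

// Allowed transitions (a plausible reading of the arrow chain above).
const transitions: Record<LifecycleState, LifecycleState[]> = {
  disconnected: ["connecting"],
  connecting: ["connected", "reconnecting", "failed"],
  connected: ["reconnecting", "closed"],
  reconnecting: ["connected", "failed"],
  failed: ["connecting", "closed"],
  closed: [],
};

class SessionLifecycleSketch {
  state: LifecycleState = "disconnected";

  // Returns false for an illegal transition instead of silently accepting it.
  transition(next: LifecycleState): boolean {
    if (!transitions[this.state].includes(next)) return false;
    this.state = next;
    return true;
  }

  // Bug #3 fix in miniature: only a connected session accepts writes.
  canWrite(): boolean {
    return this.state === "connected";
  }
}
```

A caller would attempt `transition("reconnecting")` on attach failure and retry with backoff, rather than leaving the terminal blank.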
This was referenced Apr 7, 2026
Kitenite
added a commit
that referenced
this pull request
Apr 17, 2026
- host-service ai-branch-name: run trailing-trim after slice so a 100-char truncation can't re-introduce a bare "." or "-" that git rejects as an invalid ref (coderabbit / cubic #2, #7).
- host-service workspace-creation.generateBranchName: reuse the existing listBranchNames helper instead of the inline git walk, which classified off the short refname and could conflate a local "origin/foo" with refs/remotes/origin/foo (coderabbit #3).
- packages/chat shared/small-model: drop the unused hasSmallModelCredentials export; only a test mock consumed it (greptile #4).
- resolveAnthropicCredential: on refresh failure, return null instead of kind:"oauth" with a stale expiresAt so callers fall back cleanly (cubic #8).
- chat-service.getAnthropicAuthStatus: log context when refresh throws instead of silently swallowing (cubic #9).
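The trailing-trim-after-slice ordering in the first bullet above can be sketched as follows. This is a hypothetical illustration, not the actual ai-branch-name code: the point is that trimming before the slice is insufficient, because the slice itself can cut mid-token and leave a trailing "." or "-" that git rejects in a ref name.

```typescript
// Sketch of the fix: slice first, THEN trim trailing "." and "-".
// (truncateBranchName and maxLen are illustrative names, not the real API.)
function truncateBranchName(name: string, maxLen = 100): string {
  const sliced = name.slice(0, maxLen);
  // Trailing-trim must run after the slice: a 100-char cut can land
  // right after a "-" or "." that was mid-word before truncation.
  return sliced.replace(/[.-]+$/, "");
}
```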
Kitenite
added a commit
that referenced
this pull request
Apr 18, 2026
…3517)

* remove 7 day rule
* Upgrade mastra
* upgrade ai
* Add mastra

* refactor(desktop): remove dead provider-diagnostics plumbing

The provider-diagnostics store was fed by callSmallModel's per-attempt reporting, which was removed when small-model tasks moved to direct AI-SDK + mastracode's AuthStorage. Nothing writes to the issue map anymore, so the clearIssue mutation, getStatuses query, and diagnosticStatus plumbing in ModelsSettings were all no-ops. Settings still surfaces "Session expired / Reconnect" via auth-status alone. ProviderIssue type collapsed from 8 codes to just "expired" to match.

* fix(auth): auto-refresh expired Anthropic OAuth tokens

Anthropic credentials were read via authStorage.get() everywhere, so mastracode's built-in refresh flow never ran. Once the 1-hour access token expired, status flipped to "Reconnect" and users had to do a full PKCE re-auth, even though a valid refresh token was already stored.

Resolvers now call authStorage.getApiKey() for oauth creds on expiry, which triggers refreshToken() and persists the refreshed credential. getAnthropicAuthStatus does the same before declaring issue: "expired". Mirrors the pattern already used for OpenAI small-model auth.

* review: address PR feedback from cubic + coderabbit + greptile

- host-service ai-branch-name: run trailing-trim after slice so a 100-char truncation can't re-introduce a bare "." or "-" that git rejects as an invalid ref (coderabbit / cubic #2, #7).
- host-service workspace-creation.generateBranchName: reuse the existing listBranchNames helper instead of the inline git walk, which classified off the short refname and could conflate a local "origin/foo" with refs/remotes/origin/foo (coderabbit #3).
- packages/chat shared/small-model: drop the unused hasSmallModelCredentials export; only a test mock consumed it (greptile #4).
- resolveAnthropicCredential: on refresh failure, return null instead of kind:"oauth" with a stale expiresAt so callers fall back cleanly (cubic #8).
- chat-service.getAnthropicAuthStatus: log context when refresh throws instead of silently swallowing (cubic #9).

* fix(chat): read auth.json directly instead of importing mastracode

Importing createAuthStorage from mastracode loads the entire CLI tree (fastembed → onnxruntime-node's 208 MB native binary) via eager top-level requires in mastracode's CJS entry. This crashed electron-vite bundling and bloated the get-small-model chunk.

getSmallModel now reads mastracode's auth.json file directly using the same path resolution logic (~/Library/Application Support/mastracode/ on macOS). Zero mastracode import, zero bundle impact. The chunk stays at 1.2 MB (just @ai-sdk/anthropic + @ai-sdk/openai).

Production build verified: compile:app succeeds, Electron main process boots with no onnxruntime error.

* docs(desktop): add manual testing plan for PR #3517

* fix api key storage slot

* fix(auth): store API keys in dedicated slot so OAuth doesn't clobber them

setApiKeyForProvider and setStoredAnthropicApiKeyFromEnvVariables now use authStorage.setStoredApiKey() (writes to "apikey:<provider>") instead of authStorage.set() (writes to the main "<provider>" slot shared with OAuth). This way connecting/disconnecting OAuth doesn't overwrite or delete a stored API key.

resolveAuthMethodForProvider falls back to hasStoredApiKey() after checking the main slot, so status correctly reports authenticated when only an API key is stored.

* fix(auth): backup/restore API keys across OAuth connect/disconnect

mastracode's resolveModel only reads API keys from the main authStorage slot (authStorage.get("anthropic")). OAuth login overwrites this slot, and disconnect removes it — losing any previously saved API key.

Fix: backup the API key to the dedicated apikey: slot before OAuth connect, restore it after disconnect. setApiKeyForProvider now writes to both slots (main for resolveModel compatibility, apikey: for backup). resolveAuthMethodForProvider checks both. Applies to both Anthropic and OpenAI providers.

* chore: add upstream PR reference to auth workaround

Point to mastra-ai/mastra#15483 so the backup/restore code can be removed once upstream lands and we bump mastracode.

* refactor(desktop): derive settings provider action from status

Replace the cascade of if/else + canDisconnect flag with a single getProviderAction(status) → connect | reconnect | logout | null. Fixes "Active" badge + "Connect" button showing simultaneously when authenticated via API key.

* fix(desktop): always show Logout when provider is active

Active providers now always show a Logout button. Clears OAuth or API key depending on authMethod — no more "Active" badge with no way to disconnect.

* fix(desktop): simplify OpenAI OAuth dialog + auto-open browser

Match Anthropic dialog's layout: remove the raw OAuth URL display and "Tip" block, auto-open the browser on OAuth start. Change "Back" to "Cancel" for consistency.

* refactor(desktop): unify OAuth dialogs into shared OAuthDialog

Extract shared OAuthDialog component with provider config object. AnthropicOAuthDialog and OpenAIOAuthDialog become thin wrappers that pass provider-specific labels and options.

* fix(desktop): show 'Copied!' feedback on Copy URL button

* refactor(desktop): merge provider account + API key into single card

Each provider section now renders AccountCard + ConfigRow inside one rounded card with a divider, instead of two separate cards. Removes the standalone "API Keys" collapsible section.

* refactor(desktop): compact OAuth row in provider settings card

OAuth row is now a single inline row (label + status + action) instead of a stacked AccountCard. Both providers share the same 2-row card layout: OAuth row + API key row with divider.

* fix(desktop): contextual buttons in provider settings

Connect is now primary (filled). Save only shows when there's input. Clear only shows when a key is saved. Removes visual noise from empty-state provider cards.

* ui(desktop): add provider icons to settings section headers

* ui(desktop): show 'Not connected' badge instead of subtitle for disconnected providers

* ui: remove redundant disconnected subtitle

* ui: remove subtitle text from OAuth rows

* chore: remove dead AccountCard + getProviderSubtitle

* docs: update test plan to match current UI

* chore: move shipped plans to done/

---------

Co-authored-by: AviPeltz <aj.peltz@gmail.com>
Spectralgo
referenced
this pull request
in Spectralgo/spectralSet
Apr 23, 2026
…elper + persist v10 (ss-ain5)

Wave A #3: wires a store path that opens Gas Town surfaces (Today / Mail / Convoys / Agents) as workspace panes.
- utils.ts: add createGastownPane(tabId, kind, options?) mirroring createChatPane — attaches per-kind optional bag only for its matching kind.
- store.ts: add addGastownPane(workspaceId, options) mirroring addChatTab. Auto-names duplicates in a workspace as "Today" / "Today 2" / "Today 3". Emits posthog panel_opened with panel_type=<kind>.
- types.ts: export GastownPaneKind, add AddGastownPaneOptions + TabsStore addGastownPane signature.
- persist: bump version 9 → 10 (no schema changes — records the crossover point where gastown-* pane kinds became valid).
- utils.test.ts: unit tests for createGastownPane covering default names, kind-specific bag attachment, and cross-kind isolation.

Dispatch and sidebar wiring live in ss-lmlv and ss-xblb respectively.
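The auto-naming rule described above ("Today" / "Today 2" / "Today 3") amounts to a small helper. The sketch below is a hypothetical reading of that behavior (the function name `nextPaneName` is an assumption, not addGastownPane's actual internals): the base name is used unsuffixed if free, otherwise the lowest free numeric suffix starting at 2.

```typescript
// Hypothetical sketch of the duplicate-naming rule: "Today", then
// "Today 2", "Today 3", ... within one workspace's existing pane names.
function nextPaneName(base: string, existing: string[]): string {
  if (!existing.includes(base)) return base;
  let n = 2;
  // Walk upward until a free suffix is found; fills gaps deterministically.
  while (existing.includes(`${base} ${n}`)) n++;
  return `${base} ${n}`;
}
```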
Kitenite
added a commit
that referenced
this pull request
Apr 30, 2026
Implements Open Decision #3 from the implementation plan: detect crash loops and stop respawning the daemon when something is fundamentally broken, instead of burning CPU on a forever-loop respawn.

Behavior:
- Daemon exits we initiated (coordinator.stop) don't count toward the crash counter — tracked via a `stopping` set.
- Unexpected exits add a timestamp to the per-org crashTimes list. Older-than-60s timestamps are dropped on each accounting.
- Up to 3 crashes within 60s → auto-respawn.
- The 4th crash within the window → circuit OPEN. No more respawns until clearCrashCircuit(orgId) is called from the UI's "retry" affordance, or the desktop app restarts.
- ensure() fails fast with a clear error message when the circuit is open, instead of trying to spawn-and-time-out repeatedly.

Plumbing for the UI surface (telemetry + retry affordance) lands in a follow-up commit.
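The 3-in-60s window accounting described in this commit can be sketched as follows. This is an illustrative model, not the coordinator's actual code (the class name, the `"respawn" | "circuit-open"` return shape, and the explicit `now` parameter are assumptions made for testability): timestamps older than 60s are pruned on each crash, up to three crashes in the window respawn, and the fourth opens the circuit until `clearCrashCircuit` runs.

```typescript
// Sketch of the per-org crash circuit: 3 crashes in 60s respawn,
// the 4th opens the circuit.
class CrashCircuitSketch {
  private crashTimes = new Map<string, number[]>();
  private open = new Set<string>();
  private readonly windowMs = 60_000;
  private readonly maxCrashes = 3;

  // `now` is injected (ms) so the window logic is deterministic to test.
  recordCrash(orgId: string, now: number): "respawn" | "circuit-open" {
    // Drop older-than-60s timestamps on each accounting.
    const recent = (this.crashTimes.get(orgId) ?? []).filter(
      (t) => now - t < this.windowMs,
    );
    recent.push(now);
    this.crashTimes.set(orgId, recent);
    if (recent.length > this.maxCrashes) {
      this.open.add(orgId); // circuit OPEN: no more respawns
      return "circuit-open";
    }
    return "respawn";
  }

  isOpen(orgId: string): boolean {
    return this.open.has(orgId);
  }

  // The UI's "retry" affordance would call this.
  clearCrashCircuit(orgId: string): void {
    this.open.delete(orgId);
    this.crashTimes.delete(orgId);
  }
}
```

Intentional exits (the `stopping` set in the real coordinator) would simply never call `recordCrash`.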
Kitenite
added a commit
that referenced
this pull request
May 1, 2026
… restarts (#3896) * feat(pty-daemon): standalone PTY daemon package (Phase 1, skeleton) New package @superset/pty-daemon implementing the long-lived PTY-owning process described in apps/desktop/plans/20260429-pty-daemon-implementation.md. This PR adds the daemon in isolation; host-service integration lands in a follow-up PR so both can be reviewed independently. What's in: - Versioned Unix-socket protocol (length-prefixed JSON frames; hello/ack handshake; open/input/resize/close/list/subscribe/unsubscribe ops). - Pty wrapper around node-pty with dim validation. - SessionStore: in-memory map + 64KB ring buffer per session. No persistence — explicitly out of scope per the v1 lessons. - Server: AF_UNIX SOCK_STREAM accept loop, file-mode 0600 auth boundary, per-connection subscription set, output/exit fan-out. - Handlers: pure functions over (store, conn, msg). Stateless from the client's perspective. - main.ts entrypoint: argv parsing, signal handling, graceful shutdown. Runtime: Node ≥ 20, not Bun. Verified during implementation that node-pty 1.1's master fd setup is incompatible with Bun 1.3's tty.ReadStream (onData/onExit silently never fire). Daemon ships as a Node script in the desktop app bundle; host-service stays on Bun. Tests: 24 unit tests under bun test (protocol framing, SessionStore, handlers with a fake spawn), 6 integration tests under node --test spawning real shells through real Unix sockets. All green. What's NOT in (separate PRs): - Host-service DaemonClient + terminal.ts refactor + manifest adoption. - Daemon-upgrade fd inheritance handoff (Phase 2). - Renderer / WS / tRPC changes (none required; the renderer is unchanged). * feat(pty-daemon): control-plane integration suite + production build Adds an exhaustive control-plane integration test that exercises every usage pattern host-service can throw at the daemon end-to-end (real shells, real Unix socket), plus the production build pipeline matching the host-service pattern. 
Test coverage (28 integration tests, all passing in ~2.5s): - Handshake variants (non-hello first, unsupported version, mutual picking, duplicate hello) - Session lifecycle (bad dims, duplicate id, ENOENT on missing, instant-exit, SIGKILL hung shell) - I/O patterns (resize during streaming, burst output, multi-byte UTF-8) - Multi-client fan-out (two subscribers, unsubscribe stops delivery, dropped subscriber doesn't crash) - Detach + reattach (late subscriber gets replay, full disconnect → new conn → continues live) - Hostile input (malformed frames, oversized frames, input on dead session) - Concurrency (20 sessions on one conn, 10 conns in parallel) - Server shutdown (in-flight clients disconnect cleanly) - Frame splitting across TCP chunks Reusable test client extracted to test/helpers/client.ts (waitFor, collect, sendRaw, onClose). Found and fixed during the suite: Server.close() now kills owned PTYs synchronously so the daemon process can actually exit (open master fds were keeping the event loop alive). Aligns with the v1-lessons "synchronous teardown only" rule. Production build: build.ts mirrors packages/host-service/build.ts — Bun.build target=node, externalizes node-pty, emits dist/pty-daemon.js that runs under Electron's bundled Node via process.execPath. No new runtime in the desktop bundle. Bun is build-only; same shape as host-service today. * feat(host-service): DaemonClient — Unix-socket client for pty-daemon New module packages/host-service/src/terminal/DaemonClient/. 
Single long-lived connection to pty-daemon, typed protocol API: - connect() + handshake, exposing version + protocol number - open / close / list as request/response promises - input / resize as fire-and-forget - subscribe(id, { replay }, { onOutput, onExit }) with multi-local- subscriber fan-out from one wire subscription; unsubscribe returned - onDisconnect(cb) for daemon-crash signaling - dispose() for clean shutdown Failure model is intentionally dumb: connection-level errors surface via onDisconnect; the desktop coordinator is responsible for respawning the daemon and host-service can reconnect by constructing a new DaemonClient. No in-band reconnect logic. Adds @superset/pty-daemon as a workspace dependency (host-service was already on node-pty 1.1; this layers the daemon protocol on top). Enables allowImportingTsExtensions in host-service tsconfig because the pty-daemon package's exports map points at .ts source files (Node ESM requires explicit extensions). Tests: 5 integration tests against a real Server (node --test): - connect + handshake exposes daemon version - open + subscribe + receive output + close - input forwarded; resize updates dims - multiple local subscribers fan out from one wire subscription - disconnect callback fires when daemon goes away Avoids parameter property shorthand in the constructor — Node's --experimental-strip-types doesn't allow it. Doesn't touch terminal.ts yet — that's the next commit on this branch. * feat(desktop): pty-daemon coordinator + manifest + main entry Sibling of HostServiceCoordinator that spawns/adopts the long-lived pty-daemon and feeds its socket path to host-service via SUPERSET_PTY_DAEMON_SOCKET. PTYs now live in a process whose lifetime is decoupled from host-service, so host-service restarts don't kill user shells. Pieces: - apps/desktop/src/main/lib/pty-daemon-manifest.ts — sibling of host-service-manifest.ts. 
Manifest at $SUPERSET_HOME_DIR/host/{orgId}/pty-daemon-manifest.json with pid, socketPath, protocolVersions, daemonVersion, startedAt. - apps/desktop/src/main/lib/pty-daemon-coordinator.ts — ensure() spawns detached child or adopts existing daemon (PID alive AND socket connectable). Same spawn shape as host-service: process.execPath + bundled script, openRotatingLogFd for stdio, writes manifest after socket-ready check. - apps/desktop/src/main/pty-daemon/index.ts — Electron main entry that imports @superset/pty-daemon's Server and provides argv/signal glue. Sibling of src/main/host-service/index.ts. - electron.vite.config.ts: register pty-daemon as a main entry so it bundles to dist/main/pty-daemon.js next to host-service.js. - host-service-coordinator: instantiate PtyDaemonCoordinator, ensure daemon up before each host-service spawn, pass its socket path to host-service via env. buildEnv signature gains a ptyDaemonSocket parameter. - tsconfig: enable allowImportingTsExtensions in apps/desktop and packages/host-service so transitively imported pty-daemon source type-checks (Node ESM requires explicit .ts extensions). What works after this commit: - Daemon spawns and listens on a 0600 Unix socket per organization. - host-service receives SUPERSET_PTY_DAEMON_SOCKET in its env. - Adoption: if a previous daemon is alive + reachable, reuse it. - Stale daemons (PID alive, socket gone) get killed and respawned. What does NOT work yet (next commit on this branch): - terminal.ts in host-service still calls pty.spawn directly. The daemon spawns but its DaemonClient isn't wired into terminal session creation. That's the load-bearing refactor; landing it separately so the coordinator change above can be reviewed in isolation. * feat(host-service): route terminal sessions through pty-daemon The load-bearing change. terminal.ts no longer calls node-pty's spawn; PTY ownership lives in pty-daemon and host-service is a remote control. 
After this commit, killing host-service does not kill user shells — the daemon's session map and ring buffer survive the restart, and a fresh host-service connects to the existing daemon and re-subscribes with replay. What changed: - New: src/terminal/daemon-client-singleton.ts. Lazy-initialized DaemonClient pulling SUPERSET_PTY_DAEMON_SOCKET from env. Surfaces daemon-disconnect via console.error; the desktop coordinator is responsible for respawning the daemon and restarting host-service. - terminal.ts: replace pty.spawn / pty.onData / pty.onExit with daemon.open + daemon.subscribe(replay:true). The PTY field becomes a thin DaemonPty facade exposing write/resize/kill/onData/onExit unchanged for callers (teardown.ts, etc). - createTerminalSessionInternal becomes async (await daemon.open). All callers updated: trpc/router/terminal launchSession, workspace-creation/setup-terminal startSetupTerminalIfPresent, runtime/teardown runTeardownScript. - session.unsubscribeDaemon is called on disposeSession to release the primary subscription cleanly. - DaemonPty.onData / onExit register additional subscribers via daemon.subscribe; daemon's multi-subscriber fan-out makes this safe. Tests: - pty-daemon: 24 bun unit + 28 node integration → all green - host-service: 37 bun unit + 5 node DaemonClient integration → all green (existing terminal logic still passes its tests; the daemon is wired in but mocked out at the unit level) - Workspace-wide tsc clean across all 27 packages. Build/test plumbing: - DaemonClient.test.ts → DaemonClient.node-test.ts so bun test won't pick it up (node-pty doesn't work under Bun). packages/host-service: new "test:integration" script invokes node. - tooling/typescript/base.json: enable allowImportingTsExtensions globally. Multiple packages now transitively pull in @superset/pty- daemon, whose source uses .ts extension imports (Node ESM requires explicit extensions for directory-style resolution). 
Roll back the per-package opt-ins added earlier in this branch. What still needs verifying (separate commit/PR): - End-to-end smoke: launch desktop, open a terminal, kill host-service, observe shell survives and renderer reattaches via existing exponential-backoff WebSocket reconnect. The infrastructure is in place; this commit doesn't run the e2e itself. * fix(desktop): make pty-daemon spawn failure non-fatal for host-service If the daemon can't start (dev build without dist/main/pty-daemon.js, node-pty native module mismatch, etc.), the prior commit took host- service down with it — workspaces, git, chat, all unreachable. That's the wrong coupling: the daemon's job is terminal survival, not gating the rest of the API. Now: catch the daemon-spawn error, log it loudly with the cause, and spawn host-service with SUPERSET_PTY_DAEMON_SOCKET="" so terminal ops fail with a specific message ("pty-daemon is not available: ...") and everything else keeps working. The user can still use the app while the daemon issue is investigated. This unblocks the "workspace on sidebar but not found in db" symptom seen during dev: the sidebar shows entries from cloud / local-db, but host-service was never running so its DB queries return nothing. * debug(desktop): surface daemon spawn failures with log tail + child exit code When the daemon fails to come up, the prior coordinator just said "socket did not become ready within 5000ms" with no idea what went wrong. Now: 1. Refuse to spawn if scriptPath doesn't exist (e.g. dist/main/pty- daemon.js missing because electron-vite hasn't bundled the new entry yet). The error tells the user to restart the dev server. 2. Listen for child early-exit; include exit code or signal in the timeout error. 3. On timeout, read the daemon's log file (tail 2 KB) and include it in the thrown error. 4. Console.log the spawn args before fork so the dev terminal shows exactly what's being launched. This makes the next failure self-diagnosing instead of opaque. 
* fix(desktop): allow .env / shell to provide SUPERSET_PTY_DAEMON_SOCKET When the coordinator's own daemon spawn fails, we no longer overwrite the env var with empty string — we leave whatever the parent process has. That makes ".env workaround" actually work: run a daemon manually, export SUPERSET_PTY_DAEMON_SOCKET=/path/to/sock in your shell or .env, and host-service will pick it up and terminals will function again until the spawn-side bug is fixed. * fix(desktop): use short /tmp path for pty-daemon socket (Darwin sun_path) In dev, SUPERSET_HOME_DIR resolves to <worktree>/superset-dev-data, which made the daemon socket path 159+ characters: /Users/.../worktrees/1c99c8eb-.../elastic-lens/superset-dev-data/host/<36-char-orgId>/pty-daemon.sock Darwin's sun_path is 104 bytes — kernel rejects listen() with EINVAL ("invalid argument") before the daemon ever gets to write to its log. Production paths are shorter but still uncomfortably close to the limit. Move the socket file to os.tmpdir() with a 12-char SHA256 hash of the org id: /var/folders/<dev-hash>/T/superset-ptyd-<12hex>.sock (~80 chars) Owner-only file mode (0600) is the security boundary, set by the daemon's Server.listen() — the directory permissions don't matter. Manifest still lives at $SUPERSET_HOME_DIR/host/<orgId>/pty-daemon- manifest.json with the socket path recorded inside; adoption logic reads it from there. * fix(host-service): adopt existing daemon sessions on host-service restart Headline bug from the smoke test: after host-service restarts, the renderer reconnects, host-service has an empty in-memory sessions Map, calls daemon.open(id) blindly, daemon already has the session → "session already exists" → renderer retries → tight loop, terminal unusable. The whole point of the daemon is for sessions to survive host-service restarts. The fix: - In createTerminalSessionInternal, wrap daemon.open in an inner try/catch. 
On "session already exists", call daemon.list(), find the existing session by id, treat it as adoption: reuse the pid, skip workspace-spawn args, set isAdopted=true. - For adopted sessions: shellReadyState starts as "ready" (the shell is already past the OSC 133;A marker that originally fired in the prior host-service lifetime), and initialCommandQueued=true so we don't re-fire the initial command. - The daemon's existing fan-out + ring-buffer behavior already handles the data plane: subscribe(replay:true) below pulls the buffered output and continues live streaming. Tests we should have written before shipping: packages/pty-daemon/test/control-plane.test.ts gains a "cross-client continuity (host-service restart simulation)" suite with 4 cases: 1. Client B finds session A's id via list after A disconnects. 2. Re-opening an existing id returns EEXIST (the trigger we depend on). 3. Client B subscribes-with-replay to A's session and gets buffered output + live stream from the still-living shell. 4. Daemon `list` returns sessions whose only client just dropped (the daemon must NOT garbage-collect on last-client-disconnect). packages/host-service/src/terminal/DaemonClient/DaemonClient.node-test gains an "adoption flow" test that drives the exact sequence host- service does after restart: open → drop → re-open (errors) → list → subscribe(replay:true) → see prior output AND new input flow to the still-living shell. Total: 5 new integration tests covering the hot path that just shipped broken to production. All 35 daemon-touching tests pass under node --test. * test(pty-daemon): replay-on-exited-session edge case Sister to the "host-service restart" suite: covers the case where the shell exited during the host-service downtime. New host-service subscribes-with-replay → must get the buffered output of the dying shell and observe via list() that the session has alive:false. Without this, the renderer hangs waiting for output that will never come. 
The exit-event side is documented in the test as best-effort — daemon's wireSession fires onExit once at the moment the shell dies; late subscribers see the buffer plus alive:false in list. host-service already supplements with a list check (the adoption flow looks for alive:true), so this test asserts the contract host-service depends on. * test(host-service): full E2E adoption test under Electron-as-Node End-to-end coverage of createTerminalSessionInternal's adoption path against the real stack: - Real pty-daemon Server (in-process, spawning real /bin/sh). - Real SQLite host DB (better-sqlite3) with workspace/project rows. - Test-only escape hatch __resetSessionsForTesting() to simulate a host-service process restart in-process. The test asserts the headline property: after host-service restart, re-calling createTerminalSessionInternal with the same terminalId returns a session with the SAME shell pid (proving adoption, not respawn) and new input flows to the still-living shell. Why it runs under Electron-as-Node, not raw `node`: host-service uses better-sqlite3, which is compiled against Electron's Node ABI for production. Running the test under Electron-as-Node matches that ABI so the production native module loads cleanly. The bundled Electron binary is in node_modules anyway (via the electron npm package), so this adds no test-time dependency. Mechanical changes: - Convert relative imports to use explicit `.ts` extensions across the host-service modules transitively reachable from terminal.ts (db, events, ports, runtime/filesystem) and across @superset/port- scanner. Required because Node ESM doesn't allow extension-less directory imports; needed for the test to load under --experimental-strip-types. Bun tolerates either form, so production is unaffected. Workspace tsc --noEmit clean across all 27 packages. 
- New `bun run test:e2e` script (packages/host-service/scripts/ test-e2e.ts) that resolves the workspace's Electron binary and runs the test under it with the right env. - New `__resetSessionsForTesting()` export in terminal.ts (test-only escape hatch, documented as such). Test results: - ✔ fresh open spawns a shell via the daemon - ✔ adopts existing daemon session after host-service restart simulation - ✔ adopted session keeps listed/exited bookkeeping 3/3 pass in ~550ms. This is the test that would have caught the "session already exists" loop bug shipped earlier in this branch. * fix(pty-daemon) + test(host-service): three more edge cases Found while expanding the e2e adoption suite to cover paths the inventory survey turned up: 1. **Daemon-side bug**: handleOpen was rejecting EEXIST on dead sessions (alive:false) too. dispose-then-recreate-with-same-id tight-looped because the daemon kept the session row around (kept for late-subscriber replay) but treated it as a collision. Fix: handleOpen now treats already-exited entries as recyclable — drops the dead entry and lets the spawn proceed. Live shells still get EEXIST so host-service drives the adoption-via-list path. handleClose stays as-is (the natural-exit replay path that late subscribers depend on still works). 2. **Test gap**: adopted session does NOT re-fire initialCommand. This would have been catastrophic for setup.sh terminals — every host-service restart would re-run setup. Verifies the initialCommandQueued: isAdopted shortcut from the original fix. Sentinel file + mtime check. 3. **Test gap**: adoption when the original workspace row is gone returns a clear error (not crash, not loop). Race: user deletes workspace cloud-side while host-service is down; daemon still has the live session; renderer reconnects. Must surface "Workspace worktree not found" cleanly. 4. **Test gap**: dispose then re-create with the same id works without zombie state. Catches the daemon-side bug above. 
Asserts the second create gets a different shell pid (a real fresh spawn, not adoption of the dead session).

Final test counts:

- pty-daemon: 24 bun unit + 30 control-plane (node)
- host-service: 5 DaemonClient (node) + 6 e2e adoption (Electron-as-Node)

Total: 65 tests across 4 layers.

* docs(desktop): pty-daemon implementation report

Concise audit of the pty-daemon-host-integration branch against the implementation plan. Calls out:

- 18 plan decisions correctly implemented
- 6 deviations from the plan (each with a DECISION marker for accept-as-written / revert-to-plan / defer)
- 7 explicit plan items not yet done (telemetry, crash supervision, real kill-9 tests, Linux verification, disconnect → close WS, /tmp sweep)
- 5 decisions I made that weren't in the plan (worth documenting)
- 1 wrong claim from a prior summary corrected (app-quit handling — the daemon SHOULD outlive app quit, mirroring host-service)

Ends with a 5-question summary the reviewer can answer in one pass to sign off this report and update the plan.

* fix(host-service): close terminal WS streams on daemon disconnect

Without this, a daemon crash leaves the renderer's WebSocket sockets open while host-service's DaemonClient is dead. Input/resize silently fail, and the renderer thinks the terminal is alive. Now: daemon-client-singleton emits an onDaemonDisconnect event; terminal listens and closes every WS socket with code 1011. The renderer's existing exponential-backoff reconnect kicks in. On reconnect, host-service rebuilds the DaemonClient (next getDaemonClient call), and the adoption-via-list path re-attaches to live sessions on the respawned daemon.

Two related drive-bys from PR review:

- daemon-client-singleton catches connect failure to dispose the partially-initialized client (was leaking on connect failure).
- disposeDaemonClient now also handles the in-flight connecting promise.
- Fire-and-forget WS handler checks ws.readyState before adding to the session set after the daemon-open await — prevents adding a closed WS to broadcast.

* feat(desktop): pty-daemon crash supervision (3-in-60s circuit breaker)

Implements Open Decision #3 from the implementation plan: detect crash loops and stop respawning the daemon when something is fundamentally broken, instead of burning CPU on a forever-loop respawn.

Behavior:

- Daemon exits we initiated (coordinator.stop) don't count toward the crash counter — tracked via a `stopping` set.
- Unexpected exits add a timestamp to the per-org crashTimes list. Older-than-60s timestamps are dropped on each accounting.
- Up to 3 crashes within 60s → auto-respawn.
- The 4th crash within the window → circuit OPEN. No more respawns until clearCrashCircuit(orgId) is called from the UI's "retry" affordance, or the desktop app restarts.
- ensure() fails fast with a clear error message when the circuit is open, instead of trying to spawn-and-time-out repeatedly.

Plumbing for the UI surface (telemetry + retry affordance) lands in a follow-up commit.

* feat(desktop): pty-daemon telemetry events

Wires the coordinator-side events the implementation plan called out. Uses the existing main/lib/analytics track() helper that already feeds PostHog, with telemetry consent gating.
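The 3-in-60s window accounting from the crash-supervision commit above can be sketched like this. A minimal sketch only — `CrashCircuit`, `recordCrash`, and `clear` are illustrative names, not the actual supervisor API:

```typescript
// Sliding-window crash circuit breaker: up to 3 crashes in 60s allow a
// respawn; the 4th within the window opens the circuit until cleared.
class CrashCircuit {
  private crashTimes = new Map<string, number[]>();
  private open = new Set<string>();

  constructor(
    private readonly maxCrashes = 3,
    private readonly windowMs = 60_000,
  ) {}

  /** Record an unexpected exit. Returns true if a respawn is still allowed. */
  recordCrash(orgId: string, now = Date.now()): boolean {
    if (this.open.has(orgId)) return false;
    // Drop timestamps older than the window on each accounting.
    const times = (this.crashTimes.get(orgId) ?? []).filter(
      (t) => now - t < this.windowMs,
    );
    times.push(now);
    this.crashTimes.set(orgId, times);
    if (times.length > this.maxCrashes) {
      this.open.add(orgId); // 4th crash within the window → circuit OPEN
      return false;
    }
    return true;
  }

  isOpen(orgId: string): boolean {
    return this.open.has(orgId);
  }

  /** The UI's "retry" affordance calls this to permit respawns again. */
  clear(orgId: string): void {
    this.open.delete(orgId);
    this.crashTimes.delete(orgId);
  }
}
```

ensure() would check `isOpen(orgId)` up front and fail fast with a clear error rather than spawn-and-time-out.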
Events emitted:

- pty_daemon_spawn { organizationId, pid, socketPath }
- pty_daemon_adopt { organizationId, pid, ageSeconds }
- pty_daemon_spawn_failed { organizationId, reason, timeoutMs, earlyExitCode, earlyExitSignal }
- pty_daemon_crash { organizationId, exitCode, crashesInWindow, windowSeconds, ageSeconds }
- pty_daemon_circuit_open { organizationId, crashesInWindow }

Known gap (not in this commit): the host-service-side events from the plan — pty_daemon_session_open, pty_daemon_session_exit, host_service_restart_sessions_preserved (the headline metric) — need host-service → desktop-main IPC, since host-service runs as a separate Node process with no PostHog client of its own. Tracked separately; this doesn't block the operational signals (spawn/adopt/crash/circuit-open) from being available for monitoring.

* test(pty-daemon): real SIGKILL recovery test

Adds a test that spawns the bundled daemon as a child process, sends it a real SIGKILL (no Server.close, no graceful shutdown, no exit-event broadcast), and asserts that connected clients see the socket close cleanly without hanging. Different from the existing control-plane Server.close test, which exercises the cooperative shutdown path. Real production crashes don't go through Server.close — this test covers the actual path. Wired into the `bun run test:integration` script.

* refactor(host-service): own pty-daemon supervision

Move the pty-daemon supervisor (spawn / adopt / restart / version-detect / crash-circuit / manifest) from the desktop main process into host-service. The daemon is supervised by host-service so it can be deployed independently of Electron — that's the v2 thesis. The daemon outlives host-service crashes via detached spawn + manifest adoption (unchanged). The renderer reads daemon state through `workspaceTrpc.terminal.daemon.*` instead of `electronTrpc.ptyDaemon.*`.
Telemetry track() calls become structured `console.log` lines (JSON with `component: "pty-daemon-supervisor"`) — host-service has no PostHog plumbing yet.

Boot is fire-and-track: host-service kicks off `ensureDaemon(orgId)` at startup without awaiting; terminal request handlers `await waitForDaemonReady()` before using the supervisor's socket path. Non-terminal ops are unaffected if the daemon takes time to come up.

The `SUPERSET_PTY_DAEMON_SOCKET` env-var contract from desktop → host-service goes away in production. Kept as a test escape hatch for the in-process adoption integration test.

Tests: 21 supervisor unit tests moved to host-service. The 3 desktop real-spawn version-roundtrip tests are dropped — equivalent coverage already lives at the daemon package boundary.

Plan: apps/desktop/plans/20260430-pty-daemon-host-service-migration.md (in the dull-protocol design-doc branch).

* docs(host-service): daemon supervision reference

Architecture/reference doc colocated with the supervisor code. Replaces the migration plan that was the input to this work — describes the end state for future contributors.

* fix(host-service): DaemonClient lifecycle hardening

Addresses PR review on the daemon transport. Four related issues, all in the same family of "subtle bugs that bite under load":

- **Request-level timeouts** for open/close/list (15s/5s/5s). Without these, a live-but-stuck daemon (e.g. blocked in node-pty.spawn) hangs callers indefinitely — only a full disconnect would unblock them.
- **list() filtered to non-session error frames.** Previously any error could settle a pending list, so a concurrent error from a session could resolve a list() call with the wrong reply.
- **Handshake failure tears down the socket.** A rejected handshake left the open socket and its listeners alive — leaked resources across retries. connect() now destroys + nulls on throw.
- **Decode failure hard-closes the transport.** A protocol decode error called onClose() but didn't destroy() the socket, so the connection could keep delivering frames after local teardown.

200 host-service tests pass (the 1 unrelated `pull-requests` failure is preexisting on the branch baseline).

* fix(host-service): subscribe replay assertion + daemon CLI polish

- DaemonClient.subscribe now throws if a second subscriber requests replay:true. The daemon's ring buffer is delivered once, on the first subscribe; later subscribers can't get historical data this way and used to silently miss it. Loud-fail the surprising case so callers pick a server-side replay path instead. Updated the existing fan-out test to use replay:false on the second subscriber (the right value for that use case anyway).
- pty-daemon main.ts: validate that --buffer-bytes is a positive integer; wrap the shutdown handler in try/finally with a re-entry guard so a second SIGINT/SIGTERM during graceful close doesn't double-call server.close(), and the process always exits deterministically.

* test(host-service): supervisor + tRPC + Electron-coupling coverage

Adds 16 new tests across four files to close the test gaps for the pty-daemon migration. Most are bun unit tests; the supervisor integration test runs under node:test because the supervisor uses process.execPath to spawn the daemon (which must be node, not bun).

- DaemonSupervisor.node-test.ts (5 real-spawn scenarios): fresh spawn, cross-instance adoption, version-drift detection on adoption, user-restart kills + respawns, auto-respawn after SIGKILL.
- singleton.test.ts (6 cases): getSupervisor identity, fire-and-track bootstrap doesn't await, idempotent startDaemonBootstrap, waitForDaemonReady kicks off lazy bootstrap, failed bootstrap is retryable.
- terminal.daemon.test.ts (4 cases): tRPC procedure wiring against a stub supervisor — UNAUTHORIZED gating, getUpdateStatus delegation, listSessions awaits bootstrap before delegating, restart wiring.
- no-electron-coupling.test.ts (1 case): asserts host-service source has zero Electron imports/globals/APIs. Substitutes for a true headless smoke test until native-addon distribution is solved (better-sqlite3, node-pty, and @parcel/watcher are bundle-external and currently expect Electron's resolution path).

Also exports __resetSupervisorForTesting from src/daemon/index.ts so tests can reset the singleton between runs, and registers the new node-test in the test:integration script.

The total host-service test suite is now 211 pass / 1 fail (the failing one is a preexisting pull-requests test unrelated to the migration).

* feat(host-service): kill pty-daemon on dev-mode shutdown

Per migration plan D5: in dev mode (NODE_ENV !== production), the host-service shutdown handler now stops the supervised daemon before exit. Production still keeps the daemon detached so PTYs survive host-service restarts (the original v2 thesis). Lets dev iteration on daemon code reset cleanly without manually killing the daemon between cycles.

* fix(desktop): restore pty-daemon bundle target for Electron

The supervisor's `sideBySide` path resolution expects pty-daemon.js next to host-service.js in the same dist directory. The Electron deploy bundles host-service into apps/desktop/dist/main/ via electron-vite, and that pipeline still needs an entry to bundle the daemon alongside. Restores `apps/desktop/src/main/pty-daemon/index.ts` as a thin shim: it imports Server from @superset/pty-daemon (a workspace dep), parses argv, and handles signals — that's it. The daemon implementation still lives entirely in the package. Headless deploys can spawn the package's own main.ts directly via the supervisor's workspace-dist fallback path.

* fix(desktop): wrap V2SessionsSection in WorkspaceClientProvider

The Settings → Terminal route lives outside any WorkspaceClientProvider (those are per-workspace).
Without one, workspaceTrpc hooks fall through to electron-trpc, which has no `terminal.daemon` namespace (we removed the desktop-side proxy in the migration). The renderer silently failed with "no procedure on path terminal.daemon.*". V2SessionsSection now mounts its own WorkspaceClientProvider keyed to the active org's host URL from LocalHostServiceProvider. Hooks now reach host-service over HTTP correctly.

Also adds light startup logging on host-service:

- `[host-service] starting (org=..., port=..., NODE_ENV=...)`
- `[supervisor] kicking off bootstrap for org=...`
- `[supervisor] bootstrap OK for org=... pid=... version=... [update pending]`
- `[supervisor] bootstrap failed for org=...`

These survived the migration debugging session and are useful as production startup-trace lines. Per-call procedure logs were stripped to keep noise low.

* fix(host-service): correct sideBySide daemon-script path resolution

resolveSupervisorScriptPath was walking two extra levels (`..`, `..`) when looking for pty-daemon.js next to host-service.js, which in the electron-vite bundle resolved to apps/desktop/pty-daemon.js (doesn't exist). The bundle emits both files in the same dist/main/ directory, so the path is just `path.resolve(here, "pty-daemon.js")`.

Manifested as "[pty-daemon] script not found at apps/pty-daemon/dist/pty-daemon.js" when triggering Restart daemon from Settings — the sideBySide check failed and the supervisor fell through to the workspace-source fallback path, which doesn't apply in a bundled deploy.

* fix(pty-daemon): delete sessions on PTY exit (no accumulation)

Server.onExit marked sessions as exited and fanned out the exit event but never deleted them from the SessionStore. A comment claimed "we delete on next list/close" but neither path did. Result: every closed terminal pane left a permanent row in the daemon's map — the list reply inflated and memory grew unbounded over time. Now: delete the session row immediately after fanning out the exit event.
Also clear matching subscriptions on the live conns so they don't carry a stale id forward.

Tradeoff: a late subscriber that connects after exit (e.g. host-service restarting *during* the exit window) gets ENOENT instead of buffered output + exit event. The renderer's xterm.js already has whatever was rendered before disconnect — what's lost is just the "Process exited with code N" footer for that narrow window. Accepted per the project preference for simplest invariants.

Updated tests: dropped subscribe-with-replay-on-exited (its premise no longer holds) and replaced it with a non-accumulation assertion + an ENOENT expectation on post-exit input. EEXITED is no longer a returned code (still defined in the protocol for forward-compat).

* feat(host-service): adopted-daemon liveness check + dev-mode log piping

Two related dev-quality fixes uncovered during manual QA:

1. Adopted daemons aren't tracked by `child.on("exit")` — the supervisor only attaches that handler to daemons it spawned. When an adopted daemon dies externally (kill -9, OOM, etc.) the supervisor's `instances` map carries a stale entry forever: `getSocketPath` returns a socket nobody's listening on, and terminal ops fail with ECONNREFUSED until something forces a restart. Fix: poll `process.kill(pid, 0)` every 2s for adopted PIDs. On detected death, clear the instance + manifest so the next `ensure()` respawns. Added an integration test: "detects when an adopted daemon dies externally".
2. host-service and pty-daemon stdout went to per-org rotating log files in BOTH dev and prod, so dev iteration had no live log visibility — every diagnostic required tailing files. Now in dev (NODE_ENV !== production) child stdout/stderr pipes through to the parent (host-service → desktop main → bun dev), each line tagged with `[hs:<orgId>]` or `[ptyd:<orgId>]`. Production stdio still backs to the rotating log file so detached children can outlive the parent without losing logs.
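The per-line tagging described above can be sketched as follows. A minimal sketch — `prefixLines` is an illustrative name, and a production helper would also buffer a trailing partial line until the next chunk arrives:

```typescript
// Split a stdout/stderr chunk on \n and put the org-tagged prefix on every
// line, so multi-line bursts don't lose the tag after the first line.
function prefixLines(prefix: string, chunk: string): string {
  return chunk
    .split("\n")
    .map((line) => (line.length > 0 ? `${prefix} ${line}` : line))
    .join("\n");
}

// Illustrative wiring to a child spawned with stdio: "pipe":
// child.stdout.on("data", (buf: Buffer) => {
//   process.stdout.write(prefixLines("[ptyd:org1]", buf.toString("utf8")));
// });
```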
Helper `pipeWithPrefix` splits chunks on \n so multi-line bursts keep the prefix on every line (was: only the first line).

* fix(pty-daemon): default close to SIGHUP — interactive shells leak on SIGTERM

The chain (DaemonClient.close, daemon handleClose, DaemonPty.kill) defaulted to SIGTERM. Interactive shells (especially `zsh -l`, the default macOS login shell) trap SIGTERM and stay alive — so every closed v2 terminal pane leaked a PTY process and a daemon session until something else SIGKILL'd it.

Verified: PID 46234 (`zsh -l`, status `Ss+`) survived the v2 pane-close path (which sends SIGTERM by default). A manual `kill -HUP 46234` killed it cleanly. SIGHUP is the right semantic — it's what the kernel sends when a TTY actually closes. The default changed to SIGHUP at all three layers; explicit signals still pass through for callers that need stronger termination (e.g. SIGKILL for hung shells in tests).

Regression test added: "default close (SIGHUP) terminates an interactive login shell". Without the fix, this test would time out waiting for the exit event. The earlier integration tests didn't catch this because they used non-interactive scripts (`-c "true"`) that exit naturally — no signal handling involved.

* chore(host-service): bump version to 0.5.0 to force fresh respawn on upgrade

The pty-daemon supervision migration adds a new `terminal.daemon` tRPC namespace and changes host-service-internal lifecycle (the supervisor owns the daemon now, dev-mode stdio piping, etc.). Existing 0.4.x host-services running on user machines don't have any of this. Without this version bump, the desktop coordinator's `tryAdopt` would adopt the old host-service in place — Settings → Manage daemon would 404 on the new procedures, and the v2 PTY-survival promise (the whole point of this PR) would silently not engage until something else forced a restart.
Bumping HOST_SERVICE_VERSION + MIN_HOST_SERVICE_VERSION to 0.5.0 forces the coordinator to SIGTERM old host-services on first launch of the new desktop build and respawn from the new bundle. One-time terminal-session loss for users on upgrade — covered in the release notes.

* docs(host-service): update daemon-supervision reference for shipped behavior

Documents the additions from the manual-QA debugging session:

- Adopted-daemon liveness polling (a separate code path from the spawned child's on-exit handler)
- SIGHUP default for close (interactive shells trap SIGTERM)
- Session deletion on PTY exit + the niche regression accepted
- Dev-mode stdio piping with a per-line prefix
- Version-bump procedure (HOST_SERVICE_VERSION + MIN_HOST_SERVICE_VERSION in lockstep when the adoption floor matters)
- Phase 2 (daemon-upgrade fd-handoff) explicitly noted as deferred, with the design hooks already in place that future work will use

Existing sections (boot pattern, version detection, crash circuit, tests) updated to point at the new test files added in this PR.

* chore(host-service): post-merge CI cleanup

- bunfig + test/setup-env.ts: populate env vars before @t3-oss/env-core validates at module load, so integration tests that boot via createApp (without serve.ts) don't crash importing the validated env module.
- Align the apps/desktop semver caret to ^7.7.4 (Sherif: multiple-dependency-versions across the workspace).
- Drop the pre-existing unused MinimalCtx interface and replace the candidates[0]! non-null assertion with an explicit guard (Biome lint).
- pty-daemon Server.pickProtocol: remove the dead `?? (... ? null : null)` branch and the now-orphan CURRENT_PROTOCOL_VERSION import.
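The adopted-daemon liveness poll described in the manual-QA commit above (a signal-0 probe every 2s) can be sketched as follows. Function names are illustrative, not the real supervisor API:

```typescript
// process.kill(pid, 0) delivers no signal: it only checks whether the pid
// exists. Spawned children get a child.on("exit") handler; adopted PIDs
// (no child handle) must be probed like this instead.
function isAlive(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch {
    // ESRCH: no such process. (EPERM would mean alive but not ours; a real
    // implementation should treat EPERM as alive.)
    return false;
  }
}

// Poll an adopted daemon; on detected death, the caller clears the
// instance + manifest so the next ensure() respawns.
function watchAdoptedDaemon(
  pid: number,
  onDeath: () => void,
  intervalMs = 2_000,
): NodeJS.Timeout {
  const timer = setInterval(() => {
    if (!isAlive(pid)) {
      clearInterval(timer);
      onDeath();
    }
  }, intervalMs);
  timer.unref(); // don't keep the supervisor process alive just to poll
  return timer;
}
```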
Bumps react from 18.3.1 to 19.2.0.
Release notes
Sourced from react's releases.
... (truncated)
Changelog
Sourced from react's changelog.
... (truncated)
Commits
- 5667a41 Bump next prerelease version numbers (#34639)
- 8bb7241 Bump useEffectEvent to Canary (#34610)
- e3c9656 Ensure Performance Track are Clamped and Don't overlap (#34509)
- 68f00c9 Release Activity in Canary (#34374)
- 0e10ee9 [Reconciler] Set ProfileMode for Host Root Fiber by default in dev (#34432)
- 3bf8ab4 Add missing Activity export to development mode (#34439)
- 1549bda [Flight] Only assign `_store` in dev mode when creating lazy types (#34354)
- bb6f0c8 [Flight] Fix wrong missing key warning when static child is blocked (#34350)
- 05addfc Update Flow to 0.266 (#34271)
- ec5dd0a Update Flow to 0.257 (#34253)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)