feat(pty-daemon): standalone PTY daemon package (Phase 1, skeleton) #3886

Kitenite wants to merge 2 commits into
Conversation
New package @superset/pty-daemon implementing the long-lived PTY-owning process described in apps/desktop/plans/20260429-pty-daemon-implementation.md. This PR adds the daemon in isolation; host-service integration lands in a follow-up PR so both can be reviewed independently.

What's in:
- Versioned Unix-socket protocol (length-prefixed JSON frames; hello/ack handshake; open/input/resize/close/list/subscribe/unsubscribe ops).
- Pty wrapper around node-pty with dim validation.
- SessionStore: in-memory map + 64KB ring buffer per session. No persistence — explicitly out of scope per the v1 lessons.
- Server: AF_UNIX SOCK_STREAM accept loop, file-mode 0600 auth boundary, per-connection subscription set, output/exit fan-out.
- Handlers: pure functions over (store, conn, msg). Stateless from the client's perspective.
- main.ts entrypoint: argv parsing, signal handling, graceful shutdown.

Runtime: Node ≥ 20, not Bun. Verified during implementation that node-pty 1.1's master fd setup is incompatible with Bun 1.3's tty.ReadStream (onData/onExit silently never fire). The daemon ships as a Node script in the desktop app bundle; host-service stays on Bun.

Tests: 24 unit tests under bun test (protocol framing, SessionStore, handlers with a fake spawn), 6 integration tests under node --test spawning real shells through real Unix sockets. All green.

What's NOT in (separate PRs):
- Host-service DaemonClient + terminal.ts refactor + manifest adoption.
- Daemon-upgrade fd inheritance handoff (Phase 2).
- Renderer / WS / tRPC changes (none required; the renderer is unchanged).
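The length-prefixed framing described above is simple to picture. Below is a minimal sketch under stated assumptions — the names `encodeFrame`/`decodeFrames` and the exact layout (4-byte big-endian length prefix, UTF-8 JSON payload) are inferred from this PR's description and README tree, not copied from the package:

```typescript
import { Buffer } from "node:buffer";

// Hypothetical shapes for illustration — the package's real framing API may
// differ. Each frame: 4-byte big-endian payload length, then UTF-8 JSON.
function encodeFrame(msg: unknown): Buffer {
  const payload = Buffer.from(JSON.stringify(msg), "utf8");
  const header = Buffer.alloc(4);
  header.writeUInt32BE(payload.length, 0);
  return Buffer.concat([header, payload]);
}

// Decode every complete frame in `buf`; return leftover bytes untouched so
// the caller can prepend them to the next read.
function decodeFrames(buf: Buffer): { frames: unknown[]; rest: Buffer } {
  const frames: unknown[] = [];
  let off = 0;
  while (buf.length - off >= 4) {
    const len = buf.readUInt32BE(off);
    if (buf.length - off - 4 < len) break; // partial frame: wait for more bytes
    frames.push(JSON.parse(buf.subarray(off + 4, off + 4 + len).toString("utf8")));
    off += 4 + len;
  }
  return { frames, rest: buf.subarray(off) };
}
```

Because frames carry their own length, a decoder can recover message boundaries regardless of how the kernel chunks the stream.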
📝 Walkthrough

A new standalone @superset/pty-daemon package is introduced.

Sequence Diagram(s)

sequenceDiagram
participant Client
participant Server as Server<br/>(per-conn)
participant FrameDecoder
participant Handlers
participant SessionStore
participant Pty
Client->>Server: TCP/AF_UNIX connect + send(hello frame)
Server->>FrameDecoder: push(raw bytes)
FrameDecoder->>Handlers: decode -> dispatch(hello)
Handlers->>Server: return(hello-ack) / error
Server->>Client: send(hello-ack)
Client->>Server: send(open frame)
Server->>Handlers: dispatch(open)
Handlers->>Pty: spawn(meta)
Pty-->>Handlers: Pty instance (pid)
Handlers->>SessionStore: add(session, pty)
Handlers->>Client: send(open-ok)
Pty->>SessionStore: onData(buffer)
SessionStore->>Server: appendOutput + notify subscribers
Server->>Client: send(output frame) [if subscribed]
Pty->>SessionStore: onExit(info)
SessionStore->>Server: notify subscribers
Server->>Client: send(exit frame)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Greptile Summary

This PR introduces the standalone @superset/pty-daemon package.

Confidence Score: 4/5. Safe to merge after addressing the session store cleanup bug; everything else is cosmetic or a minor perf note. One P1 issue: closed sessions are never removed from the store, leaking memory and blocking session ID reuse, with the implementing comment present but no code — packages/pty-daemon/src/handlers/handlers.ts (missing store.delete in handleClose). The dead ternary (P2) and O(n²) buffer (P2) are quality notes that don't affect correctness for normal use.
| Filename | Overview |
|---|---|
| packages/pty-daemon/src/handlers/handlers.ts | All protocol handlers; handleClose kills the PTY but never removes the session from the store, causing a memory leak and preventing session ID reuse |
| packages/pty-daemon/src/Server/Server.ts | Core accept loop, dispatch, and PTY fan-out; has a dead ternary in pickProtocol and a short TOCTOU window on socket chmod |
| packages/pty-daemon/src/protocol/framing.ts | Length-prefixed frame encoder/decoder; FrameDecoder.push does O(n²) buffer copies on every call |
| packages/pty-daemon/src/protocol/messages.ts | Wire protocol schema; message types are well-defined and cover all handshake, session, and subscription flows |
| packages/pty-daemon/src/SessionStore/SessionStore.ts | In-memory session map with 64 KB ring buffer per session; eviction logic is correct |
| packages/pty-daemon/src/Pty/Pty.ts | Thin node-pty adapter with cols/rows validation; clean and correct |
| packages/pty-daemon/src/main.ts | Entrypoint: argv parsing, Server.listen(), SIGINT/SIGTERM handlers, version reading from package.json |
| packages/pty-daemon/test/integration.test.ts | 6 Node integration tests covering handshake, shell lifecycle, input relay, replay-on-subscribe, and list; comprehensive happy paths |
| packages/pty-daemon/src/protocol/version.ts | Protocol version constants; straightforward |
Sequence Diagram
sequenceDiagram
participant C as Client (host-service)
participant S as Server (Unix socket)
participant H as Handlers
participant SS as SessionStore
participant P as node-pty
C->>S: hello {protocols:[1]}
S->>C: hello-ack {protocol:1}
C->>S: open {id, meta}
S->>H: handleOpen(ctx, msg)
H->>P: spawn(meta)
P-->>H: Pty instance
H->>SS: store.add(id, pty)
H-->>S: open-ok {id, pid}
S->>C: open-ok
Note over S,P: wireSession: onData/onExit hooked
C->>S: subscribe {id, replay:true}
S->>H: handleSubscribe(ctx, conn, msg)
H->>SS: snapshotBuffer(session)
H->>C: output {data: base64(replay)}
P-->>S: onData(chunk)
S->>SS: appendOutput(session, chunk)
S->>C: output {id, data: base64}
P-->>S: onExit({code, signal})
S->>C: exit {id, code, signal}
C->>S: close {id, signal}
S->>H: handleClose(ctx, msg)
H->>P: pty.kill(signal)
H-->>S: closed {id}
S->>C: closed
Prompt To Fix All With AI
Fix the following 4 code review issues. Work through them one at a time, proposing concise fixes.
---
### Issue 1 of 4
packages/pty-daemon/src/handlers/handlers.ts:94-105
**Session never removed from store after `close`**
`handleClose` kills the PTY but never calls `ctx.store.delete(msg.id)`, so closed sessions accumulate in memory indefinitely and their IDs can never be reused — a subsequent `handleOpen` with the same `id` returns `EEXIST`. The comment in `wireSession` ("we delete on next list/close") describes the intent but neither `handleList` nor `handleClose` actually implements the deletion.
```suggestion
export function handleClose(ctx: HandlerCtx, msg: CloseMessage): ServerMessage {
const session = ctx.store.get(msg.id);
if (!session) return errorFor(msg.id, `unknown session: ${msg.id}`, "ENOENT");
try {
session.pty.kill(msg.signal ?? "SIGTERM");
} catch (err) {
return errorFor(msg.id, (err as Error).message, "EKILL");
}
ctx.store.delete(msg.id);
return { type: "closed", id: msg.id };
}
```
### Issue 2 of 4
packages/pty-daemon/src/Server/Server.ts:231-240
**Dead ternary in `pickProtocol`**
Both branches of the ternary `supported.has(CURRENT_PROTOCOL_VERSION) ? null : null` produce `null`, so the whole expression is always `null`. This makes the `??` effectively `return best ?? null`, which is just `return best`. Looks like an incomplete draft — was this meant to fall back to `CURRENT_PROTOCOL_VERSION` when the client's list is empty?
```suggestion
function pickProtocol(hello: HelloMessage): number | null {
const supported = new Set(SUPPORTED_PROTOCOL_VERSIONS);
let best: number | null = null;
for (const v of hello.protocols) {
if (supported.has(v) && (best === null || v > best)) best = v;
}
return best;
}
```
### Issue 3 of 4
packages/pty-daemon/src/protocol/framing.ts:22-25
**O(n²) buffer growth in `FrameDecoder.push`**
`Buffer.concat([this.buf, chunk])` copies all accumulated bytes on every `push` call. For a high-frequency terminal stream (many small TCP chunks per keystroke/render frame), this produces quadratic allocation pressure. Consider collecting chunks into an array and only concatenating in `drain` when parsing is needed.
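The reviewer's chunk-array suggestion can be sketched as follows — an illustration of the amortization idea, not the package's actual `FrameDecoder`; the frame layout (4-byte big-endian length prefix over JSON) is assumed from the PR description:

```typescript
import { Buffer } from "node:buffer";

// Amortized decoder: accumulate chunks in an array on push() and only
// concatenate once per drain(), instead of copying the whole backlog on
// every small TCP chunk.
class FrameDecoder {
  private chunks: Buffer[] = [];
  private pending = 0;

  push(chunk: Buffer): void {
    this.chunks.push(chunk);
    this.pending += chunk.length;
  }

  /** Yield every complete length-prefixed frame currently buffered. */
  drain(): unknown[] {
    if (this.pending === 0) return [];
    const buf =
      this.chunks.length === 1 ? this.chunks[0] : Buffer.concat(this.chunks);
    const frames: unknown[] = [];
    let off = 0;
    while (buf.length - off >= 4) {
      const len = buf.readUInt32BE(off);
      if (buf.length - off - 4 < len) break; // incomplete frame
      frames.push(
        JSON.parse(buf.subarray(off + 4, off + 4 + len).toString("utf8")),
      );
      off += 4 + len;
    }
    const rest = buf.subarray(off);
    this.chunks = rest.length ? [rest] : [];
    this.pending = rest.length;
    return frames;
  }
}
```

Each byte is now copied at most once per drain rather than once per push, which matters for the many-small-writes pattern typical of terminal output.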
### Issue 4 of 4
packages/pty-daemon/src/Server/Server.ts:59-66
**TOCTOU window before `chmodSync(0o600)`**
Between `server.listen()` resolving and `chmodSync` executing, the socket file exists with default umask permissions. A common mitigation is to create the socket inside a `0700` directory so the directory itself is the auth boundary, eliminating the race regardless of the socket's own permissions.
Actionable comments posted: 6
🧹 Nitpick comments (2)

packages/pty-daemon/src/SessionStore/SessionStore.test.ts (1)

58-81: ⚡ Quick win — add an oversized-single-chunk buffer-cap test. Please add one case where `appendOutput` receives a chunk larger than `bufferCap` (e.g., cap 10, chunk length 20) so the intended replay semantics stay explicit.

Proposed test addition:

```diff
 test("appendOutput keeps buffer within cap across many writes", () => {
   const store = new SessionStore({ bufferCap: 32 });
   const session = store.add("s0", fakePty({ cols: 80, rows: 24 }));
   for (let i = 0; i < 100; i++) {
     store.appendOutput(
       session,
       Buffer.from(`chunk${i.toString().padStart(2, "0")}-`),
     );
   }
   expect(session.bufferBytes).toBeLessThanOrEqual(32);
   // Final chunk must always be present
   expect(store.snapshotBuffer(session).toString()).toContain("chunk99-");
 });
+
+test("appendOutput behavior is explicit when a single chunk exceeds cap", () => {
+  const store = new SessionStore({ bufferCap: 10 });
+  const session = store.add("s0", fakePty({ cols: 80, rows: 24 }));
+  store.appendOutput(session, Buffer.from("0123456789ABCDEFGHIJ"));
+  expect(session.bufferBytes).toBeLessThanOrEqual(10);
+  // Assert the intended semantics here (e.g., empty buffer or truncated tail).
+});
```

packages/pty-daemon/tsconfig.json (1)

4-10: Consider separating Node runtime typings from Bun test typings in the tsconfig.json configuration. The current tsconfig.json includes `bun-types` globally (line 4) for both `src` and `test` (line 10). While no Bun-specific API usage is currently present in the runtime source code, this configuration could allow accidental Bun imports. To improve type-level safety and intent clarity, consider using separate type configurations: a base config with only `node` types for `src`, and a test-specific tsconfig that layers in `bun-types` for the test directory.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/pty-daemon/package.json`:
- Around line 17-27: The package.json declares "engines": { "node": ">=20" } but
uses the Node flag --experimental-strip-types in the "start" and
"test:integration" scripts and a .ts entrypoint ("pty-daemon": "./src/main.ts"),
which requires Node >=22.6.0; update package.json to either bump the engines
field to "node": ">=22.6.0" (or ">=22.18.0" for stable support) or
remove/replace usages of --experimental-strip-types and the .ts entrypoint by
switching scripts "start" and "test:integration" to run transpiled JS (or a
transpilation step) so Node 20/21 users won't fail. Ensure you change the
"engines" value or the "start"/"test:integration" script entries consistently
and keep the "pty-daemon" entrypoint aligned with the chosen approach.
In `@packages/pty-daemon/README.md`:
- Around line 20-44: The README.md code fence showing the src/ and test/ tree
lacks a language tag which triggers markdownlint MD040; update the fenced block
around the directory listing to include a language tag (e.g., change the opening
``` to ```text) so the block becomes a language-tagged fenced code block in the
README (the block containing the lines starting with "src/" and ending with
"test/ integration.test.ts").
In `@packages/pty-daemon/src/handlers/handlers.ts`:
- Around line 71-73: The write to the PTY currently uses Buffer.from(msg.data,
"base64") which tolerates malformed base64; you should validate msg.data before
calling session.pty.write to fail closed: implement a strict base64 check in the
handler around the call to session.pty.write (reference msg.data and
session.pty.write in handlers.ts) — for example, validate with a regex for valid
base64 chars and correct padding or decode to a Buffer then re-encode
(buffer.toString("base64") vs msg.data normalized by trimming padding) and
reject if they differ; on validation failure return/send a protocol error and do
not call session.pty.write. Ensure the existing catch path no longer lets
malformed payloads reach the PTY.
In `@packages/pty-daemon/src/main.ts`:
- Around line 26-31: The parsed --buffer-bytes value from the
arg.startsWith("--buffer-bytes=") branch is not validated before being stored in
args.bufferBytes and passed into Server; modify that branch in
packages/pty-daemon/src/main.ts to parse the value, verify
Number.isFinite(value) && Number.isInteger(value) && value > 0 (or use > = 1 per
requirements), and if invalid print a clear CLI error and exit(1); ensure the
validated value is what gets assigned to args.bufferBytes and forwarded to the
Server constructor (where args.bufferBytes is used) so no NaN/non-positive
values reach Server.
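As an illustration of the validation described above, a hypothetical helper (the name and exact bounds are assumptions, not the PR's code) might look like:

```typescript
// Parse and validate a --buffer-bytes=N CLI argument before it reaches the
// Server constructor: reject NaN, fractions, zero, and negatives.
function parseBufferBytes(arg: string): number {
  const raw = arg.slice("--buffer-bytes=".length);
  const value = Number(raw); // Number(), not parseInt(), so "12abc" fails too
  if (!Number.isInteger(value) || value <= 0) {
    throw new Error(`invalid --buffer-bytes value: ${JSON.stringify(raw)}`);
  }
  return value;
}
```

The entrypoint would catch this error, print it, and `process.exit(1)` instead of forwarding a bad value.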
In `@packages/pty-daemon/src/Server/Server.ts`:
- Around line 61-67: The socket file is being hardened after bind which is racy;
instead set a restrictive umask before creating the socket so the socket is
created with owner-only permissions atomically. Wrap the this.server.listen call
by saving the current umask (const old = process.umask(0o177)), call
this.server.listen(this.opts.socketPath, ...) and in the listen callback restore
process.umask(old) and remove the later fs.chmodSync(this.opts.socketPath,
0o600); ensure you still call this.server.off("error", reject) and resolve() in
the listen callback as before.
- Around line 53-58: The current stale-socket cleanup blindly calls
fs.unlinkSync(this.opts.socketPath) which can delete regular files; change it to
first call fs.lstatSync(this.opts.socketPath) (or async lstat) and check the
result with stats.isSocket(); only call fs.unlinkSync when isSocket() is true,
otherwise throw or surface an error (do not unlink) so misconfigured socketPath
won't delete non-socket files; preserve the existing ENOENT handling around
lstat/unlink so missing paths remain a no-op.
---
Nitpick comments:
In `@packages/pty-daemon/src/SessionStore/SessionStore.test.ts`:
- Around line 58-81: Add a new unit test in SessionStore.test.ts that covers the
case where appendOutput receives a single chunk larger than the configured
bufferCap: create a SessionStore with bufferCap 10, add a session via
SessionStore.add, call SessionStore.appendOutput with a 20-byte Buffer, then
assert that SessionStore.snapshotBuffer(session) contains only the last 10 bytes
of that chunk and that session.bufferBytes === 10; reference the SessionStore
class and its appendOutput and snapshotBuffer methods when locating where to add
the test.
In `@packages/pty-daemon/tsconfig.json`:
- Around line 4-10: Split runtime and test typings by creating a base tsconfig
that only includes Node types and applies to src, and a test-specific tsconfig
that extends the base and adds "bun-types" for tests: remove "bun-types" from
the current top-level "types" array (leave "node"), create a new
tsconfig.test.json (or tsconfig.extend.json) that uses "extends": "<base
tsconfig>" and sets "types": ["bun-types","node"] and "include": ["test"], and
ensure build/test tooling (tsc/IDE or npm scripts) uses the base tsconfig for
compiling runtime code and the test tsconfig for running type-checks on tests so
Bun typings are only available in tests.
ℹ️ Review info

⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2d9ddaac-bfa8-4852-8de7-c9f601a7c63b

⛔ Files ignored due to path filters (1)
- bun.lock is excluded by !**/*.lock

📒 Files selected for processing (22)
- packages/pty-daemon/README.md
- packages/pty-daemon/package.json
- packages/pty-daemon/src/Pty/Pty.test.ts
- packages/pty-daemon/src/Pty/Pty.ts
- packages/pty-daemon/src/Pty/index.ts
- packages/pty-daemon/src/Server/Server.ts
- packages/pty-daemon/src/Server/index.ts
- packages/pty-daemon/src/SessionStore/SessionStore.test.ts
- packages/pty-daemon/src/SessionStore/SessionStore.ts
- packages/pty-daemon/src/SessionStore/index.ts
- packages/pty-daemon/src/handlers/handlers.test.ts
- packages/pty-daemon/src/handlers/handlers.ts
- packages/pty-daemon/src/handlers/index.ts
- packages/pty-daemon/src/index.ts
- packages/pty-daemon/src/main.ts
- packages/pty-daemon/src/protocol/framing.test.ts
- packages/pty-daemon/src/protocol/framing.ts
- packages/pty-daemon/src/protocol/index.ts
- packages/pty-daemon/src/protocol/messages.ts
- packages/pty-daemon/src/protocol/version.ts
- packages/pty-daemon/test/integration.test.ts
- packages/pty-daemon/tsconfig.json
```json
    "pty-daemon": "./src/main.ts"
  },
  "engines": {
    "node": ">=20"
  },
  "scripts": {
    "clean": "git clean -xdf .cache .turbo dist node_modules",
    "start": "node --experimental-strip-types src/main.ts",
    "typecheck": "tsc --noEmit --emitDeclarationOnly false",
    "test": "bun test src/protocol src/SessionStore src/handlers src/Pty/Pty.test.ts",
    "test:integration": "node --experimental-strip-types --test test/integration.test.ts"
```
🧩 Analysis chain
🌐 Web query:
Does Node.js v20 support running `.ts` files via `--experimental-strip-types`, and from which Node version is this behavior officially supported for CLI entrypoints?
💡 Result:
No, Node.js v20 does not support running .ts files via --experimental-strip-types. This feature was added in Node.js v22.6.0 as experimental. The behavior became officially supported (enabled by default without the flag, no experimental warning) starting from v22.18.0 and v23.6.0, and reached stable status in v25.2.0. Node v20 CLI documentation does not mention .ts support or the flag, and changelogs for v20 show no such addition. The PR and commit for the feature target v22.x.
Citations:
- 1: nodejs/node@35f92d9
- 2: https://nodejs.org/learn/typescript/run-natively
- 3: https://nodejs.org/en/blog/release/v22.6.0
- 4: https://nodejs.org/api/typescript.html
- 5: module: unflag --experimental-strip-types nodejs/node#56350
- 6: https://nodejs.org/docs/latest/api/typescript.html
- 7: https://github.com/nodejs/node/blob/master/doc/changelogs/CHANGELOG_V20.md
- 8: https://raw.githubusercontent.com/nodejs/node/v20.18.2/doc/changelogs/CHANGELOG_V20.md
- 9: https://nodejs.org/en/blog/release/v20.10.0
Fix Node.js version compatibility: --experimental-strip-types requires Node v22.6.0 or higher.
The engines field declares node: ">=20", but the entrypoint (line 17) and scripts (lines 24, 27) rely on --experimental-strip-types to run TypeScript directly. This feature does not exist in Node.js v20 or v21—it was added in v22.6.0. Any user on Node 20 or 21 will encounter immediate runtime failure.
Either bump the minimum Node version to >=22.6.0 (or >=22.18.0 for stable support without the experimental flag), or remove the .ts entrypoint and use a compatible transpilation strategy.
````text
```
src/
├── main.ts # Node entrypoint: argv → Server.listen()
├── index.ts # Public exports for host-service consumers
├── protocol/ # Wire schemas + length-prefixed framing
│   ├── version.ts # CURRENT_PROTOCOL_VERSION + supported list
│   ├── messages.ts # ClientMessage / ServerMessage unions
│   ├── framing.ts # encodeFrame / FrameDecoder (4-byte BE prefix)
│   └── index.ts
├── Pty/ # node-pty thin wrapper with dim validation
│   ├── Pty.ts
│   └── index.ts
├── SessionStore/ # in-memory map + 64KB ring buffer per session
│   ├── SessionStore.ts
│   └── index.ts
├── handlers/ # pure functions: open/input/resize/close/list/subscribe
│   ├── handlers.ts
│   └── index.ts
└── Server/ # AF_UNIX SOCK_STREAM accept loop, handshake, dispatch
    ├── Server.ts
    └── index.ts

test/
└── integration.test.ts # node --test: real shells, real socket
```
````
Add a language tag to the fenced code block to satisfy markdownlint.
Line 20 should specify a fence language (text is sufficient) to resolve MD040.
Suggested patch

````diff
-```
+```text
 src/
 ├── main.ts # Node entrypoint: argv → Server.listen()
 ...
 └── integration.test.ts # node --test: real shells, real socket
````

🧰 Tools

🪛 markdownlint-cli2 (0.22.1)

[warning] 20-20: Fenced code blocks should have a language specified (MD040, fenced-code-language)
```ts
    try {
      session.pty.write(Buffer.from(msg.data, "base64"));
    } catch (err) {
```
Reject malformed base64 before writing to the PTY.
Buffer.from(msg.data, "base64") is permissive, so garbage or truncated payloads can be silently normalized into different bytes and forwarded as shell input instead of returning a protocol error. This should fail closed.
Suggested validation
try {
- session.pty.write(Buffer.from(msg.data, "base64"));
+ if (
+ !/^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$/.test(
+ msg.data,
+ )
+ ) {
+ return errorFor(msg.id, "invalid base64 input", "EPROTO");
+ }
+ session.pty.write(Buffer.from(msg.data, "base64"));
} catch (err) {
  return errorFor(msg.id, (err as Error).message, "EWRITE");
}

📝 Committable suggestion
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
try {
  if (
    !/^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$/.test(
      msg.data,
    )
  ) {
    return errorFor(msg.id, "invalid base64 input", "EPROTO");
  }
  session.pty.write(Buffer.from(msg.data, "base64"));
} catch (err) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/pty-daemon/src/handlers/handlers.ts` around lines 71 - 73, The write
to the PTY currently uses Buffer.from(msg.data, "base64") which tolerates
malformed base64; you should validate msg.data before calling session.pty.write
to fail closed: implement a strict base64 check in the handler around the call
to session.pty.write (reference msg.data and session.pty.write in handlers.ts) —
for example, validate with a regex for valid base64 chars and correct padding or
decode to a Buffer then re-encode (buffer.toString("base64") vs msg.data
normalized by trimming padding) and reject if they differ; on validation failure
return/send a protocol error and do not call session.pty.write. Ensure the
existing catch path no longer lets malformed payloads reach the PTY.
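As an alternative to the padding regex, a decode/re-encode round trip also fails closed, since `Buffer.from(s, "base64")` silently drops characters it cannot decode. A minimal sketch; `isCanonicalBase64` is a hypothetical helper name, not code from this PR:

```typescript
// Hypothetical strict-base64 check: accept only strings that survive a
// decode → re-encode round trip unchanged. Buffer.from(..., "base64") is
// permissive (it skips invalid chars and tolerates missing padding), so
// comparing the re-encoding against the input catches anything that would
// otherwise be silently normalized into different bytes.
function isCanonicalBase64(s: string): boolean {
  if (s.length % 4 !== 0) return false; // canonical base64 is always padded
  const decoded = Buffer.from(s, "base64");
  return decoded.toString("base64") === s;
}
```

A handler would run this before `session.pty.write` and return an `EPROTO`-style error on failure, mirroring the regex variant above.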
else if (arg.startsWith("--buffer-bytes=")) {
  args.bufferBytes = Number.parseInt(
    arg.slice("--buffer-bytes=".length),
    10,
  );
}
Validate --buffer-bytes before passing it into Server.
On lines 27-31, parseInt can yield NaN or non-positive values, and line 45 forwards the result unguarded. Reject invalid values early with a clear CLI error.
Suggested patch
else if (arg.startsWith("--buffer-bytes=")) {
- args.bufferBytes = Number.parseInt(
+ const parsed = Number.parseInt(
arg.slice("--buffer-bytes=".length),
10,
);
+ if (!Number.isInteger(parsed) || parsed <= 0) {
+ throw new Error("--buffer-bytes must be a positive integer");
+ }
+ args.bufferBytes = parsed;
}

Also applies to: line 45
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/pty-daemon/src/main.ts` around lines 26 - 31, The parsed
--buffer-bytes value from the arg.startsWith("--buffer-bytes=") branch is not
validated before being stored in args.bufferBytes and passed into Server; modify
that branch in packages/pty-daemon/src/main.ts to parse the value, verify
Number.isFinite(value) && Number.isInteger(value) && value > 0 (or >= 1 per
requirements), and if invalid print a clear CLI error and exit(1); ensure the
validated value is what gets assigned to args.bufferBytes and forwarded to the
Server constructor (where args.bufferBytes is used) so no NaN/non-positive
values reach Server.
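The same guard can live in a small helper so every numeric flag fails loudly. A sketch; `parsePositiveInt` and its error message are illustrative, not the daemon's actual main.ts:

```typescript
// Hypothetical helper for numeric CLI flags like --buffer-bytes: parse,
// then reject NaN, zero, negatives, and trailing garbage such as "64k"
// (Number.parseInt would silently accept the "64" prefix).
function parsePositiveInt(flag: string, raw: string): number {
  const parsed = Number.parseInt(raw, 10);
  if (!Number.isInteger(parsed) || parsed <= 0 || String(parsed) !== raw.trim()) {
    throw new Error(`${flag} must be a positive integer, got "${raw}"`);
  }
  return parsed;
}
```

main.ts would then assign `args.bufferBytes = parsePositiveInt("--buffer-bytes", arg.slice("--buffer-bytes=".length))` and let the thrown error surface as a CLI failure.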
// Stale-socket cleanup: remove any prior socket file at this path.
try {
  fs.unlinkSync(this.opts.socketPath);
} catch (err) {
  if ((err as NodeJS.ErrnoException).code !== "ENOENT") throw err;
}
Only remove a pre-existing socket here.
unlinkSync(this.opts.socketPath) will also delete a regular file or symlink if socketPath is misconfigured, which turns a startup mistake into data loss. Please lstat first and only unlink when the existing entry is actually a socket.
Suggested guard
// Stale-socket cleanup: remove any prior socket file at this path.
try {
- fs.unlinkSync(this.opts.socketPath);
+ const stat = fs.lstatSync(this.opts.socketPath);
+ if (!stat.isSocket()) {
+ throw new Error(
+ `refusing to unlink non-socket path: ${this.opts.socketPath}`,
+ );
+ }
+ fs.unlinkSync(this.opts.socketPath);
} catch (err) {
if ((err as NodeJS.ErrnoException).code !== "ENOENT") throw err;
}📝 Committable suggestion
// Stale-socket cleanup: remove any prior socket file at this path.
try {
  const stat = fs.lstatSync(this.opts.socketPath);
  if (!stat.isSocket()) {
    throw new Error(
      `refusing to unlink non-socket path: ${this.opts.socketPath}`,
    );
  }
  fs.unlinkSync(this.opts.socketPath);
} catch (err) {
  if ((err as NodeJS.ErrnoException).code !== "ENOENT") throw err;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/pty-daemon/src/Server/Server.ts` around lines 53 - 58, The current
stale-socket cleanup blindly calls fs.unlinkSync(this.opts.socketPath) which can
delete regular files; change it to first call fs.lstatSync(this.opts.socketPath)
(or async lstat) and check the result with stats.isSocket(); only call
fs.unlinkSync when isSocket() is true, otherwise throw or surface an error (do
not unlink) so misconfigured socketPath won't delete non-socket files; preserve
the existing ENOENT handling around lstat/unlink so missing paths remain a
no-op.
this.server.listen(this.opts.socketPath, () => {
  this.server.off("error", reject);
  resolve();
});
});
// Owner-only access. The socket file IS the auth boundary.
fs.chmodSync(this.opts.socketPath, 0o600);
The socket auth boundary is racy during startup.
The server binds first and tightens permissions afterward, so another local user can connect in the window before chmodSync(0o600) lands. Since this daemon relies on socket mode for auth, the socket needs to be created under restrictive access from the start, not hardened after bind.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/pty-daemon/src/Server/Server.ts` around lines 61 - 67, The socket
file is being hardened after bind which is racy; instead set a restrictive umask
before creating the socket so the socket is created with owner-only permissions
atomically. Wrap the this.server.listen call by saving the current umask (const
old = process.umask(0o177)), call this.server.listen(this.opts.socketPath, ...)
and in the listen callback restore process.umask(old) and remove the later
fs.chmodSync(this.opts.socketPath, 0o600); ensure you still call
this.server.off("error", reject) and resolve() in the listen callback as before.
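The effect of the suggested umask wrap can be seen in isolation, since the kernel applies the same creation mask to socket binds as to files. A POSIX-only sketch using a plain temp file for brevity (the path is illustrative); in the daemon, the `listen()` call would sit between the two `process.umask` calls instead of `writeFileSync`:

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Mask group/other bits BEFORE the resource is created, so there is no
// window in which another local user can reach it.
const prev = process.umask(0o177);
const p = path.join(os.tmpdir(), `umask-demo-${process.pid}`);
fs.writeFileSync(p, "x"); // requested mode 0o666 → lands on disk as 0o600
process.umask(prev); // restore so unrelated file creation is unaffected

const mode = fs.statSync(p).mode & 0o777; // owner read/write only
fs.unlinkSync(p);
```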
Adds an exhaustive control-plane integration test that exercises every usage pattern host-service can throw at the daemon end-to-end (real shells, real Unix socket), plus the production build pipeline matching the host-service pattern.

Test coverage (28 integration tests, all passing in ~2.5s):
- Handshake variants (non-hello first, unsupported version, mutual picking, duplicate hello)
- Session lifecycle (bad dims, duplicate id, ENOENT on missing, instant-exit, SIGKILL hung shell)
- I/O patterns (resize during streaming, burst output, multi-byte UTF-8)
- Multi-client fan-out (two subscribers, unsubscribe stops delivery, dropped subscriber doesn't crash)
- Detach + reattach (late subscriber gets replay, full disconnect → new conn → continues live)
- Hostile input (malformed frames, oversized frames, input on dead session)
- Concurrency (20 sessions on one conn, 10 conns in parallel)
- Server shutdown (in-flight clients disconnect cleanly)
- Frame splitting across TCP chunks

Reusable test client extracted to test/helpers/client.ts (waitFor, collect, sendRaw, onClose).

Found and fixed during the suite: Server.close() now kills owned PTYs synchronously so the daemon process can actually exit (open master fds were keeping the event loop alive). Aligns with the v1-lessons "synchronous teardown only" rule.

Production build: build.ts mirrors packages/host-service/build.ts — Bun.build target=node, externalizes node-pty, emits dist/pty-daemon.js that runs under Electron's bundled Node via process.execPath. No new runtime in the desktop bundle. Bun is build-only; same shape as host-service today.
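The "late subscriber gets replay" behavior above rests on the ~64 KB per-session ring buffer. A minimal byte-capped sketch; the class name and API are assumptions, not the daemon's SessionStore code:

```typescript
// Hypothetical fixed-capacity byte ring: newest output is always kept,
// oldest bytes are evicted once the cap is exceeded.
class RingBuffer {
  private chunks: Buffer[] = [];
  private size = 0;
  private readonly capacity: number;

  constructor(capacity: number) {
    this.capacity = capacity;
  }

  // Append PTY output; evict from the head until we fit again.
  write(chunk: Buffer): void {
    this.chunks.push(chunk);
    this.size += chunk.length;
    while (this.size > this.capacity) {
      const head = this.chunks[0];
      const excess = this.size - this.capacity;
      if (head.length <= excess) {
        this.chunks.shift(); // whole oldest chunk is stale
        this.size -= head.length;
      } else {
        this.chunks[0] = head.subarray(excess); // trim the oldest chunk
        this.size -= excess;
      }
    }
  }

  // What a late subscriber replays on attach.
  snapshot(): Buffer {
    return Buffer.concat(this.chunks);
  }
}
```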
Actionable comments posted: 2

♻️ Duplicate comments (1)

packages/pty-daemon/README.md (1)

32-61: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Add a language tag to the layout fence.
This block still triggers MD040; `text` is sufficient here.

Suggested patch

-```
+```text
 src/
 ├── main.ts                  # Node entrypoint: argv → Server.listen()
 ...
 └── control-plane.test.ts    # exhaustive control-plane coverage (25 tests)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In packages/pty-daemon/README.md around lines 32-61, the README code block is missing a language tag which triggers MD040; update the fenced block around the directory layout to include a language tag (e.g., add "text" after the opening triple backticks), locating the block that shows the src/ and test/ tree (the multi-line layout fence) and adding the "text" tag to its opening fence.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In packages/pty-daemon/test/helpers/client.ts:
- Around line 70-96: The waitFor() logic leaves pending waiters in the waiters
array when the socket closes or errors, causing callers to hang until their
timeout; update the socket close and error handling to reject and cleanup all
pending waiters immediately: inside the socket.on("close", ...) and the
socket.once("error", ...) handlers (and remove the current socket.once("error",
reject) usage), iterate the waiters array, clear each waiter.timer, call
waiter.reject with a descriptive Error (e.g. "socket closed" or the received
error), then empty the waiters array and invoke existing closeCbs so no waiter
remains waiting on a dead socket (refer to waitFor, waiters, socket, closeCbs).
- Around lines 98-118: The collect function currently rescans from
collected.length which is the number of matched messages and can cause
duplicates; instead snapshot a start index once when collect begins (e.g., const
start = messages.length) and use that start cursor when iterating new messages
in the onMsg listener and the final sweep, updating the cursor as you consume
new messages (e.g., advance start = messages.length after pushing) so you only
process frames that arrived after collect() was called while retaining any
initially collected matches from messages.filter(predicate).
Duplicate comments:
In packages/pty-daemon/README.md:
- Around lines 32-61: The README code block is missing a language tag which triggers MD040; update the fenced block around the directory layout in README.md to include a language tag (e.g., add "text" after the opening triple backticks) so the linter recognizes it as a code block; locate the block that shows the src/ and test/ tree (the multi-line layout fence) and add the "text" tag to the opening fence.

ℹ️ Review info

⚙️ Run configuration: defaults. Review profile: CHILL. Plan: Pro.

📥 Commits: reviewing files that changed from the base of the PR and between 3e329bb59e2c41f96797a647709cac7dc0185f19 and b8784ea722aa6c0efdac0ecfa0b50ce239959f3f.

📒 Files selected for processing (7):
- packages/pty-daemon/README.md
- packages/pty-daemon/build.ts
- packages/pty-daemon/package.json
- packages/pty-daemon/src/Server/Server.ts
- packages/pty-daemon/test/control-plane.test.ts
- packages/pty-daemon/test/helpers/client.ts
- packages/pty-daemon/test/integration.test.ts

✅ Files skipped from review due to trivial changes (1): packages/pty-daemon/src/Server/Server.ts

🚧 Files skipped from review as they are similar to previous changes (1): packages/pty-daemon/package.json
| socket.on("close", () => { | ||
| isClosed = true; | ||
| for (const cb of closeCbs) cb(); | ||
| }); | ||
| socket.once("error", reject); | ||
| socket.once("connect", () => { | ||
| socket.off("error", reject); | ||
| resolve({ | ||
| socket, | ||
| messages, | ||
| send(m) { | ||
| if (!socket.destroyed) socket.write(encodeFrame(m)); | ||
| }, | ||
| sendRaw(buf) { | ||
| if (!socket.destroyed) socket.write(buf); | ||
| }, | ||
| waitFor(predicate, ms = 5000) { | ||
| return new Promise<ServerMessage>((res, rej) => { | ||
| const found = messages.find(predicate); | ||
| if (found) return res(found); | ||
| const timer = setTimeout(() => { | ||
| const i = waiters.findIndex((w) => w.predicate === predicate); | ||
| if (i >= 0) waiters.splice(i, 1); | ||
| rej(new Error(`waitFor timed out after ${ms}ms`)); | ||
| }, ms); | ||
| waiters.push({ predicate, resolve: res, reject: rej, timer }); | ||
| }); |
Fail fast when the socket closes.
waitFor() only times out today; if the daemon drops the connection first, callers sit on the timeout instead of seeing the real failure immediately. Reject all pending waiters from the close/error path so dead sockets do not turn into 5s hangs.
Suggested fix
export function connect(socketPath: string): Promise<DaemonClient> {
return new Promise((resolve, reject) => {
const socket = net.createConnection({ path: socketPath });
const decoder = new FrameDecoder();
const messages: ServerMessage[] = [];
const waiters: Waiter[] = [];
const closeCbs: Array<() => void> = [];
let isClosed = false;
+
+ const failWaiters = (err: Error) => {
+ for (const w of waiters) {
+ clearTimeout(w.timer);
+ w.reject(err);
+ }
+ waiters.length = 0;
+ };
@@
socket.on("close", () => {
isClosed = true;
+ failWaiters(new Error("socket closed"));
for (const cb of closeCbs) cb();
});
+ socket.on("error", (err) => {
+ failWaiters(err);
+ });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/pty-daemon/test/helpers/client.ts` around lines 70 - 96, The
waitFor() logic leaves pending waiters in the waiters array when the socket
closes or errors, causing callers to hang until their timeout; update the socket
close and error handling to reject and cleanup all pending waiters immediately:
inside the socket.on("close", ...) and the socket.once("error", ...) handlers
(and remove the current socket.once("error", reject) usage), iterate the waiters
array, clear each waiter.timer, call waiter.reject with a descriptive Error
(e.g. "socket closed" or the received error), then empty the waiters array and
invoke existing closeCbs so no waiter remains waiting on a dead socket (refer to
waitFor, waiters, socket, closeCbs).
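For context, the length-prefixed framing that `encodeFrame` and `FrameDecoder` provide for this client can be sketched as below. This is an illustrative reconstruction, not the package's protocol module; the 4-byte big-endian length header is an assumption about the wire layout:

```typescript
// Sketch: each frame is a 4-byte big-endian payload length followed by
// UTF-8 JSON. The decoder buffers arbitrary socket chunks, so frames may
// be split or coalesced by the transport without losing messages.
function encodeFrame(msg: unknown): Buffer {
  const payload = Buffer.from(JSON.stringify(msg), "utf8");
  const header = Buffer.alloc(4);
  header.writeUInt32BE(payload.length, 0);
  return Buffer.concat([header, payload]);
}

class FrameDecoder {
  private buf = Buffer.alloc(0);

  // Feed a chunk; return every complete message decoded so far.
  push(chunk: Buffer): unknown[] {
    this.buf = Buffer.concat([this.buf, chunk]);
    const out: unknown[] = [];
    while (this.buf.length >= 4) {
      const len = this.buf.readUInt32BE(0);
      if (this.buf.length < 4 + len) break; // wait for the rest of the frame
      out.push(JSON.parse(this.buf.subarray(4, 4 + len).toString("utf8")));
      this.buf = this.buf.subarray(4 + len);
    }
    return out;
  }
}
```

The "frame splitting across TCP chunks" tests above exercise exactly this property: a frame delivered in two chunks decodes once, after the second chunk arrives.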
collect(predicate, ms) {
  return new Promise<ServerMessage[]>((res) => {
    const collected: ServerMessage[] = messages.filter(predicate);
    const onMsg = (chunk: Buffer) => {
      void chunk;
      for (let i = collected.length; i < messages.length; i++) {
        const m = messages[i];
        if (m && predicate(m)) collected.push(m);
      }
    };
    socket.on("data", onMsg);
    setTimeout(() => {
      socket.off("data", onMsg);
      // Final sweep in case of late drains.
      for (let i = collected.length; i < messages.length; i++) {
        const m = messages[i];
        if (m && predicate(m)) collected.push(m);
      }
      res(collected);
    }, ms);
  });
Snapshot the collection cursor once.
collected.length is the number of matches, not the point where collect() started. If the log already contains matches, the listener re-scans old frames and can duplicate them in the returned array.
Suggested fix
collect(predicate, ms) {
return new Promise<ServerMessage[]>((res) => {
+ const startIndex = messages.length;
const collected: ServerMessage[] = messages.filter(predicate);
- const onMsg = (chunk: Buffer) => {
- void chunk;
- for (let i = collected.length; i < messages.length; i++) {
+ const onMsg = () => {
+ for (let i = startIndex; i < messages.length; i++) {
const m = messages[i];
if (m && predicate(m)) collected.push(m);
}
};🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/pty-daemon/test/helpers/client.ts` around lines 98 - 118, The
collect function currently rescans from collected.length which is the number of
matched messages and can cause duplicates; instead snapshot a start index once
when collect begins (e.g., const start = messages.length) and use that start
cursor when iterating new messages in the onMsg listener and the final sweep,
updating the cursor as you consume new messages (e.g., advance start =
messages.length after pushing) so you only process frames that arrived after
collect() was called while retaining any initially collected matches from
messages.filter(predicate).
Summary
Implements Phase 1 of apps/desktop/plans/20260429-pty-daemon-implementation.md (under review in #3851). This PR adds the standalone daemon package with no host-service changes; integration is a follow-up PR so both can be reviewed independently.

What's new
packages/pty-daemon/: a long-lived PTY-owning Node process that talks to host-service over a Unix socket. Routine host-service upgrades will not touch shells once integration lands.
- protocol/: framing, messages, version; the hello/hello-ack handshake and all ClientMessage/ServerMessage schemas
- Pty/: node-pty wrapper with cols/rows validation
- SessionStore/: Map<sessionId, Session> + 64 KB ring buffer per session. No persistence, no SQLite
- handlers/: open / input / resize / close / list / subscribe / unsubscribe. Stateless from the client's perspective
- Server/: 0600 auth boundary, per-connection subscription set, output/exit fan-out
- main.ts: Server.listen(), signal handling, graceful shutdown

Runtime: Node, not Bun
While building this I confirmed that node-pty 1.1 + Bun 1.3 don't get along: the master fd's
tty.ReadStream closes immediately under Bun and onData/onExit silently never fire. The daemon ships as a Node script in the desktop app bundle; host-service stays on Bun. Documented in README.md and package.json engines.node.

v1 lessons applied
The plan's "do not repeat" list is honored throughout:
HistoryWriter is gone. Source LOC sits at ~870 (over the 500-700 plan target). The overage is the wire-protocol layer (~225 LOC of message types + framing) that v1's in-process daemon didn't have to pay for.
Tests
bun test (24/24), bun run test:integration (6/6), bun run typecheck clean. End-to-end smoke also verified: starts with an srw------- socket, accepts connections, completes the protocol.

Test plan
- src/protocol/messages.ts: pin the v1 schema before the integration PR depends on it
- Server.ts (the load-bearing class)
- bun run test:integration locally to confirm shell spawn works on your machine

Out of scope (follow-ups)
- DaemonClient, terminal.ts refactor to byte-relay, ring-buffer migration off host-service, coordinator adoption logic, manifest plumbing, telemetry events.
- child_process.spawn stdio, needed only once we change the daemon protocol/binary.

Summary by cubic
Adds the standalone
@superset/pty-daemon: a long-lived Node PTY daemon with a versioned Unix-socket protocol. Includes a production build and a full integration suite; host-service integration comes next.

New Features
- Pty wrapper with cols/rows validation; raw byte writes.
- SessionStore with ~64 KB ring buffer per session.
- Server with 0600 socket perms, per-connection subscriptions, output/exit fan-out; close() now kills owned PTYs synchronously; graceful shutdown.
- pty-daemon CLI (--socket, --buffer-bytes); production bundle via build.ts → dist/pty-daemon.js (Node target).
Dependencies
- @superset/pty-daemon with node-pty@1.1.0.
- @superset/pty-daemon/protocol.

Written for commit b8784ea. Summary will update on new commits. Review in cubic
Summary by CodeRabbit