
Added Claude SDK endpoint #1250

Merged
Kitenite merged 4 commits into main from kitenite/ai-chat-ui-impl
Feb 6, 2026

Conversation

@Kitenite
Collaborator

@Kitenite Kitenite commented Feb 6, 2026

Description

Related Issues

Type of Change

  • Bug fix
  • New feature
  • Documentation
  • Refactor
  • Other (please describe):

Testing

Screenshots (if applicable)

Additional Notes

Summary by CodeRabbit

  • New Features

    • Claude agent endpoint with streaming responses via Server-Sent Events (SSE).
    • Multi-turn conversation support with per-session state for resumed interactions.
    • Health check reporting agent status, binary availability, and active sessions.
    • Agent starts alongside the main proxy and shuts down gracefully.
  • Chores

    • Added AI-related dependencies for the new agent integration.
    • Documentation updated to mark the Claude agent work as implemented.

@coderabbitai
Contributor

coderabbitai Bot commented Feb 6, 2026

📝 Walkthrough

Adds a Claude Agent Hono HTTP endpoint and a converter, streams Claude SDK responses as SSE in TanStack AI chunk format, adds two dependencies, and starts a separate agent server alongside the existing streams proxy with coordinated startup/shutdown.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Dependencies (`apps/streams/package.json`) | Added `@anthropic-ai/claude-agent-sdk` (^0.2.19) and `@tanstack/ai` (^0.3.0). |
| Claude Agent Endpoint (`apps/streams/src/claude-agent.ts`) | New Hono app exposing POST /query (SSE) backed by the Claude Agent SDK; validates requests, extracts the latest user prompt, streams SDK messages as SSE, maintains a sessionId→claudeSessionId mapping with TTL/eviction, supports aborts and system-init handling, provides a health endpoint, exports claudeAgentApp. |
| Message Format Converter (`apps/streams/src/sdk-to-ai-chunks.ts`) | New stateful converter translating Claude SDK stream events (content block lifecycle, deltas, tool calls, thinking, errors) into TanStack AI StreamChunk events; exports createConverter() and ConversionState. |
| Server Integration (`apps/streams/src/index.ts`) | Imports claudeAgentApp, introduces CLAUDE_AGENT_PORT (default 9090), starts a separate agentServer alongside the proxyServer, and updates startup/shutdown to close both servers. |
| Documentation (`docs/ai-chat-plan.md`) | Marked the Claude Agent Endpoint phase as DONE and recorded the two new files and package/index updates. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant HonoApp as Hono App\n(claude-agent.ts)
    participant SDK as Claude Agent SDK
    participant Converter as Message Converter\n(sdk-to-ai-chunks.ts)
    participant SSE as SSE Stream

    Client->>HonoApp: POST /query (messages, sessionId, opts)
    HonoApp->>HonoApp: validate request, select latest user prompt, load session mapping
    HonoApp->>SDK: query(prompt, env, options) (streaming)
    SDK-->>HonoApp: stream of SDKMessage events
    loop for each SDKMessage
        HonoApp->>Converter: convert(SDKMessage)
        Converter-->>HonoApp: StreamChunk[] (START/ARGS/CONTENT/FINISH/ERROR)
        HonoApp->>SSE: send chunks as SSE data events
    end
    SSE-->>Client: SSE stream (TanStack AI / AG-UI chunks) and final [DONE]
```
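
The SSE leg of the diagram amounts to serializing each converted chunk as a `data:` frame and terminating with the `[DONE]` sentinel. A minimal, dependency-free sketch of that framing (the chunk shapes are loose stand-ins, not the real `@tanstack/ai` types):

```typescript
// Sketch of SSE framing for converted chunks, as described in the walkthrough.
// StreamChunk here is a loose stand-in shape, not the real @tanstack/ai type.
type StreamChunk = { type: string; [key: string]: unknown };

// Serialize a sequence of chunks into SSE wire format, ending with the
// [DONE] sentinel the client watches for.
function toSSE(chunks: StreamChunk[]): string {
  const frames = chunks.map((c) => `data: ${JSON.stringify(c)}\n\n`);
  frames.push("data: [DONE]\n\n");
  return frames.join("");
}

const wire = toSSE([
  { type: "RUN_STARTED", runId: "run-1" },
  { type: "TEXT_MESSAGE_CONTENT", delta: "Hello" },
  { type: "RUN_FINISHED", runId: "run-1" },
]);
console.log(wire.split("\n\n").filter(Boolean).length); // 4 frames incl. [DONE]
```

In the real endpoint these frames are enqueued through a ReadableStream controller rather than joined into one string, but the wire format is the same.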

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly Related PRs

Poem

🐰 I nibble bytes and chase the stream,
Claude hums replies in SSE gleam,
Blocks turn to chunks and hop along,
Sessions resume, the flow stays strong,
A merry rabbit cheers the team. 🥕

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Description check | ⚠️ Warning | The PR description is entirely a template with no substantive content filled in; all sections are empty or contain only placeholder comments. | Fill in the description template with details about the Claude SDK endpoint implementation, mark the type of change as 'New feature', and document the testing approach and any known limitations. |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 45.45%, below the required 80.00% threshold. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (1 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title 'Added Claude SDK endpoint' directly and clearly summarizes the main change: implementing a new Claude agent endpoint using the Claude SDK. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/streams/src/index.ts (1)

26-40: ⚠️ Potential issue | 🟡 Minor

Graceful shutdown does not close the proxy or agent HTTP servers.

serve() returns a Server handle, but it's not captured for either the proxy (line 26) or the agent (line 31). On SIGINT, only the durable stream server is stopped — in-flight SSE connections on the proxy and agent ports are abandoned without draining.

Proposed fix
-serve({ fetch: app.fetch, port: PORT }, (info) => {
+const proxyServer = serve({ fetch: app.fetch, port: PORT }, (info) => {
 	console.log(`[streams] Proxy running on http://localhost:${info.port}`);
 });

 // Start Claude agent endpoint
-serve({ fetch: claudeAgentApp.fetch, port: AGENT_PORT }, (info) => {
+const agentServer = serve({ fetch: claudeAgentApp.fetch, port: AGENT_PORT }, (info) => {
 	console.log(
 		`[streams] Claude agent endpoint on http://localhost:${info.port}`,
 	);
 });

 // Graceful shutdown
 process.on("SIGINT", async () => {
+	proxyServer.close();
+	agentServer.close();
 	await durableStreamServer.stop();
 	process.exit(0);
 });
🤖 Fix all issues with AI agents
In `@apps/streams/src/claude-agent.ts`:
- Line 26: The claudeSessions Map is unbounded and never evicted; update session
lifecycle handling so entries are removed when a session completes or errors.
Specifically, after you add entries to claudeSessions on the "system-init" path,
ensure you remove the corresponding key when the associated result finishes or
errors (where the code handles the final result/error for a session), or replace
claudeSessions with a bounded cache (LRU or TTL) implementation; reference the
claudeSessions Map and the places that process "system-init" and the
result/error callbacks to add explicit cleanup or swap in a TTL/LRU-backed map.
- Around line 60-73: The inline magic values in the query options should be
extracted to module-level constants and the permission behavior made
configurable: replace the hardcoded model string, maxTurns number, and
permissionMode string used in the query(...) options (the call that uses
claudeSessionId, cwd, binaryPath, queryEnv, and AbortController) with named
constants (e.g., DEFAULT_CLAUDE_MODEL, DEFAULT_MAX_TURNS,
DEFAULT_PERMISSION_MODE) defined at the top of the file; add a config flag or
environment variable to control permissionMode instead of always using
"bypassPermissions", and add a brief in-code comment documenting the security
implications of bypassPermissions and recommending it be disabled or restricted
for non-local endpoints.
- Around line 49-55: The current construction of queryEnv spreads process.env
when agentEnv is missing, leaking all server env vars to the Claude subprocess;
instead build queryEnv by starting from an explicit allowlist of only the
variables the binary needs (e.g., NODE_ENV, PATH, any CLAUDE_* vars) and then
merge in agentEnv if provided. Modify the logic around queryEnv, agentEnv, and
the assignment of queryEnv.CLAUDE_CODE_ENTRYPOINT so that you initialize
queryEnv from a small allowlist (or a constant ALLOWED_CLAUDE_ENV array) and
only copy those keys from process.env, then overlay agentEnv on top if present,
and finally set CLAUDE_CODE_ENTRYPOINT = "sdk-ts".
- Around line 29-35: The handler currently uses c.req.json<T>() (assigned to
body) which only asserts types at compile time; add a Zod schema (e.g., define a
z.object with messages:
z.array(z.object({role:z.string(),content:z.string()})).optional or required per
spec, stream: z.boolean().optional, sessionId: z.string().optional, cwd:
z.string().optional, env: z.record(z.string()).optional) and validate the
incoming payload with schema.safeParse before using it; if validation fails
return an immediate 400 response with the parsed error details so downstream
code in claude-agent.ts uses a guaranteed-shape body.
- Line 71: The AbortController created and passed into query() is not being
triggered on abort, so wire it up: create a local const abortController = new
AbortController(), pass abortController.signal to the SDK request if the SDK
expects a signal (or ensure the options include the controller under the exact
key the SDK expects), and in the existing abort handler that calls
result.interrupt()/result.close() also call abortController.abort() so the
controller's signal actually cancels the underlying request/subprocess.
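
The bounded-cache fix in the first bullet can be sketched as a small TTL map with a size cap. The follow-up commit mentions a 24h TTL and a 1000-entry cap; the names and structure below are illustrative, not the repo's actual code:

```typescript
// Illustrative TTL + size-capped session map, along the lines of the fix the
// reviewer requests. Constants echo the later commit message (24h / 1000);
// everything else is a sketch.
const SESSION_TTL_MS = 24 * 60 * 60 * 1000;
const SESSION_MAX_SIZE = 1000;

type SessionEntry = { claudeSessionId: string; lastAccessedAt: number };
const claudeSessions = new Map<string, SessionEntry>();

function evictStaleSessions(now = Date.now()): void {
  // TTL sweep first: drop anything not touched within the window.
  for (const [key, entry] of claudeSessions) {
    if (now - entry.lastAccessedAt > SESSION_TTL_MS) claudeSessions.delete(key);
  }
  // Then drop the oldest entries if the map is still over capacity.
  if (claudeSessions.size > SESSION_MAX_SIZE) {
    const oldest = [...claudeSessions.entries()].sort(
      (a, b) => a[1].lastAccessedAt - b[1].lastAccessedAt,
    );
    for (const [key] of oldest.slice(0, claudeSessions.size - SESSION_MAX_SIZE)) {
      claudeSessions.delete(key);
    }
  }
}

function setClaudeSessionId(sessionId: string, claudeSessionId: string): void {
  claudeSessions.set(sessionId, { claudeSessionId, lastAccessedAt: Date.now() });
  evictStaleSessions();
}
```

A real LRU (e.g., a Map re-inserted on read, or a dedicated cache library) would avoid the sort, which is the point of the later nitpick about O(n log n) eviction.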

In `@apps/streams/src/sdk-to-ai-chunks.ts`:
- Around line 316-322: The code silently swallows JSON.parse errors when
building parsedInput for tool calls inside the block.type === "tool_use" branch;
update the try/catch around JSON.parse(block.argsAccumulator || "{}") to log the
parsing error and relevant context (at least block.toolCallId, block.toolName,
and the raw argsAccumulator) instead of silently falling back to {}—use the
module's existing logger (e.g., processLogger.error) or console.error if no
logger is available, then keep parsedInput = {} after logging so behavior is
unchanged except for diagnostics.
- Around line 394-438: In handleResultMessage, avoid emitting both RUN_ERROR and
RUN_FINISHED for the same message: when message.subtype?.startsWith("error")
push the RUN_ERROR chunk (with runId/state.runId and error details from
message.subtype) and then return the chunks immediately (or otherwise skip
creating the RUN_FINISHED chunk); ensure you preserve any usage data only for
non-error paths so the client’s StreamProcessor does not see a finish after an
error.
- Line 18: The module imports StreamChunk from `@tanstack/ai` but the repo also
defines a local StreamChunk interface in types.ts that downstream files
(stream-writer.ts, protocol.ts) use, causing a type mismatch; either remove the
local StreamChunk and replace its uses with the `@tanstack/ai` StreamChunk across
the codebase, or change sdk-to-ai-chunks.ts to return the local
types.StreamChunk (instead of `@tanstack/ai.StreamChunk`) so all files share the
same loose shape; update the import statements and the return type (in
sdk-to-ai-chunks.ts functions) and run type-check to ensure stream-writer.ts and
protocol.ts compile with the chosen single authoritative StreamChunk type.
- Around line 288-297: The thinking_delta branch currently emits a STEP_FINISHED
for every token which breaks the step lifecycle; update the thinking delta
handling so that in the switch/case for "thinking_delta" in
handleContentBlockDelta you emit an incremental event (e.g., a new event type
STEP_DELTA or similar) with delta.thinking (satisfying StreamChunk) instead of
STEP_FINISHED, and then emit the single STEP_FINISHED only from
handleContentBlockStop when the thinking block actually ends; refer to the
existing symbols handleContentBlockStart, handleContentBlockDelta,
handleContentBlockStop, the "thinking_delta" case, and the
STEP_FINISHED/StreamChunk shape to implement this change consistently.
🧹 Nitpick comments (3)
apps/streams/src/sdk-to-ai-chunks.ts (1)

377-383: toolName set to empty string for tool results — may confuse downstream consumers.

When emitting TOOL_CALL_END for user-message tool results, toolName is hardcoded to "". The SDK's tool_result block doesn't carry the tool name, so this is understandable, but downstream code may rely on toolName being present. Consider adding a brief comment explaining why it's empty, or using a sentinel like "unknown" if the StreamProcessor requires a non-empty value.

apps/streams/src/claude-agent.ts (1)

115-147: Multiple empty catch blocks swallow errors silently.

Lines 131, 138, and 145 have empty catch blocks. While these are defensive (controller/result may already be closed), the guideline says to at minimum log errors. A debug-level log helps diagnose issues in production.

Proposed fix — add minimal logging
 				try {
 					controller.enqueue(
 						encoder.encode(`data: ${JSON.stringify(errorChunk)}\n\n`),
 					);
 					controller.enqueue(encoder.encode("data: [DONE]\n\n"));
-				} catch {
-					// Controller may already be closed
+				} catch (enqueueErr) {
+					console.debug("[claude-agent] Failed to enqueue error chunk (controller likely closed):", enqueueErr);
 				}
apps/streams/package.json (1)

16-16: Pre-1.0 dependencies use caret ranges; even patch bumps may break.

Both @anthropic-ai/claude-agent-sdk (latest: 0.2.29) and @tanstack/ai (latest: 0.3.0) are pre-1.0. For 0.x versions, npm's caret only permits patch updates (^0.2.19 resolves to >=0.2.19 <0.3.0), but pre-1.0 packages often ship breaking changes even at the patch level. This is consistent with other 0.x deps in the file (e.g., @tanstack/db), but worth flagging since the Claude Agent SDK is still evolving. Consider pinning exactly (e.g., 0.2.19) if you encounter instability.

Also applies to: 20-20

```ts
 * - RUN_ERROR — error during execution
 */

import type { StreamChunk } from "@tanstack/ai";
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check all StreamChunk references and their context
rg -n --type=ts "StreamChunk" -C2 -g '!node_modules/**'

Repository: superset-sh/superset

Length of output: 16834


Remove or align StreamChunk type imports across the module.

The converter imports StreamChunk from @tanstack/ai and uses it to type-check returned chunks with satisfies, but apps/streams/src/types.ts exports a local StreamChunk interface with a loose shape ({ type: string; [key: string]: unknown }). Downstream code in stream-writer.ts and protocol.ts imports the local definition, creating a type inconsistency. The local catch-all type defeats the type safety benefits of @tanstack/ai's strict union of event types.

Either:

  1. Remove the local StreamChunk from types.ts and import from @tanstack/ai everywhere, or
  2. Have sdk-to-ai-chunks.ts return the local StreamChunk type if loose typing is intentional.
🤖 Prompt for AI Agents
In `@apps/streams/src/sdk-to-ai-chunks.ts` at line 18, The module imports
StreamChunk from `@tanstack/ai` but the repo also defines a local StreamChunk
interface in types.ts that downstream files (stream-writer.ts, protocol.ts) use,
causing a type mismatch; either remove the local StreamChunk and replace its
uses with the `@tanstack/ai` StreamChunk across the codebase, or change
sdk-to-ai-chunks.ts to return the local types.StreamChunk (instead of
`@tanstack/ai.StreamChunk`) so all files share the same loose shape; update the
import statements and the return type (in sdk-to-ai-chunks.ts functions) and run
type-check to ensure stream-writer.ts and protocol.ts compile with the chosen
single authoritative StreamChunk type.

Comment on lines +288 to +297

```ts
case "thinking_delta": {
	const stepId = `thinking-${index}`;
	return [
		{
			type: "STEP_FINISHED",
			stepId,
			delta: delta.thinking,
			timestamp: now,
		} satisfies StreamChunk,
	];
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's understand the file structure and find relevant type definitions
find . -type f -name "*.ts" -o -name "*.tsx" | grep -E "(stream|chunk|sdk)" | head -20

Repository: superset-sh/superset

Length of output: 832


🏁 Script executed:

# Look at the full context of the file to understand the pattern
wc -l apps/streams/src/sdk-to-ai-chunks.ts

Repository: superset-sh/superset

Length of output: 104


🏁 Script executed:

# Search for StreamChunk type definition
rg "type StreamChunk|interface StreamChunk" --type ts --type tsx -A 20

Repository: superset-sh/superset

Length of output: 91


🏁 Script executed:

# Look for STEP_FINISHED and STEP_DELTA event types
rg "STEP_FINISHED|STEP_DELTA" --type ts --type tsx

Repository: superset-sh/superset

Length of output: 91


🏁 Script executed:

# Check the full case statement structure to see how other delta events are handled
cat -n apps/streams/src/sdk-to-ai-chunks.ts | sed -n '280,320p'

Repository: superset-sh/superset

Length of output: 1197


🏁 Script executed:

# Search for StreamChunk definition
rg "type StreamChunk|interface StreamChunk" -A 30 -B 2

Repository: superset-sh/superset

Length of output: 3816


🏁 Script executed:

# Look for all event types used (STEP_FINISHED, STEP_DELTA, etc.)
rg "type: \"[A-Z_]+\"" apps/streams/src/sdk-to-ai-chunks.ts

Repository: superset-sh/superset

Length of output: 312


🏁 Script executed:

# Search for protocol definitions
cat -n apps/streams/src/protocol.ts | head -100

Repository: superset-sh/superset

Length of output: 3570


🏁 Script executed:

# Look at more context around thinking_delta and tool_call_delta to understand pattern
cat -n apps/streams/src/sdk-to-ai-chunks.ts | sed -n '250,310p'

Repository: superset-sh/superset

Length of output: 1696


🏁 Script executed:

# Look for any references to STEP_DELTA to see if it exists in the codebase
rg "STEP_DELTA" --type ts

Repository: superset-sh/superset

Length of output: 46


🏁 Script executed:

# Look for how thinking blocks/steps are finalized (when they actually finish)
rg "thinking|STEP_STARTED|STEP_FINISHED" -B 3 -A 3 apps/streams/src/sdk-to-ai-chunks.ts | head -80

Repository: superset-sh/superset

Length of output: 1694


🏁 Script executed:

# Check content block stop handling for thinking blocks
cat -n apps/streams/src/sdk-to-ai-chunks.ts | sed -n '306,360p'

Repository: superset-sh/superset

Length of output: 1805


🏁 Script executed:

# Look for how the step lifecycle should work
rg "STEP_STARTED.*STEP_FINISHED" -B 5 -A 5

Repository: superset-sh/superset

Length of output: 46


thinking_delta emits STEP_FINISHED on every delta — this is semantically incorrect and breaks the step lifecycle.

Each thinking token emits STEP_FINISHED, but STEP_FINISHED should only be emitted once when the thinking step actually completes. Currently:

  • handleContentBlockStart emits STEP_STARTED for thinking blocks
  • handleContentBlockDelta emits STEP_FINISHED repeatedly on each thinking_delta
  • handleContentBlockStop returns empty (no actual completion event)

This inverts the step lifecycle. The pattern used elsewhere (text deltas → TEXT_MESSAGE_CONTENT, tool args → TOOL_CALL_ARGS) streams incremental updates; thinking deltas should follow the same pattern with a distinct event type (e.g., STEP_DELTA) rather than reusing the completion marker. Emit STEP_FINISHED once in handleContentBlockStop when the thinking block actually ends.

🤖 Prompt for AI Agents
In `@apps/streams/src/sdk-to-ai-chunks.ts` around lines 288 - 297, The
thinking_delta branch currently emits a STEP_FINISHED for every token which
breaks the step lifecycle; update the thinking delta handling so that in the
switch/case for "thinking_delta" in handleContentBlockDelta you emit an
incremental event (e.g., a new event type STEP_DELTA or similar) with
delta.thinking (satisfying StreamChunk) instead of STEP_FINISHED, and then emit
the single STEP_FINISHED only from handleContentBlockStop when the thinking
block actually ends; refer to the existing symbols handleContentBlockStart,
handleContentBlockDelta, handleContentBlockStop, the "thinking_delta" case, and
the STEP_FINISHED/StreamChunk shape to implement this change consistently.
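
The corrected lifecycle the reviewer describes can be sketched as three small handlers: stream deltas while a thinking block is open, and emit STEP_FINISHED exactly once when it stops. The STEP_DELTA event name and the chunk shapes are illustrative assumptions, not the repo's actual types:

```typescript
// Sketch of the step lifecycle fix: incremental events per delta, a single
// completion event on stop. Event names and shapes are assumptions.
type StepChunk = { type: string; stepId: string; delta?: string };

function onThinkingStart(index: number): StepChunk[] {
  return [{ type: "STEP_STARTED", stepId: `thinking-${index}` }];
}

function onThinkingDelta(index: number, text: string): StepChunk[] {
  // Incremental update only; no completion marker here.
  return [{ type: "STEP_DELTA", stepId: `thinking-${index}`, delta: text }];
}

function onThinkingStop(index: number): StepChunk[] {
  // Single completion event when the block actually ends.
  return [{ type: "STEP_FINISHED", stepId: `thinking-${index}` }];
}

const events = [
  ...onThinkingStart(0),
  ...onThinkingDelta(0, "hmm"),
  ...onThinkingDelta(0, ", ok"),
  ...onThinkingStop(0),
];
// Exactly one STEP_FINISHED regardless of how many deltas arrived.
console.log(events.filter((e) => e.type === "STEP_FINISHED").length); // 1
```

This mirrors the text and tool-args pattern already used in the converter: incremental content events during the block, one terminal event at the boundary.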

@github-actions
Contributor

github-actions Bot commented Feb 6, 2026

🧹 Preview Cleanup Complete

The following preview resources have been cleaned up:

  • ✅ Neon database branch
  • ✅ Electric Fly.io app

Thank you for your contribution! 🎉

- Add Zod request validation matching existing codebase patterns
- Fix unbounded session map with TTL eviction (24h) and size cap (1000)
- Wire AbortController to request signal abort handler
- Fix RUN_ERROR + RUN_FINISHED double emit (early return after error)
- Add logging in silent catch blocks per AGENTS.md guidelines
- Extract magic numbers to named constants
- Capture serve() handles for graceful shutdown
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@apps/streams/src/claude-agent.ts`:
- Around line 100-109: The handler for app.post("/") currently calls
c.req.json() which can throw on malformed JSON before agentRequestSchema
validation; wrap the await c.req.json() call in a try-catch inside the
app.post("/") handler, catch JSON parse errors and return c.json({ error:
"Invalid JSON", details: err.message }, 400), then only run
agentRequestSchema.safeParse on the parsed body; keep existing logic for schema
validation and responses (refer to agentRequestSchema and the app.post("/")
request flow) so malformed JSON produces a clean 400 instead of a 500.
- Around line 191-213: The catch block handling non-AbortError exceptions in the
streaming logic (inside the async streaming function in claude-agent.ts where
converter.state.runId, controller, and encoder are used) writes a RUN_ERROR SSE
to the client but never logs the actual exception server-side; update that catch
block to log the error (e.g., via processLogger.error or console.error)
including the error object/stack and contextual info such as
converter.state.runId and a short message before or immediately after attempting
to enqueue the RUN_ERROR and [DONE] events so the server retains full diagnostic
details if enqueuing fails.

In `@apps/streams/src/sdk-to-ai-chunks.ts`:
- Around line 348-394: The function handleUserMessage currently dereferences
message.message.content without guarding against message.message being
null/undefined; add a defensive null-check at the top of handleUserMessage
(e.g., ensure message.message exists and is the expected shape or bail out)
before reading content, then proceed only when content is an array; reference
the existing SDKUserMessage type and the local variable content inside
handleUserMessage to locate where to add the guard and return an empty array
early if message.message is missing or invalid.
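
The null guard requested for handleUserMessage is a small early return. The message shape here is an assumption standing in for the SDK's SDKUserMessage, and the emitted chunk is purely illustrative:

```typescript
// Sketch of the defensive guard: bail out unless message.message.content is
// actually an array. Shapes are assumptions, not the real SDKUserMessage type.
type UserMessage = { message?: { content?: unknown } | null };
type UserChunk = { type: string; block: unknown };

function handleUserMessage(message: UserMessage): UserChunk[] {
  // Guard first: missing or malformed payloads emit nothing.
  if (!message.message || !Array.isArray(message.message.content)) return [];
  return message.message.content.map((block) => ({ type: "USER_BLOCK", block }));
}
```

With the guard in place, a malformed SDK message degrades to an empty chunk list instead of throwing deep inside the converter.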
🧹 Nitpick comments (6)
apps/streams/src/sdk-to-ai-chunks.ts (2)

146-167: Type-narrowing via as casts rather than runtime checks on external data.

convertMessage narrows SDK messages using as casts (e.g., message as SDKPartialAssistantMessage) after switching on message.type. Since these messages come from an external SDK, if the shape ever diverges (missing event, missing message.content, etc.), this will produce runtime errors deep in handlers rather than at the boundary. The switch on message.type is a good start, but the inner fields are trusted implicitly.

Consider adding lightweight guards (e.g., checking "event" in message before casting) for the critical paths, or use Zod .safeParse for the SDK message types. Based on learnings, validate external API data as untrusted by handling missing fields and unexpected shapes with tolerant parsing and explicit fallbacks.


58-63: Catch-all variant in SDKMessage union weakens exhaustiveness checking.

The | { type: string; [key: string]: unknown } variant (line 63) overlaps with every other union member, so TypeScript can never flag an unhandled message.type in a switch. This is fine for forward-compatibility with unknown SDK message types, but it means the default branch silently absorbs any typo or new message type.

If forward-compatibility is intentional, this is acceptable — just worth noting that exhaustiveness checks are disabled.

apps/streams/src/claude-agent.ts (2)

58-76: Session eviction performs O(n log n) sort on every setClaudeSessionId call when over capacity.

evictStaleSessions() is called on every set (line 87). The TTL sweep is O(n), and when still over capacity, it sorts all entries. With SESSION_MAX_SIZE = 1000, this is fine for now, but if the cap increases, consider a more efficient eviction strategy (e.g., LRU linked list or a min-heap by lastAccessedAt).


170-178: System init message handling casts to Record<string, unknown> — consider a type guard.

Line 171 casts to Record<string, unknown> and then accesses .type, .subtype, and .session_id with additional casts. A small type guard or check (e.g., typeof msg.session_id === "string") at line 173 would be more defensive, since the SDK message shape is external.

apps/streams/src/index.ts (2)

40-46: No SIGTERM handler — only SIGINT is caught.

Container orchestrators (Docker, Kubernetes) send SIGTERM for graceful shutdown, not SIGINT. This is an existing pattern in the file, so not necessarily in scope for this PR, but worth noting for operational correctness.


6-8: No port-conflict detection between PORT, INTERNAL_PORT, and AGENT_PORT.

If any two of these resolve to the same value (all default to different ports, but can be overridden via env), one serve() call will fail with an EADDRINUSE error. Consider a startup check or at least a log warning.
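
The startup check suggested here can be a few lines run before calling serve(). The port names follow the walkthrough; the check itself is an illustrative sketch:

```typescript
// Sketch of a startup port-collision check: group configured server names by
// port and report any group with more than one member. Names follow the
// walkthrough (PORT, INTERNAL_PORT, CLAUDE_AGENT_PORT); the helper is assumed.
function findPortConflicts(ports: Record<string, number>): string[][] {
  const byPort = new Map<number, string[]>();
  for (const [name, port] of Object.entries(ports)) {
    const names = byPort.get(port) ?? [];
    names.push(name);
    byPort.set(port, names);
  }
  // Each returned group names two or more servers sharing one port.
  return [...byPort.values()].filter((names) => names.length > 1);
}

const conflicts = findPortConflicts({ PORT: 8080, INTERNAL_PORT: 8081, CLAUDE_AGENT_PORT: 9090 });
console.log(conflicts.length); // 0: distinct defaults are fine
```

Running this at startup turns a confusing mid-boot EADDRINUSE into an explicit error (or warning) naming the colliding env vars.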

Comment thread apps/streams/src/claude-agent.ts
Comment thread apps/streams/src/claude-agent.ts
Comment thread apps/streams/src/sdk-to-ai-chunks.ts
…ssage

- Wrap c.req.json() in try-catch for clean 400 on invalid JSON
- Log non-abort stream errors server-side before sending to client
- Add null guard on message.message in handleUserMessage
Dual StreamChunk types exist: strict union from @tanstack/ai (used by
sdk-to-ai-chunks.ts) and loose catch-all in types.ts (used by protocol.ts).
Works at runtime via JSON boundary but undermines type safety. Documented
fix path for a future cleanup PR.
@Kitenite Kitenite merged commit 2e4c92a into main Feb 6, 2026
13 checks passed
@Kitenite Kitenite deleted the kitenite/ai-chat-ui-impl branch February 6, 2026 19:08