feat: add shared ai-chat package#1202

Merged
Kitenite merged 5 commits into main from split-ai-chat/shared-package on Feb 4, 2026

Conversation

Kitenite (Collaborator) commented Feb 4, 2026

Summary

  • Add packages/ai-chat with shared stream client, materialization layer, and hooks (useChatSession, useCollectionData)
  • Add shared ChatInput and PresenceBar components
  • Add type definitions for AI chat messages and sessions
  • Add streams URL constant to packages/shared

Context

Split from the ai-chat branch. This is the foundation package; the desktop UI (PR #4) and mobile (PR #5) PRs depend on it.

Test plan

  • Verify packages/ai-chat builds cleanly
  • Verify no import errors from consumers

Summary by CodeRabbit

  • New Features

    • ChatInput and PresenceBar UI components with typing indicators and avatar overflow
    • End-to-end real-time chat: durable streaming client, session actions, message materialization pipeline, schemas, and a useChatSession hook
    • SSR-safe collection subscription hook for live data
  • Chores

    • New ai-chat package manifest and TypeScript config; dependency version bumps across apps/packages
  • Feature Flags

    • Added AI chat feature flag ("ai-chat")
  • Tests

    • Added tests for message materialization ordering

Add packages/ai-chat with:
- Stream client and materialization layer (durable-streams)
- useChatSession and useCollectionData hooks
- ChatInput and PresenceBar shared components
- Type definitions for AI chat messages and sessions
- Shared constants for streams URL

coderabbitai Bot commented Feb 4, 2026

Caution: Review failed. The pull request is closed.

📝 Walkthrough

Adds a new @superset/ai-chat package: package manifest, TypeScript config, UI components (ChatInput, PresenceBar), streaming primitives (client, actions, schema, materialize), React hooks (useChatSession, useCollectionData), shared types, tests, and a feature flag entry.

Changes

  • Package config & tooling (packages/ai-chat/package.json, packages/ai-chat/tsconfig.json): New package manifest with exports, deps, and scripts, plus a TypeScript project config for the ai-chat package.
  • UI components & exports (packages/ai-chat/src/components/ChatInput/ChatInput.tsx, packages/ai-chat/src/components/ChatInput/index.ts, packages/ai-chat/src/components/PresenceBar/PresenceBar.tsx, packages/ai-chat/src/components/PresenceBar/index.ts, packages/ai-chat/src/components/index.ts): Adds ChatInput (controlled/uncontrolled, typing indicator, submit handling, auto-resize) and PresenceBar (avatars, overflow count, typing indicator) plus barrel exports.
  • Streaming core & client (packages/ai-chat/src/stream/actions.ts, packages/ai-chat/src/stream/client.ts, packages/ai-chat/src/stream/index.ts, packages/ai-chat/src/stream/schema.ts): Implements stream actions (createStream, session actions), the durable chat client (lifecycle, presence, drafts, sendMessage), zod schemas for chunks/presence/drafts, and a stream API barrel export.
  • Materialization & tests (packages/ai-chat/src/stream/materialize.ts, packages/ai-chat/src/stream/materialize.test.ts): Adds the materializeMessages pipeline converting raw stream chunks into MessageRow turns (content blocks, tool results, ordering), with tests verifying ordering behavior.
  • Collection utilities & hook (packages/ai-chat/src/stream/useCollectionData.ts, packages/ai-chat/src/stream/useChatSession.ts): Adds the SSR-safe useCollectionData hook for TanStack DB collections and the useChatSession hook integrating client, subscriptions, materialization, presence, drafts, and actions (sendMessage, setDraft, connect/disconnect).
  • Types & root export (packages/ai-chat/src/types.ts, packages/ai-chat/src/index.ts): Introduces shared stream/chat types and a root re-export surface covering components, stream, and types.
  • Feature flag (packages/shared/src/constants.ts): Adds AI_CHAT: "ai-chat" to FEATURE_FLAGS.
  • Dependency bumps & minor changes (apps/api/package.json, packages/trpc/package.json, apps/docs/package.json, apps/marketing/package.json, apps/web/package.json, packages/ui/package.json, apps/desktop/src/renderer/index.tsx): Bumps @anthropic-ai/sdk and lucide-react versions across packages; minor ipcRenderer guard/cleanup adjustment in the desktop renderer.

Sequence Diagram

sequenceDiagram
    participant UI as React Component
    participant Hook as useChatSession Hook
    participant Client as DurableChatClient
    participant Collections as Collections (chunks, presence, drafts)
    participant Server as Stream Backend

    UI->>Hook: initialize(options)
    Hook->>Client: create or use client
    Client->>Collections: expose collections
    Client->>Server: connect()/preload()
    Server-->>Collections: hydrate initial data

    UI->>Hook: sendMessage(content)
    Hook->>Client: sendMessage(content)
    Client->>Collections: append chunk
    Collections->>Server: POST events

    Collections-->>Hook: change events
    Hook->>Hook: materializeMessages(chunks)
    Hook-->>UI: messages, streamingMessage, users, drafts

    UI->>Hook: setDraft(content)
    Hook->>Client: updateDraft(content)
    Client->>Collections: update draft
    Collections->>Server: POST draft event

    UI->>Hook: disconnect()
    Hook->>Client: disconnect()
    Client->>Collections: remove presence
    Client->>Server: signal leave

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I hopped through streams and TypeScript rows,
Avatars, drafts, and typing glows.
I stitched the client, the hook, the view,
Messages materialize, fresh as dew.
Carrots of code — a small hop for you.

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 75.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Title check (✅ Passed): The title 'feat: add shared ai-chat package' clearly and concisely summarizes the main change: introducing a new shared ai-chat package with components, hooks, and types.
  • Description check (✅ Passed): The PR description covers the primary changes (new package, components, hooks, types) and includes context about dependencies and a test plan, though some template sections are not filled.


coderabbitai (Bot) left a comment

Actionable comments posted: 6

🤖 Fix all issues with AI agents
In `@packages/ai-chat/package.json`:
- Line 25: "react" is listed under both "dependencies" and "peerDependencies", which can produce duplicate React instances. Remove "react" from "dependencies" (moving it to "devDependencies" for local builds) and keep it only in "peerDependencies" so consumers resolve React from their own install.
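A minimal sketch of the corrected manifest layout; the version ranges below are illustrative, not taken from the PR. The key point is that "react" no longer appears under "dependencies": consumers supply their own copy via the peer range, and the devDependency only satisfies local builds and tests.

```json
{
  "name": "@superset/ai-chat",
  "devDependencies": {
    "react": "^18.3.1"
  },
  "peerDependencies": {
    "react": ">=18"
  }
}
```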

In `@packages/ai-chat/src/components/ChatInput/ChatInput.tsx`:
- Around lines 88-100: The typing timeout stored in typingTimeoutRef is never cleared on unmount. Add a cleanup effect with an empty dependency array that calls clearTimeout on typingTimeoutRef.current and sets it to null, so a pending timeout cannot fire onTypingChange after the component unmounts.

In `@packages/ai-chat/src/stream/client.ts`:
- Around lines 231-241: disconnect() fires this._removePresence() and swallows failures with .catch(() => {}). Capture and log the error (e.g., this.options.logger?.error(...) or console.error) so failures in _removePresence() are visible, while still not awaiting the call.

In `@packages/ai-chat/src/stream/materialize.ts`:
- Around lines 293-316: The catch in the content_block_stop handler silently swallows JSON.parse errors for BetaToolUseBlock inputs. Log the failure (including the index, the accumulated string, and the parse error message) before continuing, so diagnostics for malformed JSON are retained.
- Around lines 125-131: The turn-boundary logic only splits when chunkType === "stream_event" and lastRenderingType === "user", so a non-streaming assistant chunk merges into the previous tool-result turn. Treat chunkType === "assistant" the same as "stream_event" when lastRenderingType === "user", then push materializeTurn(currentTurnChunks) and reset currentTurnChunks as before.
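The intended boundary behavior can be sketched in isolation. The chunk shape and function names below are illustrative, not the real materialize.ts API:

```typescript
// Illustrative chunk shape; the real materialize.ts types are richer.
type Chunk = {
	chunkType: "user_input" | "stream_event" | "assistant";
	text: string;
};

// A turn boundary occurs when an assistant-produced chunk (streamed or
// not) follows user-rendered content. Checking only "stream_event" here
// would merge a non-streaming assistant chunk into the previous turn.
function splitTurns(chunks: Chunk[]): Chunk[][] {
	const turns: Chunk[][] = [];
	let current: Chunk[] = [];
	let lastRenderingType: "user" | "assistant" | null = null;
	for (const chunk of chunks) {
		const isAssistantChunk =
			chunk.chunkType === "stream_event" || chunk.chunkType === "assistant";
		if (isAssistantChunk && lastRenderingType === "user" && current.length > 0) {
			turns.push(current);
			current = [];
		}
		current.push(chunk);
		lastRenderingType = isAssistantChunk ? "assistant" : "user";
	}
	if (current.length > 0) turns.push(current);
	return turns;
}
```

With this condition, a [user_input, assistant] sequence yields two turns even when the assistant chunk never streamed.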

In `@packages/ai-chat/src/stream/useChatSession.ts`:
- Around lines 214-221: The useMemo for draft reads clientOptions.user but depends only on [draftRows], so it returns stale data when the user changes. Add clientOptions.user (or, more narrowly, clientOptions.user?.userId) to the dependency array.
🧹 Nitpick comments (9)
packages/ai-chat/src/components/ChatInput/ChatInput.tsx (1)

85-85: Extract magic numbers to named constants.

The values 200 (max textarea height) and 2000 (typing debounce ms) are hardcoded inline. Per coding guidelines, extract these to named constants at module top for clarity and maintainability.

Proposed refactor

Add at the top of the file after imports:

const MAX_TEXTAREA_HEIGHT_PX = 200;
const TYPING_DEBOUNCE_MS = 2000;

Then update usages:

-				textarea.style.height = `${Math.min(textarea.scrollHeight, 200)}px`;
+				textarea.style.height = `${Math.min(textarea.scrollHeight, MAX_TEXTAREA_HEIGHT_PX)}px`;
 			typingTimeoutRef.current = setTimeout(() => {
 				onTypingChange?.(false);
-			}, 2000);
+			}, TYPING_DEBOUNCE_MS);

As per coding guidelines: "Extract hardcoded magic numbers, strings, and enums to named constants at module top instead of leaving them inline in logic".

Also applies to: 95-97

packages/ai-chat/src/components/PresenceBar/PresenceBar.tsx (2)

15-22: Edge case: getInitials may return empty string for malformed names.

If name contains leading/trailing spaces or multiple consecutive spaces, split(" ") produces empty strings, and n[0] on an empty string returns undefined, which becomes "undefined" after joining. Consider filtering empty parts.

♻️ Suggested fix
 function getInitials(name: string): string {
 	return name
 		.split(" ")
+		.filter(Boolean)
 		.map((n) => n[0])
 		.join("")
 		.toUpperCase()
 		.slice(0, 2);
 }

44-60: Consider extracting the max visible avatars constant.

The value 5 appears twice (lines 44 and 56). Extracting to a named constant improves maintainability.

♻️ Suggested refactor
+const MAX_VISIBLE_AVATARS = 5;
+
 export function PresenceBar({
 	viewers,
 	typingUsers,
 	className,
 }: PresenceBarProps) {
 	// ...
-						{viewers.slice(0, 5).map((user) => (
+						{viewers.slice(0, MAX_VISIBLE_AVATARS).map((user) => (
 							// ...
 						))}
 					</div>
-					{viewers.length > 5 && (
+					{viewers.length > MAX_VISIBLE_AVATARS && (
 						<span className="text-xs text-muted-foreground">
-							+{viewers.length - 5}
+							+{viewers.length - MAX_VISIBLE_AVATARS}
 						</span>
 					)}
packages/ai-chat/src/stream/useCollectionData.ts (1)

47-63: Redundant ref initialization pattern.

The subscribeRef is initialized with a function (lines 47-53), then immediately overwritten on every render (lines 56-63). The initial value is never used. The same applies to getSnapshotRef (lines 67-94). Consider removing the initial function body and using a placeholder, or consolidating the logic.

♻️ Simplified pattern
-	// Subscribe callback - increments version to signal data changed.
-	// Stored in ref to maintain stable reference for useSyncExternalStore.
-	const subscribeRef = useRef((onStoreChange: () => void): (() => void) => {
-		const subscription = collection.subscribeChanges(() => {
-			versionRef.current++;
-			onStoreChange();
-		});
-		return () => subscription.unsubscribe();
-	});
-
-	// Update subscribe ref when collection changes
-	subscribeRef.current = (onStoreChange: () => void): (() => void) => {
+	// Subscribe callback - stored in ref for stable identity with useSyncExternalStore
+	const subscribeRef = useRef<(onStoreChange: () => void) => () => void>(null!);
+	subscribeRef.current = (onStoreChange: () => void): (() => void) => {
 		const subscription = collection.subscribeChanges(() => {
 			versionRef.current++;
 			console.log(`[ai-chat/collection] change detected, version=${versionRef.current}, size=${collection.size}`);
 			onStoreChange();
 		});
 		return () => subscription.unsubscribe();
 	};
packages/ai-chat/src/stream/actions.ts (1)

33-43: Error message lacks response context for debugging.

When the POST fails, the error only includes the status code. Including the response body (if available) would help diagnose issues, especially for 4xx/5xx errors with descriptive messages.

♻️ Enhanced error context
 async function appendToStream(url: string, events: unknown[]): Promise<void> {
 	const response = await fetch(url, {
 		method: "POST",
 		headers: { "Content-Type": "application/json" },
 		body: JSON.stringify(events),
 	});
 
 	if (!response.ok) {
-		throw new Error(`Failed to append to stream: ${response.status}`);
+		const body = await response.text().catch(() => "");
+		throw new Error(
+			`Failed to append to stream: ${response.status}${body ? ` - ${body}` : ""}`,
+		);
 	}
 }
packages/ai-chat/src/stream/client.ts (2)

319-338: Mutable state via type cast circumvents type safety.

Line 321 uses a cast to mutate this.options.user. Consider storing user in a separate mutable instance field rather than mutating the options object.

♻️ Cleaner mutable state pattern
 export class DurableChatClient {
 	readonly sessionId: string;
 	readonly actorId: string;
+	private _user: SessionUser | null;
 
 	private readonly options: DurableChatClientOptions;
 	// ...
 
 	constructor(options: DurableChatClientOptions) {
 		this.options = options;
+		this._user = options.user ?? null;
 		// ...
 	}
 
+	get user(): SessionUser | null {
+		return this._user;
+	}
+
 	setUser(user: SessionUser | null): void {
-		const previousUser = this.options.user;
-		(this.options as { user: SessionUser | null }).user = user;
+		const previousUser = this._user;
+		this._user = user;
 		// ... rest unchanged, but use this._user instead of this.options.user
 	}

380-393: Error response body not included in error message.

Same issue as in actions.ts - the error message only includes status code. Consider including response body for better debugging.

♻️ Enhanced error context
 	private async _appendToStream(events: unknown[]): Promise<void> {
 		const response = await fetch(
 			`${this.options.proxyUrl}/streams/${this.sessionId}`,
 			{
 				method: "POST",
 				headers: { "Content-Type": "application/json" },
 				body: JSON.stringify(events),
 			},
 		);
 
 		if (!response.ok) {
-			throw new Error(`Failed to append to stream: ${response.status}`);
+			const body = await response.text().catch(() => "");
+			throw new Error(
+				`Failed to append to stream: ${response.status}${body ? ` - ${body}` : ""}`,
+			);
 		}
 	}
packages/ai-chat/src/stream/useChatSession.ts (1)

180-195: Verbose debug logging may impact performance in production.

The logging at lines 183-195 iterates over all messages and logs detailed state on every materialization. Consider guarding with a debug flag or reducing verbosity for production builds.

♻️ Conditional debug logging
+const DEBUG = process.env.NODE_ENV === "development";
+
 // In the useMemo:
-		if (all.length > 0) {
+		if (DEBUG && all.length > 0) {
 			console.log(
 				`[ai-chat/session] all messages:`,
 				all.map((m) => ({
 					id: m.id.slice(0, 8),
 					role: m.role,
 					isComplete: m.isComplete,
 					isStreaming: m.isStreaming,
 					contentLen: m.content.length,
 					blocks: m.contentBlocks.length,
 				})),
 			);
 		}
packages/ai-chat/src/stream/materialize.ts (1)

88-121: Hoist chunk/message type literals into module constants.

This will reduce typos and make future refactors safer.

♻️ Suggested refactor
+const CHUNK_TYPE_USER_INPUT = "user_input" as const;
+const CHUNK_TYPE_STREAM_EVENT = "stream_event" as const;
+const CHUNK_TYPE_ASSISTANT = "assistant" as const;
+const CHUNK_TYPE_USER = "user" as const;
+const ROLE_USER = "user" as const;
+const ROLE_ASSISTANT = "assistant" as const;
+const DEFAULT_ASSISTANT_ACTOR_ID = "claude" as const;
-		if (chunkType === "user_input") {
+		if (chunkType === CHUNK_TYPE_USER_INPUT) {
...
-			chunkType === "stream_event" ||
-			chunkType === "assistant" ||
-			chunkType === "user";
+			chunkType === CHUNK_TYPE_STREAM_EVENT ||
+			chunkType === CHUNK_TYPE_ASSISTANT ||
+			chunkType === CHUNK_TYPE_USER;
-				role: "user",
+				role: ROLE_USER,
...
-		role: "assistant",
+		role: ROLE_ASSISTANT,
-		actorId: "claude",
+		actorId: DEFAULT_ASSISTANT_ACTOR_ID,

As per coding guidelines, extract hardcoded magic numbers, strings, and enums to named constants at module top instead of leaving them inline in logic.

Comment thread (outdated): packages/ai-chat/package.json

Comment on lines +88 to +100 of packages/ai-chat/src/components/ChatInput/ChatInput.tsx:
// Typing indicator with debounce
if (typingTimeoutRef.current) {
clearTimeout(typingTimeoutRef.current);
}

if (newValue.trim()) {
onTypingChange?.(true);
typingTimeoutRef.current = setTimeout(() => {
onTypingChange?.(false);
}, 2000);
} else {
onTypingChange?.(false);
}

⚠️ Potential issue | 🟡 Minor

Memory leak: Clear typing timeout on unmount.

The typingTimeoutRef timeout is never cleared when the component unmounts. If unmount occurs while the timeout is pending, the callback fires on an unmounted component, potentially causing state updates on unmounted components or memory leaks.

Proposed fix: Add cleanup effect
+import { type KeyboardEvent, useCallback, useEffect, useRef, useState } from "react";
-import { type KeyboardEvent, useCallback, useRef, useState } from "react";

Add a cleanup effect after the refs:

 	const textareaRef = useRef<HTMLTextAreaElement>(null);
 	const typingTimeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null);

+	// Cleanup timeout on unmount
+	useEffect(() => {
+		return () => {
+			if (typingTimeoutRef.current) {
+				clearTimeout(typingTimeoutRef.current);
+			}
+		};
+	}, []);
+
 	// Support both controlled and uncontrolled modes

Comment on lines +231 to +241 of packages/ai-chat/src/stream/client.ts:
disconnect(): void {
// Remove presence before disconnecting
if (this.options.user && this._isConnected) {
this._removePresence().catch(() => {});
}

this._db.close();
this._abortController.abort();
this._isConnected = false;
this._setConnectionStatus("disconnected");
}

⚠️ Potential issue | 🟡 Minor

Silent error swallowing in disconnect().

Line 234 uses .catch(() => {}) which silently swallows errors from _removePresence(). Per coding guidelines, errors should at minimum be logged. The fire-and-forget pattern is understandable here, but a log would aid debugging disconnection issues.

🛡️ Suggested fix
 	disconnect(): void {
 		// Remove presence before disconnecting
 		if (this.options.user && this._isConnected) {
-			this._removePresence().catch(() => {});
+			this._removePresence().catch((err) => {
+				console.warn(`[ai-chat/client] failed to remove presence on disconnect:`, err);
+			});
 		}
 
 		this._db.close();

Comment thread (outdated): packages/ai-chat/src/stream/materialize.ts

Comment on lines +293 to +316:
try {
(block as BetaToolUseBlock).input = JSON.parse(accumulated);
} catch {
// Partial JSON
}
}
} else if (
delta.type === "thinking_delta" &&
block.type === "thinking"
) {
(block as { thinking: string }).thinking += delta.thinking;
}
break;
}

case "content_block_stop": {
const index = (event as { index: number }).index;
const accumulated = jsonAccumulators.get(index);
const block = blocks[index];
if (accumulated && block?.type === "tool_use") {
try {
(block as BetaToolUseBlock).input = JSON.parse(accumulated);
} catch {
// Best effort

⚠️ Potential issue | 🟡 Minor

Log final JSON parse failures for tool_use inputs.

The catch blocks currently swallow parse errors. If malformed JSON persists to content_block_stop, you’ll lose diagnostics. Logging once at stop keeps signal without spamming.

🧭 Suggested change
 				if (accumulated && block?.type === "tool_use") {
 					try {
 						(block as BetaToolUseBlock).input = JSON.parse(accumulated);
 					} catch {
-						// Best effort
+						console.warn(
+							`[ai-chat/materialize] tool_use JSON parse failed at block ${index}`,
+						);
 					}
 					jsonAccumulators.delete(index);
 				}

As per coding guidelines, never swallow errors silently; at minimum log errors with context before rethrowing or handling them explicitly.


Comment on lines +214 to +221 of packages/ai-chat/src/stream/useChatSession.ts:
const draft = useMemo((): string => {
const user = clientOptions.user;
if (!user) return "";
const myDraft = draftRows.find(
(d: StreamDraft) => d.userId === user.userId,
);
return myDraft?.content ?? "";
}, [draftRows]);

⚠️ Potential issue | 🟡 Minor

Missing dependency in useMemo for draft.

The memo reads clientOptions.user (line 215-216) but the dependency array only includes [draftRows]. If user changes without draftRows changing, the memo returns stale data.

🐛 Fix missing dependency
 	// Current user's draft
 	const draft = useMemo((): string => {
 		const user = clientOptions.user;
 		if (!user) return "";
 		const myDraft = draftRows.find(
 			(d: StreamDraft) => d.userId === user.userId,
 		);
 		return myDraft?.content ?? "";
-	}, [draftRows]);
+	}, [draftRows, clientOptions.user]);


github-actions Bot commented Feb 4, 2026

🚀 Preview Deployment

🔗 Preview Links

  • Neon Database (Neon): View Branch
  • Fly.io Electric (Fly.io): View App
  • Vercel API (Vercel): Open Preview
  • Vercel Web (Vercel): Open Preview
  • Vercel Marketing (Vercel): Open Preview
  • Vercel Admin (Vercel): Open Preview
  • Vercel Docs (Vercel): Open Preview

Preview updates automatically with new commits

coderabbitai (Bot) left a comment

Actionable comments posted: 6

🤖 Fix all issues with AI agents
In `@apps/api/package.json`:
- Line 14: The upgrade to `@anthropic-ai/sdk` ^0.72.1 changed the Messages API structured-output field from output_format to output_config. Search the codebase for the OutputFormat type/export and any uses of "output_format", replace them with the new "output_config" shape and types, and update tests and type assertions to the new schema to restore compatibility with the SDK v0.72.x API.

In `@packages/ai-chat/src/stream/client.ts`:
- Around lines 386-399: _appendToStream has no timeout or retry, so a hung request blocks indefinitely. Wrap the fetch in an AbortController with a bounded timeout (cleared after completion), retry the POST to `${this.options.proxyUrl}/streams/${this.sessionId}` a few times with incremental backoff on transient failures (network errors or 5xx responses), throw immediately on client (4xx) errors, and surface a clear error if all attempts fail.
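A minimal sketch of the suggested timeout-plus-retry shape, decoupled from the client. The helper name, option names, and the "client:" error-tagging convention are assumptions for illustration; in _appendToStream the signal would be passed to fetch and a 4xx response mapped to a non-retryable error.

```typescript
// Hypothetical helper: retries a request-like function with a bounded
// per-attempt timeout and linear backoff. Errors whose message starts
// with "client:" stand in for HTTP 4xx responses and are not retried.
async function withRetry<T>(
	attempt: (signal: AbortSignal) => Promise<T>,
	opts: { retries?: number; timeoutMs?: number; baseDelayMs?: number } = {},
): Promise<T> {
	const { retries = 3, timeoutMs = 10_000, baseDelayMs = 250 } = opts;
	let lastError: unknown;
	for (let i = 0; i <= retries; i++) {
		const controller = new AbortController();
		const timer = setTimeout(() => controller.abort(), timeoutMs);
		try {
			// In the real method, controller.signal would be passed to fetch(...).
			return await attempt(controller.signal);
		} catch (err) {
			lastError = err;
			// Client errors are permanent: rethrow immediately, no retry.
			if (err instanceof Error && err.message.startsWith("client:")) throw err;
			if (i < retries) {
				// Linear backoff between attempts.
				await new Promise((r) => setTimeout(r, baseDelayMs * (i + 1)));
			}
		} finally {
			clearTimeout(timer);
		}
	}
	throw lastError;
}
```

The retry count and delays are placeholders; the important properties are that every attempt is bounded by the abort timeout and that only transient failures are retried.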

In `@packages/ai-chat/src/stream/useCollectionData.ts`:
- Around line 100-108: Update the useSyncExternalStore call to pass the new
server snapshot callback: replace the third argument getSnapshotRef.current with
getServerSnapshotRef.current so the call becomes
useSyncExternalStore(subscribeRef.current, getSnapshotRef.current,
getServerSnapshotRef.current); ensure getServerSnapshotRef is defined and
initialized similarly to getSnapshotRef.
- Around line 37-45: The cached version and snapshot become stale when the
collection changes; inside useCollectionData add a useEffect that runs on
[collection] to reset versionRef.current = 0 and snapshotRef.current = {
version: -1, data: [] } so the next subscribe/getSnapshot cycle treats the new
collection as fresh; also add useEffect to the hook imports if not already
present and reference the existing versionRef and snapshotRef variables in that
effect.
- Around line 49-67: The current pattern sets subscribeRef.current twice, which
causes the function captured by useSyncExternalStore to hold a stale closure
over the original collection; fix this by creating a stable subscribe function
that closes over the current collection (e.g. wrap the subscribe logic in
useCallback with collection in its dependency array or assign
subscribeRef.current inside a useEffect that runs when collection changes) so
that the function passed to useSyncExternalStore always references the
up-to-date collection; update the logic that calls
collection.subscribeChanges(), increments versionRef.current and returns the
unsubscribe to use that new stable callback (referencing subscribeRef,
useSyncExternalStore, collection.subscribeChanges, versionRef.current).
- Around line 71-98: getSnapshotRef currently closes over the old collection and
becomes stale when the collection prop changes; update it the same way as
subscribeRef by storing the latest collection in a ref (e.g.,
collectionRef.current) and reassigning getSnapshotRef.current to a new function
that reads from collectionRef.current (and still uses versionRef and
snapshotRef) so the snapshot function you pass to useSyncExternalStore always
reads the latest collection without a stale closure.
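The ref-based fix the comments above describe can be shown without React: keep the live collection in a mutable ref and have the snapshot function read through it on every call. This is an illustrative sketch with a hypothetical minimal `Collection` shape, not the PR's actual types.

```typescript
// Hypothetical minimal collection shape, for illustration only.
type Collection<T> = { toArray(): T[] };

// The returned snapshot function reads collectionRef.current on every
// call, so swapping the collection never leaves a stale closure behind.
function makeSnapshotReader<T>(collectionRef: { current: Collection<T> }) {
	return (): T[] => collectionRef.current.toArray();
}

const collectionRef = {
	current: { toArray: () => [1, 2] } as Collection<number>,
};
const getSnapshot = makeSnapshotReader(collectionRef);
console.log(getSnapshot()); // reads the initial collection

// Swap in a new collection; getSnapshot itself is unchanged but now
// reflects the replacement, which is what useSyncExternalStore needs.
collectionRef.current = { toArray: () => [3] };
console.log(getSnapshot());
```

The same idea applies to the subscribe function: store the latest collection in a ref and have one stable callback dereference it, rather than reassigning `subscribeRef.current` mid-render.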
🧹 Nitpick comments (4)
apps/desktop/src/renderer/index.tsx (1)

44-55: Minor inconsistency in optional chaining between on and off.

Line 45 assumes .on() exists when window.ipcRenderer is truthy, but line 55 still defensively uses double optional chaining (?.off?.()). If the preload script exposes ipcRenderer, both methods should be present together.

Consider aligning the defensive checks for consistency:

♻️ Option A: Trust both methods exist (preferred if preload is reliable)
 if (import.meta.hot) {
 	import.meta.hot.dispose(() => {
 		unsubscribe();
-		window.ipcRenderer?.off?.("deep-link-navigate", handleDeepLink);
+		window.ipcRenderer?.off("deep-link-navigate", handleDeepLink);
 		cleanupBootErrorHandling();
 	});
 }
♻️ Option B: Defensively check both methods
-if (window.ipcRenderer) {
+if (window.ipcRenderer?.on) {
 	window.ipcRenderer.on("deep-link-navigate", handleDeepLink);
 } else {
packages/ai-chat/src/stream/client.ts (3)

27-40: UUID fallback uses Math.random() which is not cryptographically secure.

The fallback UUID generator uses Math.random(), which may produce predictable or colliding values in high-throughput scenarios. Since UUIDs are used as message keys, collisions could cause data loss or overwrites.

Consider using a more robust fallback for React Native environments.

♻️ Suggested improvement
+import { v4 as uuidv4 } from "uuid";
+
 /** UUID generator with fallback for environments without crypto.randomUUID (React Native). */
 function generateUUID(): string {
 	if (
 		typeof crypto !== "undefined" &&
 		typeof crypto.randomUUID === "function"
 	) {
 		return crypto.randomUUID();
 	}
-	return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, (c) => {
-		const r = (Math.random() * 16) | 0;
-		const v = c === "x" ? r : (r & 0x3) | 0x8;
-		return v.toString(16);
-	});
+	// Fallback to uuid package which uses crypto.getRandomValues when available
+	return uuidv4();
 }

Alternatively, if you want to avoid adding a dependency, consider using crypto.getRandomValues as a fallback before Math.random():

if (typeof crypto !== "undefined" && typeof crypto.getRandomValues === "function") {
  const bytes = new Uint8Array(16);
  crypto.getRandomValues(bytes);
  // ... format as UUID v4
}

144-146: Unsafe type assertion bypasses type checking.

The double cast as unknown as SessionCollections silences the type checker entirely. If the schema changes or createStreamDB returns a different shape, this will fail at runtime without compile-time warning.

Consider adding a runtime check or improving the type inference from createStreamDB.

🛡️ Suggested defensive approach
-		// Collections are available immediately
-		this._collections = this._db.collections as unknown as SessionCollections;
+		// Collections are available immediately
+		// Type assertion needed due to stream-db's generic collection typing
+		const collections = this._db.collections;
+		if (!collections.chunks || !collections.presence || !collections.drafts) {
+			throw new Error("Invalid schema: missing required collections");
+		}
+		this._collections = collections as unknown as SessionCollections;

266-295: sendMessage should clear the user's draft after successful send.

After a message is sent, the user's draft is typically stale. Consider clearing it automatically so the UI does not keep showing outdated draft content.

♻️ Suggested improvement
 		await this._appendToStream([
 			{
 				type: "chunk",
 				key: uuid,
 				value: {
 					type: "user_input",
 					content,
 					actorId: user.userId,
 					createdAt: new Date().toISOString(),
 				},
 				headers: { operation: "insert" },
 			},
 		]);
 		console.log(`[ai-chat/client] sendMessage POST complete`);
+
+		// Clear draft after successful send
+		await this.updateDraft("");

Comment thread apps/api/package.json
  },
  "dependencies": {
-   "@anthropic-ai/sdk": "^0.71.2",
+   "@anthropic-ai/sdk": "^0.72.1",

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if version 0.72.1 exists and get package info
curl -s "https://registry.npmjs.org/@anthropic-ai/sdk/0.72.1" | jq -r '.version, .time.["0.72.1"], .dist.tarball // "Not found"' 2>/dev/null || echo "Version check failed"

# Also check the 0.71.2 version for comparison
echo "---"
curl -s "https://registry.npmjs.org/@anthropic-ai/sdk/0.71.2" | jq -r '.version, .time.["0.71.2"] // "Not found"' 2>/dev/null || echo "Version check failed"

Repository: superset-sh/superset

Length of output: 109


🌐 Web query:

@anthropic-ai/sdk 0.72.1 release notes breaking changes changelog

💡 Result:

For @anthropic-ai/sdk v0.72.1 (published ~late Jan 2026 per UNPKG’s “latest” tag), the canonical release notes are in the package’s CHANGELOG.md included in the npm tarball and browsable on UNPKG. (app.unpkg.com)

  • Changelog (v0.72.1): available at the package file CHANGELOG.md (in the v0.72.1 package listing). (app.unpkg.com)
  • Breaking changes: I wasn’t able to reliably extract the contents of the CHANGELOG.md in this environment (the “raw” fetch endpoint errors), so I can’t confirm whether v0.72.1 itself lists any breaking changes. (app.unpkg.com)

If you paste the ## 0.72.1 section from CHANGELOG.md here, I’ll summarize it and call out any breaking changes.


🌐 Web query:

@anthropic-ai/sdk 0.72.1 security vulnerabilities advisories

💡 Result:

As of Feb 4, 2026, there are no published security advisories for the GitHub repo that backs @anthropic-ai/sdk (Anthropic’s TypeScript SDK), and no known direct vulnerabilities have been reported for @anthropic-ai/sdk@0.72.1 in major package vulnerability trackers:

  • GitHub Security Advisories (repo: anthropics/anthropic-sdk-typescript): “There aren’t any published security advisories.” [1]
  • Snyk (npm package @anthropic-ai/sdk): “No direct vulnerabilities have been found” and the “latest non vulnerable version” is 0.72.1. [2]
  • ReversingLabs (package scan): reports no known vulnerabilities detected for @anthropic-ai/sdk (latest). [3]

If you need “actionable” assurance, run a dependency scan (e.g., npm audit, Snyk/Dependabot) for transitive issues, since “no direct vulnerabilities” does not cover vulnerable dependencies. [2]


🏁 Script executed:

#!/bin/bash
# Try fetching CHANGELOG.md from UNPKG CDN
curl -s "https://unpkg.com/@anthropic-ai/sdk@0.72.1/CHANGELOG.md" | head -100

echo "---"

# Alternative: search GitHub repository for the changelog
curl -s "https://raw.githubusercontent.com/anthropics/anthropic-sdk-typescript/main/CHANGELOG.md" | grep -A 20 "## 0.72" | head -30

Repository: superset-sh/superset

Length of output: 9095


Test structured output format migration from output_format to output_config.

Version 0.72.0 includes a breaking change: the Messages API migrated from using output_format to output_config for structured outputs. If your code uses structured outputs or references the OutputFormat export, this upgrade will require code changes. Review the commit and test your implementation to ensure compatibility.


Comment on lines +386 to +399
	private async _appendToStream(events: unknown[]): Promise<void> {
		const response = await fetch(
			`${this.options.proxyUrl}/streams/${this.sessionId}`,
			{
				method: "POST",
				headers: { "Content-Type": "application/json" },
				body: JSON.stringify(events),
			},
		);

		if (!response.ok) {
			throw new Error(`Failed to append to stream: ${response.status}`);
		}
	}

⚠️ Potential issue | 🟡 Minor

_appendToStream lacks timeout and retries for network resilience.

The fetch call has no timeout configuration. A hung connection could block indefinitely. Per coding guidelines, external calls should have timeouts on request threads.

🛡️ Suggested fix with AbortSignal timeout
+	private static readonly STREAM_TIMEOUT_MS = 30_000;
+
 	private async _appendToStream(events: unknown[]): Promise<void> {
+		const timeoutSignal = AbortSignal.timeout(DurableChatClient.STREAM_TIMEOUT_MS);
 		const response = await fetch(
 			`${this.options.proxyUrl}/streams/${this.sessionId}`,
 			{
 				method: "POST",
 				headers: { "Content-Type": "application/json" },
 				body: JSON.stringify(events),
+				signal: timeoutSignal,
 			},
 		);
 
 		if (!response.ok) {
-			throw new Error(`Failed to append to stream: ${response.status}`);
+			const body = await response.text().catch(() => "");
+			throw new Error(`Failed to append to stream: ${response.status} ${body}`.trim());
 		}
 	}
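For the retry half of that guidance, the control flow can be sketched independently of `fetch`: retry on network errors and 5xx responses with linear backoff, fail fast on 4xx. `postWithRetry` and its parameters are illustrative names, not the PR's API.

```typescript
// Illustrative retry wrapper: doPost stands in for the actual
// fetch call to `${proxyUrl}/streams/${sessionId}`.
async function postWithRetry(
	doPost: () => Promise<{ ok: boolean; status: number }>,
	maxAttempts = 3,
	baseDelayMs = 200,
): Promise<void> {
	let lastError: unknown;
	for (let attempt = 1; attempt <= maxAttempts; attempt++) {
		let res: { ok: boolean; status: number } | undefined;
		try {
			res = await doPost();
		} catch (err) {
			lastError = err; // network failure: eligible for retry
		}
		if (res) {
			if (res.ok) return;
			if (res.status < 500) {
				// 4xx: a caller error that retrying will not fix
				throw new Error(`Client error ${res.status}; not retrying`);
			}
			lastError = new Error(`Server error ${res.status}`);
		}
		if (attempt < maxAttempts) {
			// linear backoff between attempts
			await new Promise((r) => setTimeout(r, baseDelayMs * attempt));
		}
	}
	throw new Error(`All ${maxAttempts} attempts failed: ${String(lastError)}`);
}
```

Combined with the `AbortSignal.timeout` suggestion above, each individual attempt stays bounded while transient failures get a second chance.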

Comment thread packages/ai-chat/src/stream/useCollectionData.ts
Comment thread packages/ai-chat/src/stream/useCollectionData.ts Outdated
Comment thread packages/ai-chat/src/stream/useCollectionData.ts Outdated
Comment thread packages/ai-chat/src/stream/useCollectionData.ts Outdated
@Kitenite Kitenite merged commit 83d78c2 into main Feb 4, 2026
5 of 6 checks passed
@Kitenite Kitenite deleted the split-ai-chat/shared-package branch February 4, 2026 22:50
@coderabbitai coderabbitai Bot mentioned this pull request Feb 18, 2026
4 tasks