
fix(streams): streaming correctness, reliability, and performance overhaul #1391

Merged
Kitenite merged 38 commits into main from kitenite/stream-debug on Feb 11, 2026

Conversation


@Kitenite Kitenite commented Feb 11, 2026

Summary

Addresses 32 of 51 items from the streaming performance and reliability recommendations. Fixes critical correctness bugs, reduces streaming latency, adds retry/fallback resilience, and standardizes the API surface.

  • 10 critical correctness fixes: producer error propagation to finish, session lifecycle mutex, flush-before-reset ordering, single write path, abort signal support, finish error surfacing to UI
  • 8 performance improvements: eliminated /generations/start round trip, batch chunk endpoint with desktop-side coalescing (ChunkBatcher), producer tuning (1ms linger), Zod bypass on hot path, bounded memory queue
  • 8 reliability improvements: exponential retry on batch sends, producer health tracking with sync fallback, active generation tracking for single-writer enforcement, flush timeout
  • 6 protocol cleanups: structured error codes on all routes, sessionId/messageId in all responses, removed dead /generations/start endpoint, documented terminal semantics

Changes

apps/streams/src/protocol.ts

  • Per-session mutex (withSessionLock) for delete/reset serialization
  • Producer error tracking (recordProducerError/drainProducerErrors) — finish now throws on background errors
  • appendToStream single write path with producer health fallback to stream.append
  • writeChunks batch method, startGeneration/finishGeneration lifecycle
  • 10s flush timeout via Promise.race
  • User messages flush producer first, then write directly to stream for txid immediacy

apps/streams/src/routes/chunks.ts

  • Added /chunks/batch endpoint (skips Zod, lightweight array validation)
  • Removed /generations/start endpoint (auto-registers from first chunk)
  • Structured error codes (SESSION_NOT_FOUND, WRITE_FAILED, FINISH_FAILED, INVALID_BODY) and sessionId/messageId in all responses

apps/desktop/.../session-manager/

  • New ChunkBatcher class: 5ms linger, 50 max batch, 2000 max buffer, 3 retries with exponential backoff (50ms base)
  • sendBatch throws on non-ok so retry logic can catch transient failures
  • Client-side messageId generation (no /generations/start round trip)
  • Abort signal on batch sends, res.ok check on finish with error event emission

apps/streams/src/routes/sessions.ts, auth.ts, server.ts, types.ts

  • Structured error codes and sessionId in all error responses
  • Terminal semantics documented on StreamChunk: message-end = UI signal, /finish = server cleanup
  • Discovery listing updated (replaced generationsStart with chunksBatch)

Test Plan

  • Start a chat session, send a message, verify streaming renders smoothly
  • Interrupt mid-stream — verify abort cancels in-flight sends and UI shows interrupted state
  • Send a long response (many chunks) — verify no memory growth, batching visible in network tab
  • Kill/restart the proxy mid-stream — verify retry kicks in and finish surfaces error to UI
  • Delete a session while streaming — verify flush completes before cleanup
  • Reset a session while streaming — verify reset event follows all queued chunks

Summary by CodeRabbit

  • New Features

    • Centralized session lifecycle and agent orchestration for more reliable chat sessions
    • Batched, retryable chunk sending with a chunks/batch API and generation lifecycle controls
    • Streaming UI: animated assistant streaming indicator and pending-send timeout UX
  • Bug Fixes

    • Improved retry, timeout, and health handling for streaming, background agents, and txid waits
    • Inactivity watchdogs and guarded agent starts to reduce stalled runs
    • More structured error responses with operation-specific codes and session IDs
  • Documentation

    • Added streaming performance & reliability guidance
  • Tests

    • Added tests enforcing single-writer/generation rules for chunk routes
  • Chores

    • Updated streaming library dependency

…rrors

Track producer background errors per session. finishGeneration now
flushes, clears per-message seq state, and throws if any producer
errors occurred during the run. The finish route returns a structured
error response (code: FINISH_FAILED) instead of silent success.
recordProducerError and drainProducerErrors are extracted as private
helpers, simplifying the onError callback and finishGeneration and
making the error lifecycle (record → drain → throw) explicit.
Non-2xx responses from finish are now logged with the response body,
and the finish request now sends the messageId so the server can
clear per-message seq state.

deleteSession is now async and awaits producer.flush() then
producer.detach() before cleaning up session state, preventing a
204 from being returned while queued chunks are still in flight.

Reset now ensures all queued chunks are durably written before the
reset event is appended, preventing reset from racing ahead of
buffered data. Producer errors are also cleared on reset.

An appendToStream helper is extracted that prefers the producer when
available, falling back to direct stream.append. writeChunk,
writeUserMessage, and writePresence all use this single write path
now. User messages and presence flush immediately for durability,
while streaming chunks remain buffered.
Pass the agent abort signal to streaming chunk fetch calls so that
interrupting an agent cancels in-flight chunk sends immediately.
AbortError is silently swallowed since it's the expected outcome.

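
The abort wiring described above can be sketched as follows. This is an illustrative helper, not the PR's actual code; the URL shape and helper name are assumptions, while the `signal` option and AbortError name are standard `fetch` behavior.

```typescript
// Sketch: forward an AbortSignal into a chunk send and swallow only
// AbortError, since interruption is the expected outcome there.
async function postChunks(
  url: string,
  chunks: unknown[],
  signal: AbortSignal,
): Promise<void> {
  try {
    await fetch(url, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ chunks }),
      signal, // aborting the controller cancels this in-flight request
    });
  } catch (error) {
    // Expected on interrupt; anything else still surfaces to the caller.
    if ((error as { name?: string })?.name === "AbortError") return;
    throw error;
  }
}
```

Any other failure (network error, thrown response handling) still propagates, which is what lets the retry and error-event paths described elsewhere in this PR observe real problems.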
If /generations/finish returns non-2xx or the network request fails,
emit an explicit error event so the UI shows a visible failure
instead of silently appearing done.

Add a promise-chain based per-session lock so concurrent delete,
reset, and close operations serialize rather than race. Prevents
interleaved lifecycle transitions from corrupting session state.

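
A promise-chain lock of this kind can be sketched as below. This is a minimal standalone version with hypothetical names, not the PR's `withSessionLock` implementation.

```typescript
// Per-session lock: each session keys a "tail" promise, and every new
// operation chains behind the current tail, so lifecycle operations
// for the same session run strictly one after another.
const sessionLocks = new Map<string, Promise<void>>();

async function withSessionLock<T>(
  sessionId: string,
  fn: () => Promise<T>,
): Promise<T> {
  const previous = sessionLocks.get(sessionId) ?? Promise.resolve();
  // Start fn only after every earlier operation for this session settles.
  const run = previous.catch(() => {}).then(fn);
  // The new tail ignores fn's outcome so a failure doesn't poison the chain.
  const tail = run.then(() => {}, () => {});
  sessionLocks.set(sessionId, tail);
  try {
    return await run;
  } finally {
    // Clean up the entry when no later operation chained behind this one.
    if (sessionLocks.get(sessionId) === tail) sessionLocks.delete(sessionId);
  }
}
```

The key property is that the map is updated synchronously before any await, so two callers racing on the same session always observe each other's tails and serialize.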
Flush the producer first to preserve global ordering, then append
the user message directly to the stream. This avoids producer queue
latency that was causing txid timeout errors on the client side
(5s default timeout in stream-db awaitTxId).

Generate messageId client-side with crypto.randomUUID() instead of
blocking on POST /generations/start. This eliminates a full HTTP
round trip before the first token can stream.

Add writeChunks method to protocol and POST /chunks/batch endpoint
that accepts an array of chunks in a single HTTP request.

On the desktop side, replace the sequential per-chunk POST chain
with a ChunkBatcher that coalesces chunks within a 5ms window
(or 50-chunk max) before sending as a batch. This reduces HTTP
round trips from N to ~N/batch_size during active streaming.

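
The coalescing idea can be sketched as a small class. This is an illustrative version under assumed defaults from this PR (5ms linger, 50 max batch, 2000 max buffer); the real ChunkBatcher's API may differ.

```typescript
type Chunk = { messageId: string; seq: number };

class CoalescingBatcher {
  private buffer: Chunk[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private readonly sendBatch: (chunks: Chunk[]) => Promise<void>,
    private readonly lingerMs = 5,
    private readonly maxBatch = 50,
    private readonly maxBuffer = 2000,
  ) {}

  push(chunk: Chunk): void {
    this.buffer.push(chunk);
    // Bounded memory: drop the oldest chunk once the cap is exceeded.
    if (this.buffer.length > this.maxBuffer) this.buffer.shift();
    if (this.buffer.length >= this.maxBatch) {
      void this.flush(); // batch cap reached: send without waiting for linger
    } else if (this.timer === null) {
      this.timer = setTimeout(() => void this.flush(), this.lingerMs);
    }
  }

  async flush(): Promise<void> {
    if (this.timer !== null) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    await this.sendBatch(batch); // one HTTP round trip for the whole batch
  }
}
```

Chunks pushed within the linger window are sent as one request, which is where the N to ~N/batch_size round-trip reduction comes from.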
Reduce producer lingerMs from 5ms to 1ms since the desktop
ChunkBatcher already coalesces at 5ms, avoiding double-buffering
latency. Add a 10s timeout to flushSession so flush/finish cannot
hang indefinitely on a stuck producer.

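
The Promise.race timeout pattern referenced above can be sketched like this. The helper name and wiring are hypothetical; only the 10s bound comes from the PR description.

```typescript
// Bound a flush with a timeout: if the producer's flush does not settle
// within timeoutMs, the race rejects instead of hanging forever.
const FLUSH_TIMEOUT_MS = 10_000;

async function flushWithTimeout(
  flush: () => Promise<void>,
  timeoutMs: number = FLUSH_TIMEOUT_MS,
): Promise<void> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`flush timed out after ${timeoutMs}ms`)),
      timeoutMs,
    );
  });
  try {
    await Promise.race([flush(), timeout]);
  } finally {
    clearTimeout(timer); // don't keep the timer alive after a fast flush
  }
}
```

One caveat of this pattern: on timeout the underlying flush keeps running in the background, so callers should treat the session's producer as suspect afterwards (which is what the health tracking in this PR does).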
The /chunks/batch endpoint now does a lightweight array check
instead of full Zod schema validation on every chunk — this is
an authenticated internal path from the desktop client.
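
A lightweight check of this kind might look as follows. The field names are assumed for illustration; the actual route checks a different, fuller chunk shape.

```typescript
// Sketch: validate only the array shape and required string fields,
// skipping per-chunk Zod parsing on this authenticated internal path.
function validateBatch(
  body: unknown,
): { chunks: { messageId: string }[] } | null {
  if (typeof body !== "object" || body === null) return null;
  const chunks = (body as { chunks?: unknown }).chunks;
  if (!Array.isArray(chunks) || chunks.length === 0) return null;
  for (const c of chunks) {
    if (typeof c !== "object" || c === null) return null;
    if (typeof (c as { messageId?: unknown }).messageId !== "string") return null;
  }
  return { chunks: chunks as { messageId: string }[] };
}
```

Returning null lets the route map all malformed payloads to a single INVALID_BODY response without paying full schema-validation cost per chunk on the hot path.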

ChunkBatcher now has a maxBufferSize (default 2000) that drops
oldest chunks when the buffer exceeds the cap, preventing OOM
when the network or proxy is slower than the agent.
ChunkBatcher now retries failed sendBatch calls up to 3 times with
exponential backoff (50ms base). sendBatch callback throws on non-ok
responses so the retry logic can catch transient failures. AbortError
is rethrown immediately to respect cancellation.

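
The retry shape described above can be sketched as a standalone helper. The name and options are hypothetical; the 3-retry / 50ms-base defaults come from the PR description.

```typescript
// Retry transient failures with exponential backoff, but rethrow
// AbortError immediately so cancellation is never delayed by backoff.
async function sendWithRetry(
  send: () => Promise<void>,
  { maxRetries = 3, baseDelayMs = 50 } = {},
): Promise<void> {
  for (let attempt = 0; ; attempt++) {
    try {
      await send();
      return;
    } catch (error) {
      if (error instanceof Error && error.name === "AbortError") throw error;
      if (attempt >= maxRetries) throw error;
      // 50ms, 100ms, 200ms, ... between attempts
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

This only works if `send` actually throws on non-2xx responses, which is why the sendBatch callback was changed to throw on non-ok.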
Track per-session producer health. When a producer fires onError,
mark it unhealthy and route subsequent writes through direct
stream.append instead. A successful flush restores healthy status.
This prevents cascading failures when the producer is in a bad
state. (#33)
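
The health-gated single write path can be sketched as below. The interfaces are simplified stand-ins, not the actual stream-db types, and the real appendToStream lives in apps/streams/src/protocol.ts.

```typescript
// Sketch: prefer the buffered producer when healthy, fall back to a
// direct durable append when the producer has reported errors.
interface Producer { write(data: unknown): void; flush(): Promise<void>; }
interface Stream { append(data: unknown): Promise<void>; }

const producerHealthy = new Map<string, boolean>();

async function appendToStream(
  sessionId: string,
  producer: Producer | undefined,
  stream: Stream,
  data: unknown,
): Promise<void> {
  const healthy = producerHealthy.get(sessionId) ?? true;
  if (producer && healthy) {
    producer.write(data); // buffered, low-latency path
  } else {
    await stream.append(data); // direct, durable fallback
  }
}

function markUnhealthy(sessionId: string): void {
  producerHealthy.set(sessionId, false); // called from the producer's onError
}
```

Routing every caller through one function is what makes the fallback decision consistent across writeChunk, writeUserMessage, and writePresence.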

Add a startGeneration/getActiveGeneration/finishGeneration lifecycle
to the protocol. Chunk routes auto-register the generation from the
first chunk if none is active. finishGeneration clears the active
generation, and reset and delete also clean up generation state.

Include sessionId (and messageId where applicable) in both success
and error responses from chunk and session routes for tracing.

Every error response now includes a machine-readable `code` field
(SESSION_NOT_FOUND, WRITE_FAILED, FINISH_FAILED, INVALID_BODY, etc.)
for deterministic client error handling.
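
The structured error shape can be sketched as a type plus a small builder. The error codes come from this PR; the helper name and exact field layout are illustrative.

```typescript
// Sketch: machine-readable error bodies so clients can branch on `code`
// instead of parsing human-readable messages.
type StreamErrorCode =
  | "SESSION_NOT_FOUND"
  | "WRITE_FAILED"
  | "FINISH_FAILED"
  | "INVALID_BODY";

interface StreamErrorBody {
  error: string;          // human-readable description
  code: StreamErrorCode;  // stable across wording changes
  sessionId: string;
  messageId?: string;     // present on chunk/finish routes
}

function errorBody(
  code: StreamErrorCode,
  message: string,
  sessionId: string,
  messageId?: string,
): StreamErrorBody {
  return { error: message, code, sessionId, ...(messageId ? { messageId } : {}) };
}
```

Including sessionId (and messageId where known) in every error body is what makes the responses traceable back to a specific stream.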
30 of 51 items are now done. The remaining items are larger
architectural changes (14, 29, 31), operational work (16-18), and
observability/test/rollout work (35-50).

The generation is now auto-registered from the first chunk written.
The desktop generates messageId client-side, so this endpoint was
dead code. Replaced with chunksBatch in the discovery listing.

Terminal semantics:

  • message-end chunk = UI signal (isLoading → false)
  • /generations/finish = server lifecycle cleanup (flush, seq clear, error drain)

Both are required and always sent in that order.

github-actions Bot commented Feb 11, 2026

🧹 Preview Cleanup Complete

The following preview resources have been cleaned up:

  • ✅ Neon database branch
  • ✅ Electric Fly.io app
  • ✅ Streams Fly.io app

Thank you for your contribution! 🎉

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

🤖 Fix all issues with AI agents
In
`@apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/agent-execution.ts`:
- Around line 66-72: startSession and restoreSession accept permissionMode as an
unconstrained string which diverges from updateSessionConfig's enum validation
and agent-execution.ts then silently defaults to the permissive
"bypassPermissions"; fix by updating the input schemas for startSession and
restoreSession in index.ts to use the same zod enum/unions used by
updateSessionConfig (the same allowed values "default" | "acceptEdits" |
"bypassPermissions") so invalid strings are rejected, and change the nullish
coalescing in agent-execution.ts (permissionMode: (session.permissionMode as ...
) ?? "bypassPermissions") to either use a safer explicit default (e.g.,
"default") or leave undefined and require callers to supply a valid
mode—alternatively, if "bypassPermissions" is intentional for desktop, add a
clear comment next to that line explaining the security assumption.

In
`@apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/agent-runner.ts`:
- Around line 63-66: The race occurs because an old run's finally
unconditionally calls this.deps.runningAgents.delete(sessionId) and can remove a
newly written AbortController; in startAgent/abortExistingAgent flows fix this
by making the finally block conditional: read const current =
this.deps.runningAgents.get(sessionId) and only delete if current ===
abortController (the controller created in this startAgent), and optionally
update abortExistingAgent to remove the map entry only when it is aborting the
same controller instance; reference startAgent, abortExistingAgent,
this.deps.runningAgents, AbortController and the finally block to locate where
to add the identity check before deleting the map entry.

In
`@apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/session-lifecycle.ts`:
- Around line 146-187: The catch block in startSession swallows errors from
ensureSessionReady and deps.store.create so callers can't detect failures; after
logging and calling this.deps.emitSessionError({ sessionId, error: message })
rethrow the original error (or throw a new Error(message)) so the returned
promise rejects, ensuring callers of startSession receive the failure; reference
startSession, ensureSessionReady, deps.store.create and deps.emitSessionError
when applying the change.
- Around line 283-290: The DELETE fetch to
`${this.deps.proxyUrl}/v1/sessions/${sessionId}` currently ignores the response
which can cause local cleanup to proceed when the remote delete failed; update
the logic around the fetch call in the session lifecycle (the block using
this.deps.proxyUrl, sessionId, headers) to capture the Response, check
response.ok (and response.status), log non-ok responses with details (status and
body/text), and abort or handle local archiving/cleanup accordingly (e.g., throw
or return on failure) so remote and local state remain consistent.

In
`@apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/session-manager.ts`:
- Around line 25-28: The local redeclaration of StartAgentInput duplicates the
type from agent-runner.ts; remove the local interface StartAgentInput and
instead import StartAgentInput from agent-runner.ts (where it is exported) and
update any references in this file (e.g., in session-manager functions that
accept StartAgentInput) to use the imported type to avoid drift and duplication.
🧹 Nitpick comments (8)
apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/agent-stream-writer.ts (3)

89-133: Session recovery retries the full maxAttempts again — document or constrain.

After a session-not-found recovery (Line 122), the retry call on Line 124 uses the same maxAttempts as the original. For callers passing maxAttempts: 3, this means up to 6 total HTTP attempts (3 + recovery + 3). This is likely acceptable for resilience, but it could surprise callers. Consider adding a brief inline comment clarifying that the post-recovery retry budget is intentionally the same.


145-158: maxAttempts: 1 inside sendBatch — confirm this is intentional given ChunkBatcher's own retry.

The sendBatch callback uses maxAttempts: 1 for postWithSessionRecovery, meaning postJsonWithRetry won't retry internally. This is fine since ChunkBatcher.sendWithRetry retries the entire batch (up to maxRetries times, default 3). However, this means session recovery (on 404) also won't trigger inside batcher retries — the batcher will just retry the same call that got a 404. If the remote session disappears mid-stream, every batcher retry will fail with 404.

Consider whether session recovery should happen at least once within the batcher retry loop, e.g., by setting maxAttempts: 2.


218-223: Extract hardcoded "claude" actor and "message-end" chunk type to named constants.

"claude" (Line 220) and "message-end" (Line 269) are repeated magic strings that represent protocol-level values. Extracting them to module-level constants improves discoverability and prevents typos.

Proposed fix

At the top of the file (after line 9):

+const ACTOR_CLAUDE = "claude";
+const CHUNK_TYPE_MESSAGE_END = "message-end";

Then replace usages:

 		batcher.push({
 			messageId,
-			actorId: "claude",
+			actorId: ACTOR_CLAUDE,
 			role: "assistant",
 			chunk,
 		});
 		const terminalChunkPayload = {
 			messageId,
-			actorId: "claude",
+			actorId: ACTOR_CLAUDE,
 			role: "assistant",
-			chunk: { type: "message-end" as const },
+			chunk: { type: CHUNK_TYPE_MESSAGE_END as const },
 		};

As per coding guidelines, "Extract hardcoded magic numbers, strings, and enums to named constants at module top instead of leaving them inline in logic."

apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/session-types.ts (1)

1-15: Consider a stricter union type for permissionMode.

Both interfaces type permissionMode as string, but agent-execution.ts (Line 68–72) casts it to "default" | "acceptEdits" | "bypassPermissions". Defining a shared union type here would catch invalid values at compile time and remove the need for the as cast downstream.

Proposed change
+export type PermissionMode = "default" | "acceptEdits" | "bypassPermissions";
+
 export interface ActiveSession {
 	sessionId: string;
 	cwd: string;
 	model?: string;
-	permissionMode?: string;
+	permissionMode?: PermissionMode;
 	maxThinkingTokens?: number;
 }

 export interface EnsureSessionReadyInput {
 	sessionId: string;
 	cwd: string;
 	model?: string;
-	permissionMode?: string;
+	permissionMode?: PermissionMode;
 	maxThinkingTokens?: number;
 }

As per coding guidelines, "Maintain type safety by avoiding any types unless absolutely necessary."

apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/agent-runner.ts (1)

88-95: Type assertions are safe here but could be avoided.

watchdog as GenerationWatchdog and batcher as ChunkBatcher are safe since they're assigned on lines 80–81 before this code runs. However, you could avoid the casts by restructuring to keep them in scope after prepareStream:

Alternative structure (optional)
 			await this.execution.execute({
 				session,
 				sessionId,
 				prompt,
 				abortController,
 				onChunk: (chunk) => {
 					this.streamWriter.onAssistantChunk({
-						watchdog: watchdog as GenerationWatchdog,
-						batcher: batcher as ChunkBatcher,
+						watchdog: prepared.watchdog,
+						batcher: prepared.batcher,
 						messageId,
 						chunk,
 					});
 				},
 			});

This requires keeping prepared in scope rather than destructuring into nullable locals.

apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/agent-execution.ts (1)

108-127: Unused sessionId parameter — consider using it for logging.

_sessionId is destructured but ignored. The warning on Line 123 could include it for better diagnostics when multiple sessions are active.

Proposed improvement
 	resolvePermission({
-		sessionId: _sessionId,
+		sessionId,
 		toolUseId,
 		approved,
 		updatedInput,
 	}: ResolvePermissionInput): void {
 		const result = approved
 			? { ... }
 			: { behavior: "deny" as const, message: "User denied permission" };

 		const resolved = resolvePendingPermission({ toolUseId, result });
 		if (!resolved) {
 			console.warn(
-				`[chat/session] No pending permission for toolUseId=${toolUseId}`,
+				`[chat/session] No pending permission for toolUseId=${toolUseId} in session ${sessionId}`,
 			);
 		}
 	}
apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/session-lifecycle.ts (2)

297-302: updateSessionMeta uses positional parameters instead of an object.

Per coding guidelines: "Use object parameters for functions with 2 or more parameters."

Proposed fix
-	async updateSessionMeta(
-		sessionId: string,
-		patch: UpdateSessionMetaPatch,
-	): Promise<void> {
-		await this.deps.store.update(sessionId, patch);
+	async updateSessionMeta({
+		sessionId,
+		patch,
+	}: {
+		sessionId: string;
+		patch: UpdateSessionMetaPatch;
+	}): Promise<void> {
+		await this.deps.store.update(sessionId, patch);
 	}

This would also require updating the caller in session-manager.ts (Line 117).


304-341: Direct mutation of the ActiveSession object is fine for in-memory state but fragile.

updateAgentConfig mutates the session object stored in the Map directly. This works because the map holds references, but it means any code holding a reference to the session object will see the mutations. If this is the intended pattern, it's fine — just be aware that replacing the map implementation (e.g., with immutable state) would break this silently.


@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/desktop/src/renderer/screens/main/components/WorkspaceView/ContentView/TabsContent/TabView/ChatPane/ChatInterface/ChatInterface.tsx (1)

318-326: ⚠️ Potential issue | 🟡 Minor

handleStop should also clear the pending-send timer.

If the user stops during the window between send and stream start, the orphaned timer will fire, logging a misleading warning and redundantly clearing isSending.

Proposed fix
 const handleStop = useCallback(
   (e: React.MouseEvent) => {
     e.preventDefault();
+    clearSendPendingTimer();
     setIsSending(false);
     interruptAgent.mutate({ sessionId });
     stop();
   },
-  [interruptAgent, sessionId, stop],
+  [clearSendPendingTimer, interruptAgent, sessionId, stop],
 );
🧹 Nitpick comments (9)
apps/streams/src/routes/chunks.ts (3)

88-96: writeChunk takes 7 positional arguments — prefer an object parameter.

This call passes 7 positional args, making it easy to mis-order messageId/actorId/role. Consider refactoring writeChunk to accept a single options object, consistent with writeChunks on Line 197 which already uses { sessionId, chunks }.


94-94: as never casts suppress type-checking on the data written to streams.

Both the single-chunk path (Line 94) and batch path (Line 199) use as never to bypass the type system. This hides any mismatch between the parsed/unvalidated chunk shape and what writeChunk/writeChunks actually expects, defeating compile-time safety on the write path.

If the protocol method signatures accept a broader type than z.infer<typeof chunkBodySchema>, align the Zod schema or add an explicit cast to the correct target type so the compiler can still catch regressions.

Also applies to: 199-199


220-227: Silent catch for optional body parsing is acceptable but could be tightened.

Swallowing all errors (including unexpected ones like OOM) to treat messageId as optional is pragmatic. A minor improvement: catch only JSON parse / Zod errors by checking error instanceof z.ZodError || error instanceof SyntaxError, so truly unexpected failures still surface.

apps/streams/src/routes/chunks.test.ts (1)

19-121: Tests thoroughly cover generation-mismatch rejection — consider adding happy-path and other error-code tests.

The three mismatch scenarios are well-structured and verify both the HTTP status and the response shape, plus assert that no writes occur. To round out coverage for the new route logic, consider adding:

  • A happy-path test (no active generation → startGeneration called, chunk written, 200 returned).
  • SESSION_NOT_FOUND when getSession returns falsy.
  • INVALID_BODY for malformed payloads.
  • Finish endpoint (/:id/generations/finish) success and FINISH_FAILED paths.
apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/agent-stream-writer.ts (3)

61-69: String-based error detection is fragile — consider structured error matching.

isSessionNotFoundError relies on substrings like "status 404" and "session_not_found" in the error message. If the upstream HTTP client or server changes its error formatting even slightly, this detection silently breaks and recovery is skipped, leading to permanent failures instead of transparent retries.

If postJsonWithRetry can be made to throw a typed error (e.g., with a status or code property), matching on that would be more robust. Otherwise, a brief comment documenting the expected error format contract would help future maintainers.


145-168: Stale proxyHeaders after session recovery — subsequent batch sends reuse the original (potentially expired) headers.

The proxyHeaders closure is captured once when the batcher is created. If a batch send triggers session recovery inside postWithSessionRecovery, the retry uses freshly built headers — but the next batch send from the batcher still uses the original stale proxyHeaders. If auth tokens have expired, every subsequent send will fail → recover → retry, which is functionally correct but wasteful (O(n) recovery round-trips).

Consider making proxyHeaders mutable (e.g., stored in a { current: headers } ref object) and updating it after each successful recovery so subsequent sends benefit from the refresh.

Sketch
 private createChunkBatcher({
 	sessionId,
 	session,
 	proxyHeaders,
 	abortController,
 }: {
 	sessionId: string;
 	session: ActiveSession;
-	proxyHeaders: Record<string, string>;
+	proxyHeaders: { current: Record<string, string> };
 	abortController: AbortController;
 }): ChunkBatcher {
 	return new ChunkBatcher({
 		sendBatch: async (chunks) => {
 			await this.postWithSessionRecovery({
 				sessionId,
 				session,
 				url: `${this.deps.proxyUrl}/v1/sessions/${sessionId}/chunks/batch`,
-				headers: proxyHeaders,
+				headers: proxyHeaders.current,
 				body: { chunks },
 				maxAttempts: 1,
 				operation: "write chunk batch",
 				signal: abortController.signal,
+			}).then(() => {}, (err) => {
+				// If recovery happened, postWithSessionRecovery already used refreshed headers.
+				// We could update proxyHeaders.current here if needed.
+				throw err;
 			});
 		},

264-319: No abort signal passed to terminal chunk persistence — finalization can hang indefinitely on network issues.

persistTerminalChunk calls postWithSessionRecovery without a signal, meaning if the network is unreachable these calls (up to 3 attempts each, on two tiers) will block until the underlying HTTP client times out (or never, depending on configuration). If postJsonWithRetry doesn't enforce its own socket/response timeout, this could stall the entire finalization indefinitely.

If this is an intentional "try hard" design decision for the finalization path, a brief comment would clarify intent. Otherwise, consider passing a dedicated AbortSignal with a generous timeout (e.g., 30s) to bound the worst case.

apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/agent-runner.ts (2)

89-96: Type assertions (as GenerationWatchdog, as ChunkBatcher) bypass null safety.

watchdog and batcher are declared as | null on lines 71-72 but cast away inside the onChunk callback. This is safe only because onChunk is invoked after the prepareStream await assigns them. However, the assertions suppress the compiler's null-check, so a future refactor (e.g., making execute call onChunk during setup) would silently introduce a null-pointer bug.

A lightweight alternative: assign inside prepared scope and pass the non-null references directly.

Sketch — avoid the type assertions
 		const prepared = await this.streamWriter.prepareStream({
 			sessionId,
 			session,
 			abortController,
 		});
 		headers = prepared.headers;
 		batcher = prepared.batcher;
 		watchdog = prepared.watchdog;

 		await this.execution.execute({
 			session,
 			sessionId,
 			prompt,
 			abortController,
 			onChunk: (chunk) => {
 				this.streamWriter.onAssistantChunk({
-					watchdog: watchdog as GenerationWatchdog,
-					batcher: batcher as ChunkBatcher,
+					watchdog: prepared.watchdog,
+					batcher: prepared.batcher,
 					messageId,
 					chunk,
 				});
 			},
 		});

51-62: Early return on missing session silently drops the prompt — consider if the caller needs to know.

When the session isn't found, the method emits a session error and returns void — the caller (startAgent's invoker) gets a resolved promise with no indication the agent never ran. If the caller can handle a thrown error or a return value indicating failure, that would allow it to retry or surface the problem differently.

Not critical if the emitted session error is sufficient for the UI, but worth confirming the caller's contract.

@Kitenite Kitenite merged commit 0a59a84 into main Feb 11, 2026
12 of 14 checks passed
@Kitenite Kitenite deleted the kitenite/stream-debug branch February 11, 2026 18:15

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/streams/src/protocol.ts (1)

534-563: ⚠️ Potential issue | 🟠 Major

forkSession shallow-copies sourceState, sharing mutable DB/collection instances between sessions.

After createSession(targetSessionId) initializes a fresh sessionDB, line 550 immediately overwrites it with { ...sourceState, ... }, causing both sessions to share the same sessionDB, messages, modelMessages, and changeSubscription references. Closing or modifying one session's DB will corrupt the other.

This appears to be an incomplete implementation (per the TODO on line 558). Consider either:

  • Deep-initializing the target's state independently (using initializeSessionState's results and only copying data), or
  • Guarding forkSession with a NOT_IMPLEMENTED error until the copy logic is built.
🤖 Fix all issues with AI agents
In
`@apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/agent-stream-writer.ts`:
- Around line 373-396: In finalizeGeneration, if persistTerminalChunk(...)
returns false you must return early and not call finishGeneration; update
finalizeGeneration (the method that calls persistTerminalChunk and
finishGeneration) to check terminalChunkPersisted and, after emitting the
session error via this.deps.emitSessionError(...), immediately return so
finishGeneration is not invoked when persistTerminalChunk fails.

In
`@apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/proxy-requests.ts`:
- Around line 83-88: The nonRetryable flag on the ProxyRequestError currently
treats all 4xx responses as non-retryable; update the logic around the thrown
ProxyRequestError (the block that constructs message, status, code,
nonRetryable) to exclude known retryable 4xx codes such as 429 and 408 so they
are considered retryable (i.e., set nonRetryable to false for res.status === 429
or === 408), while keeping other 4xx codes non-retryable.

In `@apps/streams/src/protocol.ts`:
- Around line 57-63: recordProducerError currently always sets
this.producerHealthy.set(sessionId, false) even if deleteSession removed the
session, which can re-insert stale entries; update recordProducerError to first
verify the session exists in your maps (e.g. check
this.producerErrors.has(sessionId) or this.producerHealthy.has(sessionId)) and
only push the error and flip healthy-to-false when the session is present,
otherwise bail out; refer to the method name recordProducerError and the maps
producerErrors and producerHealthy to locate and guard the write.
- Around line 112-147: deleteSession currently omits cleaning up messageSeqs,
causing orphaned entries; before removing the maps at the end of deleteSession,
mirror resetSession by retrieving this.activeGenerationIds.get(sessionId) (or
the single active generation id used for the session) and for each generation id
remove/clear any message IDs stored in this.messageSeqs for that generation,
then proceed to delete this.activeGenerationIds, this.producers,
this.sessionStates, etc.; update deleteSession to perform this messageSeqs
cleanup (use the same logic as resetSession) so messageSeqs no longer
accumulates orphaned entries.
- Around line 354-383: finishGeneration currently only calls clearSeq when an
explicit messageId is provided, which leaves the active generation's sequence
counter in messageSeqs orphaned when finishGeneration is invoked with messageId
undefined; update finishGeneration so that when you delete the active generation
from activeGenerationIds you also clear its sequence counter: if messageId is
provided call clearSeq(messageId) as now, but when you detect activeMessageId
and either messageId is undefined or different, call clearSeq(activeMessageId)
before deleting activeGenerationIds.delete(sessionId) (avoid double-clearing
when messageId === activeMessageId).

In `@apps/streams/src/routes/auth.ts`:
- Around line 12-19: The login handler currently swallows JSON parse errors when
calling c.req.json(); update that catch block to log the error with context
(e.g., use console.error or the existing logging convention) before returning
the 400 response—mirror the logout handler's behavior which logs "[AUTH] Failed
to parse logout body:"; reference the login parse site (the try around
c.req.json(), and the sessionId variable) and include the parse error in the log
message (e.g., "[AUTH] Failed to parse login body:" + parseError) so the error
is not silently discarded.
🧹 Nitpick comments (12)
apps/streams/src/routes/chunks.ts (2)

108-116: writeChunk takes 7 positional arguments — prefer an object parameter.

This call is hard to read and easy to mis-order. The coding guidelines require object parameters for functions with 2+ parameters. The as never cast on chunk also silently discards type checking.

Consider refactoring writeChunk to accept an object:

Suggested call-site shape
-			await protocol.writeChunk(
-				stream,
-				sessionId,
-				messageId,
-				actorId,
-				role,
-				chunk as never,
-				txid,
-			);
+			await protocol.writeChunk({
+				stream,
+				sessionId,
+				messageId,
+				actorId,
+				role,
+				chunk,
+				txid,
+			});

This would also eliminate the need for the as never cast if the parameter type is properly typed in the protocol interface. As per coding guidelines, "Use object parameters for functions with 2 or more parameters instead of positional arguments".

#!/bin/bash
# Check the writeChunk signature in the protocol to understand the current interface
ast-grep --pattern 'writeChunk($$$) {
  $$$
}'
rg -n 'writeChunk' --type=ts -C2

245-255: Redundant guard — firstMessageId is guaranteed to be a non-empty string here.

The validation loop (Lines 154–219) already ensures every element has a string messageId, and Line 143 ensures the array is non-empty. So chunks[0]?.messageId is always a truthy string at this point, making the !firstMessageId branch dead code.

Simplification
-			const firstMessageId = chunks[0]?.messageId;
-			if (!firstMessageId) {
-				return c.json(
-					{
-						error: "Each chunk must include messageId",
-						code: "INVALID_BODY",
-						sessionId,
-					},
-					400,
-				);
-			}
+			const firstMessageId = chunks[0].messageId;
apps/streams/src/routes/auth.ts (5)

21-25: Redundant and misleading type assertion.

body is already typed on Line 11 with optional fields. The as cast here asserts actorId and deviceId are required strings, which the compiler trusts without proof. The runtime check on Line 27 is the actual guard. Remove the cast and destructure directly from body.

Proposed fix
-		const { actorId, deviceId, name } = body as {
-			actorId: string;
-			deviceId: string;
-			name?: string;
-		};
+		const { actorId, deviceId, name } = body;

51-52: Inconsistent log prefix — missing [AUTH] domain tag.

The logout error handler (Line 145) uses [AUTH] prefix but this login handler omits it. Per coding guidelines, use [domain/operation] consistently.

Proposed fix
-		console.error("Failed to login:", error);
+		console.error("[AUTH] Failed to login:", error);

As per coding guidelines, "Use prefixed console logging with consistent context pattern: [domain/operation] message."


68-79: Inconsistent JSON parsing — login uses c.req.json(), logout uses manual text() + JSON.parse().

Both achieve the same result. Consider aligning on c.req.json() for consistency with the login route.

Proposed fix
-		const rawBody = await c.req.text();
-
 		let body: { actorId?: string; deviceId?: string; allDevices?: boolean };
 		try {
-			body = JSON.parse(rawBody);
-		} catch (parseError) {
-			console.error("[AUTH] Failed to parse logout body:", parseError);
+			body = await c.req.json();
+		} catch (error) {
+			console.error("[AUTH] Failed to parse logout body:", error);
 			return c.json(
 				{ error: "Invalid JSON body", code: "INVALID_BODY", sessionId },
 				400,
 			);
 		}

50-50: Success responses omit sessionId.

Error responses consistently include sessionId, but success responses at Lines 50, 126, and 142 do not. If the intent (per PR objectives: "sessionId included in responses") is to always include it, these should be updated too.


11-36: Consider Zod schemas for body validation at this API boundary.

Both login and logout handlers use manual type annotations and if checks. Using Zod would eliminate the unsafe as cast, provide better type narrowing after .parse(), and align with the guideline to use Zod for API route bodies at boundaries.

Example for login:

import { z } from "zod";

const loginBodySchema = z.object({
  actorId: z.string().min(1),
  deviceId: z.string().min(1),
  name: z.string().optional(),
});

Then loginBodySchema.safeParse(body) gives you validated + narrowed types in one step.

As per coding guidelines, "Use Zod schemas for validating tRPC inputs and API route bodies at boundaries."

apps/streams/src/protocol.ts (3)

83-92: Extract producer configuration constants.

lingerMs: 1 and maxInFlight: 5 are tuning knobs that would be clearer as named constants alongside FLUSH_TIMEOUT_MS. As per coding guidelines, "Extract hardcoded magic numbers, strings, and enums to named constants at module top instead of leaving them inline in logic."
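A sketch of the suggested extraction; the values are taken from the inline config quoted above and the 10s flush timeout mentioned in the PR summary, while the constant names are invented for illustration:

```typescript
// Tuning knobs hoisted to module top so they are discoverable together.
const FLUSH_TIMEOUT_MS = 10_000; // per PR summary: 10s flush timeout
const PRODUCER_LINGER_MS = 1; // low linger to minimize streaming latency
const PRODUCER_MAX_IN_FLIGHT = 5; // bound on concurrent in-flight batches

const producerConfig = {
	lingerMs: PRODUCER_LINGER_MS,
	maxInFlight: PRODUCER_MAX_IN_FLIGHT,
};
```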


232-258: writeChunk has 7 positional parameters — prefer an object parameter.

writeChunks already uses the object-params pattern. writeChunk (and similarly writeUserMessage, writeToolResult, writeApprovalResponse, writePresence) still uses positional args with an unused _stream parameter. Consider aligning with the same object-param style to improve readability and make the unused _stream easier to remove. As per coding guidelines, "Use object parameters for functions with 2 or more parameters instead of positional arguments."


292-313: Fallback path in appendToStream is correct but consider the implication.

When the producer is unhealthy, each call falls through to stream.append(data) — a direct network write. In writeChunks, this means N sequential network round-trips in the degraded path. This is acceptable as a resilience fallback, but worth noting for observability (e.g., a log when falling back).

apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/agent-runner.ts (1)

92-99: Type assertions as GenerationWatchdog / as ChunkBatcher bypass null safety.

These are safe at runtime because onChunk is only invoked after prepareStream succeeds (which initializes both). However, the assertions mask the | null type. A small restructure could avoid them.

Optional: capture non-null references before execute
 			headers = prepared.headers;
 			batcher = prepared.batcher;
 			watchdog = prepared.watchdog;
+			const activeWatchdog = watchdog;
+			const activeBatcher = batcher;

 			await this.execution.execute({
 				session,
 				sessionId,
 				prompt,
 				abortController,
 				onChunk: (chunk) => {
 					this.streamWriter.onAssistantChunk({
-						watchdog: watchdog as GenerationWatchdog,
-						batcher: batcher as ChunkBatcher,
+						watchdog: activeWatchdog,
+						batcher: activeBatcher,
 						messageId,
 						chunk,
 					});
 				},
 			});
apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/agent-execution.ts (1)

35-41: Eliminate redundant sessionId parameter in ExecuteAgentInput.

ActiveSession already contains sessionId. Replace the separate sessionId parameter with session.sessionId throughout the execute() method (lines 61, 74, 88) to reduce redundancy and eliminate divergence risk.
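A sketch of the deduplicated input shape; ActiveSession is reduced to a hypothetical minimal interface here, and the real type carries more fields:

```typescript
// Hypothetical minimal shape; the real ActiveSession has more fields.
interface ActiveSession {
	sessionId: string;
}

// sessionId removed from the input: use sites read session.sessionId.
interface ExecuteAgentInput {
	session: ActiveSession;
	prompt: string;
}

function describeExecution(input: ExecuteAgentInput): string {
	// Single source of truth: no risk of session.sessionId diverging
	// from a separately-passed sessionId argument.
	return `executing ${input.session.sessionId}`;
}
```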

Comment on lines +373 to +396
async finalizeGeneration({
sessionId,
session,
messageId,
headers,
}: FinalizeInput): Promise<void> {
if (!headers) return;

const terminalChunkPersisted = await this.persistTerminalChunk({
sessionId,
session,
messageId,
headers,
});
if (!terminalChunkPersisted) {
this.deps.emitSessionError({
sessionId,
error:
"Assistant completion marker failed to persist. Message may stay loading.",
});
}

await this.finishGeneration({ sessionId, session, messageId, headers });
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

rg -n "finish" apps/streams/src/routes/ --type=ts -C 5 | head -100

Repository: superset-sh/superset

Length of output: 3835


🏁 Script executed:

# Search for finishGeneration implementation
rg -n "finishGeneration" apps/desktop/src/lib/trpc/routers/ai-chat/ --type=ts -A 10

Repository: superset-sh/superset

Length of output: 1632


🏁 Script executed:

# Search for persistTerminalChunk implementation
rg -n "persistTerminalChunk" apps/desktop/src/lib/trpc/routers/ai-chat/ --type=ts -B 3 -A 10

Repository: superset-sh/superset

Length of output: 3199


🏁 Script executed:

# Get the full finishGeneration implementation
sed -n '339,370p' apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/agent-stream-writer.ts

Repository: superset-sh/superset

Length of output: 850


🏁 Script executed:

# Search for protocol.finishGeneration server implementation
rg -n "finishGeneration" apps/streams/src/ --type=ts -B 5 -A 20

Repository: superset-sh/superset

Length of output: 5059


🏁 Script executed:

# Check if there's any documentation or comments about terminal chunk requirements
rg -n "terminal" apps/streams/src/ --type=ts -B 3 -A 3

Repository: superset-sh/superset

Length of output: 576


Correct: finalizeGeneration should not call finishGeneration if persistTerminalChunk fails.

Per the protocol documentation, both the terminal chunk (message-end) and the finish signal are required and interdependent: the terminal chunk marks visual completion for the UI, while finish marks the durable-state cleanup boundary. Calling finish without a persisted terminal chunk violates this contract and leaves the generation in an inconsistent state—the server marks generation complete but the client/UI never receives the completion marker.

Return early if persistTerminalChunk fails instead of proceeding to finishGeneration.

🤖 Prompt for AI Agents
In
`@apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/agent-stream-writer.ts`
around lines 373 - 396, In finalizeGeneration, if persistTerminalChunk(...)
returns false you must return early and not call finishGeneration; update
finalizeGeneration (the method that calls persistTerminalChunk and
finishGeneration) to check terminalChunkPersisted and, after emitting the
session error via this.deps.emitSessionError(...), immediately return so
finishGeneration is not invoked when persistTerminalChunk fails.

Comment on lines +83 to +88
throw new ProxyRequestError({
message: `${operation} failed: status ${res.status}${detail ? ` (${detail.slice(0, 300)})` : ""}`,
status: res.status,
code,
nonRetryable: res.status >= 400 && res.status < 500,
});

⚠️ Potential issue | 🟡 Minor

Blanket 4xx = non-retryable excludes retryable status codes like 429 and 408.

HTTP 429 (Too Many Requests) and 408 (Request Timeout) are conventionally retryable. The current check res.status >= 400 && res.status < 500 will short-circuit retries for these.

Proposed fix
+const RETRYABLE_4XX = new Set([408, 429]);
+
 throw new ProxyRequestError({
   message: `${operation} failed: status ${res.status}${detail ? ` (${detail.slice(0, 300)})` : ""}`,
   status: res.status,
   code,
-  nonRetryable: res.status >= 400 && res.status < 500,
+  nonRetryable: res.status >= 400 && res.status < 500 && !RETRYABLE_4XX.has(res.status),
 });
🤖 Prompt for AI Agents
In
`@apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/proxy-requests.ts`
around lines 83 - 88, The nonRetryable flag on the ProxyRequestError currently
treats all 4xx responses as non-retryable; update the logic around the thrown
ProxyRequestError (the block that constructs message, status, code,
nonRetryable) to exclude known retryable 4xx codes such as 429 and 408 so they
are considered retryable (i.e., set nonRetryable to false for res.status === 429
or === 408), while keeping other 4xx codes non-retryable.

Comment on lines +57 to +63
private recordProducerError(sessionId: string, err: unknown): void {
const errors = this.producerErrors.get(sessionId);
if (errors) {
errors.push(err instanceof Error ? err : new Error(String(err)));
}
this.producerHealthy.set(sessionId, false);
}

⚠️ Potential issue | 🟡 Minor

recordProducerError writes to producerHealthy even if the session was already deleted.

If the producer's onError callback fires after deleteSession has cleaned up all maps, line 62 will re-insert a stale entry into producerHealthy. Consider guarding with an existence check:

Proposed fix
 private recordProducerError(sessionId: string, err: unknown): void {
 	const errors = this.producerErrors.get(sessionId);
-	if (errors) {
-		errors.push(err instanceof Error ? err : new Error(String(err)));
+	if (!errors) {
+		// Session already deleted; ignore late callback
+		return;
 	}
+	errors.push(err instanceof Error ? err : new Error(String(err)));
 	this.producerHealthy.set(sessionId, false);
 }
🤖 Prompt for AI Agents
In `@apps/streams/src/protocol.ts` around lines 57 - 63, recordProducerError
currently always sets this.producerHealthy.set(sessionId, false) even if
deleteSession removed the session, which can re-insert stale entries; update
recordProducerError to first verify the session exists in your maps (e.g. check
this.producerErrors.has(sessionId) or this.producerHealthy.has(sessionId)) and
only push the error and flip healthy-to-false when the session is present,
otherwise bail out; refer to the method name recordProducerError and the maps
producerErrors and producerHealthy to locate and guard the write.

Comment on lines +112 to 147
async deleteSession(sessionId: string): Promise<void> {
return this.withSessionLock(sessionId, async () => {
const producer = this.producers.get(sessionId);
if (producer) {
try {
await producer.flush();
} catch (err) {
console.error(
`[protocol] Failed to flush producer for ${sessionId}:`,
err,
);
}
try {
await producer.detach();
} catch (err) {
console.error(
`[protocol] Failed to detach producer for ${sessionId}:`,
err,
);
}
this.producers.delete(sessionId);
}

const state = this.sessionStates.get(sessionId);
if (state) {
state.changeSubscription?.unsubscribe();
state.sessionDB.close();
}

this.streams.delete(sessionId);
this.sessionStates.delete(sessionId);
this.producerErrors.delete(sessionId);
this.producerHealthy.delete(sessionId);
this.activeGenerationIds.delete(sessionId);
});
}

⚠️ Potential issue | 🟡 Minor

deleteSession does not clean up messageSeqs entries for the session.

resetSession carefully deletes messageSeq entries for active generations, but deleteSession skips this. Over repeated create/delete cycles, orphaned entries in messageSeqs will accumulate (keyed by messageId). Consider mirroring the cleanup logic from resetSession before deleting the other maps — or at minimum clearing the active generation's messageId.

Proposed fix (add before the map deletions)
+			const activeMessageId = this.activeGenerationIds.get(sessionId);
+			if (activeMessageId) {
+				this.messageSeqs.delete(activeMessageId);
+			}
+
 			this.streams.delete(sessionId);
🤖 Prompt for AI Agents
In `@apps/streams/src/protocol.ts` around lines 112 - 147, deleteSession currently
omits cleaning up messageSeqs, causing orphaned entries; before removing the
maps at the end of deleteSession, mirror resetSession by retrieving
this.activeGenerationIds.get(sessionId) (or the single active generation id used
for the session) and for each generation id remove/clear any message IDs stored
in this.messageSeqs for that generation, then proceed to delete
this.activeGenerationIds, this.producers, this.sessionStates, etc.; update
deleteSession to perform this messageSeqs cleanup (use the same logic as
resetSession) so messageSeqs no longer accumulates orphaned entries.

Comment on lines +354 to 383
async finishGeneration({
sessionId,
messageId,
}: {
sessionId: string;
messageId?: string;
}): Promise<void> {
await this.flushSession(sessionId);

if (messageId) {
this.clearSeq(messageId);
}
const activeMessageId = this.activeGenerationIds.get(sessionId);
if (!activeMessageId) {
// no-op
} else if (!messageId || messageId === activeMessageId) {
this.activeGenerationIds.delete(sessionId);
} else {
console.warn(
`[protocol] Ignoring stale finish for ${sessionId}: got ${messageId}, active is ${activeMessageId}`,
);
}

const errors = this.drainProducerErrors(sessionId);
if (errors.length > 0) {
throw new Error(
`Producer encountered ${errors.length} background error(s) during generation: ${errors.map((e) => e.message).join("; ")}`,
);
}
}

⚠️ Potential issue | 🟠 Major

finishGeneration without messageId skips clearSeq for the active generation.

When messageId is undefined, line 363 skips clearSeq, but lines 369–370 still delete the active generation. This leaves the active messageId's sequence counter orphaned in messageSeqs. If that messageId is ever reused, it would resume from the old sequence number instead of 0.

Proposed fix
 async finishGeneration({
 	sessionId,
 	messageId,
 }: {
 	sessionId: string;
 	messageId?: string;
 }): Promise<void> {
 	await this.flushSession(sessionId);

-	if (messageId) {
-		this.clearSeq(messageId);
-	}
 	const activeMessageId = this.activeGenerationIds.get(sessionId);
+	const idToClear = messageId ?? activeMessageId;
+	if (idToClear) {
+		this.clearSeq(idToClear);
+	}
 	if (!activeMessageId) {
 		// no-op
 	} else if (!messageId || messageId === activeMessageId) {
🤖 Prompt for AI Agents
In `@apps/streams/src/protocol.ts` around lines 354 - 383, finishGeneration
currently only calls clearSeq when an explicit messageId is provided, which
leaves the active generation's sequence counter in messageSeqs orphaned when
finishGeneration is invoked with messageId undefined; update finishGeneration so
that when you delete the active generation from activeGenerationIds you also
clear its sequence counter: if messageId is provided call clearSeq(messageId) as
now, but when you detect activeMessageId and either messageId is undefined or
different, call clearSeq(activeMessageId) before deleting
activeGenerationIds.delete(sessionId) (avoid double-clearing when messageId ===
activeMessageId).

Comment on lines +12 to +19
try {
body = await c.req.json();
} catch {
return c.json(
{ error: "Invalid JSON body", code: "INVALID_BODY", sessionId },
400,
);
}

⚠️ Potential issue | 🟡 Minor

Login parse error is silently swallowed; logout logs it.

Line 14 discards the parse error without logging, while the equivalent logout handler (Line 74) correctly logs with console.error("[AUTH] Failed to parse logout body:", parseError). This violates the "never swallow errors silently" guideline.

Proposed fix
 		try {
 			body = await c.req.json();
-		} catch {
+		} catch (parseError) {
+			console.error("[AUTH] Failed to parse login body:", parseError);
 			return c.json(
 				{ error: "Invalid JSON body", code: "INVALID_BODY", sessionId },
 				400,
 			);
 		}

As per coding guidelines, "Never swallow errors silently; at minimum log errors with context before rethrowing or handling them explicitly."

🤖 Prompt for AI Agents
In `@apps/streams/src/routes/auth.ts` around lines 12 - 19, The login handler
currently swallows JSON parse errors when calling c.req.json(); update that
catch block to log the error with context (e.g., use console.error or the
existing logging convention) before returning the 400 response—mirror the logout
handler's behavior which logs "[AUTH] Failed to parse logout body:"; reference
the login parse site (the try around c.req.json(), and the sessionId variable)
and include the parse error in the log message (e.g., "[AUTH] Failed to parse
login body:" + parseError) so the error is not silently discarded.
