
fix(tui): resolve streaming freeze from unhandled finish event, missing timeouts, and unguarded effects#4

Closed
coleleavitt wants to merge 1 commit into dev from fix/tui-freeze-gc-pressure

Conversation

@coleleavitt (Owner) commented Feb 27, 2026

Summary

  • Fix stream termination in processor.ts where for await (stream.fullStream) never exited on finish event, causing 0% CPU hangs after streaming completes
  • Increase SDK event batch window from 16ms to 50ms in sdk.tsx to reduce render frequency during high-throughput streaming
  • Add missing timeout to client.listTools() in mcp/index.ts to prevent indefinite hangs on unresponsive MCP servers
  • Add error handling for fire-and-forget effect Promises in db.ts to prevent silent crash propagation
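The core of the processor.ts fix can be sketched as follows; the StreamPart shape and the consume wrapper below are illustrative stand-ins, not the actual SDK types:

```typescript
// Illustrative sketch: exit a for-await loop when a "finish" part arrives.
// StreamPart is a stand-in; the real SDK emits richer part types.
type StreamPart = { type: "text-delta"; text: string } | { type: "finish" }

async function consume(fullStream: AsyncIterable<StreamPart>): Promise<string> {
  let output = ""
  for await (const part of fullStream) {
    if (part.type === "text-delta") output += part.text
    // Without an explicit break, the loop only ends when the iterator closes,
    // which is what produced the 0% CPU hang after streaming completed.
    if (part.type === "finish") break
  }
  return output
}
```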

Context

These are the non-reactive-store fixes from the original freeze investigation. The reactive store fix (sync.tsx delta coalescing) is in PR #3 as a separate concern.

Files Changed

| File | Change |
| --- | --- |
| src/session/processor.ts | Break for-await loop on finish event |
| src/cli/cmd/tui/context/sdk.tsx | Increase event batch window 16ms → 50ms |
| src/mcp/index.ts | Add withTimeout() to client.listTools() |
| src/storage/db.ts | Handle fire-and-forget effect Promise rejections |

Testing

  • All 4 files pass LSP diagnostics (zero errors)
  • Full turbo typecheck passes across all 18 packages
  • Full turbo build succeeds
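For reference, a generic timeout wrapper like the withTimeout() used in the MCP fix can be sketched as below; this is an assumed implementation, not the project's actual helper:

```typescript
// Sketch of a generic promise timeout wrapper (assumed implementation;
// the project's real withTimeout helper may differ).
// Rejects if the wrapped promise does not settle within `ms` milliseconds.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
    promise.then(
      (value) => { clearTimeout(timer); resolve(value) },
      (error) => { clearTimeout(timer); reject(error) },
    )
  })
}
```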

Summary by cubic

Fixes a streaming freeze that left sessions stuck after completion and reduces TUI render churn during high-throughput updates. Adds timeouts and guards to prevent hangs and silent errors with MCP and database effects.

  • Bug Fixes
    • Break for-await loop on finish in session/processor.ts to stop post-stream hangs.
    • Batch SDK events at 50ms in cli/cmd/tui/context/sdk.tsx to cut render frequency.
    • Add withTimeout() around client.listTools() in mcp/index.ts to avoid indefinite waits.
    • Wrap fire-and-forget effects in storage/db.ts with rejection handling and logging.

Written for commit 3cd4444. Summary will update on new commits.

Summary by CodeRabbit

  • Bug Fixes
    • Enhanced stream completion handling with graceful finish event support
    • Improved per-client timeout configuration for MCP tool operations
    • Strengthened error resilience in asynchronous side-effect execution
    • Optimized event batching to reduce rendering overhead

…ng timeouts, and unguarded effects

- Fix stream termination in processor.ts: for-await loop now breaks on finish event
- Increase SDK event batch window from 16ms to 50ms to reduce render churn
- Add withTimeout to MCP client.listTools() to prevent indefinite hangs
- Handle fire-and-forget effect Promise rejections in db.ts
@github-actions

Thanks for your contribution!

This PR doesn't have a linked issue. All PRs must reference an existing issue.

Please:

  1. Open an issue describing the bug/feature (if one doesn't exist)
  2. Add Fixes #<number> or Closes #<number> to this PR description

See CONTRIBUTING.md for details.

@github-actions

This PR doesn't fully meet our contributing guidelines and PR template.

What needs to be fixed:

  • PR description is missing required template sections. Please use the PR template.

Please edit this PR description to address the above within 2 hours, or it will be automatically closed.

If you believe this was flagged incorrectly, please let a maintainer know.

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the application's stability and performance by addressing several critical issues that could lead to hangs, unresponsiveness, or silent failures. The changes ensure proper termination of streaming processes, optimize event handling for better UI responsiveness, prevent indefinite waits on external service calls, and improve error resilience for asynchronous operations.

Highlights

  • Stream Termination Fix: Resolved an issue in processor.ts where for await (stream.fullStream) would not exit on a finish event, leading to 0% CPU hangs after streaming completed.
  • SDK Event Batch Window Adjustment: Increased the SDK event batch window in sdk.tsx from 16ms to 50ms to reduce render frequency during high-throughput streaming operations.
  • Missing Timeout Added: Implemented a timeout for client.listTools() in mcp/index.ts to prevent indefinite hangs when interacting with unresponsive MCP servers.
  • Fire-and-Forget Promise Error Handling: Added error handling for fire-and-forget effect Promises in db.ts to prevent silent crash propagation.
Changelog
  • src/cli/cmd/tui/context/sdk.tsx
    • Increase event batch window 16ms → 50ms
  • src/mcp/index.ts
    • Add withTimeout() to client.listTools()
  • src/session/processor.ts
    • Break for-await loop on finish event
  • src/storage/db.ts
    • Handle fire-and-forget effect Promise rejections
Activity
  • No specific activity (comments, reviews, or progress updates) has been recorded for this pull request yet.


coderabbitai bot commented Feb 27, 2026

📝 Walkthrough

These changes adjust event batching thresholds to reduce immediate renders, add per-client timeout configuration for MCP tool discovery, introduce stream completion signaling in session processing, and wrap side-effect invocations with Promise-based error handling for improved fault tolerance.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Event Batching: packages/opencode/src/cli/cmd/tui/context/sdk.tsx | Increased event batching threshold from 16ms to 50ms in handleEvent flush logic, reducing render frequency but increasing latency for batched events. |
| MCP Configuration & Timeout: packages/opencode/src/mcp/index.ts | Added per-client timeout derivation from the config entry with fallback to defaultTimeout and DEFAULT_TIMEOUT; wraps the client.listTools() call with the computed timeout for improved timeout consistency. |
| Stream Completion Signaling: packages/opencode/src/session/processor.ts | Introduced a finished boolean flag that terminates the processing loop on the stream "finish" event, alongside the existing needsCompaction condition for control-flow exit. |
| Async Error Handling for Side Effects: packages/opencode/src/storage/db.ts | Wrapped effect function invocations with a Promise.resolve(effect()).catch(...) pattern to enable per-effect error handling and logging instead of direct invocation. |

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 Batching faster, timeouts aligned,
Streams know when to exit, effects refined,
Configuration whispers per-client care,
Error handlers catch with flair!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately summarizes the main changes: fixing streaming freeze from unhandled finish event, missing timeouts, and unguarded effects across multiple files. |
| Description check | ✅ Passed | The PR description covers all required template sections with substantive content and clear explanations of each change. |



@qodo-free-for-open-source-projects

Review Summary by Qodo

Fix TUI streaming freeze from unhandled finish event and missing timeouts

🐞 Bug fix


Walkthroughs

Description
• Fix stream termination in processor.ts where finish event now properly breaks the for-await loop
• Add timeout handling to MCP client.listTools() to prevent indefinite hangs
• Increase SDK event batch window from 16ms to 50ms reducing render frequency
• Add error handling for fire-and-forget effect Promises in db.ts
Diagram
flowchart LR
  A["Stream Processing"] -->|finish event| B["Break Loop"]
  C["MCP Client"] -->|withTimeout| D["Prevent Hangs"]
  E["SDK Events"] -->|batch 50ms| F["Reduce Renders"]
  G["DB Effects"] -->|error handling| H["Prevent Crashes"]


File Changes

1. packages/opencode/src/session/processor.ts 🐞 Bug fix +4/-1

Fix stream termination on finish event

• Add finished flag to track stream finish event
• Log finish event reception for debugging
• Break for-await loop when finished flag is set alongside needsCompaction check

2. packages/opencode/src/mcp/index.ts 🐞 Bug fix +3/-1

Add timeout to MCP client.listTools call

• Extract timeout configuration from MCP config entry
• Apply withTimeout wrapper to client.listTools() call
• Use configured timeout or fall back to default timeout constant

3. packages/opencode/src/cli/cmd/tui/context/sdk.tsx ✨ Enhancement +3/-3

Increase SDK event batch window to 50ms

• Increase event batch window threshold from 16ms to 50ms
• Update setTimeout delay from 16ms to 50ms for batching
• Update comment to reflect new 50ms window

4. packages/opencode/src/storage/db.ts Error handling +2/-2

Add error handling for effect Promises

• Wrap effect execution with Promise.resolve() to handle async effects
• Add catch handler to log errors from fire-and-forget effect Promises
• Apply error handling to both use() and transaction() code paths


@qodo-free-for-open-source-projects

qodo-free-for-open-source-projects bot commented Feb 27, 2026

Code Review by Qodo

🐞 Bugs (3) 📘 Rule violations (2) 📎 Requirement gaps (0)



Action required

1. catch uses implicit any 📘 Rule violation ✓ Correctness
Description
The new Promise rejection handlers introduce e as an implicit any (from Promise.catch), which
weakens type safety. This can hide mistakes when logging/inspecting errors and violates the no-any
rule.
Code

packages/opencode/src/storage/db.ts[125]

+        for (const effect of effects) Promise.resolve(effect()).catch((e) => log.error("effect failed", { error: e }))
Evidence
The compliance checklist forbids introducing any; the newly-added .catch((e) => ...) handlers
introduce an any-typed rejection reason parameter (per TS lib typing), which is new in these
lines.

AGENTS.md
packages/opencode/src/storage/db.ts[125-125]
packages/opencode/src/storage/db.ts[149-149]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
New `.catch((e) => ...)` handlers introduce an implicit `any` rejection reason, violating the requirement to avoid `any`.

## Issue Context
In TypeScript, `Promise.catch` typically types the rejection reason as `any`; explicitly annotating as `unknown` and narrowing preserves type safety.

## Fix Focus Areas
- packages/opencode/src/storage/db.ts[125-125]
- packages/opencode/src/storage/db.ts[149-149]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
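One way to satisfy the no-any rule is sketched below with a stand-in logger; the project's actual log.error signature is an assumption:

```typescript
// Sketch: type the rejection reason as `unknown` and narrow before logging.
// Logger is a stand-in for the project's logger (assumption).
type Logger = { error: (msg: string, data: Record<string, unknown>) => void }

function runEffect(effect: () => void | Promise<void>, log: Logger): Promise<void> {
  return Promise.resolve()
    .then(() => effect()) // deferring into .then() also catches synchronous throws
    .catch((e: unknown) => {
      // `e` is explicitly `unknown`, so it must be narrowed before use
      log.error("effect failed", { error: e instanceof Error ? e.message : String(e) })
    })
}
```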


2. MCP timeout leaks clients 🐞 Bug ⛯ Reliability
Description
On listTools() timeout/failure in MCP.tools(), the client is deleted from state without calling
client.close(), potentially leaking transports/sockets. This is especially risky now that
withTimeout() will trigger failures that previously would hang indefinitely.
Code

packages/opencode/src/mcp/index.ts[R580-585]

+        const mcpEntry = config[clientName]
+        const timeout = (isMcpConfigured(mcpEntry) ? mcpEntry.timeout : undefined) ?? defaultTimeout ?? DEFAULT_TIMEOUT
+        const toolsResult = await withTimeout(client.listTools(), timeout).catch((e) => {
          log.error("failed to get tools", { clientName, error: e.message })
          const failedStatus = {
            status: "failed" as const,
Evidence
In MCP.tools() the timeout catch path removes the client from the state map but never closes it.
Other code paths (disconnect() and create() when initial listTools() fails) do close clients
before removing/returning, indicating closing is expected for cleanup.

packages/opencode/src/mcp/index.ts[578-590]
packages/opencode/src/mcp/index.ts[554-564]
packages/opencode/src/mcp/index.ts[466-476]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
When `listTools()` times out/fails in `MCP.tools()`, the client is removed from state but never closed, potentially leaking resources.

### Issue Context
Other code paths (e.g., `disconnect()` and `create()` when initial `listTools()` fails) close clients, suggesting this is expected cleanup behavior.

### Fix Focus Areas
- packages/opencode/src/mcp/index.ts[578-590]

### Suggested changes
1) Before `delete s.clients[clientName]`, do:
- `await client.close().catch((error) => log.error("Failed to close MCP client", { clientName, error }))`

2) Make the error logging safe:
- Replace `error: e.message` with `error: e instanceof Error ? e.message : String(e)` (to avoid issues if `e` is `null`/non-object).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
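The close-before-delete cleanup described in this finding could look roughly like the sketch below; the McpClient interface and state-map shape are assumptions inferred from the review text:

```typescript
// Sketch: release a failed MCP client's transport before removing it from state.
// The client interface is an assumption inferred from the review.
interface McpClient {
  listTools(): Promise<unknown>
  close(): Promise<void>
}

async function dropFailedClient(clients: Record<string, McpClient>, name: string): Promise<void> {
  const client = clients[name]
  if (!client) return
  // Close first so sockets/transports are released; tolerate close() failures.
  await client.close().catch((e: unknown) => {
    console.error("Failed to close MCP client", {
      name,
      error: e instanceof Error ? e.message : String(e),
    })
  })
  delete clients[name]
}
```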



Remediation recommended

3. finished boolean naming 📘 Rule violation ✓ Correctness
Description
A new boolean state flag is introduced as finished, which does not follow the
is_/has_/can_/should_ prefix convention. This reduces self-documentation and makes boolean
intent less immediately clear.
Code

packages/opencode/src/session/processor.ts[53]

+    let finished = false
Evidence
Compliance requires boolean variables to use appropriate prefixes; the PR adds a boolean named
finished and sets it later, showing it is a boolean flag that violates the naming convention.

Rule 2: Generic: Meaningful Naming and Self-Documenting Code
packages/opencode/src/session/processor.ts[53-53]
packages/opencode/src/session/processor.ts[341-342]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A new boolean flag is named `finished`, which does not follow the required boolean naming convention (`is_`, `has_`, `can_`, `should_`).

## Issue Context
This flag is used as a state indicator for stream completion; it should read clearly as a boolean when used in conditions.

## Fix Focus Areas
- packages/opencode/src/session/processor.ts[53-53]
- packages/opencode/src/session/processor.ts[341-342]
- packages/opencode/src/session/processor.ts[351-351]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


4. Batch window overshoots 🐞 Bug ➹ Performance
Description
The 50ms batching schedules setTimeout(flush, 50) instead of waiting only for the remaining time
in the 50ms window, so total time since last flush can approach ~100ms. This can noticeably increase
TUI latency under bursty streaming now that the window was increased from 16ms → 50ms.
Code

packages/opencode/src/cli/cmd/tui/context/sdk.tsx[R55-59]

+      // If we just flushed recently (within 50ms), batch this with future events
      // Otherwise, process immediately to avoid latency
-      if (elapsed < 16) {
-        timer = setTimeout(flush, 16)
+      if (elapsed < 50) {
+        timer = setTimeout(flush, 50)
        return
Evidence
elapsed is computed since the last flush, but the timeout is always a full 50ms from “now”, not
(50 - elapsed). Because flush() sets last = Date.now(), an event arriving late in the window
(e.g., 49ms after flush) will be delayed an additional 50ms (total ~99ms since last flush).

packages/opencode/src/cli/cmd/tui/context/sdk.tsx[36-62]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The batching logic checks `elapsed < 50` but then always schedules `setTimeout(flush, 50)`, which delays by a full 50ms from the *event time* rather than the remaining time until the 50ms window closes.

### Issue Context
This becomes more user-visible after changing the window from 16ms to 50ms, because late-window events can now be delayed almost an extra 50ms.

### Fix Focus Areas
- packages/opencode/src/cli/cmd/tui/context/sdk.tsx[50-62]

### Suggested change
Compute remaining time:
- `const delay = Math.max(0, 50 - elapsed)`
- `timer = setTimeout(flush, delay)`
Optionally clamp to at least `1` to avoid synchronous-ish scheduling depending on runtime.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
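The remaining-window computation suggested above can be isolated into a tiny helper for illustration; the real flush/timer wiring in sdk.tsx is not reproduced here:

```typescript
// Sketch: compute how long to wait so a flush lands at the end of the current
// 50ms batch window, instead of a full 50ms after the latest event.
const BATCH_WINDOW_MS = 50

function nextFlushDelay(lastFlushAt: number, now: number): number {
  const elapsed = now - lastFlushAt
  if (elapsed >= BATCH_WINDOW_MS) return 0 // window already closed: flush immediately
  return Math.max(1, BATCH_WINDOW_MS - elapsed) // clamp to >=1ms, as the review suggests
}
```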



Advisory comments

5. MCP default timeout mismatch 🐞 Bug ✓ Correctness
Description
Config schema describes the MCP timeout as defaulting to 5000ms, but the MCP code uses `DEFAULT_TIMEOUT = 30_000` (and this PR newly applies it to `tools()` listTools calls). This mismatch can confuse operators and make the "default" behavior feel inconsistent.
Code

packages/opencode/src/mcp/index.ts[R580-582]

+        const mcpEntry = config[clientName]
+        const timeout = (isMcpConfigured(mcpEntry) ? mcpEntry.timeout : undefined) ?? defaultTimeout ?? DEFAULT_TIMEOUT
+        const toolsResult = await withTimeout(client.listTools(), timeout).catch((e) => {
Evidence
The MCP module defines a 30s default timeout, while config documentation claims a 5s default if
unspecified. Since this PR starts enforcing a timeout for tools() discovery, the discrepancy
becomes more user-visible.

packages/opencode/src/mcp/index.ts[27-30]
packages/opencode/src/config/config.ts[525-531]
packages/opencode/src/mcp/index.ts[580-582]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Config docs claim MCP `timeout` defaults to 5000ms, but runtime default is 30000ms, and the PR now applies that default to `tools()` discovery.

### Issue Context
This can lead to operators expecting a shorter timeout than they actually get.

### Fix Focus Areas
- packages/opencode/src/mcp/index.ts[27-30]
- packages/opencode/src/config/config.ts[525-531]

### Suggested change
Either:
- Update config descriptions to match the 30s default, OR
- Change `DEFAULT_TIMEOUT` / fallback logic so the effective default is 5000ms as documented.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
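The fallback chain at issue is plain nullish coalescing; a minimal sketch of the resolution order, where the 30_000 value mirrors the review's description of DEFAULT_TIMEOUT:

```typescript
// Sketch of the timeout resolution order described in the finding.
const DEFAULT_TIMEOUT = 30_000 // runtime default; config docs currently claim 5000ms

function effectiveTimeout(entryTimeout?: number, defaultTimeout?: number): number {
  // `??` falls through only on null/undefined, so an explicit 0 is respected
  return entryTimeout ?? defaultTimeout ?? DEFAULT_TIMEOUT
}
```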




@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request delivers several crucial fixes to address application freezes and hangs. The changes include properly terminating a stream processing loop, adding a necessary timeout to a client call, and safeguarding against unhandled promise rejections from side effects. A performance enhancement to batch UI events is also included. The fixes are well-implemented. I have one suggestion to improve code maintainability by using a constant for a hardcoded value.

Comment on lines +57 to +58
if (elapsed < 50) {
timer = setTimeout(flush, 50)


medium

The batch window time 50 is used as a magic number in both the condition and the setTimeout call. To improve readability and maintainability, consider defining it as a constant, for example const BATCH_WINDOW_MS = 50;, at a higher scope (e.g., at the top of the init function) and using it here.


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
packages/opencode/src/storage/db.ts (1)

125-125: Extract shared effect-drain logic to avoid drift.

Line 125 and Line 149 duplicate the same fire-and-forget error-handling pattern. A small helper keeps both paths consistent.

Proposed refactor
+  function runEffects(effects: (() => void | Promise<void>)[]) {
+    for (const effect of effects) {
+      Promise.resolve(effect()).catch((e) => log.error("effect failed", { error: e }))
+    }
+  }
+
   export function use<T>(callback: (trx: TxOrDb) => T): T {
@@
-        for (const effect of effects) Promise.resolve(effect()).catch((e) => log.error("effect failed", { error: e }))
+        runEffects(effects)
         return result
@@
-        for (const effect of effects) Promise.resolve(effect()).catch((e) => log.error("effect failed", { error: e }))
+        runEffects(effects)
         return result

Also applies to: 149-149

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/opencode/src/storage/db.ts` at line 125, Two duplicated
fire-and-forget loops call Promise.resolve(effect()).catch(...) — extract that
logic into a small helper (e.g., drainEffects or runFireAndForget) and use it in
both places to keep behavior consistent; the helper should accept an iterable of
effect functions (or promises), call Promise.resolve(effect()) for each, and log
errors via log.error("effect failed", { error: e }) on rejection so both the
block that iterates over effects and the other identical call reuse the same
implementation.
packages/opencode/src/cli/cmd/tui/context/sdk.tsx (1)

57-58: Cap the delay to the remaining batch window.

At Line 57–Line 58, scheduling a full 50ms even when elapsed is already close to 50ms can stretch effective flush latency close to ~100ms. Use the remaining window instead.

Proposed refactor
-      if (elapsed < 50) {
-        timer = setTimeout(flush, 50)
+      if (elapsed < 50) {
+        timer = setTimeout(flush, Math.max(0, 50 - elapsed))
         return
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/opencode/src/cli/cmd/tui/context/sdk.tsx` around lines 57 - 58, The
scheduling uses a fixed 50ms even when part of the batch window has already
elapsed; change the timeout to the remaining window so you don't push flush out
to ~100ms. In the block that checks elapsed and sets timer (variables: elapsed,
timer, flush), compute the remaining delay (e.g., 50 - elapsed), clamp it to a
non-negative value, and pass that remaining delay into setTimeout instead of the
constant 50 to ensure the flush runs at the end of the intended 50ms window.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4205fbd and 3cd4444.

📒 Files selected for processing (4)
  • packages/opencode/src/cli/cmd/tui/context/sdk.tsx
  • packages/opencode/src/mcp/index.ts
  • packages/opencode/src/session/processor.ts
  • packages/opencode/src/storage/db.ts


@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 4 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="packages/opencode/src/storage/db.ts">

<violation number="1" location="packages/opencode/src/storage/db.ts:125">
P1: `Promise.resolve(effect())` does not catch synchronous throws from `effect()`. If the effect function throws synchronously, the exception escapes before `Promise.resolve` wraps it, crashing the caller and skipping remaining effects. Use `Promise.resolve().then(() => effect())` to defer execution into the microtask queue, which catches both sync throws and async rejections.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

const effects: (() => void | Promise<void>)[] = []
const result = ctx.provide({ effects, tx: Client() }, () => callback(Client()))
for (const effect of effects) effect()
for (const effect of effects) Promise.resolve(effect()).catch((e) => log.error("effect failed", { error: e }))

@cubic-dev-ai cubic-dev-ai bot Feb 27, 2026


P1: Promise.resolve(effect()) does not catch synchronous throws from effect(). If the effect function throws synchronously, the exception escapes before Promise.resolve wraps it, crashing the caller and skipping remaining effects. Use Promise.resolve().then(() => effect()) to defer execution into the microtask queue, which catches both sync throws and async rejections.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/opencode/src/storage/db.ts, line 125:

<comment>`Promise.resolve(effect())` does not catch synchronous throws from `effect()`. If the effect function throws synchronously, the exception escapes before `Promise.resolve` wraps it, crashing the caller and skipping remaining effects. Use `Promise.resolve().then(() => effect())` to defer execution into the microtask queue, which catches both sync throws and async rejections.</comment>

<file context>
@@ -122,7 +122,7 @@ export namespace Database {
         const effects: (() => void | Promise<void>)[] = []
         const result = ctx.provide({ effects, tx: Client() }, () => callback(Client()))
-        for (const effect of effects) effect()
+        for (const effect of effects) Promise.resolve(effect()).catch((e) => log.error("effect failed", { error: e }))
         return result
       }
</file context>
Suggested change
for (const effect of effects) Promise.resolve(effect()).catch((e) => log.error("effect failed", { error: e }))
for (const effect of effects) Promise.resolve().then(() => effect()).catch((e) => log.error("effect failed", { error: e }))

Comment on lines +580 to 585
const mcpEntry = config[clientName]
const timeout = (isMcpConfigured(mcpEntry) ? mcpEntry.timeout : undefined) ?? defaultTimeout ?? DEFAULT_TIMEOUT
const toolsResult = await withTimeout(client.listTools(), timeout).catch((e) => {
log.error("failed to get tools", { clientName, error: e.message })
const failedStatus = {
status: "failed" as const,

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Action required

2. Mcp timeout leaks clients 🐞 Bug ⛯ Reliability

On listTools() timeout/failure in MCP.tools(), the client is deleted from state without calling
client.close(), potentially leaking transports/sockets. This is especially risky now that
withTimeout() will trigger failures that previously would hang indefinitely.
Agent Prompt
### Issue description
When `listTools()` times out/fails in `MCP.tools()`, the client is removed from state but never closed, potentially leaking resources.

### Issue Context
Other code paths (e.g., `disconnect()` and `create()` when initial `listTools()` fails) close clients, suggesting this is expected cleanup behavior.

### Fix Focus Areas
- packages/opencode/src/mcp/index.ts[578-590]

### Suggested changes
1) Before `delete s.clients[clientName]`, do:
- `await client.close().catch((error) => log.error("Failed to close MCP client", { clientName, error }))`

2) Make the error logging safe:
- Replace `error: e.message` with `error: e instanceof Error ? e.message : String(e)` (to avoid issues if `e` is `null`/non-object).
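The suggested changes could be sketched as follows. This is an illustration only: `McpClient`, `state`, `log`, and `dropFailedClient` are stand-ins for the real types and functions in `mcp/index.ts`, not its actual API.

```typescript
// Hedged sketch of the cleanup path: close the client before removing it
// from state, and extract the error message defensively so a null or
// non-object rejection reason cannot itself throw.
interface McpClient {
  close(): Promise<void>
}

const state: { clients: Record<string, McpClient> } = { clients: {} }
const log = { error: (msg: string, data?: object) => console.error(msg, data) }

async function dropFailedClient(clientName: string, e: unknown): Promise<void> {
  const message = e instanceof Error ? e.message : String(e)
  log.error("failed to get tools", { clientName, error: message })
  const client = state.clients[clientName]
  if (client) {
    // Close first so the transport/socket is released even if removal races.
    await client.close().catch((error: unknown) =>
      log.error("Failed to close MCP client", { clientName, error }),
    )
    delete state.clients[clientName]
  }
}
```

This mirrors the cleanup that `disconnect()` and the failed-`create()` path reportedly already do, so a `withTimeout()` failure follows the same lifecycle.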



@llamapreview llamapreview bot left a comment


AI Code Review by LlamaPReview

🎯 TL;DR & Recommendation

Recommendation: Request Changes

This PR fixes a critical streaming hang bug that left sessions stuck after completion and adds timeouts to prevent indefinite waits on MCP servers, alongside defensive error handling and performance tuning.

🌟 Strengths

  • Effectively addresses root causes of observed freezes with targeted fixes.
  • Enhances system robustness through timeouts and error logging.
| Priority | File | Category | Impact Summary | Anchors |
| --- | --- | --- | --- | --- |
| P1 | src/session/processor.ts | Bug | Fixes hang bug leaving sessions stuck after streaming | |
| P1 | src/mcp/index.ts | Architecture | Prevents indefinite hangs from unresponsive MCP servers | |
| P2 | src/storage/db.ts | Maintainability | Logs errors to prevent silent crashes from background effects | |
| P2 | src/cli/cmd/tui/context/sdk.tsx | Performance | Increases batching window; may reduce responsiveness but cuts render churn | path:src/cli/cmd/tui/context/sync.tsx |

🔍 Notable Themes

  • Timeout and Error Handling: Multiple changes introduce timeouts and error logging to prevent hangs and silent failures, improving overall system resilience.
  • Performance vs. Responsiveness Trade-off: The increased batching window aims to reduce render churn but requires monitoring for potential latency impacts on UI responsiveness.

📈 Risk Diagram

This diagram illustrates the fix for the streaming hang bug where the loop now breaks on the finish event.

```mermaid
sequenceDiagram
    participant SP as Session Processor
    participant LS as LLM Stream

    SP->>LS: Start streaming
    loop Stream Events
        LS->>SP: Emit event (e.g., text, reasoning)
        alt Event is "finish"
            Note over SP: R1(P1): Sets finished flag and breaks loop to prevent hang
            SP->>SP: Break loop
        end
    end
    SP->>SP: Exit loop and continue
```
⚠️ **Unanchored Suggestions (Manual Review Recommended)**

The following suggestions could not be precisely anchored to a specific line in the diff. This can happen if the code is outside the changed lines, has been significantly refactored, or if the suggestion is a general observation. Please review them carefully in the context of the full file.


📁 File: src/session/processor.ts

The introduction of the finished flag and the break condition directly addresses a critical hang bug where the for await (const value of stream.fullStream) loop would never terminate after receiving a "finish" event, leaving the session processor stuck at 0% CPU. The change is deterministic and corrects an observable failure mode (the hang). The fix is localized to the streaming logic.

Related Code:

```ts
case "finish":
  log.info("stream finish event received")
  finished = true
  break
```
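Note that `break` inside a `case` only exits the `switch`; the surrounding for-await loop is left by checking the flag afterwards. A minimal self-contained sketch of that pattern (event names and shapes here are illustrative, not the real SDK stream types):

```typescript
// Hedged sketch of the finished-flag pattern: the flag set inside the switch
// is checked after it, so the for-await loop actually terminates on "finish".
type StreamEvent = { type: "text"; delta: string } | { type: "finish" }

async function consume(stream: AsyncIterable<StreamEvent>): Promise<string> {
  let out = ""
  let finished = false
  for await (const value of stream) {
    switch (value.type) {
      case "text":
        out += value.delta
        break
      case "finish":
        finished = true
        break // exits the switch only
    }
    if (finished) break // exits the for-await loop
  }
  return out
}
```

Breaking out of the loop also lets the async iterator's `return()` run, so the underlying stream is released instead of idling at 0% CPU.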

📁 File: src/mcp/index.ts

Adding a timeout to client.listTools() is an important robustness fix to prevent the entire tools-fetching operation from hanging indefinitely on unresponsive MCP servers. This prevents a caller (like the TUI) from being blocked. The timeout logic correctly respects a hierarchy of configurations (client-specific, experimental global, default), aligning with the existing architectural pattern for MCP timeouts.

Related Code:

```ts
const timeout = (isMcpConfigured(mcpEntry) ? mcpEntry.timeout : undefined) ?? defaultTimeout ?? DEFAULT_TIMEOUT
const toolsResult = await withTimeout(client.listTools(), timeout).catch((e) => {
```

📁 File: src/storage/db.ts

The change wraps fire-and-forget effect promises with a .catch() handler to log errors. Previously, an unhandled rejection in any of these effects would crash the Node.js process or bubble up silently. This is a defensive programming improvement that increases operational visibility and prevents silent crashes from background tasks. It uses Promise.resolve() so that both synchronous return values and Promises flow into the same .catch() handler, though a synchronous throw inside effect() would still escape it.

Related Code:

```ts
for (const effect of effects) Promise.resolve(effect()).catch((e) => log.error("effect failed", { error: e }))
```
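The one gap in this pattern: `Promise.resolve(effect())` normalizes an async rejection, but a synchronous throw happens while the argument is being evaluated, before `.catch` can attach. A hedged sketch of a wrapper that covers both cases (`runEffectSafely` is a hypothetical name, not the repo's code):

```typescript
// Hedged sketch: wrapping the call in an async IIFE converts a synchronous
// throw into a rejected promise, so one .catch() handles both failure modes.
type Effect = () => void | Promise<void>

async function runEffectSafely(effect: Effect): Promise<string | undefined> {
  let captured: string | undefined
  await (async () => effect())().catch((e: unknown) => {
    captured = e instanceof Error ? e.message : String(e)
  })
  // Returns the captured message (if any) so callers can log it; resolves
  // normally whether the effect succeeded, threw, or rejected.
  return captured
}
```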

📁 File: src/cli/cmd/tui/context/sdk.tsx

Speculative: Increasing the event batching window from 16ms to 50ms aims to reduce render churn during high-throughput streaming (like LLM token deltas). This change should be evaluated in the context of the related PR #3, which refactors sync.tsx to use more efficient setStore updates. The risk is that a 50ms delay could make the TUI feel less responsive for non-streaming UI events. The impact depends on the event volume and the efficiency gains from PR #3.

Related Code:

```ts
// If we just flushed recently (within 50ms), batch this with future events
// Otherwise, process immediately to avoid latency
if (elapsed < 50) {
  timer = setTimeout(flush, 50)
  return
}
```



@github-actions

This pull request has been automatically closed because it was not updated to meet our contributing guidelines within the 2-hour window.

Feel free to open a new pull request that follows our guidelines.

@github-actions github-actions bot closed this Feb 27, 2026
@coleleavitt coleleavitt deleted the fix/tui-freeze-gc-pressure branch March 4, 2026 21:19