
Replace eager start() loop with pull()-based backpressure#1803

Merged
SaltyAom merged 2 commits into elysiajs:main from MarcelOlsen:fix/sse-gen-backpressure
Mar 16, 2026

Conversation

@MarcelOlsen
Contributor

@MarcelOlsen MarcelOlsen commented Mar 12, 2026

this resolves #1801

Problem

createStreamHandler uses new ReadableStream({ async start() }), which eagerly drains the entire generator into the stream's internal queue without waiting for the consumer. When a generator produces data faster than the consumer can read, such as when SSE proxies a large upstream, memory usage can grow unbounded. This issue was reported as an out-of-memory (OOM) error, escalating from 78 MB to 14.4 GB in seconds.
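For illustration, the eager pattern looks roughly like this (a hedged sketch with made-up names such as eagerStream, not the actual Elysia source):

```typescript
// Hypothetical sketch of the anti-pattern. start() runs exactly once,
// before any read, so this loop drains the whole generator into the
// stream's internal queue regardless of consumer speed.
function eagerStream(generator: AsyncGenerator<string>): ReadableStream<string> {
    return new ReadableStream<string>({
        async start(controller) {
            // Nothing gates this loop on consumer demand: every chunk the
            // generator yields is enqueued immediately, even after the
            // queue's desiredSize has gone negative.
            for await (const chunk of generator) controller.enqueue(chunk)

            controller.close()
        }
    })
}
```

With a fast producer and a slow (or absent) consumer, the internal queue is the only thing absorbing the difference, which is exactly the unbounded growth described above.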

Root Cause

Per the Streams specification, start() is invoked exactly once, immediately on construction, before any reads occur. Placing a for await...of loop there creates no backpressure; the loop races ahead of the consumer and fills the internal queue without limit.

The solution is to use pull(), which the runtime calls only when the consumer is ready for the next chunk, resulting in one invocation per read.
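A minimal pull()-based counterpart might look like the following sketch (illustrative names, assuming the WHATWG Streams API available in modern runtimes):

```typescript
// Minimal demand-driven stream: the runtime invokes pull() only when the
// internal queue has room, so the producer never runs far ahead of the
// consumer. (Sketch, not the library code.)
function lazyStream<T>(iterable: AsyncIterable<T>): ReadableStream<T> {
    const iterator = iterable[Symbol.asyncIterator]()

    return new ReadableStream<T>({
        async pull(controller) {
            // One iterator step per invocation, i.e. per consumer read
            const { done, value } = await iterator.next()
            if (done) controller.close()
            else controller.enqueue(value)
        },
        cancel() {
            // Run the generator's finally blocks on early disconnect
            void iterator.return?.()
        }
    })
}
```

With the default highWaterMark of 1, the stream prefetches at most one chunk beyond what the consumer has read.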

Solution

Extract an explicit AsyncIterator before constructing the ReadableStream, then divide the single async start() into three callbacks:

  • start() (sync) — registers the abort handler and enqueues the already-extracted initial value
  • async pull() — advances the iterator by one step on demand
  • cancel() — calls iterator.return() to clean up the generator

This approach also removes the isBrowser + setTimeout(0) workaround introduced in #741. That hack was necessary because back-to-back enqueue() calls in the same microtask caused chunks to merge in the consumer's buffer. With pull(), each enqueue occurs in a separate microtask driven by a consumer read, preventing the original issue from recurring.

Summary by CodeRabbit

  • Bug Fixes

    • Improved streaming/backpressure behavior to avoid eager consumption of producers, with more reliable abort/cancel handling and consistent SSE/JSON chunk formatting.
  • Tests

    • Added regression tests validating backpressure for proxied SSE streams and lazy generators to ensure producers are not drained faster than consumers.

…ateStreamHandler

Prevents OOM when generator produces faster than consumer reads.
Removes isBrowser/setTimeout(0) hack — no longer needed with demand-driven pull().
@MarcelOlsen
Contributor Author

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Mar 12, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai
Contributor

coderabbitai bot commented Mar 12, 2026

Walkthrough

Refactors streaming adapter to drive data via a pull()-driven AsyncIterator with an explicit iterator, end guard, abort/cancel handling, and unified chunk formatting (toSSE/JSON/string). Adds regression tests to verify backpressure prevents eager generator draining.

Changes

  • AsyncIterator Refactor (src/adapter/utils.ts): Replaced start()-based eager consumption with a pull()-driven AsyncIterator. Introduced an explicit iterator, an end guard, an inline pull(controller), and abort/cancel paths that call iterator.return(). Unified chunk processing using chunk.toSSE() or JSON/stringification. Removed the browser-specific timing workaround.
  • Regression Tests (test/response/stream.test.ts): Added tests verifying SSE backpressure and that lazy generators are not eagerly drained (reads a limited number of chunks, asserts the upstream generator did not fully progress).

Sequence Diagram(s)

sequenceDiagram
  participant Producer as Producer\n(async generator)
  participant Adapter as Stream Adapter\n(pull + iterator)
  participant Consumer as Consumer\n(readable stream)

  Consumer->>Adapter: read / request
  Adapter->>Producer: iterator.next()
  Producer-->>Adapter: chunk
  Adapter->>Consumer: enqueue(chunk formatted)
  Note over Adapter,Producer: On abort/cancel -> Adapter sets end, calls iterator.return()
  Consumer->>Adapter: cancel/abort
  Adapter->>Producer: iterator.return()
  Producer-->>Adapter: done
  Adapter->>Consumer: close

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related issues

Possibly related PRs

Poem

🐰 A pull, not a push—now streams behave,
Generators yield only what consumers crave.
Abort and cancel keep order in place,
Chunks dress as SSE or JSON with grace.
Hop, little rabbit, and celebrate the pace!

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Title check: ✅ Passed. The title 'Replace eager start() loop with pull()-based backpressure' accurately describes the main architectural change: converting from an eager start() loop to a pull-based approach with proper backpressure handling.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.


@pkg-pr-new

pkg-pr-new bot commented Mar 12, 2026


npm i https://pkg.pr.new/elysia@1803

commit: 1f03f0b

@MarcelOlsen
Contributor Author

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Mar 12, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@MarcelOlsen MarcelOlsen marked this pull request as ready for review March 12, 2026 23:24
Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
test/response/stream.test.ts (1)

644-678: Well-designed regression test for the proxying OOM scenario.

The test effectively validates that the upstream generator respects backpressure when a slow consumer reads from a proxy. The structure clearly demonstrates the issue from #1801.

Minor improvement: reader.cancel() returns a Promise. Awaiting it ensures iterator.return() completes before the test ends, preventing potential resource leaks or flakiness.

🔧 Suggested improvement
-		reader.cancel()
+		await reader.cancel()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/response/stream.test.ts` around lines 644 - 678, The test currently
calls reader.cancel() without awaiting it, which can leave the upstream async
iterator's cleanup (iterator.return()) unfinished and cause resource leaks or
flakiness; update the test to await the cancellation Promise returned by
reader.cancel() (i.e., change the final call to await reader.cancel()) so the
stream/iterator cleanup completes before the test exits.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: cd671303-7713-4238-9c65-f7f0fb7b9896

📥 Commits

Reviewing files that changed from the base of the PR and between 79caf99 and f27c48b.

📒 Files selected for processing (1)
  • test/response/stream.test.ts

@SaltyAom
Member

this is fire

@SaltyAom SaltyAom merged commit 651cc64 into elysiajs:main Mar 16, 2026
2 checks passed
remorses added a commit to remorses/spiceflow that referenced this pull request Mar 17, 2026
… streaming

The handleStream method used `for await...of` inside ReadableStream.start(),
which eagerly drains the async generator regardless of consumer speed — no
backpressure, potential OOM when proxying large SSE streams.

The fix:
- start() is now synchronous: sets up abort handler, ping interval, enqueues init value
- pull() calls iterator.next() one value at a time, only when consumer is ready
- cancel() properly cleans up via shared idempotent cleanup() closure
- cleanup() clears ping interval, removes abort listener, terminates iterator
- error enqueue is guarded so cleanup + close always run even if stream is closed

Added regression test asserting generator is not drained ahead of consumer.

Ref: elysiajs/elysia#1803
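The cleanup pattern that commit describes might be sketched as follows (illustrative names and a placeholder ping interval; this is not the actual spiceflow code):

```typescript
// Sketch of an idempotent cleanup() closure shared by the done path,
// cancel(), and the abort handler: each teardown action runs at most once.
function makeStream(
    iterator: AsyncIterator<string>,
    signal?: AbortSignal
): ReadableStream<string> {
    let cleaned = false
    let ping: ReturnType<typeof setInterval> | undefined

    const cleanup = () => {
        if (cleaned) return // idempotent: safe from abort, cancel, and end
        cleaned = true
        if (ping !== undefined) clearInterval(ping)
        signal?.removeEventListener('abort', onAbort)
        void iterator.return?.()
    }
    const onAbort = () => cleanup()

    return new ReadableStream<string>({
        start() {
            signal?.addEventListener('abort', onAbort)
            // Placeholder keep-alive; a real SSE handler would enqueue
            // a comment/ping frame here
            ping = setInterval(() => {}, 15000)
        },
        async pull(controller) {
            const { done, value } = await iterator.next()
            if (done) {
                cleanup()
                controller.close()
            } else controller.enqueue(value)
        },
        cancel: cleanup
    })
}
```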


Development

Successfully merging this pull request may close these issues.

SSE async generator has zero backpressure OOM on large streams

2 participants