
3.x: Add eager truncation to bounded replay() to avoid item retention #6532

Merged 2 commits into 3.x on Jun 24, 2019

Conversation

akarnokd (Member) commented Jun 21, 2019

This PR adds the eagerTruncate option to the replay operator so that the head node will lose the item reference it holds upon truncation.

The bounded buffers in replay implement a linked list that, when truncated, moves the head reference forward along the links atomically. This allows late consumers to pick up the head and follow the links from there to get items replayed. However, truncation may happen concurrently with a consumer that is still working on earlier nodes, so if truncation simply nulled out a node's value, a consumer reaching that same node would see null as well and fail.

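To make the mechanism concrete, here is a minimal, single-threaded sketch of such a size-bounded node list with the default (non-eager) truncation. The class and field names are made up for illustration; this is not the actual FlowableReplay code:

```java
import java.util.concurrent.atomic.AtomicReference;

final class ReplayBufferSketch<T> {

    // A buffered item plus an atomic "next" link (the AtomicReference this node extends).
    static final class Node<T> extends AtomicReference<Node<T>> {
        final T value; // null only in the value-less head sentinel

        Node(T value) {
            this.value = value;
        }
    }

    final int limit;
    int size;
    // Late consumers pick up `head` and follow the next links from there; the real
    // operator publishes the head atomically, plain fields are enough for this sketch.
    Node<T> head = new Node<>(null);
    Node<T> tail = head;

    ReplayBufferSketch(int limit) {
        this.limit = limit;
    }

    void add(T value) {
        Node<T> node = new Node<>(value);
        tail.lazySet(node); // link the new node after the current tail
        tail = node;
        size++;
        if (size > limit) {
            // Default (non-eager) truncation: move the head reference one node
            // forward. The node that becomes the new head keeps its item reference,
            // so a consumer concurrently positioned on it still reads a valid value,
            // but the buffer also keeps that already-evicted item alive.
            head = head.get();
            size--;
        }
    }
}
```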

To avoid this type of retention, the head node has to be replaced with a fresh node that still points to the next node in the chain but no longer holds a value.

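Continuing the same illustrative sketch, the eager variant of that truncation branch would look roughly like this (again a simplified sketch, not the operator's actual code):

```java
// Eager truncation branch for the ReplayBufferSketch above: replace the would-be
// head with a fresh node that carries no value but links to the same remainder of
// the chain. The evicted node keeps its value for any consumer still holding a
// reference to it, yet the buffer itself no longer retains the item.
if (size > limit) {
    Node<T> evicted = head.get();          // node the default mode would keep as head
    Node<T> freshHead = new Node<>(null);  // value-less replacement
    freshHead.lazySet(evicted.get());      // preserve the link to the rest of the chain
    head = freshHead;
    size--;
}
```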

The reason this is not the default is that it requires an additional allocation for each new incoming value when the buffer is full, which would reduce performance in cases where the excess retention is not a problem.

Overloads have been added to both the direct and the function (selector) variants of Flowable.replay() and Observable.replay(). To avoid too many overloads, only one extra overload per bounds mode (size, time, time+size) has been added, extending the signature of the longest parameterized method.
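As a usage illustration, here is a small sketch of how the new flag might be applied. The overload shapes below (a trailing eagerTruncate boolean on the size and time+size variants) are assumed from the description above rather than quoted from the final API, so check the 3.x javadoc for the definitive signatures:

```java
import java.util.concurrent.TimeUnit;

import io.reactivex.Flowable;
import io.reactivex.flowables.ConnectableFlowable;
import io.reactivex.schedulers.Schedulers;

public class ReplayEagerTruncateExample {
    public static void main(String[] args) {
        Flowable<Integer> source = Flowable.range(1, 1000);

        // Size-bound replay of the last 10 items; the trailing flag requests eager
        // truncation so an evicted item is dropped from the head node right away
        // instead of lingering there until the next eviction.
        ConnectableFlowable<Integer> sized = source.replay(10, true);

        // Time+size-bound replay with the flag on the longest overload (assumed shape).
        ConnectableFlowable<Integer> timedAndSized =
                source.replay(10, 1, TimeUnit.MINUTES, Schedulers.computation(), true);

        sized.connect();
        timedAndSized.connect();

        // A late subscriber still receives at most the last 10 items.
        sized.test().assertValueCount(10);
    }
}
```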

Their unit test files have been cloned so that the non-eager (original) behavior and the eager behavior are tested separately.

Fixes #6475

@akarnokd akarnokd added this to the 3.0 milestone Jun 21, 2019
codecov bot commented Jun 21, 2019

Codecov Report

Merging #6532 into 3.x will decrease coverage by <.01%.
The diff coverage is 98.37%.


@@             Coverage Diff              @@
##                3.x    #6532      +/-   ##
============================================
- Coverage     98.26%   98.26%   -0.01%     
- Complexity     6185     6199      +14     
============================================
  Files           680      680              
  Lines         44883    44954      +71     
  Branches       6193     6197       +4     
============================================
+ Hits          44106    44175      +69     
+ Misses          247      244       -3     
- Partials        530      535       +5
| Impacted Files | Coverage Δ | Complexity Δ |
|---|---|---|
| src/main/java/io/reactivex/Flowable.java | 100% <100%> (ø) | 572 <6> (+6) ⬆️ |
| ...nal/operators/flowable/FlowableInternalHelper.java | 100% <100%> (ø) | 15 <3> (ø) ⬇️ |
| src/main/java/io/reactivex/Observable.java | 100% <100%> (ø) | 547 <6> (+6) ⬆️ |
| ...ex/internal/operators/flowable/FlowableReplay.java | 94.46% <100%> (+0.11%) | 20 <2> (ø) ⬇️ |
| ...operators/observable/ObservableInternalHelper.java | 100% <100%> (ø) | 15 <3> (ø) ⬇️ |
| ...nternal/operators/observable/ObservableReplay.java | 97.89% <90%> (-0.49%) | 20 <2> (ø) |
| ...nternal/operators/observable/ObservableCreate.java | 93.16% <0%> (-3.42%) | 2% <0%> (ø) |
| ...rnal/operators/flowable/FlowableFlatMapSingle.java | 92.93% <0%> (-2.72%) | 2% <0%> (ø) |
| ...activex/internal/schedulers/ScheduledRunnable.java | 98.07% <0%> (-1.93%) | 29% <0%> (-1%) |
| ...rnal/operators/observable/ObservableSwitchMap.java | 94.68% <0%> (-1.6%) | 3% <0%> (ø) |
| ... and 24 more | | |


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update e3b695a...ef4f6b2
