
Broadcast notify on Buffer.Close #302

Merged 2 commits into pion:master from bcast_notify on Jul 22, 2024

Conversation

edaniels (Member)

Description

This changes Close from being a "signaler" to a "broadcaster" of the notify condition. There is a small but real chance that, between the Unlock and the select at the bottom of Read, we have concurrent reads at the same position, which means only one of them can receive a notify. If a Close happens in that window, one or more reads will be blocked indefinitely. One case where this happens is an srtp and srtcp reader.

In a simpler case, even a single read at that position may miss the notify event during a Close. This affects any basic use of a *packetio.Buffer.

Reference issue

Fixes #298 and #299 (maybe)

I reproduced this with synchronization points at the critical point mentioned above: start two reads, close before the select, then let the reads continue, at which point the test times out. The new test is therefore non-deterministic without the manual sync points.

codecov bot commented Jul 22, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 83.10%. Comparing base (08506c5) to head (926442b).

@@            Coverage Diff             @@
##           master     #302      +/-   ##
==========================================
+ Coverage   83.07%   83.10%   +0.02%     
==========================================
  Files          39       39              
  Lines        2730     2728       -2     
==========================================
- Hits         2268     2267       -1     
+ Misses        336      335       -1     
  Partials      126      126              
Flag   Coverage            Δ
go     82.93% <100.00%>    (+0.02%) ⬆️
wasm   65.12% <100.00%>    (+0.01%) ⬆️


@@ -287,10 +287,7 @@ func (b *Buffer) Close() (err error) {
b.mutex.Unlock()

edaniels (Member, Author)

this "section" is where the bad things happen

edaniels merged commit e139c8a into pion:master on Jul 22, 2024
14 checks passed
edaniels deleted the bcast_notify branch on July 22, 2024 16:31
paulwe (Contributor) commented Jul 22, 2024

There was an assumption that concurrent reads would not happen, so there may be other issues. If we read, write, read, close, and the read, write, and read goroutines are all in the same gap between the critical section and the channel op, we could panic writing to a closed channel if Close closes the channel before Write sends.

edaniels (Member, Author)

> There was an assumption that concurrent reads would not happen, so there may be other issues. If we read, write, read, close, and the read, write, and read goroutines are all in the same gap between the critical section and the channel op, we could panic writing to a closed channel if Close closes the channel before Write sends.

Ah, you're right that Write is buggy now. I'll work on a more holistic fix for that.

Development

Successfully merging this pull request may close these issues.

Potential regression in v2.2.5, indefinitely stuck packetio.(*Buffer).Read