🕑 Support playout delay extension #134
Merged (14 commits) · Dec 16, 2021

Conversation

@danstiner (Collaborator) commented on Dec 8, 2021

TODO

  • Test on a production server
  • Do not send the playout delay on every packet; once the client has received one playout delay, it keeps using that delay. Sending it on the first N packets would probably be enough (see the sketch below).
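
A minimal sketch of the "first N packets" idea, using a hypothetical per-session counter; the class and method names are illustrative, not the plugin's actual API:

    // Sketch only: illustrative names, not the plugin's actual API.
    #include <cstdint>

    class PlayoutDelaySender
    {
    public:
        explicit PlayoutDelaySender(uint32_t sendOnFirstNPackets)
            : remaining(sendOnFirstNPackets) {}

        // Returns true while the playout delay extension should still be
        // attached to outgoing RTP packets; false once N packets have gone out.
        bool ShouldAttachPlayoutDelay()
        {
            if (remaining == 0)
            {
                return false;
            }
            --remaining;
            return true;
        }

    private:
        uint32_t remaining;
    };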

Summary

This PR enables support for the experimental playout delay extension described here: https://webrtc.googlesource.com/src/+/refs/heads/main/docs/native-code/rtp-hdrext/playout-delay

The intention is to test playout delay as a workaround for the issue discussed in #101, at least until we can look more deeply into changing Chrome to determine a playout delay correctly.
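
For reference, the linked document describes the extension payload as three bytes: a 12-bit minimum delay followed by a 12-bit maximum delay, both in units of 10 ms. A rough sketch of encoding that payload (not the code in this PR) could look like:

    // Sketch of the 3-byte playout-delay extension payload described in the
    // linked spec: 12 bits of minimum delay, then 12 bits of maximum delay,
    // both in units of 10 ms.
    #include <algorithm>
    #include <array>
    #include <cstdint>

    std::array<uint8_t, 3> EncodePlayoutDelay(uint32_t minDelayMs, uint32_t maxDelayMs)
    {
        // Convert to 10 ms units and clamp to the 12-bit range (0..4095, i.e. 0..40950 ms).
        uint32_t min10ms = std::min<uint32_t>(minDelayMs / 10, 0x0FFF);
        uint32_t max10ms = std::min<uint32_t>(maxDelayMs / 10, 0x0FFF);

        return {
            static_cast<uint8_t>(min10ms >> 4),
            static_cast<uint8_t>(((min10ms & 0x0F) << 4) | (max10ms >> 8)),
            static_cast<uint8_t>(max10ms & 0xFF),
        };
    }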

Requirements

Other Notes

Suggest trying a minimum in the range of 200-400 ms and a maximum of at least 2,000 ms but no more than 10,000 ms.

The expected outcome is that rendering is delayed by the configured minimum. You can try setting a large minimum (e.g. 5000 ms) to see the effect, or look at chrome://webrtc-internals/ as in the following image.

This shows a stream with a minimum playout delay of 200 ms. The delay jumps up to 250 ms at one point but never drops below the configured 200 ms minimum.
[Screenshot: chrome://webrtc-internals/ graph showing the playout delay holding at or above the 200 ms minimum]
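
As a rough illustration of how those suggested bounds could be wired into configuration, here is a sketch using hypothetical environment variable names (FTL_PLAYOUT_DELAY_MIN_MS / FTL_PLAYOUT_DELAY_MAX_MS are placeholders, not necessarily what this PR adds to Configuration.cpp):

    // Sketch only: hypothetical variable names and defaults taken from the
    // suggestions above, not necessarily the plugin's actual configuration.
    #include <cstdint>
    #include <cstdlib>
    #include <stdexcept>
    #include <string>

    struct PlayoutDelayConfig
    {
        uint32_t minDelayMs = 200;  // suggested minimum: 200-400 ms
        uint32_t maxDelayMs = 2000; // suggested maximum: 2,000-10,000 ms
    };

    PlayoutDelayConfig LoadPlayoutDelayConfig()
    {
        PlayoutDelayConfig config;
        if (const char* minVar = std::getenv("FTL_PLAYOUT_DELAY_MIN_MS"))
        {
            config.minDelayMs = static_cast<uint32_t>(std::stoul(minVar));
        }
        if (const char* maxVar = std::getenv("FTL_PLAYOUT_DELAY_MAX_MS"))
        {
            config.maxDelayMs = static_cast<uint32_t>(std::stoul(maxVar));
        }
        if (config.minDelayMs > config.maxDelayMs)
        {
            throw std::invalid_argument("Playout delay minimum must not exceed maximum");
        }
        return config;
    }

Whatever the actual mechanism, validating that the minimum does not exceed the maximum up front avoids sending a nonsensical extension value to clients.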

@danstiner (Collaborator, Author) commented on Dec 8, 2021

The testing approach was to run Linux VMs locally and simulate a slow client connection. This takes a couple of hours of setup but is pretty straightforward. Note that the network device I use is eth1; this may be different for your setup.

Scenario 1

Limit egress traffic to a connection just barely fast enough to handle a 6mbps stream. The 320kbit burst buffer lets the burst of packets on keyframes "back up" and be delivered milliseconds later when bandwidth is available again, similar to a real rate-limited connection. (Note that with my test stream there was no packet loss; you may have to tweak your test stream, burst size, and/or rate to achieve similar results.)

tc qdisc add dev eth1 root tbf rate 6mbit burst 320kbit latency 200ms

Tested with a sample stream I've been using that is mostly static white noise with a small changing clock. (This leads to tiny P-frames with large I-frames, which is the scenario causing the dropped frames we are trying to address with this change.)

Outcome with no delay enabled: you can see brief pauses in the clock every two seconds that correspond with keyframes. (The pauses can be even worse under real conditions, depending on your available bandwidth, type of content, encoder settings, other traffic on your network, etc.)

pull134-nodelay-6mbps-rate-limit.mp4

Outcome with a min delay of 400ms set: there are no pauses. chrome://webrtc-internals/ can also be used to confirm this by looking at the number of dropped frames in both outcomes.

pull134-delay400ms-6mbps-rate-limit.mp4

To clean up the traffic control rules, either reboot or run:

tc qdisc del dev eth1 root

danstiner marked this pull request as ready for review on December 15, 2021 at 21:56.
danstiner requested a review from haydenmc on December 16, 2021 at 01:04.
Review comment thread on src/Configuration.cpp (outdated, resolved).
@haydenmc (Member) left a comment:

If GitHub's CR tools were good, I'd mark this "approved with suggestions" 😁
