In an ideal scenario, the emulator generates a video frame and an audio chunk at its internal frame rate. For example, if the emulator runs a game at 60 FPS, it produces roughly 16 ms of audio and one video frame per tick (one call of the run function). All of this data then has to be sent to the user's browser, which gets tricky with WebRTC audio. The WebRTC standard supports only Opus-encoded audio for high-quality sound, and the encoder and decoder (the audio player in the browser) can operate only on fixed audio frames — predefined chunks of 5, 10, 20, 40, or 60 ms.

Due to this limitation, we have to wait at least two ticks before the first whole audio chunk can be packed into these predefined frames. With 16 ms of audio and a single 10 ms buffer, we send 10 ms right away and must wait another 4 ms to complete the remaining 6 ms, which leads to a constant 6 ms delay between audio and video. To mitigate this, we can use the smallest frame size, 5 ms, as the buffer: that reduces the leftover to 1 ms, but then we send 3 packets of data for every 16 ms of audio. A slightly better approach is to define several buffers and dynamically select the next buffer so that the audio fits optimally, minimizing the number of network packets sent to users.

The frames option does essentially that: it lets us define one or more Opus buffers to store audio in and choose from. They should be listed from the largest to the smallest. And that's it.
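A minimal sketch of how such greedy packing could look, assuming 48 kHz mono samples; the packFrames helper and its signature are illustrative, not the actual implementation:

```go
package main

import "fmt"

// packFrames greedily slices the buffered audio (in samples) into the largest
// allowed Opus frame sizes first, returning the frames ready to encode and the
// leftover samples that wait for the next emulator tick.
// sizes must be listed from the largest to the smallest frame, in samples.
func packFrames(buf []int16, sizes []int) (frames [][]int16, rest []int16) {
	for _, size := range sizes {
		for len(buf) >= size {
			frames = append(frames, buf[:size])
			buf = buf[size:]
		}
	}
	return frames, buf
}

func main() {
	const sampleRate = 48000 // Opus operates at 48 kHz
	ms := func(d int) int { return sampleRate * d / 1000 }

	// Allowed Opus frame durations, largest to smallest: 10 ms and 5 ms.
	sizes := []int{ms(10), ms(5)}

	// One emulator tick at 60 FPS produces ~16 ms of audio.
	tick := make([]int16, ms(16))

	frames, rest := packFrames(tick, sizes)
	fmt.Printf("frames: %d, leftover: ~%d ms\n", len(frames), len(rest)*1000/sampleRate)
	// Output: frames: 2, leftover: ~1 ms
}
```

With 10 ms and 5 ms buffers configured, a 16 ms tick packs into two frames (10 + 5 ms) and only 1 ms of audio is carried over to the next tick, matching the numbers above.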