
Conversation

colinmarc
Contributor

Fixes #259.

I finally got around to this after promising to PR it in #259 a year ago. 😅

This is based on colinmarc/pulseaudio-rs#2, which obviously needs to land before this. However, I developed both PRs in parallel. If you're feeling generous and would like to review that as well, I would welcome any feedback.

The first commit is unrelated and a bit opinionated, but it seemed nicer. Let me know if I should move that to a separate PR or just drop it.

@colinmarc colinmarc force-pushed the pulse branch 2 times, most recently from 72c1c2f to a76575b on February 27, 2025 15:29
@jacksongoode

jacksongoode commented Jun 5, 2025

@colinmarc How does this PR differ from #938? Or were both created independently of each other? Disregard this, I just misread the titles :)

@colinmarc
Contributor Author

colinmarc commented Jun 5, 2025

@colinmarc How does this PR differ from #938? Or were both created independently of each other?

Pipewire and Pulseaudio are completely different protocols. Pulseaudio is the established Linux audio server, and Pipewire is the new hotness. This PR implements the Pulseaudio protocol, while the other PR implements the Pipewire protocol.

The other useful thing to know is that Pipewire (the server) supports the Pulse protocol as a first-class thing, and that this library has been tested with both audio servers. That means merging this would be enough to handle both cases. The PA server does not support the Pipewire protocol.

Finally, this is just my opinion, but I think the Pipewire protocol is also significantly more complicated.

@jacksongoode

I actually misread the issue and would have removed my comment if you weren't so quick to respond! 🤣

The other useful thing to know is that Pipewire (the server) supports the Pulse protocol as a first-class thing, and that this library has been tested with both audio servers. That means merging this would be enough to handle both cases. The PA server does not support the Pipewire protocol.

Right, since Pipewire could just interpret cpal through its PulseAudio interface. Thank you for the explanation :)

@roderickvd
Member

@colinmarc new maintainer here and doing backlog grooming. So sorry this did not get picked up before, because it seems very worthwhile! Would you be so kind as to resolve the conflicts so we can pick it up again?

@colinmarc colinmarc force-pushed the pulse branch 2 times, most recently from 4aab015 to cf32277 on August 2, 2025 09:58
@colinmarc
Contributor Author

Great :) Just rebased and tests look good. Let me know if you want the first change as a separate PR (or feel free to just drop it).

@roderickvd
Member

Wow, amazing turnaround time! 👍

In the coming weeks I don't have access to a machine to test it myself, which, as much as I believe you 😉, I would like to do. So for now I'm going to trigger an AI review - hope it brings more value than hallucinations - and take some time for a code review over the next few days. This is a big contribution.

Let me know if you want the first change as a separate PR (or feel free to just drop it).

Yes, that'd be good if you could extract it.

@roderickvd roderickvd requested a review from Copilot August 3, 2025 21:04

@Copilot Copilot AI left a comment

Pull Request Overview

This PR adds PulseAudio support to CPAL as a new audio backend, addressing issue #259. The implementation introduces a new host type for PulseAudio/PipeWire compatibility and refactors the existing platform macro to improve type safety and maintainability.

  • Adds comprehensive PulseAudio backend with input/output stream support and timing information
  • Refactors the impl_platform_host! macro to use concrete types instead of module names for better type safety
  • Updates examples to support both JACK and PulseAudio host selection

Reviewed Changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 6 comments.

Summary per file:

  • src/platform/mod.rs: Refactors macro to use concrete types and adds PulseAudio host integration
  • src/host/pulseaudio/mod.rs: Implements PulseAudio host, device enumeration, and stream configuration
  • src/host/pulseaudio/stream.rs: Implements PulseAudio playback and record stream handling with timing
  • src/host/null/mod.rs: Simplifies null host implementation using standard iterators
  • src/host/mod.rs: Adds conditional compilation for PulseAudio module
  • src/error.rs: Adds InvalidUtf8 variant to DeviceNameError
  • examples/feedback.rs: Updates example to support PulseAudio host selection
  • examples/beep.rs: Updates example to support PulseAudio host selection
  • Cargo.toml: Adds pulseaudio and futures dependencies

let bps = sample_spec.format.bytes_per_sample();
let n_samples = buf.len() / bps;
let data =
unsafe { Data::from_parts(buf.as_ptr() as *mut _, n_samples, sample_format) };

Copilot AI Aug 3, 2025

Casting a const pointer to mutable is undefined behavior. The input buffer should remain const since it's read-only data. Consider using a different approach that doesn't violate pointer constness.

Suggested change
unsafe { Data::from_parts(buf.as_ptr() as *mut _, n_samples, sample_format) };
Data::from_const_parts(buf.as_ptr(), n_samples, sample_format);

Contributor Author

This is done in many places in the codebase, and I don't have a better way.

Member

It'd be nice if we could document how we ensure its safety:

  // SAFETY: We verify that:
  // - buf.as_ptr() points to valid memory for at least n_samples * bytes_per_sample
  // - n_samples is calculated from buf.len() / bytes_per_sample, ensuring validity
  // - The buffer remains valid for the duration of the callback
  // - sample_format matches the actual data layout in the buffer
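
For reference, the call site from the diff above with that comment folded in would read roughly like this (same code, just annotated):

    let bps = sample_spec.format.bytes_per_sample();
    let n_samples = buf.len() / bps;
    // SAFETY: buf.as_ptr() points to valid memory for at least
    // n_samples * bps bytes, the buffer remains valid for the duration
    // of the callback, and sample_format matches the buffer's layout.
    let data =
        unsafe { Data::from_parts(buf.as_ptr() as *mut _, n_samples, sample_format) };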


// Spawn a thread to drive the stream future.
let stream_clone = stream.clone();
let _worker_thread = std::thread::spawn(move || block_on(stream_clone.play_all()));

Copilot AI Aug 3, 2025

The worker thread handle is dropped immediately, which means there's no way to properly join or manage the thread lifecycle. Consider storing the handle or using a different pattern for thread management.

Contributor Author

It exits when the stream finishes.

Member

Would be good to document that thread lifecycle.

Comment on lines +176 to +206
std::thread::spawn(move || loop {
let Ok(timing_info) = block_on(stream_clone.timing_info()) else {
break;
};

store_latency(
&latency_clone,
sample_spec,
timing_info.sink_usec,
timing_info.write_offset,
timing_info.read_offset,
);

std::thread::sleep(time::Duration::from_millis(100));
});

Copilot AI Aug 3, 2025

Similar to the playback stream, this latency monitoring thread for record streams has no exit condition and will run indefinitely. Consider adding a way to signal thread termination when the stream is dropped.

Suggested change
std::thread::spawn(move || loop {
let Ok(timing_info) = block_on(stream_clone.timing_info()) else {
break;
};
store_latency(
&latency_clone,
sample_spec,
timing_info.sink_usec,
timing_info.write_offset,
timing_info.read_offset,
);
std::thread::sleep(time::Duration::from_millis(100));
});
std::thread::spawn(move || {
loop {
if shutdown_flag_thread.load(atomic::Ordering::Relaxed) {
break;
}
let Ok(timing_info) = block_on(stream_clone.timing_info()) else {
break;
};
store_latency(
&latency_clone,
sample_spec,
timing_info.sink_usec,
timing_info.write_offset,
timing_info.read_offset,
);
std::thread::sleep(time::Duration::from_millis(100));
}
});

Contributor Author

This is incorrect; it exits when the stream finishes.

Member

Yet should the handles not be joined or fused on drop for proper cleanup?

I remember we had to do the same here in librespot: librespot-org/librespot@3ce9854 (you can forget about all the parking_lot stuff and skip right to the drop implementation at the bottom).

That thing in librespot usually didn't trigger, but it panicked in certain Tokio contexts.

Or am I missing that this is done in another way already?

Contributor Author

We could do something like that, but only if we care about propagating panics. I think in this case that would be counterproductive.
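
For context, a minimal sketch of the join-on-drop pattern being discussed, with hypothetical field names (the PR instead relies on the futures resolving when the stream is dropped):

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;
    use std::thread::JoinHandle;

    struct Stream {
        shutdown: Arc<AtomicBool>,
        worker: Option<JoinHandle<()>>,
    }

    impl Drop for Stream {
        fn drop(&mut self) {
            // Signal the worker loop to exit, then wait for it. Ignoring
            // the join result swallows worker panics instead of
            // propagating them.
            self.shutdown.store(true, Ordering::Relaxed);
            if let Some(handle) = self.worker.take() {
                let _ = handle.join();
            }
        }
    }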

@narodnik

Amazing pull request, but just one comment:

in src/host/pulseaudio/mod.rs it looks like the app name is hardcoded to "cpal-pulseaudio":

        let client =
            pulseaudio::Client::from_env(c"cpal-pulseaudio").map_err(|_| HostUnavailable)?;

This means apps in the volume mixer will all show up as cpal-pulseaudio, when actually you'd want them to have their own name. It would be cool if I'm able to set this as well as other meta-data like the stream description.

@colinmarc
Contributor Author

Amazing pull request, but just one comment:

in src/host/pulseaudio/mod.rs it looks like the app name is hardcoded to "cpal-pulseaudio":

        let client =
            pulseaudio::Client::from_env(c"cpal-pulseaudio").map_err(|_| HostUnavailable)?;

This means apps in the volume mixer will all show up as cpal-pulseaudio, when actually you'd want them to have their own name. It would be cool if I'm able to set this as well as other meta-data like the stream description.

Thanks - where should I pull that from? I don't see a way to parameterize that on the generic host API.

@jwagner
Contributor

jwagner commented Aug 22, 2025

Hey @colinmarc, I didn't go through the code in any detail, but I gave your branch a quick test and it does work in my application. Pretty cool!

@colinmarc
Contributor Author

Cool, I rebased and added some fixes.

Yes, that'd be good if you could extract it.

👉 #1004 👈

@jwagner
Contributor

jwagner commented Aug 26, 2025

My application now sometimes hangs when running it with the pulseaudio host.

#1  0x0000555555ce1717 in std::thread::park ()
#2  0x00005555558779dd in std::thread::local::LocalKey<T>::with ()
#3  0x000055555586e62d in futures_executor::local_pool::block_on ()
#4  0x00005555558720e1 in <cpal::host::pulseaudio::stream::Stream as cpal::traits::StreamTrait>::play ()
#5  0x0000555555868fd0 in <cpal::platform::platform_impl::Stream as cpal::traits::StreamTrait>::play ()

That's as much useful information as I can share right now; I wasn't able to reproduce it outside of my application yet. I also don't know what causes it.

@colinmarc
Contributor Author

My application now sometimes hangs when running it with the pulseaudio host.

#1  0x0000555555ce1717 in std::thread::park ()
#2  0x00005555558779dd in std::thread::local::LocalKey<T>::with ()
#3  0x000055555586e62d in futures_executor::local_pool::block_on ()
#4  0x00005555558720e1 in <cpal::host::pulseaudio::stream::Stream as cpal::traits::StreamTrait>::play ()
#5  0x0000555555868fd0 in <cpal::platform::platform_impl::Stream as cpal::traits::StreamTrait>::play ()

That's as much useful information as I can share right now; I wasn't able to reproduce it outside of my application yet. I also don't know what causes it.

Please run in debug and share the source, if possible. As it stands, there's no way for me to know whether it's a bug in this PR, pulseaudio-rs, or your app.

@jwagner
Contributor

jwagner commented Aug 26, 2025

Thanks for the quick reply. I managed to get it to happen with a debug binary. I can't share the code of the application in which it happens, but I'll try to reproduce it with standalone code or one of the examples; I can't get to it right now, though. Just wanted to let you know that there might be something.

#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x00005555573982a7 in std::sys::pal::unix::futex::futex_wait () at library/std/src/sys/pal/unix/futex.rs:72
#2  std::sys::sync::thread_parking::futex::Parker::park () at library/std/src/sys/sync/thread_parking/futex.rs:55
#3  std::thread::Thread::park () at library/std/src/thread/mod.rs:1446
#4  std::thread::park () at library/std/src/thread/mod.rs:1083
#5  0x0000555556699507 in futures_executor::local_pool::run_executor::{closure#0}<core::result::Result<(), pulseaudio::client::ClientError>, futures_executor::local_pool::block_on::{closure_env#0}<pulseaudio::client::record_stream::{impl#1}::started::{async_fn_env#0}>> (thread_notify=0x7ffff7d3ef98)
    at /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/futures-executor-0.3.31/src/local_pool.rs:99
#6  0x00005555566a2be4 in std::thread::local::LocalKey<alloc::sync::Arc<futures_executor::local_pool::ThreadNotify, alloc::alloc::Global>>::try_with<alloc::sync::Arc<futures_executor::local_pool::ThreadNotify, alloc::alloc::Global>, futures_executor::local_pool::run_executor::{closure_env#0}<core::result::Result<(), pulseaudio::client::ClientError>, futures_executor::local_pool::block_on::{closure_env#0}<pulseaudio::client::record_stream::{impl#1}::started::{async_fn_env#0}>>, core::result::Result<(), pulseaudio::client::ClientError>> (self=0x555557a330c0, f=...)
    at /usr/local/rustup/toolchains/1.89.0-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/local.rs:315
#7  0x00005555566a2333 in std::thread::local::LocalKey<alloc::sync::Arc<futures_executor::local_pool::ThreadNotify, alloc::alloc::Global>>::with<alloc::sync::Arc<futures_executor::local_pool::ThreadNotify, alloc::alloc::Global>, futures_executor::local_pool::run_executor::{closure_env#0}<core::result::Result<(), pulseaudio::client::ClientError>, futures_executor::local_pool::block_on::{closure_env#0}<pulseaudio::client::record_stream::{impl#1}::started::{async_fn_env#0}>>, core::result::Result<(), pulseaudio::client::ClientError>> (self=0x555557a330c0, f=...)
    at /usr/local/rustup/toolchains/1.89.0-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/local.rs:279
#8  0x000055555669907c in futures_executor::local_pool::run_executor<core::result::Result<(), pulseaudio::client::ClientError>, futures_executor::local_pool::block_on::{closure_env#0}<pulseaudio::client::record_stream::{impl#1}::started::{async_fn_env#0}>> (f=...)
    at /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/futures-executor-0.3.31/src/local_pool.rs:86
#9  0x0000555556699b0a in futures_executor::local_pool::block_on<pulseaudio::client::record_stream::{impl#1}::started::{async_fn_env#0}> (f=...)
    at /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/futures-executor-0.3.31/src/local_pool.rs:316
#10 0x000055555668b4a4 in cpal::host::pulseaudio::stream::{impl#0}::play (self=0x7ffffffee2f8) at src/host/pulseaudio/stream.rs:31
#11 0x00005555566a42f2 in cpal::platform::platform_impl::{impl#10}::play (self=0x7ffffffee2f0) at src/platform/mod.rs:490

Rust log output:

[2025-08-26T11:17:16Z DEBUG my code] building input stream F32, StreamConfig { channels: 2, sample_rate: SampleRate(48000), buffer_size: Fixed(512) } 11
// call to build_input_stream
[2025-08-26T11:17:16Z DEBUG pulseaudio::client::reactor] CLIENT [1026]: CreateRecordStream(RecordStreamParams { sample_spec: SampleSpec { format: Float32Le, channels: 2, sample_rate: 48000 }, channel_map: [FrontLeft, FrontRight], source_index: Some(71), source_name: None, buffer_attr: BufferAttr { max_length: 4096, target_length: 4096, pre_buffering: 4294967295, minimum_request_length: 4294967295, fragment_size: 4294967295 }, flags: StreamFlags { start_corked: true, no_remap_channels: false, no_remix_channels: false, fix_format: false, fix_rate: false, fix_channels: false, no_move: false, variable_rate: false, peak_detect: false, start_muted: None, adjust_latency: false, early_requests: false, no_inhibit_auto_suspend: false, fail_on_suspend: false, relative_volume: false, passthrough: false }, direct_on_input_index: None, cvolume: None, props: {}, formats: [] })
[2025-08-26T11:17:16Z DEBUG pulseaudio::client::reactor] SERVER [1026]: Reply
[New Thread 0x7fffdd7fa6c0 (LWP 75906)]
[2025-08-26T11:17:16Z DEBUG my code] initalizing stream
// call to play() which never returns
[2025-08-26T11:17:16Z DEBUG pulseaudio::client::reactor] CLIENT [1027]: GetRecordLatency(LatencyParams { channel: 0, now: SystemTime { tv_sec: 1756207036, tv_nsec: 697746000 } })
[2025-08-26T11:17:16Z DEBUG pulseaudio::client::reactor] CLIENT [1028]: CorkRecordStream(CorkStreamParams { channel: 0, cork: false })
[2025-08-26T11:17:16Z DEBUG pulseaudio::client::reactor] SERVER [1027]: Reply
[2025-08-26T11:17:16Z DEBUG pulseaudio::client::reactor] SERVER [1028]: Reply
[2025-08-26T11:17:16Z DEBUG pulseaudio::client::reactor] CLIENT [1029]: GetRecordLatency(LatencyParams { channel: 0, now: SystemTime { tv_sec: 1756207036, tv_nsec: 806186135 } })
[2025-08-26T11:17:16Z DEBUG pulseaudio::client::reactor] SERVER [1029]: Reply
[2025-08-26T11:17:16Z DEBUG pulseaudio::client::reactor] CLIENT [1030]: GetRecordLatency(LatencyParams { channel: 0, now: SystemTime { tv_sec: 1756207036, tv_nsec: 915999910 } })
[2025-08-26T11:17:16Z DEBUG pulseaudio::client::reactor] SERVER [1030]: Reply
[2025-08-26T11:17:17Z DEBUG pulseaudio::client::reactor] CLIENT [1031]: GetRecordLatency(LatencyParams { channel: 0, now: SystemTime { tv_sec: 1756207037, tv_nsec: 25555174 } })
[2025-08-26T11:17:17Z DEBUG pulseaudio::client::reactor] SERVER [1031]: Reply
[2025-08-26T11:17:17Z DEBUG pulseaudio::client::reactor] CLIENT [1032]: GetRecordLatency(LatencyParams { channel: 0, now: SystemTime { tv_sec: 1756207037, tv_nsec: 135355354 } })
...keeps on polling GetRecordLatency

I can more or less reliably reproduce the issue in my application now by switching between devices. I tried adapting the record_wav sample to get the same result but failed so far.

@jwagner
Contributor

jwagner commented Aug 26, 2025

That was maddening to isolate, but I can finally reproduce it outside of my application. The issue seems to happen when ALSA and PulseAudio devices exist at the same time, even if only one device has an active stream.

Here is a crudely modified version of record_wav to reproduce the issue:
https://github.com/jwagner/cpal/blob/reproduce-pulseaudio-freeze/examples/record_wav.rs

I think the main issue here is arguably that the ALSA device keeps an open handle around when it is not needed, rather than anything in the pulse integration. Even so, it would be really nice if that failed with an error or timeout when the device is locked.

Not keeping the devices around has its issues too. As far as I know, there is no reliable way to refer to a device in cpal other than keeping a reference to it. The only alternative seems to be to find the device by name, but I don't think there is any guarantee that the names are unique.

Member

@roderickvd roderickvd left a comment

Here's a couple of thoughts on the thread management. Thanks to you guys for vetting this PR, too. Let me know when you feel it's good to go.

This adds support for PulseAudio on hosts with a PA or PipeWire server
(the latter via pipewire-pulse).

Since the underlying client is async, some amount of bridging has to be
done.
@colinmarc
Contributor Author

Pushed fixes for the documentation issues you brought up in review.

For the hang @jwagner diagnosed (thank you for chasing that down!), we would need to add a timeout to cpal for waiting for the stream to start. I'm not sure what an appropriate value for that timeout would be, since devices can be all sorts of things, even over a network. From pulseaudio-rs's perspective, we sent the uncork command, and the started() future resolves when we get the first bytes from the recording device. That never happens, and there's no further information at all from the pulse daemon - no error or anything to relay back. The daemon is probably stuck on a lock itself.
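
One possible shape for such a timeout, sketched with an arbitrary 5-second bound (started() is the future named in the backtrace above; the error mapping is illustrative):

    let (tx, rx) = std::sync::mpsc::channel();
    let stream_clone = stream.clone();
    std::thread::spawn(move || {
        // Resolves when the server delivers the first bytes, or errors.
        let _ = tx.send(block_on(stream_clone.started()));
    });
    match rx.recv_timeout(std::time::Duration::from_secs(5)) {
        Ok(Ok(())) => Ok(()),
        // Treat a client error or a timeout as the device being unavailable.
        _ => Err(PlayStreamError::DeviceNotAvailable),
    }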

On the other hand, the issue is a bit niche, and, as @jwagner said, probably a bug on the ALSA side. Most people will use either pulse or ALSA, not both. So we could just open an issue for someone else to take a look at in the future, and merge this as-is.

Member

@roderickvd roderickvd left a comment

@colinmarc I appreciate you sticking through to get this right: adding a new host is quite a thing. Thinking about it again, I'm wondering about the blocking behavior. Can you help me think through whether we're missing edge cases that would warrant a separate worker thread with more channel-based communication?

A few smaller points besides that more architectural question, too.

Thanks again 🙏

// Run for 3 seconds before closing.
println!("Playing for 3 seconds... ");
std::thread::sleep(std::time::Duration::from_secs(3));
// Run for 10 seconds before closing.

Member

Why the change to 10 seconds?

Contributor Author

In my testing, pulseaudio has a lot more recording delay. Probably it's configurable, but idk if it's worth muddying the example.

/// See the [`BackendSpecificError`] docs for more information about this error variant.
BackendSpecific { err: BackendSpecificError },
/// The name is not valid UTF-8.
InvalidUtf8,

Member

What if we'd use String::from_utf8_lossy and dispense with this error variant?
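
A sketch of that alternative, assuming the raw name arrives as a CStr:

    // Substitute U+FFFD for invalid bytes so device-name lookups never
    // fail, making the InvalidUtf8 variant unnecessary.
    fn device_name(raw: &std::ffi::CStr) -> String {
        String::from_utf8_lossy(raw.to_bytes()).into_owned()
    }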

@@ -0,0 +1,395 @@
extern crate pulseaudio;

Member

Not necessary in Rust 2021.

@@ -0,0 +1,395 @@
extern crate pulseaudio;

use futures::executor::block_on;

Member

Sorry in advance for raising the big question: instead of blocking in many places, should we not spawn a dedicated thread and/or use a channel-based approach? We're trying to not block in other hosts too.

Contributor Author

This shouldn't block any other hosts, because cpal isn't async (right?). And we do use dedicated threads in this PR for exactly that reason. Please tell me if I'm misunderstanding.

Contributor Author

In case it helps clear up any misunderstanding, calling stream.play synchronously uncorks the stream (it waits until the server responds to the request, which should be instantaneous), and then returns. The stream is driven by a dedicated thread. That should be similar to how the other hosts work.
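
Going by the backtrace earlier in the thread, the blocking portion of play looks roughly like this (the error mapping here is illustrative, not the PR's actual code):

    fn play(&self) -> Result<(), PlayStreamError> {
        // Uncork the stream and block until it has started; this resolves
        // when the server acknowledges the request (or, for record
        // streams, when the first bytes arrive).
        block_on(self.stream.started())
            .map_err(|_| PlayStreamError::DeviceNotAvailable)
    }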


fn devices(&self) -> Result<Self::Devices, DevicesError> {
let sinks = block_on(self.client.list_sinks()).map_err(|_| BackendSpecificError {
description: "Failed to list sinks".to_owned(),

Member

Could carry over the error like format!("Failed to list sinks: {e}").

let bps = sample_spec.format.bytes_per_sample();
let n_samples = buf.len() / bps;

// SAFETY: We verify that:

Member

Where do we verify that? Should we add an assertion that buf.len() % bps == 0?

Contributor Author

Heh, this was your suggested comment, I didn't really read it before adding it.
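
That assertion could sit right above the division, for example:

    let bps = sample_spec.format.bytes_per_sample();
    // The server should only ever hand us whole samples; make that
    // assumption explicit instead of silently truncating below.
    debug_assert_eq!(buf.len() % bps, 0, "buffer length is not sample-aligned");
    let n_samples = buf.len() / bps;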

params: protocol::PlaybackStreamParams,
sample_format: SampleFormat,
mut data_callback: D,
_error_callback: E,

Member

We should capture errors from the worker thread and send them to the error callback. This could be done by the channels approach.

Contributor Author

I guess we don't need channels, since the callback is Send.
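
A sketch of that, with the error conversion as an assumption:

    // Since the error callback is FnMut(StreamError) + Send + 'static,
    // it can move into the worker and report failures directly.
    let stream_clone = stream.clone();
    let mut error_callback = error_callback;
    let _worker_thread = std::thread::spawn(move || {
        if let Err(e) = block_on(stream_clone.play_all()) {
            error_callback(
                BackendSpecificError {
                    description: format!("stream failed: {e}"),
                }
                .into(),
            );
        }
    });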

// Spawn a thread to drive the stream future. It will exit automatically
// when the stream is stopped by the user.
let stream_clone = stream.clone();
let _worker_thread = std::thread::spawn(move || block_on(stream_clone.play_all()));

Member

It looks like play_all can return errors that we may want to send to the error callback?

Also - thinking hard here about edge cases - would it be possible to try and drop the stream before it finishes playing?

Contributor Author

Dropping the stream means that the play_all future gets resolved and the thread exits.

Contributor Author

To fully explicate this: here is the future returned by play_all: https://github.com/colinmarc/pulseaudio-rs/blob/main/src/client/playback_stream.rs#L129-L136

Here is where we store the sender: https://github.com/colinmarc/pulseaudio-rs/blob/main/src/client/reactor.rs#L31

And here is where we drop it:

https://github.com/colinmarc/pulseaudio-rs/blob/main/src/client/playback_stream.rs#L181

In an extreme case if the pulse daemon crashes before responding or something, we might leak a single thread, but we'll have bigger issues playing audio in that case. And dropping the Client object will clear all the pending futures.

let stream_clone = stream.clone();
let latency_clone = current_latency_micros.clone();
std::thread::spawn(move || loop {
let Ok(timing_info) = block_on(stream_clone.timing_info()) else {

Member

Kind of the same question; could this thread outlive the stream?

Contributor Author

Same as above; the future is internally a oneshot and dropping the stream resolves it.


// We always consider the full buffer filled, because cpal's
// user-facing api doesn't allow for short writes.
// TODO: should we preemptively zero the output buffer before

Member

Yes, we should, for example like we do with ALSA.
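
A sketch of that pre-zeroing, assuming buf is the mutable output slice handed to the write callback:

    // Zero the buffer first so any region the callback leaves untouched
    // plays back as silence rather than stale memory.
    buf.fill(0);
    let mut data = unsafe {
        Data::from_parts(buf.as_mut_ptr() as *mut _, n_samples, sample_format)
    };
    data_callback(&mut data, &info);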
