
Conversation


@pubrrr pubrrr commented Mar 4, 2022

Objective

Add TaskPool::spawn_pollable so that the consumer does not need to call block_on.

Solution

Add TaskPool::spawn_pollable, which uses a channel receiver to get the result of the async task.

This is basically a rework of PR #2691 with the review comments incorporated. In PR #4062 this was considered the better option.
Also see #2677 and #4045.
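
For illustration, a minimal sketch of how the proposed API is meant to be used, assuming this PR's spawn_pollable returns a PollableTask<T> whose poll() is non-blocking and returns Option<T>; the loop here just stands in for a per-frame system:

```rust
use bevy_tasks::TaskPool;

fn main() {
    let pool = TaskPool::new();

    // Spawn once: instead of a Task that must be block_on'd, we get a
    // channel-backed handle that can be checked without blocking.
    let pollable = pool.spawn_pollable(async { 40 + 2 });

    // Something that runs every frame (e.g. a Bevy system) checks the handle:
    loop {
        match pollable.poll() {
            Some(result) => {
                println!("task finished with {result}");
                break;
            }
            // Not ready yet: do other frame work and check again next update.
            None => std::thread::yield_now(),
        }
    }
}
```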

@github-actions bot added the S-Needs-Triage (This issue needs to be labelled) label on Mar 4, 2022
@pubrrr force-pushed the pollableTask branch 2 times, most recently from 4bfe52c to 9a2a7e8 on March 4, 2022 15:29

pubrrr commented Mar 4, 2022

@DJMcNab you're welcome to provide feedback :)

@alice-i-cecile added the A-Tasks and C-Usability labels and removed the S-Needs-Triage label on Mar 4, 2022
@pubrrr force-pushed the pollableTask branch 3 times, most recently from 6bdf4ee to 681afa1 on March 8, 2022 16:00
    /// on every frame update without blocking on a future
    #[derive(Debug)]
    pub struct PollableTask<T> {
        receiver: Receiver<T>,

Member:

Is using channels actually preferable to calling future::block_on? Has anyone run benchmarks? Will this work on wasm? (Pretty sure future::block_on doesn't, but it's still worth checking.)
Are there alternatives that don't involve async channels (which will allocate and use atomics)?

Member:

That looks like #2650.

I haven't run the numbers on it though. We need to get data into the ECS from outside the ECS, which means it needs to live somewhere with a stable address so that both sides can talk to each other. Currently that's where the Task lives; in this PR it's in the channel internals.

Adding new items can also happen at any time relative to the system code running, which means we definitely need atomics in some form.

Contributor:

There were these benches from the original author, hanabi1224.

https://github.com/hanabi1224/bevy/blob/accee1bb4e1b3ea8c628199c0fd4ec02badc594f/examples/async_tasks/async_bench.rs

Looking at it, they should probably be ported to criterion, and a bench needs to be added for the approach in this PR.
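
If such a bench were ported to criterion, it could look roughly like the sketch below. The spawn_pollable and poll calls are the API proposed in this PR, and the busy-poll loop is only illustrative of measuring the spawn-to-result round trip:

```rust
use bevy_tasks::TaskPool;
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_spawn_pollable(c: &mut Criterion) {
    let pool = TaskPool::new();
    c.bench_function("spawn_pollable_roundtrip", |b| {
        b.iter(|| {
            // Measure spawning plus the time until the result arrives on the channel.
            let task = pool.spawn_pollable(async { 40u32 + 2 });
            loop {
                if let Some(value) = task.poll() {
                    break value;
                }
                std::hint::spin_loop();
            }
        })
    });
}

criterion_group!(benches, bench_spawn_pollable);
criterion_main!(benches);
```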

Another commenter:

I think another option is to use or implement something similar to FutureExt::now_or_never. It would have almost no overhead compared to using future::block_on.
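
For reference (not part of this PR), this is roughly what now_or_never from the futures crate does: it polls the future a single time and yields the value only if it is already complete, without blocking.

```rust
use futures::FutureExt; // provides now_or_never()

fn main() {
    // A future that is already complete resolves immediately.
    let ready = async { 42 };
    assert_eq!(ready.now_or_never(), Some(42));

    // A future that is still pending returns None instead of blocking.
    let pending = futures::future::pending::<u32>();
    assert_eq!(pending.now_or_never(), None);
}
```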

Contributor:

I can confirm the example doesn't currently compile on wasm, since the single-threaded task pool doesn't have the spawn_pollable API. You might be able to get it to work on wasm by detaching the task instead of storing it, and then copying the code for spawn_pollable over to the single-threaded task pool.

@TheRawMeatball (Member):

#4627 aims to be a simpler and faster alternative to this.

hymm pushed a commit to hymm/bevy that referenced this pull request May 3, 2022

hymm commented May 3, 2022

I updated #2650 to also include the approach in this branch: https://github.com/hymm/bevy/blob/async-bench/examples/async_tasks/async_bench.rs

[no_poll_once]        n_frames executed: 292, avg fps: 59.1(target:120), duration: 4.939s
[noop_waker]          n_frames executed: 221, avg fps: 44.6(target:120), duration: 4.959s
[handle_tasks]        n_frames executed: 121, avg fps: 24.2(target:120), duration: 5.008s
[handle_tasks_par]    n_frames executed: 146, avg fps: 29.1(target:120), duration: 5.013s
[handle_tasks_par_2]  n_frames executed: 134, avg fps: 26.9(target:120), duration: 4.983s
[async_channel]       n_frames executed: 292, avg fps: 59.3(target:120), duration: 4.925s

This seems to be significantly faster than the noop waker approach.

@DJMcNab (Member) left a comment:

Broadly I'm in agreement with this change; I think it's a lot nicer.

No concerns inherent to the solution, just some concerns around naming and similar things.


    /// Spawns a static future onto the thread pool. The returned `PollableTask` is not a future,
    /// but can be polled in system functions on every frame update without being blocked on
    pub fn spawn_pollable<T>(

Member:

I'm not a huge fan of this name (and its derivatives, i.e. PollableTask and PollableTask::poll); normal tasks can also be polled, using Future::poll.

Unfortunately, I don't have any concrete suggestions for a better name.

Contributor Author:

I'm not very fond of the name either, but I couldn't come up with an alternative, so I just took the name from the original PR.
CheckableTask? ChannelledTask? Or simply calling it Task and identifying it via the module path: ...::channelled::Task?

Another commenter:

Based on C++, I think spawn_async would be a good candidate.

https://en.cppreference.com/w/cpp/thread/async

I was also looking for similar names when I first started to investigate.

    let result = future.await;
    match sender.send(result).await {
        Ok(_) => {}
        Err(_) => warn!("Could not send result of task to receiver"),

Member:

This error message could/should be expanded, e.g.:

Suggested change:

    -Err(_) => warn!("Could not send result of task to receiver"),
    +Err(_) => error!("Sending result for future {future_name} (`Future<Output={return_name}>`) failed, because the receiving `PollableTask` was dropped"),

where future_name is std::any::type_name::<[F]>(), i.e. the type name of the future, and return_name is std::any::type_name::<T>().

Additionally, it might be worth tracking the caller location of this function for this diagnostic. That would need to be captured outside of the async move block.

Contributor Author:

How would I do the tracking of the caller location?

Another commenter:

The caller location, or a caller id, could be given optionally to the spawn interface to let the user decide what identification system to use.

However, I think this suggestion might be a better fit for a later PR; it would be best introduced if
a. there's a real need for it from the users
b. later on, when the initial feature is already introduced
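
For reference, a minimal standalone sketch of the mechanism being suggested: #[track_caller] plus std::panic::Location::caller(), captured before entering any async move block. The function name capture_spawn_site is hypothetical, and how (or whether) this would be wired into spawn_pollable is left open here.

```rust
use std::panic::Location;

// #[track_caller] makes Location::caller() report the call site of this
// function instead of a line inside it.
#[track_caller]
fn capture_spawn_site() -> &'static Location<'static> {
    // Must be captured here, outside any `async move` block; inside the
    // future, the location would no longer refer to the user's spawn call.
    Location::caller()
}

fn main() {
    let caller = capture_spawn_site();
    // Prints something like "src/main.rs:14:18"; a diagnostic message could
    // embed this to say which spawn call lost its receiving PollableTask.
    println!("spawned at {caller}");
}
```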

        return;
    }

    std::thread::sleep(Duration::from_secs_f32(1. / 100.));

Member:

Sleeping in tests is very much not ideal. I think we could check the output from the relevant channel for each task instead.

I.e. write this test relatively async, using e.g. TaskPool::scope.

Contributor Author:

I don't quite get how I should use TaskPool::scope here. I want to test that a PollableTask will return something. Should I wrap those pollable tasks in a scope to do the waiting there?

    Err(try_error) => match try_error {
        TryRecvError::Empty => None,
        TryRecvError::Closed => {
            panic!("Polling on the task failed because the connection was already closed.")

Member:

Is this panic-worthy? I'm not sure exactly when the actual spawned future is dropped, but I think it would be reasonable for the executor to drop it once it has returned Ready once.

I'd say it might be better to add e.g. an AtomicBool to this object and print a warning in this case, at most once for each task. I'm not sure of the exact details of the executor, though.
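
A hypothetical sketch of that warn-once variant, assuming the channel is async_channel (whose TryRecvError has Empty and Closed variants, matching the excerpt above); the warned_closed field name and the plain eprintln! are stand-ins for whatever the PR would actually use:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

use async_channel::{Receiver, TryRecvError};

pub struct PollableTask<T> {
    receiver: Receiver<T>,
    warned_closed: AtomicBool,
}

impl<T> PollableTask<T> {
    pub fn poll(&self) -> Option<T> {
        match self.receiver.try_recv() {
            Ok(value) => Some(value),
            Err(TryRecvError::Empty) => None,
            Err(TryRecvError::Closed) => {
                // Emit the diagnostic only on the first closed poll; stay silent after.
                if !self.warned_closed.swap(true, Ordering::Relaxed) {
                    eprintln!("PollableTask polled after its channel was closed");
                }
                None
            }
        }
    }
}

fn main() {
    let (sender, receiver) = async_channel::bounded(1);
    sender.try_send(7u32).unwrap();
    drop(sender);

    let task = PollableTask {
        receiver,
        warned_closed: AtomicBool::new(false),
    };
    assert_eq!(task.poll(), Some(7)); // value delivered
    assert_eq!(task.poll(), None); // channel now closed: warns once
    assert_eq!(task.poll(), None); // silent thereafter
}
```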

Contributor Author:

I added the panic due to this discussion: #4102 (comment)

Member:

Hmm, it seems quite harsh to stop the world for running the check after the task has ended.

I don't want to block on this; I don't feel very strongly either way.

Another commenter:

One solution would be to return a result, but also log a warning.

@alice-i-cecile added the C-Performance label on May 18, 2022
@alice-i-cecile added the X-Controversial label on Jun 2, 2022
@alice-i-cecile (Member):

We need to decide between this approach and the one in #4627, correct?


hymm commented Jun 8, 2022

> We need to decide between this approach and the one in #4627, correct?

Given the perf results I think this PR would be preferred. I think we're just waiting on DJMcNab's comments to be addressed.


pubrrr commented Jun 12, 2022

After the rebase, the example I initially changed became irrelevant (https://github.com/bevyengine/bevy/blob/main/examples/async_tasks/async_compute.rs). Should I add another one or is the unit test enough?

@alice-i-cecile (Member):

@pubrrr I'd like to see another example; working with async tasks like this is a common point of confusion.

@bjorn3 removed their request for review on December 11, 2022 15:58

bas-ie commented Oct 2, 2024

PR backlog cleanup: I'm less certain if this PR needs a tracking issue. @alice-i-cecile is the problem still relevant in current Bevy? Will close, but happy to make an issue as required. @pubrrr feel free to chime in, of course.

@bas-ie closed this on Oct 2, 2024
@davids91 left a comment:

I already added my comments to the earlier review; I couldn't find anything else to note.

@alice-i-cecile Do you still require adding example code for this?

@pubrrr do you think it would be feasible to finish this? I too need this functionality, but would rather avoid yet another PR for the same issue.
I would much rather support you in this one.


pubrrr commented May 17, 2025

> @pubrrr do you think it would be feasible to finish this? I too need this functionality, but would rather avoid yet another PR for the same issue.
> Would much rather support you in this one.

Unfortunately I don't have time currently. Feel free to pick up my changes; you seem to have read them already anyway, and I don't really remember what I did here 3 years ago. I also don't know how much of a hassle it would be to rebase it onto the current main.


Labels

A-Tasks: Tools for parallel and async work
C-Performance: A change motivated by improving speed, memory usage or compile times
C-Usability: A targeted quality-of-life change that makes Bevy easier to use
X-Controversial: There is active debate or serious implications around merging this PR


10 participants