pre-RFC: rayon::yield_now #548
Comments
550: add bridge from Iterator to ParallelIterator r=cuviper a=QuietMisdreavus

Half of #46

This started getting reviewed in QuietMisdreavus/polyester#6, but i decided to move my work to Rayon proper.

This PR adds a new trait, `AsParallel`, an implementation on `Iterator + Send`, and an iterator adapter `IterParallel` that implements `ParallelIterator` with a "cache items as you go" methodology similar to Polyester's. I introduced a new trait because `ParallelIterator` was implemented on `Range`, which is itself an `Iterator`. The basic idea is that you would start with a quick sequential `Iterator`, call `.as_parallel()` on it, and be able to use `ParallelIterator` adapters after that point, to do more expensive processing in multiple threads.

The design of `IterParallel` is like this:

* `IterParallel` defers background work to `IterParallelProducer`, which implements `UnindexedProducer`.
* `IterParallelProducer` will split as many times as there are threads in the current pool. (I've been told that #492 is a better way to organize this, but until that's in, this is how i wrote it. `>_>`)
* When folding items, `IterParallelProducer` keeps a `Stealer` from `crossbeam-deque` (added as a dependency, but using the same version as `rayon-core`) to access a deque of items that have already been loaded from the iterator.
* If the `Stealer` is empty, a worker will attempt to lock the Mutex to access the source `Iterator` and the `Deque`.
* If the Mutex is already locked, it will call `yield_now`. The implementation in polyester used a `synchronoise::SignalEvent`, but i've been told that worker threads should not block. In lieu of #548, a regular spin-loop was chosen instead.
* If the Mutex is available, the worker will load a number of items from the iterator (currently `(number of threads * number of threads * 2)`) before closing the Mutex and continuing.
* (If the Mutex is poisoned, the worker will just... stop. Is there a recommended approach here? `>_>`)

This design is effectively a first brush, has [the same caveats as polyester](https://docs.rs/polyester/0.1.0/polyester/trait.Polyester.html#implementation-note), probably needs some extra features in rayon-core, and needs some higher-level docs before i'm willing to let it go. However, i'm putting it here because it was not in the right place when i talked to @cuviper about it last time.

Co-authored-by: QuietMisdreavus <[email protected]>
Co-authored-by: Niko Matsakis <[email protected]>
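A minimal sketch of the fold loop those bullets describe, assuming crossbeam-deque's `Worker`/`Stealer` and a `Mutex` around the shared source iterator. The names `Shared`, `drain`, `batch`, and `sink` are illustrative only, not the PR's actual code; `batch` stands in for the "number of threads * number of threads * 2" refill count:

```rust
use std::sync::Mutex;
use crossbeam_deque::{Steal, Stealer, Worker};

// Shared state behind one Mutex: the source iterator plus the deque `Worker`
// that refills the buffer the stealers drain from.
struct Shared<I: Iterator> {
    iter: I,
    worker: Worker<I::Item>,
}

// One folding worker: drain items from the `Stealer`, and when it runs dry,
// try to take the Mutex and pull another batch from the source iterator.
fn drain<I, F>(stealer: &Stealer<I::Item>, shared: &Mutex<Shared<I>>, batch: usize, mut sink: F)
where
    I: Iterator,
    F: FnMut(I::Item),
{
    loop {
        match stealer.steal() {
            Steal::Success(item) => sink(item),
            Steal::Retry => continue,
            Steal::Empty => match shared.try_lock() {
                Ok(mut guard) => {
                    // Refill the deque with a batch of items from the source.
                    let mut pulled = 0;
                    while pulled < batch {
                        match guard.iter.next() {
                            Some(item) => {
                                guard.worker.push(item);
                                pulled += 1;
                            }
                            None => break,
                        }
                    }
                    if pulled == 0 {
                        return; // source exhausted and deque empty: done
                    }
                }
                // Another worker holds the lock: spin politely instead of blocking.
                Err(std::sync::TryLockError::WouldBlock) => std::thread::yield_now(),
                Err(std::sync::TryLockError::Poisoned(_)) => return,
            },
        }
    }
}
```

The `try_lock` + yield branch is the spin-loop the comment mentions choosing in lieu of #548; a rayon-aware yield would let that branch run other pool work instead of only yielding to the OS.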
Hi, do you have any update on this? It would be really useful!
I implemented this yesterday, and it was straightforward enough: master...cuviper:yield_now

I'm now wondering if it makes sense for the caller to be "blind" to what happened (work executed or just yielded), or whether we should even call […]. A different design could be […].
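For concreteness, one possible shape of a "non-blind" API — a sketch only, roughly in the spirit of the #1026 proposal mentioned below; the `Yield` enum and `Option` return type here are illustrative:

```rust
/// What a call to `yield_now` accomplished, so the caller isn't "blind".
pub enum Yield {
    /// Some rayon job was found and executed while yielding.
    Executed,
    /// No rayon work was found; the thread only yielded.
    Idle,
}

/// Illustrative signature: `None` would mean the caller isn't running
/// inside a rayon thread pool at all.
pub fn yield_now() -> Option<Yield> {
    // Body omitted in this sketch.
    unimplemented!()
}
```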
I think polling makes more sense, because my actual use case is: […]

Knowing whether a job was executed can be important to detect that deadlock situation, so I think a poll design is better suited. I don't know if there is a use case where you want more insight (was a local job executed? was a job stolen? or is there nothing to do?), but that could also be information returned by the poll.
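A usage sketch of that polling style, assuming the hypothetical `Yield`/`yield_now` shape from the sketch above; `resource_ready` and the one-second idle threshold are also assumptions for illustration:

```rust
use std::time::{Duration, Instant};

// Poll an external resource without blocking the worker thread, while
// tracking whether yielding actually executed any rayon work.
fn wait_for_resource(resource_ready: impl Fn() -> bool) {
    let mut idle_since: Option<Instant> = None;
    while !resource_ready() {
        match yield_now() {
            // Useful rayon work ran while we waited; reset the idle clock.
            Some(Yield::Executed) => idle_since = None,
            // Nothing to run (or not in a pool): yield to the OS and note
            // how long we've been idle, so a stall can be detected.
            Some(Yield::Idle) | None => {
                let start = *idle_since.get_or_insert_with(Instant::now);
                if start.elapsed() > Duration::from_secs(1) {
                    // Idle for a while: a real caller might log, back off,
                    // or re-check for the deadlock scenario described above.
                }
                std::thread::yield_now();
            }
        }
    }
}
```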
@fredpointzero (or anyone else watching), I would appreciate feedback on #1026, thanks!
If you're in a situation where you need to wait for some external resource, but don't want to block, then calling `rayon::yield_now` could let the current thread search for other work in the pool. If nothing was found, or if you're not actually in a threadpool at the moment, it can just call `std::thread::yield_now`.
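A rough sketch of that fallback behavior. The helpers `in_worker_pool` and `steal_one_job` are placeholders for whatever rayon-core would actually use internally, not real APIs:

```rust
// Hypothetical outline of the fallback described above.
fn yield_now_sketch() {
    if in_worker_pool() {
        // Inside a rayon pool: try to run one piece of pending work
        // instead of spinning idly.
        if steal_one_job() {
            return;
        }
    }
    // Not in a pool, or nothing to steal: fall back to the OS scheduler.
    std::thread::yield_now();
}

// Placeholder helpers so the sketch compiles on its own.
fn in_worker_pool() -> bool { false }
fn steal_one_job() -> bool { false }
```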