[Merged by Bors] - Document bevy_tasks and enable #![warn(missing_docs)] #3509

1 change: 1 addition & 0 deletions crates/bevy_tasks/src/iter/mod.rs
@@ -16,6 +16,7 @@ where
B: Iterator<Item = Self::Item> + Send,
Self: Sized + Send,
{
/// The type of item that is being iterated over.
type Item;

/// Returns the next batch of items for processing.
65 changes: 65 additions & 0 deletions crates/bevy_tasks/src/lib.rs
@@ -1,3 +1,4 @@
#![warn(missing_docs)]
#![doc = include_str!("../README.md")]

mod slice;
@@ -25,6 +26,7 @@ pub use countdown_event::CountdownEvent;
mod iter;
pub use iter::ParallelIterator;

#[allow(missing_docs)]
pub mod prelude {
#[doc(hidden)]
pub use crate::{
@@ -34,10 +36,73 @@ pub mod prelude {
};
}

// The following docs are copied from `num_cpus`'s docs for the wrapped functions.
// Attributed under MIT or Apache: https://github.com/seanmonstar/num_cpus

/// Returns the number of available CPUs of the current system.
///
/// This function will get the number of logical cores. Sometimes this is different from the number
/// of physical cores (See [Simultaneous multithreading on Wikipedia][smt]).
///
/// This will always return at least `1`.
///
/// # Examples
///
/// ```rust
/// let cpus = bevy_tasks::logical_core_count();
/// if cpus > 1 {
/// println!("We are on a multicore system with {} CPUs", cpus);
/// } else {
/// println!("We are on a single core system");
/// }
/// ```
///
/// # Note
///
/// This will check [sched affinity] on Linux, showing a lower number of CPUs if the current
/// thread does not have access to all the computer's CPUs.
///
/// This will also check [cgroups], frequently used in containers to constrain CPU usage.
///
/// [smt]: https://en.wikipedia.org/wiki/Simultaneous_multithreading
/// [sched affinity]: http://www.gnu.org/software/libc/manual/html_node/CPU-Affinity.html
/// [cgroups]: https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt
#[inline(always)]
pub fn logical_core_count() -> usize {
num_cpus::get()
}
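As an aside (not part of this diff): since Rust 1.59 the standard library exposes a comparable query, `std::thread::available_parallelism`, which also reports logical CPUs and, on Linux, respects cgroup quotas. A minimal std-only sketch:

```rust
use std::thread;

fn main() {
    // available_parallelism returns a NonZeroUsize, mirroring the
    // "always at least 1" guarantee documented for logical_core_count.
    let cpus = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    assert!(cpus >= 1);
    println!("logical CPUs visible to this process: {}", cpus);
}
```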

/// Returns the number of physical cores of the current system.
///
/// This will always return at least `1`.
///
/// # Note
///
/// Physical count is supported only on Linux, macOS, and Windows platforms.
/// On other platforms, or if the physical count fails on supported platforms,
/// this function returns the same as [`logical_core_count()`], which is the number of logical
/// CPUs.
///
/// # Examples
///
/// ```rust
/// let logical_cpus = bevy_tasks::logical_core_count();
/// let physical_cpus = bevy_tasks::physical_core_count();
/// if logical_cpus > physical_cpus {
/// println!("We have simultaneous multithreading with about {:.2} \
/// logical cores to 1 physical core.",
/// (logical_cpus as f64) / (physical_cpus as f64));
/// } else if logical_cpus == physical_cpus {
/// println!("Either we don't have simultaneous multithreading, or our \
/// system doesn't support getting the number of physical CPUs.");
/// } else {
/// println!("We have less logical CPUs than physical CPUs, maybe we only have access to \
/// some of the CPUs on our system.");
/// }
/// ```
///
/// [`logical_core_count()`]: fn.logical_core_count.html
#[inline(always)]
pub fn physical_core_count() -> usize {
num_cpus::get_physical()
}
118 changes: 118 additions & 0 deletions crates/bevy_tasks/src/slice.rs
@@ -1,6 +1,33 @@
use super::TaskPool;

/// Provides functions for mapping read-only slices across a provided `TaskPool`.
pub trait ParallelSlice<T: Sync>: AsRef<[T]> {
/// Splits the slice into chunks of size `chunk_size` or less and queues up one task
/// per chunk on the provided `task_pool` and returns a mapped `Vec` of the collected
/// results.
///
/// # Example
///
/// ```rust
/// # use bevy_tasks::prelude::*;
/// # use bevy_tasks::TaskPool;
/// let task_pool = TaskPool::new();
/// let counts = (0..10000).collect::<Vec<u32>>();
/// let incremented = counts.par_chunk_map(&task_pool, 100, |chunk| {
/// let mut results = Vec::new();
/// for count in chunk {
/// results.push(*count + 2);
/// }
/// results
/// });
/// # let flattened: Vec<_> = incremented.into_iter().flatten().collect();
/// # assert_eq!(flattened, (2..10002).collect::<Vec<u32>>());
/// ```
///
/// # See Also
///
/// `ParallelSliceMut::par_chunk_map_mut` for mapping mutable slices.
/// `ParallelSlice::par_splat_map` for mapping when a specific chunk size is unknown.
fn par_chunk_map<F, R>(&self, task_pool: &TaskPool, chunk_size: usize, f: F) -> Vec<R>
where
F: Fn(&[T]) -> R + Send + Sync,
Expand All @@ -15,6 +42,35 @@ pub trait ParallelSlice<T: Sync>: AsRef<[T]> {
})
}
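To make `par_chunk_map`'s contract concrete, here is a single-threaded sketch of the same semantics (one `Vec` entry per chunk, chunks of `chunk_size` or less); the real method differs only in distributing the closure calls across the `TaskPool`:

```rust
fn main() {
    let counts = (0..10u32).collect::<Vec<u32>>();
    // Serial stand-in for par_chunk_map: one result per chunk of size <= 4.
    let results: Vec<Vec<u32>> = counts
        .chunks(4)
        .map(|chunk| chunk.iter().map(|c| c + 2).collect())
        .collect();
    // 10 items in chunks of 4 -> 3 chunks (sizes 4, 4, 2).
    assert_eq!(results.len(), 3);
    let flattened: Vec<u32> = results.into_iter().flatten().collect();
    assert_eq!(flattened, (2..12).collect::<Vec<u32>>());
}
```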

/// Splits the slice into a maximum of `max_tasks` chunks of approximately equal size,
/// queues one task per chunk on the provided `task_pool`, and returns a
/// mapped `Vec` of the collected results.
///
/// If `max_tasks` is `None`, this function will attempt to use one chunk per thread in
/// `task_pool`.
///
/// # Example
///
/// ```rust
/// # use bevy_tasks::prelude::*;
/// # use bevy_tasks::TaskPool;
/// let task_pool = TaskPool::new();
/// let counts = (0..10000).collect::<Vec<u32>>();
/// let incremented = counts.par_splat_map(&task_pool, None, |chunk| {
/// let mut results = Vec::new();
/// for count in chunk {
/// results.push(*count + 2);
/// }
/// results
/// });
/// # let flattened: Vec<_> = incremented.into_iter().flatten().collect();
/// # assert_eq!(flattened, (2..10002).collect::<Vec<u32>>());
/// ```
///
/// # See Also
///
/// `ParallelSliceMut::par_splat_map_mut` for mapping mutable slices.
/// `ParallelSlice::par_chunk_map` for mapping when a specific chunk size is desirable.
fn par_splat_map<F, R>(&self, task_pool: &TaskPool, max_tasks: Option<usize>, f: F) -> Vec<R>
where
F: Fn(&[T]) -> R + Send + Sync,
Expand All @@ -35,7 +91,37 @@ pub trait ParallelSlice<T: Sync>: AsRef<[T]> {

impl<S, T: Sync> ParallelSlice<T> for S where S: AsRef<[T]> {}
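The splat-style sizing described above amounts to a ceiling division. This `chunk_size` helper is a hypothetical illustration of that arithmetic, not code from the PR:

```rust
// Hypothetical helper: ceiling division used to size splat-style chunks
// so that `tasks * chunk_size >= len` and no items are dropped.
fn chunk_size(len: usize, tasks: usize) -> usize {
    let tasks = tasks.max(1); // guard against a zero task count
    (len + tasks - 1) / tasks
}

fn main() {
    // 10_000 items across 8 tasks -> 1250 items per chunk.
    assert_eq!(chunk_size(10_000, 8), 1250);
    // Uneven split rounds up: 10 items across 3 tasks -> chunks of 4.
    assert_eq!(chunk_size(10, 3), 4);
}
```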

/// Provides functions for mapping mutable slices across a provided `TaskPool`.
pub trait ParallelSliceMut<T: Send>: AsMut<[T]> {
/// Splits the slice into chunks of size `chunk_size` or less and queues up one task
/// per chunk on the provided `task_pool` and returns a mapped `Vec` of the collected
/// results.
///
/// # Example
///
/// ```rust
/// # use bevy_tasks::prelude::*;
/// # use bevy_tasks::TaskPool;
/// let task_pool = TaskPool::new();
/// let mut counts = (0..10000).collect::<Vec<u32>>();
/// let incremented = counts.par_chunk_map_mut(&task_pool, 100, |chunk| {
/// let mut results = Vec::new();
/// for count in chunk {
/// *count += 5;
/// results.push(*count - 2);
/// }
/// results
/// });
///
/// assert_eq!(counts, (5..10005).collect::<Vec<u32>>());
/// # let flattened: Vec<_> = incremented.into_iter().flatten().collect();
/// # assert_eq!(flattened, (3..10003).collect::<Vec<u32>>());
/// ```
///
/// # See Also
///
/// `ParallelSlice::par_chunk_map` for mapping immutable slices.
/// `ParallelSliceMut::par_splat_map_mut` for mapping when a specific chunk size is unknown.
fn par_chunk_map_mut<F, R>(&mut self, task_pool: &TaskPool, chunk_size: usize, f: F) -> Vec<R>
where
F: Fn(&mut [T]) -> R + Send + Sync,
Expand All @@ -50,6 +136,38 @@ pub trait ParallelSliceMut<T: Send>: AsMut<[T]> {
})
}
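The mutable variant can be pictured the same way. This serial sketch uses `chunks_mut`, which, like the parallel version, hands out non-overlapping exclusive borrows, so no synchronization is needed:

```rust
fn main() {
    let mut counts = (0..10u32).collect::<Vec<u32>>();
    // Serial stand-in for par_chunk_map_mut: mutate each chunk in place
    // and collect a per-chunk result Vec.
    let results: Vec<Vec<u32>> = counts
        .chunks_mut(4)
        .map(|chunk| chunk.iter_mut().map(|c| { *c += 5; *c - 2 }).collect())
        .collect();
    assert_eq!(counts, (5..15).collect::<Vec<u32>>());
    let flattened: Vec<u32> = results.into_iter().flatten().collect();
    assert_eq!(flattened, (3..13).collect::<Vec<u32>>());
}
```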

/// Splits the slice into a maximum of `max_tasks` chunks of approximately equal size,
/// queues one task per chunk on the provided `task_pool`, and returns a
/// mapped `Vec` of the collected results.
///
/// If `max_tasks` is `None`, this function will attempt to use one chunk per thread in
/// `task_pool`.
///
/// # Example
///
/// ```rust
/// # use bevy_tasks::prelude::*;
/// # use bevy_tasks::TaskPool;
/// let task_pool = TaskPool::new();
/// let mut counts = (0..10000).collect::<Vec<u32>>();
/// let incremented = counts.par_splat_map_mut(&task_pool, None, |chunk| {
/// let mut results = Vec::new();
/// for count in chunk {
/// *count += 5;
/// results.push(*count - 2);
/// }
/// results
/// });
///
/// assert_eq!(counts, (5..10005).collect::<Vec<u32>>());
/// # let flattened: Vec<_> = incremented.into_iter().flatten().collect::<Vec<u32>>();
/// # assert_eq!(flattened, (3..10003).collect::<Vec<u32>>());
/// ```
///
/// # See Also
///
/// `ParallelSlice::par_splat_map` for mapping immutable slices.
/// `ParallelSliceMut::par_chunk_map_mut` for mapping when a specific chunk size is desirable.
fn par_splat_map_mut<F, R>(
&mut self,
task_pool: &TaskPool,
19 changes: 19 additions & 0 deletions crates/bevy_tasks/src/task_pool.rs
@@ -236,6 +236,11 @@ impl TaskPool {
Task::new(self.executor.spawn(future))
}

/// Spawns a static future on the thread-local async executor for the current thread. The task
/// will run entirely on the thread it was spawned on. The returned `Task` is a future that
/// can also be cancelled and "detached", allowing it to continue running without having
/// to be polled by the end-user. Users should generally prefer `spawn` instead, unless
/// the provided future is not `Send`.
pub fn spawn_local<T>(&self, future: impl Future<Output = T> + 'static) -> Task<T>
where
T: 'static,
Expand All @@ -250,6 +255,9 @@ impl Default for TaskPool {
}
}
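The `Scope` type documented below has a close standard-library analogue in `std::thread::scope` (Rust 1.63+). This sketch shows the same borrow-the-stack guarantee (every spawned closure completes before the scope returns) without the async machinery:

```rust
use std::thread;

fn main() {
    let data = vec![1u32, 2, 3, 4];
    let mut halves: Vec<u32> = Vec::new();
    // Like TaskPool::scope, std::thread::scope guarantees every spawned
    // closure finishes before `scope` returns, so borrowing `data` is safe.
    thread::scope(|s| {
        let (left, right) = data.split_at(2);
        let a = s.spawn(|| left.iter().sum::<u32>());
        let b = s.spawn(|| right.iter().sum::<u32>());
        halves.push(a.join().unwrap());
        halves.push(b.join().unwrap());
    });
    assert_eq!(halves, vec![3, 7]);
}
```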

/// A `TaskPool` scope for running one or more non-`'static` futures.
///
/// For usage information, see `TaskPool::scope`.
#[derive(Debug)]
pub struct Scope<'scope, T> {
executor: &'scope async_executor::Executor<'scope>,
Expand All @@ -258,11 +266,22 @@ pub struct Scope<'scope, T> {
}

impl<'scope, T: Send + 'scope> Scope<'scope, T> {
/// Spawns a scoped future onto the thread pool. The scope *must* outlive
/// the provided future. The results of the future will be returned as a part of
/// `TaskPool::scope`'s return value.
///
/// For usage information, see `TaskPool::scope`.
pub fn spawn<Fut: Future<Output = T> + 'scope + Send>(&mut self, f: Fut) {
let task = self.executor.spawn(f);
self.spawned.push(task);
}

/// Spawns a scoped future onto the thread-local executor. The scope *must* outlive
/// the provided future. The results of the future will be returned as a part of
/// `TaskPool::scope`'s return value. Users should generally prefer to use `spawn` instead, unless
/// the provided future is not `Send`.
///
/// For usage information, see `TaskPool::scope`.
pub fn spawn_local<Fut: Future<Output = T> + 'scope>(&mut self, f: Fut) {
let task = self.local_executor.spawn(f);
self.spawned.push(task);