Extending some number of Vecs #721
A recent version of Rayon provides us with parallel iteration over a tuple. I think this makes the code look nicer, even though it doesn't help answer the current question.
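For illustration, a minimal sketch of that tuple-based zipping (this assumes a Rayon version that supports zipping tuples of parallel iterators, i.e. 1.3 or later; the data here is made up):

```rust
use rayon::prelude::*;

fn main() {
    let mut xs = vec![1i32, 2, 3];
    let ys = vec![10i32, 20, 30];

    // A tuple of indexed parallel iterators can itself be turned into a
    // parallel iterator; the items come out zipped together.
    (xs.par_iter_mut(), ys.par_iter())
        .into_par_iter()
        .for_each(|(x, y)| *x += *y);

    assert_eq!(xs, vec![11, 22, 33]);
}
```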
Rayon also has two relevant implementations of `ParallelExtend` for pairs:

```rust
impl<A, B, FromA, FromB> ParallelExtend<(A, B)> for (FromA, FromB)
where
    A: Send,
    B: Send,
    FromA: Send + ParallelExtend<A>,
    FromB: Send + ParallelExtend<B>,
{...}

impl<L, R, A, B> ParallelExtend<Either<L, R>> for (A, B)
where
    L: Send,
    R: Send,
    A: Send + ParallelExtend<L>,
    B: Send + ParallelExtend<R>,
{...}
```

These were added in #604, and as noted there, you can also nest this in pairs:

```rust
let (vec1, (vec2, (vec3, vec4))) = par_iter.map(|(a, b, c, d)| (a, (b, (c, d)))).unzip();
```

That should also work for calling `par_extend` on a nested tuple of targets. I kind of wish we had …
This doesn't seem to work because of the lack of a `ParallelExtend` implementation for `&mut Vec<T>`.
I'm using this as a temporary fix:

```rust
// Newtype that lets an existing Vec be used as a ParallelExtend target.
struct ExtendVec<'a, T> {
    vec: &'a mut Vec<T>,
}

impl<'a, T> ExtendVec<'a, T> {
    pub fn new(vec: &'a mut Vec<T>) -> Self {
        Self { vec }
    }
}

impl<T: Send> ParallelExtend<T> for ExtendVec<'_, T> {
    fn par_extend<I>(&mut self, par_iter: I)
    where
        I: IntoParallelIterator<Item = T>,
    {
        self.vec.par_extend(par_iter);
    }
}
```

This seems to cause rustc to compile very slowly.
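For reference, a sketch of how this wrapper combines with the pairwise `ParallelExtend` impl to fill two existing vectors in one pass (the function and input here are purely illustrative):

```rust
use rayon::prelude::*;

// Illustrative only: extend two pre-existing Vecs from a single parallel pass
// by par_extend-ing a tuple of ExtendVec wrappers (definition above).
fn fill(xs: &mut Vec<i32>, ys: &mut Vec<i32>) {
    let mut target = (ExtendVec::new(xs), ExtendVec::new(ys));
    target.par_extend((0..100).into_par_iter().map(|i| (i, i * i)));
}
```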
Ah, a newtype wrapper does the trick -- we could even make something generic like that.
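A sketch of what such a generic wrapper could look like (not an existing Rayon type; it just forwards `ParallelExtend` through a mutable reference):

```rust
use rayon::prelude::*;

// Hypothetical generic wrapper: lets any &mut collection that already
// implements ParallelExtend be used as an unzip/par_extend target.
struct Extender<'a, C>(&'a mut C);

impl<T, C> ParallelExtend<T> for Extender<'_, C>
where
    T: Send,
    C: ParallelExtend<T>,
{
    fn par_extend<I>(&mut self, par_iter: I)
    where
        I: IntoParallelIterator<Item = T>,
    {
        self.0.par_extend(par_iter);
    }
}
```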
I'm not surprised. While the generic unzip machinery works, the types it builds up are enormous. In a presentation I gave last year, I demonstrated a giant symbol -- 132967 characters! -- from just a 3-way collect. See also #671, which was alleviated somewhat by #673 and rust-lang/rust#62429, but the overall structure still remains the same. If we had something more directly designed for just unzipping vectors, it wouldn't need to be nearly so complicated.
I guess it's probably because of the giant symbols and the monomorphization that this is so slow. The tree arrangement doesn't help much, unfortunately. I was wondering whether there would be a way to implement … Even if the solution is dirty, it would still be much better than my current unsafe-riddled approach. Do you think any of these ideas could help reduce the costs?
I was taking a look at the unzip implementation, and it seems like even a manual 8-way tuple would probably produce the same result. @cuviper, it seems like finding a way to force the …
I'm at a loss for designing a better generic unzip. It's really difficult that we can only get each … Maybe we need a new trait with a method that directly creates a consumer -- although that also means each collection's consumer type would become part of the public API.
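Purely as a sketch of that idea (this is not an existing Rayon API; `ExtendWithConsumer` is a hypothetical name):

```rust
use rayon::iter::plumbing::Consumer;

// Hypothetical trait: a collection hands out its Consumer directly, so an
// n-way unzip could drive every target in a single pass without the generic
// ParallelExtend machinery. The consumer type becomes part of the public API.
trait ExtendWithConsumer<T: Send> {
    type Consumer: Consumer<T>;

    // Create a consumer that the driver can split and feed items into.
    fn consumer(&mut self) -> Self::Consumer;

    // Absorb the finished consumer's result back into the collection.
    fn complete(&mut self, result: <Self::Consumer as Consumer<T>>::Result);
}
```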
So, my current approach is basically extending …
@cuviper, would it be desirable to have the …
In order to gain a performance boost from auto-vectorization, I'm storing data in multiple `Vec`s -- think struct of arrays instead of array of structs. In the array-of-structs case it's very easy to simply extend the vector with whatever the parallel iterator provides, but in the struct-of-arrays case I don't have this luxury. I came up with two solutions. The first is to initialize the vectors with default values and then fill them in parallel.
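A sketch of that first approach might look like the following (names, element types, and input layout are illustrative, and both vectors are assumed to start at the same length):

```rust
use rayon::prelude::*;

// Illustrative sketch: default-initialize the new tail of each Vec, then
// fill it in parallel over zipped mutable slices (struct-of-arrays layout).
fn extend_soa(input: &[(f32, f32)], xs: &mut Vec<f32>, ys: &mut Vec<f32>) {
    debug_assert_eq!(xs.len(), ys.len());
    let old = xs.len();
    xs.resize(old + input.len(), 0.0); // the costly default initialization
    ys.resize(old + input.len(), 0.0);

    input
        .par_iter()
        .zip(&mut xs[old..])
        .zip(&mut ys[old..])
        .for_each(|((&(a, b), x), y)| {
            *x = a;
            *y = b;
        });
}
```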
Or use uninitialized memory:
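A sketch of the uninitialized-memory variant (again with illustrative names; it uses `Vec::spare_capacity_mut` and relies on every new slot being written exactly once):

```rust
use rayon::prelude::*;

// Illustrative sketch: reserve capacity, write the new tail through
// MaybeUninit slots in parallel, then bump the lengths with set_len.
fn extend_soa_uninit(input: &[(f32, f32)], xs: &mut Vec<f32>, ys: &mut Vec<f32>) {
    debug_assert_eq!(xs.len(), ys.len());
    let old = xs.len();
    let n = input.len();
    xs.reserve(n);
    ys.reserve(n);

    input
        .par_iter()
        .zip(&mut xs.spare_capacity_mut()[..n])
        .zip(&mut ys.spare_capacity_mut()[..n])
        .for_each(|((&(a, b), x), y)| {
            x.write(a);
            y.write(b);
        });

    // SAFETY: the loop above initialized all `n` new elements in both Vecs.
    unsafe {
        xs.set_len(old + n);
        ys.set_len(old + n);
    }
}
```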
The problem I have with the second solution is that it's very painful to use and it litters all my code with a lot of unsafe. However, I don't see any way to remove it, since initializing the vector with default values is visibly costly. Not zipping the iterators together also doesn't work, because it would have to iterate through the data multiple times.
Any ideas on how to improve this?