Avoid RowConverter for multi group by #12269
base: main
Conversation
@alamb The approach in this PR is to replace RowConverter and check the equality of the group-by values by accessing the specific row and iterating over all the group-by expressions. The downside is that we need type-specific implementations, but we can see it outperform Rows by eliminating the conversion cost. I'm thinking of supporting only primitive, string, and datetime non-nested types. For other, less common nested types maybe we just fall back to Rows.
impl<T: ArrowPrimitiveType> ArrayEq for PrimitiveGroupValueBuilder<T> {
    fn equal_to(&self, lhs_row: usize, array: &ArrayRef, rhs_row: usize) -> bool {
equal_to and append_val are two core functions: equal_to compares the incoming row with the row in the group value builder, and append_val adds a row into the group value builder.
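A minimal sketch of these two operations, assuming a simplified setup: the trait name and method signatures follow the snippet above, but the i64-only builder and the plain `&[Option<i64>]` "array" are illustrative stand-ins for the generic ArrowPrimitiveType builder and ArrayRef.

```rust
// Sketch of the two core operations on a group value builder.
// `equal_to` compares a stored group row against a row of the incoming
// column; `append_val` copies a row of the incoming column into the builder.
// Vec<Option<i64>> stands in for a nullable Arrow primitive array.

trait ArrayEq {
    fn equal_to(&self, lhs_row: usize, array: &[Option<i64>], rhs_row: usize) -> bool;
    fn append_val(&mut self, array: &[Option<i64>], row: usize);
}

struct PrimitiveGroupValueBuilder(Vec<Option<i64>>);

impl ArrayEq for PrimitiveGroupValueBuilder {
    fn equal_to(&self, lhs_row: usize, array: &[Option<i64>], rhs_row: usize) -> bool {
        // Option's PartialEq already treats two Nones as equal, which is the
        // grouping semantics we want here.
        self.0[lhs_row] == array[rhs_row]
    }
    fn append_val(&mut self, array: &[Option<i64>], row: usize) {
        self.0.push(array[row]);
    }
}

fn main() {
    let input = vec![Some(1), None, Some(1)];
    let mut b = PrimitiveGroupValueBuilder(Vec::new());
    b.append_val(&input, 0); // store row 0 as group 0
    assert!(b.equal_to(0, &input, 2)); // row 2 has the same value
    assert!(!b.equal_to(0, &input, 1)); // row 1 is null, not equal
    println!("equal_to/append_val sketch ok");
}
```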
Thanks @jayzhan211 -- I will try and review this over the next day or two (I am catching up from being out last week and am not back full time until this Thursday).
for (i, group_val) in group_values_v2.iter().enumerate() {
    if !compare_equal(group_val.as_ref(), *group_idx, &cols[i], row) {
As this is called in a loop, it can be optimized/specialized for certain cases, e.g. whether the arrays have any nulls or not.
I don't get how I could further optimize the loop based on nulls 🤔
I think the idea would be to change compare_equal to take advantage of cases when, for example, it is known the values can't be null (so checking Option isn't needed).
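One way to picture the suggestion is a comparison that branches once, up front, on whether either column can contain nulls. This is a minimal sketch with made-up names (NullableColumn is not a DataFusion or arrow-rs type; a real version would consult the array's null count / NullBuffer):

```rust
// Sketch: specialize equality for the common all-non-null case so the
// per-row comparison skips Option/validity handling entirely.

struct NullableColumn {
    values: Vec<i64>,
    nulls: Option<Vec<bool>>, // None => the column is guaranteed null-free
}

impl NullableColumn {
    fn is_valid(&self, row: usize) -> bool {
        self.nulls.as_ref().map_or(true, |n| n[row])
    }

    fn equal_to(&self, lhs: usize, other: &NullableColumn, rhs: usize) -> bool {
        match (&self.nulls, &other.nulls) {
            // Fast path: neither column can contain nulls,
            // so only the raw values are compared.
            (None, None) => self.values[lhs] == other.values[rhs],
            // General path: compare validity first, then values.
            _ => {
                let (lv, rv) = (self.is_valid(lhs), other.is_valid(rhs));
                // Two nulls count as equal for grouping purposes.
                lv == rv && (!lv || self.values[lhs] == other.values[rhs])
            }
        }
    }
}

fn main() {
    let dense = NullableColumn { values: vec![1, 2], nulls: None };
    let sparse = NullableColumn { values: vec![1, 0], nulls: Some(vec![true, false]) };
    assert!(dense.equal_to(0, &sparse, 0));  // 1 == 1
    assert!(!dense.equal_to(1, &sparse, 1)); // value vs null
    assert!(sparse.equal_to(1, &sparse, 1)); // null == null
    println!("null-specialized compare sketch ok");
}
```

In the real operator, the branch would be hoisted out of the per-row loop (or expressed via monomorphized code paths), which is where the saving comes from.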
Yes, indeed 👍
TLDR: I think this is a really neat idea @jayzhan211 -- in essence this PR changes from Row comparison to column-by-column comparison.
I believe the original theory behind using RowConverter was that:
- It handled all possible types and combinations
- Time spent creating the Row would be paid back by faster comparisons, avoiding dynamic dispatch
Your benchmark numbers seem to show different results 👌
I thought about how the performance could be so good, and I suppose it does make sense: for most aggregate queries, many of the rows go into an existing group, so the cost of copying the input into Rows, just to find out it isn't needed, outweighs the benefit.
Also, this doesn't make sense to upstream to Arrow in my view; it is a group-by-specific implementation, so we would need to maintain it in DataFusion. I would like early feedback on this approach!
I looked at this and I think we could potentially reuse a lot of what is upstream in arrow-rs's builders. I left comments.
I am running the clickbench benchmarks to see if I can confirm the results. If so, I suggest we try and reuse the builders from arrow-rs as much as possible and see how elegant we can make this PR.
But all in all, really nicely done 👏
let mut group_values_v2 = self
    .group_values_v2
    .take()
    .expect("Can not emit from empty rows");
This is a neat optimization as well -- as it saves a copy of the intermediate group values 👍
pub struct ByteGroupValueBuilderNaive<O>
Similar to my comment above, this looks very similar to the GenericBinaryBuilder in arrow -- it would be great if we could simply reuse that instead of a significant amount of copying 🤔
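For context, the layout both builders share is "one flat byte buffer plus an offsets array". The struct below is an illustrative stand-in, not the PR's actual ByteGroupValueBuilderNaive or arrow-rs's GenericBinaryBuilder, but it shows why equality checks are cheap in this representation:

```rust
// Sketch of the offsets-plus-buffer layout used for variable-length values
// (strings/binary). All bytes live in one contiguous buffer;
// offsets[i]..offsets[i + 1] delimits row i, so no per-row allocation
// is needed and equality is a single slice comparison.

struct ByteGroupValueBuilder {
    buffer: Vec<u8>,
    offsets: Vec<usize>, // always rows + 1 entries; starts as [0]
}

impl ByteGroupValueBuilder {
    fn new() -> Self {
        Self { buffer: Vec::new(), offsets: vec![0] }
    }
    fn append_val(&mut self, value: &[u8]) {
        self.buffer.extend_from_slice(value);
        self.offsets.push(self.buffer.len());
    }
    fn value(&self, row: usize) -> &[u8] {
        &self.buffer[self.offsets[row]..self.offsets[row + 1]]
    }
    fn equal_to(&self, lhs_row: usize, rhs: &[u8]) -> bool {
        self.value(lhs_row) == rhs
    }
}

fn main() {
    let mut b = ByteGroupValueBuilder::new();
    b.append_val(b"url_a");
    b.append_val(b"url_bb");
    assert!(b.equal_to(0, b"url_a"));
    assert!(!b.equal_to(1, b"url_a"));
    assert_eq!(b.value(1), b"url_bb");
    println!("byte builder sketch ok");
}
```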
    fn build(self: Box<Self>) -> ArrayRef;
}

pub struct PrimitiveGroupValueBuilder<T: ArrowPrimitiveType>(Vec<Option<T::Native>>);
This looks very similar to PrimitiveBuilder in arrow-rs to me: https://docs.rs/arrow/latest/arrow/array/struct.PrimitiveBuilder.html (though I think PrimitiveBuilder is likely faster / handles nulls better). I wonder if you could implement ArrayEq for PrimitiveBuilder using methods like https://docs.rs/arrow/latest/arrow/array/struct.PrimitiveBuilder.html#method.values_slice. If so, I think you would have a very compelling PR here.
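A rough sketch of that suggestion, using a hand-rolled stand-in so it runs without the arrow dependency: values_slice here mimics arrow-rs's PrimitiveBuilder::values_slice (the real builder packs validity into a bitmap rather than a Vec<bool>, and its null handling is more efficient than Vec<Option<_>>):

```rust
// Sketch: equality checks against a builder that keeps values and validity
// in separate flat buffers, so comparisons read plain slices instead of
// boxing each element in an Option.

struct PrimitiveBuilderLike {
    values: Vec<i64>,
    validity: Vec<bool>,
}

impl PrimitiveBuilderLike {
    fn append_option(&mut self, v: Option<i64>) {
        self.validity.push(v.is_some());
        // Nulls still occupy a slot (with a default value), keeping
        // values_slice() index-aligned with row numbers.
        self.values.push(v.unwrap_or_default());
    }
    /// Analogue of arrow-rs's PrimitiveBuilder::values_slice().
    fn values_slice(&self) -> &[i64] {
        &self.values
    }
    fn equal_to(&self, lhs_row: usize, value: Option<i64>) -> bool {
        match (self.validity[lhs_row], value) {
            (true, Some(v)) => self.values_slice()[lhs_row] == v,
            (false, None) => true, // two nulls are equal for grouping
            _ => false,
        }
    }
}

fn main() {
    let mut b = PrimitiveBuilderLike { values: Vec::new(), validity: Vec::new() };
    b.append_option(Some(42));
    b.append_option(None);
    assert!(b.equal_to(0, Some(42)));
    assert!(b.equal_to(1, None));
    assert!(!b.equal_to(0, None));
    println!("values_slice equality sketch ok");
}
```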
@@ -35,9 +35,4 @@ SELECT "URL", COUNT(*) AS c FROM hits GROUP BY "URL" ORDER BY c DESC LIMIT 10;
SELECT 1, "URL", COUNT(*) AS c FROM hits GROUP BY 1, "URL" ORDER BY c DESC LIMIT 10;
SELECT "ClientIP", "ClientIP" - 1, "ClientIP" - 2, "ClientIP" - 3, COUNT(*) AS c FROM hits GROUP BY "ClientIP", "ClientIP" - 1, "ClientIP" - 2, "ClientIP" - 3 ORDER BY c DESC LIMIT 10;
SELECT "URL", COUNT(*) AS PageViews FROM hits WHERE "CounterID" = 62 AND "EventDate"::INT::DATE >= '2013-07-01' AND "EventDate"::INT::DATE <= '2013-07-31' AND "DontCountHits" = 0 AND "IsRefresh" = 0 AND "URL" <> '' GROUP BY "URL" ORDER BY PageViews DESC LIMIT 10;
SELECT "Title", COUNT(*) AS PageViews FROM hits WHERE "CounterID" = 62 AND "EventDate"::INT::DATE >= '2013-07-01' AND "EventDate"::INT::DATE <= '2013-07-31' AND "DontCountHits" = 0 AND "IsRefresh" = 0 AND "Title" <> '' GROUP BY "Title" ORDER BY PageViews DESC LIMIT 10;
Why are these queries removed?
Because I haven't implemented the DateTime builder yet, these queries couldn't pass the test.
TLDR is I ran the benchmarks and it does appear to make a measurable performance improvement on several queries 👍
Signed-off-by: jayzhan211 <[email protected]>
Force-pushed from 9d8dbea to 5d904a3.
@alamb I found …
Which issue does this PR close?
Closes #.
Rationale for this change
To avoid the RowConverter in multi-column group by clauses, we add an equality check directly on the group values Arrays.
We can see an improvement on group-by queries (much more for string types). The downside is that this is a type-specific design, unlike Rows, which covers all types.
What changes are included in this PR?
Are these changes tested?
Are there any user-facing changes?
Benchmark
Queries after 37 are removed since DateTime is not yet supported.