#39 performance issue in function getAliasedConstraints of LogicalPlan #27
Closed
zheniantoushipashi wants to merge 2 commits into Kyligence:kyspark-2.2.1.x from
Conversation
…erride function ++ in class ExpressionSet
Author
This performance issue was already fixed in Spark 2.3, so there is no need to double-commit it here; see apache#19022.
zheniantoushipashi pushed a commit to zheniantoushipashi/spark that referenced this pull request on Jul 3, 2021
### What changes were proposed in this pull request?
As title. This PR adds code-gen support for LEFT SEMI sort merge join. The main change is to add a `semiJoin` code path in `SortMergeJoinExec.doProduce()` and to introduce `onlyBufferFirstMatchedRow` in `SortMergeJoinExec.genScanner()`. The latter is for left semi sort merge join without a condition: for this kind of query we don't need to buffer all matched rows, only the first one (the same as the non-code-gen code path).
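As a rough, self-contained sketch over plain sorted key sequences (the names below are illustrative, not the actual `SortMergeJoinExec` internals): because a left semi join without a condition only needs to know whether a streamed key has at least one match, the buffered side never has to retain more than the first matching row.
```scala
// Hypothetical stand-alone illustration, assuming both inputs are sorted ascending.
// For each streamed key, a single equality check against the current buffered key is
// enough -- no list of matches is collected, mirroring onlyBufferFirstMatchedRow.
def leftSemiMerge(streamed: Seq[Long], buffered: Seq[Long]): Seq[Long] = {
  val out = scala.collection.mutable.ArrayBuffer.empty[Long]
  var j = 0
  for (key <- streamed) {
    while (j < buffered.length && buffered(j) < key) j += 1   // advance buffered side
    if (j < buffered.length && buffered(j) == key) out += key // first match is enough
  }
  out.toSeq
}

// leftSemiMerge(Seq(0L, 2L, 4L, 6L), Seq(0L, 1L, 2L, 3L)) == Seq(0L, 2L)
```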
Example query:
```
val df1 = spark.range(10).select($"id".as("k1"))
val df2 = spark.range(4).select($"id".as("k2"))
val oneJoinDF = df1.join(df2.hint("SHUFFLE_MERGE"), $"k1" === $"k2", "left_semi")
```
Example of generated code for the query:
```
== Subtree 5 / 5 (maxMethodCodeSize:302; maxConstantPoolSize:156(0.24% used); numInnerClasses:0) ==
*(5) Project [id#0L AS k1#2L]
+- *(5) SortMergeJoin [id#0L], [k2#6L], LeftSemi
:- *(2) Sort [id#0L ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(id#0L, 5), ENSURE_REQUIREMENTS, [id=Kyligence#27]
: +- *(1) Range (0, 10, step=1, splits=2)
+- *(4) Sort [k2#6L ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(k2#6L, 5), ENSURE_REQUIREMENTS, [id=Kyligence#33]
+- *(3) Project [id#4L AS k2#6L]
+- *(3) Range (0, 4, step=1, splits=2)
Generated code:
/* 001 */ public Object generate(Object[] references) {
/* 002 */ return new GeneratedIteratorForCodegenStage5(references);
/* 003 */ }
/* 004 */
/* 005 */ // codegenStageId=5
/* 006 */ final class GeneratedIteratorForCodegenStage5 extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 007 */ private Object[] references;
/* 008 */ private scala.collection.Iterator[] inputs;
/* 009 */ private scala.collection.Iterator smj_streamedInput_0;
/* 010 */ private scala.collection.Iterator smj_bufferedInput_0;
/* 011 */ private InternalRow smj_streamedRow_0;
/* 012 */ private InternalRow smj_bufferedRow_0;
/* 013 */ private long smj_value_2;
/* 014 */ private org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray smj_matches_0;
/* 015 */ private long smj_value_3;
/* 016 */ private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter[] smj_mutableStateArray_0 = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter[2];
/* 017 */
/* 018 */ public GeneratedIteratorForCodegenStage5(Object[] references) {
/* 019 */ this.references = references;
/* 020 */ }
/* 021 */
/* 022 */ public void init(int index, scala.collection.Iterator[] inputs) {
/* 023 */ partitionIndex = index;
/* 024 */ this.inputs = inputs;
/* 025 */ smj_streamedInput_0 = inputs[0];
/* 026 */ smj_bufferedInput_0 = inputs[1];
/* 027 */
/* 028 */ smj_matches_0 = new org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray(1, 2147483647);
/* 029 */ smj_mutableStateArray_0[0] = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(1, 0);
/* 030 */ smj_mutableStateArray_0[1] = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(1, 0);
/* 031 */
/* 032 */ }
/* 033 */
/* 034 */ private boolean findNextJoinRows(
/* 035 */ scala.collection.Iterator streamedIter,
/* 036 */ scala.collection.Iterator bufferedIter) {
/* 037 */ smj_streamedRow_0 = null;
/* 038 */ int comp = 0;
/* 039 */ while (smj_streamedRow_0 == null) {
/* 040 */ if (!streamedIter.hasNext()) return false;
/* 041 */ smj_streamedRow_0 = (InternalRow) streamedIter.next();
/* 042 */ long smj_value_0 = smj_streamedRow_0.getLong(0);
/* 043 */ if (false) {
/* 044 */ smj_streamedRow_0 = null;
/* 045 */ continue;
/* 046 */
/* 047 */ }
/* 048 */ if (!smj_matches_0.isEmpty()) {
/* 049 */ comp = 0;
/* 050 */ if (comp == 0) {
/* 051 */ comp = (smj_value_0 > smj_value_3 ? 1 : smj_value_0 < smj_value_3 ? -1 : 0);
/* 052 */ }
/* 053 */
/* 054 */ if (comp == 0) {
/* 055 */ return true;
/* 056 */ }
/* 057 */ smj_matches_0.clear();
/* 058 */ }
/* 059 */
/* 060 */ do {
/* 061 */ if (smj_bufferedRow_0 == null) {
/* 062 */ if (!bufferedIter.hasNext()) {
/* 063 */ smj_value_3 = smj_value_0;
/* 064 */ return !smj_matches_0.isEmpty();
/* 065 */ }
/* 066 */ smj_bufferedRow_0 = (InternalRow) bufferedIter.next();
/* 067 */ long smj_value_1 = smj_bufferedRow_0.getLong(0);
/* 068 */ if (false) {
/* 069 */ smj_bufferedRow_0 = null;
/* 070 */ continue;
/* 071 */ }
/* 072 */ smj_value_2 = smj_value_1;
/* 073 */ }
/* 074 */
/* 075 */ comp = 0;
/* 076 */ if (comp == 0) {
/* 077 */ comp = (smj_value_0 > smj_value_2 ? 1 : smj_value_0 < smj_value_2 ? -1 : 0);
/* 078 */ }
/* 079 */
/* 080 */ if (comp > 0) {
/* 081 */ smj_bufferedRow_0 = null;
/* 082 */ } else if (comp < 0) {
/* 083 */ if (!smj_matches_0.isEmpty()) {
/* 084 */ smj_value_3 = smj_value_0;
/* 085 */ return true;
/* 086 */ } else {
/* 087 */ smj_streamedRow_0 = null;
/* 088 */ }
/* 089 */ } else {
/* 090 */ if (smj_matches_0.isEmpty()) {
/* 091 */ smj_matches_0.add((UnsafeRow) smj_bufferedRow_0);
/* 092 */ }
/* 093 */
/* 094 */ smj_bufferedRow_0 = null;
/* 095 */ }
/* 096 */ } while (smj_streamedRow_0 != null);
/* 097 */ }
/* 098 */ return false; // unreachable
/* 099 */ }
/* 100 */
/* 101 */ protected void processNext() throws java.io.IOException {
/* 102 */ while (findNextJoinRows(smj_streamedInput_0, smj_bufferedInput_0)) {
/* 103 */ long smj_value_4 = -1L;
/* 104 */ smj_value_4 = smj_streamedRow_0.getLong(0);
/* 105 */ scala.collection.Iterator<UnsafeRow> smj_iterator_0 = smj_matches_0.generateIterator();
/* 106 */ boolean smj_hasOutputRow_0 = false;
/* 107 */
/* 108 */ while (!smj_hasOutputRow_0 && smj_iterator_0.hasNext()) {
/* 109 */ InternalRow smj_bufferedRow_1 = (InternalRow) smj_iterator_0.next();
/* 110 */
/* 111 */ smj_hasOutputRow_0 = true;
/* 112 */ ((org.apache.spark.sql.execution.metric.SQLMetric) references[0] /* numOutputRows */).add(1);
/* 113 */
/* 114 */ // common sub-expressions
/* 115 */
/* 116 */ smj_mutableStateArray_0[1].reset();
/* 117 */
/* 118 */ smj_mutableStateArray_0[1].write(0, smj_value_4);
/* 119 */ append((smj_mutableStateArray_0[1].getRow()).copy());
/* 120 */
/* 121 */ }
/* 122 */ if (shouldStop()) return;
/* 123 */ }
/* 124 */ ((org.apache.spark.sql.execution.joins.SortMergeJoinExec) references[1] /* plan */).cleanupResources();
/* 125 */ }
/* 126 */
/* 127 */ }
```
### Why are the changes needed?
Improve query CPU performance. Tested with one query:
```
def sortMergeJoin(): Unit = {
val N = 2 << 20
codegenBenchmark("left semi sort merge join", N) {
val df1 = spark.range(N).selectExpr(s"id * 2 as k1")
val df2 = spark.range(N).selectExpr(s"id * 3 as k2")
val df = df1.join(df2, col("k1") === col("k2"), "left_semi")
assert(df.queryExecution.sparkPlan.find(_.isInstanceOf[SortMergeJoinExec]).isDefined)
df.noop()
}
}
```
Seeing ~30% run-time improvement:
```
Running benchmark: left semi sort merge join
Running case: left semi sort merge join code-gen off
Stopped after 2 iterations, 1369 ms
Running case: left semi sort merge join code-gen on
Stopped after 5 iterations, 2743 ms
Java HotSpot(TM) 64-Bit Server VM 1.8.0_181-b13 on Mac OS X 10.16
Intel(R) Core(TM) i9-9980HK CPU 2.40GHz
left semi sort merge join:                Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------------
left semi sort merge join code-gen off              676            685          13         3.1         322.2       1.0X
left semi sort merge join code-gen on               524            549          32         4.0         249.7       1.3X
```
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Added unit tests in `WholeStageCodegenSuite.scala` and `ExistenceJoinSuite.scala`.
Closes apache#32528 from c21/smj-left-semi.
Authored-by: Cheng Su <chengsu@fb.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
zheniantoushipashi pushed a commit to zheniantoushipashi/spark that referenced this pull request on Jul 3, 2021
### What changes were proposed in this pull request?
As title. This PR adds code-gen support for LEFT ANTI sort merge join. The main change is to extract `loadStreamed` in `SortMergeJoinExec.doProduce()`, i.e. to set all column values of the streamed row when the streamed row has no output row.
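Again as a rough, self-contained sketch over sorted key sequences (illustrative names, not the actual `SortMergeJoinExec` internals): a left anti join emits a streamed row only when no buffered key matches it, and that "no match" branch is exactly where all of the streamed row's columns must be loaded.
```scala
// Hypothetical stand-alone illustration, assuming both inputs are sorted ascending.
// A streamed key is emitted only when the buffered side has no equal key; in the
// real code path this is where the full streamed row would be materialized.
def leftAntiMerge(streamed: Seq[Long], buffered: Seq[Long]): Seq[Long] = {
  val out = scala.collection.mutable.ArrayBuffer.empty[Long]
  var j = 0
  for (key <- streamed) {
    while (j < buffered.length && buffered(j) < key) j += 1    // advance buffered side
    if (j >= buffered.length || buffered(j) != key) out += key // emit only on no match
  }
  out.toSeq
}

// leftAntiMerge(Seq(0L, 2L, 4L, 6L), Seq(0L, 1L, 2L, 3L)) == Seq(4L, 6L)
```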
Example query:
```
val df1 = spark.range(10).select($"id".as("k1"))
val df2 = spark.range(4).select($"id".as("k2"))
df1.join(df2.hint("SHUFFLE_MERGE"), $"k1" === $"k2", "left_anti")
```
Example generated code:
```
== Subtree 5 / 5 (maxMethodCodeSize:296; maxConstantPoolSize:156(0.24% used); numInnerClasses:0) ==
*(5) Project [id#0L AS k1#2L]
+- *(5) SortMergeJoin [id#0L], [k2#6L], LeftAnti
:- *(2) Sort [id#0L ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(id#0L, 5), ENSURE_REQUIREMENTS, [id=Kyligence#27]
: +- *(1) Range (0, 10, step=1, splits=2)
+- *(4) Sort [k2#6L ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(k2#6L, 5), ENSURE_REQUIREMENTS, [id=Kyligence#33]
+- *(3) Project [id#4L AS k2#6L]
+- *(3) Range (0, 4, step=1, splits=2)
Generated code:
/* 001 */ public Object generate(Object[] references) {
/* 002 */ return new GeneratedIteratorForCodegenStage5(references);
/* 003 */ }
/* 004 */
/* 005 */ // codegenStageId=5
/* 006 */ final class GeneratedIteratorForCodegenStage5 extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 007 */ private Object[] references;
/* 008 */ private scala.collection.Iterator[] inputs;
/* 009 */ private scala.collection.Iterator smj_streamedInput_0;
/* 010 */ private scala.collection.Iterator smj_bufferedInput_0;
/* 011 */ private InternalRow smj_streamedRow_0;
/* 012 */ private InternalRow smj_bufferedRow_0;
/* 013 */ private long smj_value_2;
/* 014 */ private org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray smj_matches_0;
/* 015 */ private long smj_value_3;
/* 016 */ private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter[] smj_mutableStateArray_0 = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter[2];
/* 017 */
/* 018 */ public GeneratedIteratorForCodegenStage5(Object[] references) {
/* 019 */ this.references = references;
/* 020 */ }
/* 021 */
/* 022 */ public void init(int index, scala.collection.Iterator[] inputs) {
/* 023 */ partitionIndex = index;
/* 024 */ this.inputs = inputs;
/* 025 */ smj_streamedInput_0 = inputs[0];
/* 026 */ smj_bufferedInput_0 = inputs[1];
/* 027 */
/* 028 */ smj_matches_0 = new org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray(1, 2147483647);
/* 029 */ smj_mutableStateArray_0[0] = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(1, 0);
/* 030 */ smj_mutableStateArray_0[1] = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(1, 0);
/* 031 */
/* 032 */ }
/* 033 */
/* 034 */ private boolean findNextJoinRows(
/* 035 */ scala.collection.Iterator streamedIter,
/* 036 */ scala.collection.Iterator bufferedIter) {
/* 037 */ smj_streamedRow_0 = null;
/* 038 */ int comp = 0;
/* 039 */ while (smj_streamedRow_0 == null) {
/* 040 */ if (!streamedIter.hasNext()) return false;
/* 041 */ smj_streamedRow_0 = (InternalRow) streamedIter.next();
/* 042 */ long smj_value_0 = smj_streamedRow_0.getLong(0);
/* 043 */ if (false) {
/* 044 */ if (!smj_matches_0.isEmpty()) {
/* 045 */ smj_matches_0.clear();
/* 046 */ }
/* 047 */ return false;
/* 048 */
/* 049 */ }
/* 050 */ if (!smj_matches_0.isEmpty()) {
/* 051 */ comp = 0;
/* 052 */ if (comp == 0) {
/* 053 */ comp = (smj_value_0 > smj_value_3 ? 1 : smj_value_0 < smj_value_3 ? -1 : 0);
/* 054 */ }
/* 055 */
/* 056 */ if (comp == 0) {
/* 057 */ return true;
/* 058 */ }
/* 059 */ smj_matches_0.clear();
/* 060 */ }
/* 061 */
/* 062 */ do {
/* 063 */ if (smj_bufferedRow_0 == null) {
/* 064 */ if (!bufferedIter.hasNext()) {
/* 065 */ smj_value_3 = smj_value_0;
/* 066 */ return !smj_matches_0.isEmpty();
/* 067 */ }
/* 068 */ smj_bufferedRow_0 = (InternalRow) bufferedIter.next();
/* 069 */ long smj_value_1 = smj_bufferedRow_0.getLong(0);
/* 070 */ if (false) {
/* 071 */ smj_bufferedRow_0 = null;
/* 072 */ continue;
/* 073 */ }
/* 074 */ smj_value_2 = smj_value_1;
/* 075 */ }
/* 076 */
/* 077 */ comp = 0;
/* 078 */ if (comp == 0) {
/* 079 */ comp = (smj_value_0 > smj_value_2 ? 1 : smj_value_0 < smj_value_2 ? -1 : 0);
/* 080 */ }
/* 081 */
/* 082 */ if (comp > 0) {
/* 083 */ smj_bufferedRow_0 = null;
/* 084 */ } else if (comp < 0) {
/* 085 */ if (!smj_matches_0.isEmpty()) {
/* 086 */ smj_value_3 = smj_value_0;
/* 087 */ return true;
/* 088 */ } else {
/* 089 */ return false;
/* 090 */ }
/* 091 */ } else {
/* 092 */ if (smj_matches_0.isEmpty()) {
/* 093 */ smj_matches_0.add((UnsafeRow) smj_bufferedRow_0);
/* 094 */ }
/* 095 */
/* 096 */ smj_bufferedRow_0 = null;
/* 097 */ }
/* 098 */ } while (smj_streamedRow_0 != null);
/* 099 */ }
/* 100 */ return false; // unreachable
/* 101 */ }
/* 102 */
/* 103 */ protected void processNext() throws java.io.IOException {
/* 104 */ while (smj_streamedInput_0.hasNext()) {
/* 105 */ findNextJoinRows(smj_streamedInput_0, smj_bufferedInput_0);
/* 106 */
/* 107 */ long smj_value_4 = -1L;
/* 108 */ smj_value_4 = smj_streamedRow_0.getLong(0);
/* 109 */ scala.collection.Iterator<UnsafeRow> smj_iterator_0 = smj_matches_0.generateIterator();
/* 110 */
/* 111 */ boolean wholestagecodegen_hasOutputRow_0 = false;
/* 112 */
/* 113 */ while (!wholestagecodegen_hasOutputRow_0 && smj_iterator_0.hasNext()) {
/* 114 */ InternalRow smj_bufferedRow_1 = (InternalRow) smj_iterator_0.next();
/* 115 */
/* 116 */ wholestagecodegen_hasOutputRow_0 = true;
/* 117 */ }
/* 118 */
/* 119 */ if (!wholestagecodegen_hasOutputRow_0) {
/* 120 */ // load all values of streamed row, because the values not in join condition are not
/* 121 */ // loaded yet.
/* 122 */
/* 123 */ ((org.apache.spark.sql.execution.metric.SQLMetric) references[0] /* numOutputRows */).add(1);
/* 124 */
/* 125 */ // common sub-expressions
/* 126 */
/* 127 */ smj_mutableStateArray_0[1].reset();
/* 128 */
/* 129 */ smj_mutableStateArray_0[1].write(0, smj_value_4);
/* 130 */ append((smj_mutableStateArray_0[1].getRow()).copy());
/* 131 */
/* 132 */ }
/* 133 */ if (shouldStop()) return;
/* 134 */ }
/* 135 */ ((org.apache.spark.sql.execution.joins.SortMergeJoinExec) references[1] /* plan */).cleanupResources();
/* 136 */ }
/* 137 */
/* 138 */ }
```
### Why are the changes needed?
Improve the query CPU performance.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Added a unit test in `WholeStageCodegenSuite.scala`, and relied on existing unit tests in `ExistenceJoinSuite.scala`.
Closes apache#32547 from c21/smj-left-anti.
Authored-by: Cheng Su <chengsu@fb.com>
Signed-off-by: Takeshi Yamamuro <yamamuro@apache.org>
Can one of the admins verify this patch?
We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
### What changes were proposed in this pull request?
Optimize function getAliasedConstraints by overriding function ++ in class ExpressionSet.
Related issue: https://github.com/Kyligence/KAP/issues/13145
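For context, here is a rough, self-contained sketch of why overriding `++` helps (a simplified model, not Spark's actual `ExpressionSet` or the exact patch): the `++` inherited from Scala's immutable `Set` appends one element at a time through `+`, cloning the backing collections for every element, so accumulating the aliased constraints becomes quadratic; a bulk `++` clones once and then adds in place.
```scala
import scala.collection.mutable

// Simplified model of a canonicalizing set (illustrative names; not Spark's ExpressionSet).
class CanonSet private (
    private val keys: mutable.HashSet[String],
    private val items: mutable.ArrayBuffer[String]) {

  // stand-in for expression canonicalization
  private def canonical(e: String): String = e.trim.toLowerCase

  private def add(e: String): Unit = {
    val k = canonical(e)
    if (!keys.contains(k)) { keys += k; items += e }
  }

  // What the inherited ++ effectively does: one clone of the internal state per element.
  def +(e: String): CanonSet = {
    val next = new CanonSet(keys.clone(), items.clone())
    next.add(e)
    next
  }

  // The optimization sketched by this PR: clone once, then add every element in place.
  def ++(es: Iterable[String]): CanonSet = {
    val next = new CanonSet(keys.clone(), items.clone())
    es.foreach(next.add)
    next
  }

  def toSeq: Seq[String] = items.toSeq
}

object CanonSet {
  def empty: CanonSet = new CanonSet(mutable.HashSet.empty, mutable.ArrayBuffer.empty)
}

// Appending N aliased constraints through the bulk ++ copies the set once instead of N times:
// val constraints = CanonSet.empty ++ Seq("a = b", "B = c", "a = C")
```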