[SPARK-34862][SQL] Support nested column in ORC vectorized reader #31958
Closed
Commits (8)
- d1d6585 Support nested column in ORC vectorized reader
- fbc8c6c Add missing license header
- c911e89 Add DirectAbstractMethodProblem of ColumnVector class in MimaExcludes…
- 3bfc03a Try to update MimaExcludes to fix MiMa test
- f30cc88 Do not allow UserDefinedType for vectorization to fix unit test failure
- 9cd3bc5 Address all comments
- fda6b12 Address indentation comments
- 44feacc Address comment for style
...re/src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcArrayColumnVector.java (115 additions, 0 deletions)
```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.sql.execution.datasources.orc;

import org.apache.hadoop.hive.ql.exec.vector.ColumnVector;

import org.apache.spark.sql.types.ArrayType;
import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.Decimal;
import org.apache.spark.sql.vectorized.ColumnarArray;
import org.apache.spark.sql.vectorized.ColumnarMap;
import org.apache.spark.unsafe.types.UTF8String;

/**
 * A column vector implementation for Spark's {@link ArrayType}.
 */
public class OrcArrayColumnVector extends OrcColumnVector {
  private final OrcColumnVector data;
  private final long[] offsets;
  private final long[] lengths;

  OrcArrayColumnVector(
      DataType type,
      ColumnVector vector,
      OrcColumnVector data,
      long[] offsets,
      long[] lengths) {

    super(type, vector);

    this.data = data;
    this.offsets = offsets;
    this.lengths = lengths;
  }

  @Override
  public ColumnarArray getArray(int rowId) {
    return new ColumnarArray(data, (int) offsets[rowId], (int) lengths[rowId]);
  }

  @Override
  public boolean getBoolean(int rowId) {
    throw new UnsupportedOperationException();
  }

  @Override
  public byte getByte(int rowId) {
    throw new UnsupportedOperationException();
  }

  @Override
  public short getShort(int rowId) {
    throw new UnsupportedOperationException();
  }

  @Override
  public int getInt(int rowId) {
    throw new UnsupportedOperationException();
  }

  @Override
  public long getLong(int rowId) {
    throw new UnsupportedOperationException();
  }

  @Override
  public float getFloat(int rowId) {
    throw new UnsupportedOperationException();
  }

  @Override
  public double getDouble(int rowId) {
    throw new UnsupportedOperationException();
  }

  @Override
  public Decimal getDecimal(int rowId, int precision, int scale) {
    throw new UnsupportedOperationException();
  }

  @Override
  public UTF8String getUTF8String(int rowId) {
    throw new UnsupportedOperationException();
  }

  @Override
  public byte[] getBinary(int rowId) {
    throw new UnsupportedOperationException();
  }

  @Override
  public ColumnarMap getMap(int rowId) {
    throw new UnsupportedOperationException();
  }

  @Override
  public org.apache.spark.sql.vectorized.ColumnVector getChild(int ordinal) {
    throw new UnsupportedOperationException();
  }
}
```
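The offsets/lengths layout that backs `getArray` above can be illustrated with a minimal standalone sketch (hypothetical names, not part of the PR): child elements for all rows live in one flat vector, and each row's array is the slice starting at `offsets[row]` with `lengths[row]` elements.

```java
// Minimal illustration of the flattened-array layout used by the
// PR's OrcArrayColumnVector. ArraySliceDemo and its fields are
// hypothetical stand-ins, not Spark classes.
public class ArraySliceDemo {
  static long[] child = {1, 2, 3, 4, 5, 6}; // all rows' elements, flattened
  static long[] offsets = {0, 2, 2, 3};     // where each row's slice starts
  static long[] lengths = {2, 0, 1, 3};     // element count per row

  // Analogue of getArray(rowId): copy out the row's slice.
  static long[] getArray(int rowId) {
    int start = (int) offsets[rowId];
    int len = (int) lengths[rowId];
    long[] out = new long[len];
    System.arraycopy(child, start, out, 0, len);
    return out;
  }

  public static void main(String[] args) {
    System.out.println(java.util.Arrays.toString(getArray(0))); // [1, 2]
    System.out.println(java.util.Arrays.toString(getArray(1))); // []
    System.out.println(java.util.Arrays.toString(getArray(3))); // [4, 5, 6]
  }
}
```

Note that the real class avoids the copy entirely: `ColumnarArray` just records the child vector plus the start and length, so slicing is O(1).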
...e/src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcAtomicColumnVector.java (161 additions, 0 deletions)
```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.sql.execution.datasources.orc;

import java.math.BigDecimal;

import org.apache.hadoop.hive.ql.exec.vector.*;

import org.apache.spark.sql.catalyst.util.DateTimeUtils;
import org.apache.spark.sql.catalyst.util.RebaseDateTime;
import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.DateType;
import org.apache.spark.sql.types.Decimal;
import org.apache.spark.sql.types.TimestampType;
import org.apache.spark.sql.vectorized.ColumnarArray;
import org.apache.spark.sql.vectorized.ColumnarMap;
import org.apache.spark.unsafe.types.UTF8String;

/**
 * A column vector implementation for Spark's AtomicType.
 */
public class OrcAtomicColumnVector extends OrcColumnVector {
  private final boolean isTimestamp;
  private final boolean isDate;

  // Column vector for each type. Only 1 is populated for any type.
  private LongColumnVector longData;
  private DoubleColumnVector doubleData;
  private BytesColumnVector bytesData;
  private DecimalColumnVector decimalData;
  private TimestampColumnVector timestampData;

  OrcAtomicColumnVector(DataType type, ColumnVector vector) {
    super(type, vector);

    if (type instanceof TimestampType) {
      isTimestamp = true;
    } else {
      isTimestamp = false;
    }

    if (type instanceof DateType) {
      isDate = true;
    } else {
      isDate = false;
    }

    if (vector instanceof LongColumnVector) {
      longData = (LongColumnVector) vector;
    } else if (vector instanceof DoubleColumnVector) {
      doubleData = (DoubleColumnVector) vector;
    } else if (vector instanceof BytesColumnVector) {
      bytesData = (BytesColumnVector) vector;
    } else if (vector instanceof DecimalColumnVector) {
      decimalData = (DecimalColumnVector) vector;
    } else if (vector instanceof TimestampColumnVector) {
      timestampData = (TimestampColumnVector) vector;
    } else {
      throw new UnsupportedOperationException();
    }
  }

  @Override
  public boolean getBoolean(int rowId) {
    return longData.vector[getRowIndex(rowId)] == 1;
  }

  @Override
  public byte getByte(int rowId) {
    return (byte) longData.vector[getRowIndex(rowId)];
  }

  @Override
  public short getShort(int rowId) {
    return (short) longData.vector[getRowIndex(rowId)];
  }

  @Override
  public int getInt(int rowId) {
    int value = (int) longData.vector[getRowIndex(rowId)];
    if (isDate) {
      return RebaseDateTime.rebaseJulianToGregorianDays(value);
    } else {
      return value;
    }
  }

  @Override
  public long getLong(int rowId) {
    int index = getRowIndex(rowId);
    if (isTimestamp) {
      return DateTimeUtils.fromJavaTimestamp(timestampData.asScratchTimestamp(index));
    } else {
      return longData.vector[index];
    }
  }

  @Override
  public float getFloat(int rowId) {
    return (float) doubleData.vector[getRowIndex(rowId)];
  }

  @Override
  public double getDouble(int rowId) {
    return doubleData.vector[getRowIndex(rowId)];
  }

  @Override
  public Decimal getDecimal(int rowId, int precision, int scale) {
    if (isNullAt(rowId)) return null;
    BigDecimal data = decimalData.vector[getRowIndex(rowId)].getHiveDecimal().bigDecimalValue();
    return Decimal.apply(data, precision, scale);
  }

  @Override
  public UTF8String getUTF8String(int rowId) {
    if (isNullAt(rowId)) return null;
    int index = getRowIndex(rowId);
    BytesColumnVector col = bytesData;
    return UTF8String.fromBytes(col.vector[index], col.start[index], col.length[index]);
  }

  @Override
  public byte[] getBinary(int rowId) {
    if (isNullAt(rowId)) return null;
    int index = getRowIndex(rowId);
    byte[] binary = new byte[bytesData.length[index]];
    System.arraycopy(bytesData.vector[index], bytesData.start[index], binary, 0, binary.length);
    return binary;
  }

  @Override
  public ColumnarArray getArray(int rowId) {
    throw new UnsupportedOperationException();
  }

  @Override
  public ColumnarMap getMap(int rowId) {
    throw new UnsupportedOperationException();
  }

  @Override
  public org.apache.spark.sql.vectorized.ColumnVector getChild(int ordinal) {
    throw new UnsupportedOperationException();
  }
}
```
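A notable design point above is that ORC's Hive vectors store all integral types (boolean, byte, short, int, long) in a single `long[]`, and the Spark accessors recover the narrower type with a cast. A minimal standalone sketch of that dispatch (hypothetical class and data, not part of the PR):

```java
// Illustration of how one long-backed vector serves several Spark
// atomic types via narrowing casts, mirroring the getBoolean/getByte/
// getInt accessors of OrcAtomicColumnVector. AtomicCastDemo is a
// hypothetical stand-in, not a Spark class.
public class AtomicCastDemo {
  static long[] vector = {1L, 300L, 70000L};

  static boolean getBoolean(int rowId) { return vector[rowId] == 1; }
  static byte getByte(int rowId) { return (byte) vector[rowId]; }
  static short getShort(int rowId) { return (short) vector[rowId]; }
  static int getInt(int rowId) { return (int) vector[rowId]; }

  public static void main(String[] args) {
    System.out.println(getBoolean(0)); // true
    System.out.println(getByte(1));    // 44 (300 does not fit a byte; narrowing keeps the low 8 bits)
    System.out.println(getInt(2));     // 70000
  }
}
```

The narrowing is safe in the real reader because the ORC schema guarantees each value was written within the declared type's range; the demo's out-of-range 300 only shows what the cast itself does.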
Review discussion (on the MiMa failure):

> This is weird, where do we change `org.apache.spark.sql.vectorized.ColumnVector` in this PR?

@cloud-fan - yeah, it's weird. We don't change the `ColumnVector` class at all. Do you have any idea how to debug this? I am still checking why, thanks.

> Maybe it's a bug in MiMa; not a big deal, as we know this PR doesn't break binary compatibility.

@cloud-fan - spent some time checking, but still not sure where the issue is, so I agree with you that it might be a bug in MiMa.