This repository was archived by the owner on Nov 15, 2024. It is now read-only.
Commit cde5f95
[SPARK-23963][SQL] Properly handle large number of columns in query on text-based Hive table
## What changes were proposed in this pull request?
TableReader would get disproportionately slower as the number of columns in the query increased.
I fixed the way TableReader looks up metadata for each column in the row. Previously, it fetched this data from linked lists, indexing each list by column number; for a linked list, each such access is an O(n) walk from the head, so processing a row became quadratic in the number of columns. Now it fetches this data from arrays, where indexing by column number is O(1).
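The performance idea behind the fix can be sketched outside of Spark. The example below is an illustration only, not the actual TableReader code; the `ColumnInfo` class and method names are hypothetical. It contrasts positional `get(i)` lookups on a `LinkedList`, which walk the list on every call, with a one-time copy into an array, where indexing is constant time.

```java
import java.util.LinkedList;
import java.util.List;

public class ColumnLookup {
    // Hypothetical stand-in for per-column metadata (illustration only).
    static final class ColumnInfo {
        final int ordinal;
        ColumnInfo(int ordinal) { this.ordinal = ordinal; }
    }

    // Before: LinkedList.get(i) traverses from the head, so touching every
    // column of a row costs O(numColumns^2) overall.
    static int sumOrdinalsViaList(List<ColumnInfo> cols) {
        int sum = 0;
        for (int i = 0; i < cols.size(); i++) {
            sum += cols.get(i).ordinal;   // O(i) per access on a linked list
        }
        return sum;
    }

    // After: copy the metadata into an array once, then index in O(1).
    static int sumOrdinalsViaArray(List<ColumnInfo> cols) {
        ColumnInfo[] arr = cols.toArray(new ColumnInfo[0]); // one-time O(n) copy
        int sum = 0;
        for (int i = 0; i < arr.length; i++) {
            sum += arr[i].ordinal;        // O(1) per access
        }
        return sum;
    }

    public static void main(String[] args) {
        List<ColumnInfo> cols = new LinkedList<>();
        for (int i = 0; i < 5; i++) {
            cols.add(new ColumnInfo(i));
        }
        // Both strategies produce the same result; only the cost differs.
        System.out.println(sumOrdinalsViaList(cols));  // 10
        System.out.println(sumOrdinalsViaArray(cols)); // 10
    }
}
```

The one-time array copy is amortized across every row read from the table, which is why the win grows with the column count.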
## How was this patch tested?
- Manual testing
- All sbt unit tests
- Python SQL tests
Author: Bruce Robbins <[email protected]>
Closes apache#21043 from bersprockets/tabreadfix.

Parent: 36f747b
File tree: 1 file changed (+1, -1)
- sql/hive/src/main/scala/org/apache/spark/sql/hive
Diff: 1 addition and 1 deletion, replacing line 384 (context lines 381-387 unchanged); the diff body itself was not preserved in this capture.