[SPARK-6628][SQL][Branch-2.1] Fix ClassCastException when executing sql statement 'insert into' on hbase table #18127
Conversation
Test build #77449 has finished for PR 18127 at commit
Jenkins, test this please.
Test build #77451 has finished for PR 18127 at commit
Hi @weiqingy, I was just wondering whether this is still in progress.
Thanks, @HyukjinKwon. Yes, but I will come back to this after I finish other work. Do I need to close this for now and reopen it at that time?
Thanks for your input, @weiqingy. I was just suggesting that we close PRs that have been inactive for a month against review comments and/or a non-successful Jenkins test result (for a good reason, of course). Would this take longer than a month?
Hi @weiqingy, just wanted to confirm whether this was fixed in Spark 2.4 or not, since I am facing the same issue when inserting records into a Hive-HBase table. Also, could you share the reason, if any, for not including this change in later versions? Please also let me know whether this issue will be fixed in an upcoming release.
I'm also getting this exception. I was checking the code in this pull request, and I couldn't find it merged in any branch/tag. Is this pull request necessary to fix the issue? If so, is there any temporary workaround?
|
@lhsvobodaj @weiqingy @HyukjinKwon I think there has been a regression in the current codebase, here: `spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveFileFormat.scala`, line 94 (commit 3e30a98).
I'm not sure it was a regression, as I couldn't find the fix merged in any branch. One possible reason for not merging this code is this Hive issue: HIVE-20678.
@lhsvobodaj OK, so that issue is fixed in Hive 4.0.0, but the problem is that we run on Cloudera's Distribution of Hadoop, which uses an older version of Hive, so there's no way around it for now :(
@racc We are also using CDH (5.11.2 and 6.1.0). The Hive fix for this issue is available on CDH 5.15.2 as per its release notes.
Please check if this still exists in the master branch and open a PR with a test.
Hi Team, with this configuration (Spark 2.4 / Hive 2.3.7 / Hadoop 2.7 / HBase 1.4.9), without CDH.
Hi, Team,
+1 |
## What changes were proposed in this pull request?

This PR proposes to close stale PRs, mostly the same instances as apache#18017:

Closes apache#14085 - [SPARK-16408][SQL] SparkSQL Added file get Exception: is a directory …
Closes apache#14239 - [SPARK-16593] [CORE] [WIP] Provide a pre-fetch mechanism to accelerate shuffle stage.
Closes apache#14567 - [SPARK-16992][PYSPARK] Python Pep8 formatting and import reorganisation
Closes apache#14579 - [SPARK-16921][PYSPARK] RDD/DataFrame persist()/cache() should return Python context managers
Closes apache#14601 - [SPARK-13979][Core] Killed executor is re spawned without AWS key…
Closes apache#14830 - [SPARK-16992][PYSPARK][DOCS] import sort and autopep8 on Pyspark examples
Closes apache#14963 - [SPARK-16992][PYSPARK] Virtualenv for Pylint and pep8 in lint-python
Closes apache#15227 - [SPARK-17655][SQL] Remove unused variables declarations and definations in a WholeStageCodeGened stage
Closes apache#15240 - [SPARK-17556] [CORE] [SQL] Executor side broadcast for broadcast joins
Closes apache#15405 - [SPARK-15917][CORE] Added support for number of executors in Standalone [WIP]
Closes apache#16099 - [SPARK-18665][SQL] set statement state to "ERROR" after user cancel job
Closes apache#16445 - [SPARK-19043][SQL] Make SparkSQLSessionManager more configurable
Closes apache#16618 - [SPARK-14409][ML][WIP] Add RankingEvaluator
Closes apache#16766 - [SPARK-19426][SQL] Custom coalesce for Dataset
Closes apache#16832 - [SPARK-19490][SQL] ignore case sensitivity when filtering hive partition columns
Closes apache#17052 - [SPARK-19690][SS] Join a streaming DataFrame with a batch DataFrame which has an aggregation may not work
Closes apache#17267 - [SPARK-19926][PYSPARK] Make pyspark exception more user-friendly
Closes apache#17371 - [SPARK-19903][PYSPARK][SS] window operator miss the `watermark` metadata of time column
Closes apache#17401 - [SPARK-18364][YARN] Expose metrics for YarnShuffleService
Closes apache#17519 - [SPARK-15352][Doc] follow-up: add configuration docs for topology-aware block replication
Closes apache#17530 - [SPARK-5158] Access kerberized HDFS from Spark standalone
Closes apache#17854 - [SPARK-20564][Deploy] Reduce massive executor failures when executor count is large (>2000)
Closes apache#17979 - [SPARK-19320][MESOS][WIP] allow specifying a hard limit on number of gpus required in each spark executor when running on mesos
Closes apache#18127 - [SPARK-6628][SQL][Branch-2.1] Fix ClassCastException when executing sql statement 'insert into' on hbase table
Closes apache#18236 - [SPARK-21015] Check field name is not null and empty in GenericRowWit…
Closes apache#18269 - [SPARK-21056][SQL] Use at most one spark job to list files in InMemoryFileIndex
Closes apache#18328 - [SPARK-21121][SQL] Support changing storage level via the spark.sql.inMemoryColumnarStorage.level variable
Closes apache#18354 - [SPARK-18016][SQL][CATALYST][BRANCH-2.1] Code Generation: Constant Pool Limit - Class Splitting
Closes apache#18383 - [SPARK-21167][SS] Set kafka clientId while fetch messages
Closes apache#18414 - [SPARK-21169] [core] Make sure to update application status to RUNNING if executors are accepted and RUNNING after recovery
Closes apache#18432 - resolve com.esotericsoftware.kryo.KryoException
Closes apache#18490 - [SPARK-21269][Core][WIP] Fix FetchFailedException when enable maxReqSizeShuffleToMem and KryoSerializer
Closes apache#18585 - SPARK-21359
Closes apache#18609 - Spark SQL merge small files to big files Update InsertIntoHiveTable.scala

Added:
Closes apache#18308 - [SPARK-21099][Spark Core] INFO Log Message Using Incorrect Executor I…
Closes apache#18599 - [SPARK-21372] spark writes one log file even I set the number of spark_rotate_log to 0
Closes apache#18619 - [SPARK-21397][BUILD] Maven shade plugin adding dependency-reduced-pom.xml to …
Closes apache#18667 - Fix the simpleString used in error messages
Closes apache#18782 - Branch 2.1

Added:
Closes apache#17694 - [SPARK-12717][PYSPARK] Resolving race condition with pyspark broadcasts when using multiple threads

Added:
Closes apache#16456 - [SPARK-18994] clean up the local directories for application in future by annother thread
Closes apache#18683 - [SPARK-21474][CORE] Make number of parallel fetches from a reducer configurable
Closes apache#18690 - [SPARK-21334][CORE] Add metrics reporting service to External Shuffle Server

Added:
Closes apache#18827 - Merge pull request 1 from apache/master

## How was this patch tested?

N/A

Author: hyukjinkwon <[email protected]>

Closes apache#18780 from HyukjinKwon/close-prs.
What changes were proposed in this pull request?
The issue of SPARK-6628 is:

`HiveHBaseTableOutputFormat` cannot be cast to `HiveOutputFormat`

The reason is:

Both `HiveHBaseTableOutputFormat` and `HiveOutputFormat` extend/implement `OutputFormat`, so they cannot be cast to each other. For Spark 1.6, 2.0, and 2.1, Spark initializes the `outputFormat` in `SparkHiveWriterContainer`. For Spark 2.2+, Spark initializes the `outputFormat` in `HiveFileFormat`. In both cases the `outputFormat` has to be a `HiveOutputFormat`. However, when users insert data into HBase, the output format is `HiveHBaseTableOutputFormat`, which is not an instance of `HiveOutputFormat`.

This PR makes `outputFormat` null when the `OutputFormat` is not an instance of `HiveOutputFormat`. This change should be safe since `outputFormat` is only used to get the file extension in `getFileExtension()`. We can also submit this PR to the master branch.
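The mechanics can be illustrated with a toy model. The class names below mirror the real Hadoop/Hive ones but are local stand-ins; this is a sketch of the cast-or-null pattern the PR describes, not Spark's actual code:

```java
// Hypothetical, minimal model of the hierarchy described above: both
// formats implement a common OutputFormat interface, but neither extends
// the other, so a blind cast between them fails with ClassCastException.
interface OutputFormat {}
class HiveOutputFormat implements OutputFormat {}
class HiveHBaseTableOutputFormat implements OutputFormat {}

public class CastDemo {
    // Mirrors the PR's approach: return the format as HiveOutputFormat only
    // when it really is one, and null otherwise, instead of casting blindly.
    static HiveOutputFormat asHiveOutputFormatOrNull(OutputFormat of) {
        return (of instanceof HiveOutputFormat) ? (HiveOutputFormat) of : null;
    }

    public static void main(String[] args) {
        System.out.println(asHiveOutputFormatOrNull(new HiveOutputFormat()) != null);   // true
        System.out.println(asHiveOutputFormatOrNull(new HiveHBaseTableOutputFormat())); // null
    }
}
```

Returning null is tolerable here only because, per the description above, the value is consumed solely by `getFileExtension()`, which can fall back to no extension.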
How was this patch tested?
Tested manually.
(1) create a HBase table with Hive:
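The exact DDL was not captured above. A typical Hive-on-HBase table definition looks like the following sketch (the table name `testwq100` is from the PR; the column layout and column-family mapping are assumptions for illustration):

```sql
-- Hypothetical column layout; the key maps to the HBase row key and
-- 'value' to a cell in column family cf1.
CREATE TABLE testwq100 (key INT, value STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf1:val')
TBLPROPERTIES ('hbase.table.name' = 'testwq100');
```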
(2) verify:
Before:

Inserting data into the HBase table `testwq100` from Spark SQL throws the ClassCastException.

After:

The ClassCastException is gone; the insert succeeds.
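For reference, the verification step described above amounts to running an insert like the following from Spark SQL (a sketch; the values and column layout of `testwq100` are assumptions, not taken from the PR):

```sql
-- Before the fix this fails with the ClassCastException; after it, the
-- row is written to the HBase-backed table and can be read back.
INSERT INTO testwq100 VALUES (1, 'a');
SELECT * FROM testwq100;
```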