Conversation

@zsxwing (Member) commented Jul 9, 2020

### What changes were proposed in this pull request?

Force the initialization of Hadoop `VersionInfo` in `HiveExternalCatalog` to make sure Hive can get the Hadoop version when using the isolated classloader.
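A minimal sketch of the idea (the object name below is hypothetical, not the actual change in `HiveExternalCatalog`): touch Hadoop's `VersionInfo` while the regular application classloader is still the thread context classloader, so its static state is populated before the isolated classloader takes over.

```scala
import org.apache.hadoop.util.VersionInfo

object EagerHadoopVersionInfo {
  // Referencing getVersion forces VersionInfo's static initialization, which reads
  // common-version-info.properties from the hadoop-common jar on the classpath.
  val hadoopVersion: String = VersionInfo.getVersion
}
```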

### Why are the changes needed?

This is a regression in Spark 3.0.0 because we switched the default Hive execution version from 1.2.1 to 2.3.7.

Spark allows the user to set `spark.sql.hive.metastore.jars` to specify the jars used to access the Hive Metastore. These jars are loaded by the isolated classloader. Because Hadoop classes are shared with the isolated classloader, the user doesn't need to add Hadoop jars to `spark.sql.hive.metastore.jars`, which means the hadoop-common jar is not available inside the isolated classloader (see the configuration sketch after the stack trace). If Hadoop `VersionInfo` is not initialized before we switch to the isolated classloader, and we try to initialize it using the isolated classloader (the current thread context classloader), it will fail and report `Unknown`, which causes Hive to throw the following exception:

```
java.lang.RuntimeException: Illegal Hadoop Version: Unknown (expected A.B.* format)
	at org.apache.hadoop.hive.shims.ShimLoader.getMajorVersion(ShimLoader.java:147)
	at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:122)
	at org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:88)
	at org.apache.hadoop.hive.metastore.ObjectStore.getDataSourceProps(ObjectStore.java:377)
	at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:268)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:517)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:482)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:544)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:370)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:219)
	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:67)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1548)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3080)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3108)
	at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3349)
	at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:217)
	at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:204)
	at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:331)
	at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:292)
	at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:262)
	at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:247)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:543)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:511)
	at org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:175)
	at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:128)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:301)
	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:431)
	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:324)
	at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:72)
	at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:71)
	at org.apache.spark.sql.hive.client.HadoopVersionInfoSuite.$anonfun$new$1(HadoopVersionInfoSuite.scala:63)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
```
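For context, a hypothetical configuration that ends up on this code path (the paths are placeholders): the metastore jars location deliberately contains no hadoop-common jar, because Hadoop classes are shared with the isolated classloader.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("isolated-hive-metastore-client")
  .config("spark.sql.hive.metastore.version", "2.3.7")
  // A classpath of Hive metastore jars only; Hadoop jars are not listed here.
  .config("spark.sql.hive.metastore.jars", "/path/to/hive-2.3.7-lib/*")
  .enableHiveSupport()
  .getOrCreate()
```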

Technically, this is an issue in Hadoop's `VersionInfo` which has already been fixed: https://issues.apache.org/jira/browse/HADOOP-14067. But since we still support old Hadoop versions, we should fix it on the Spark side as well.
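To illustrate the failure mode, here is a simplified sketch (not Hadoop's actual implementation): version information is read from a properties resource through a classloader, and when that classloader cannot see hadoop-common the lookup falls back to `Unknown`.

```scala
import java.util.Properties

def readHadoopVersion(loader: ClassLoader): String = {
  val props = new Properties()
  // hadoop-common ships common-version-info.properties; if the given classloader
  // cannot find it, the default value below is returned instead.
  val stream = loader.getResourceAsStream("common-version-info.properties")
  if (stream != null) {
    try props.load(stream) finally stream.close()
  }
  props.getProperty("version", "Unknown")
}
```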

Why does this issue start to happen in Spark 3.0.0?

In Spark 2.4.x, we use Hive 1.2.1 by default, which triggers `VersionInfo` initialization in the static code of the `Hive` class. This happens when we load the `HiveClientImpl` class, because the `HiveClientImpl.client` method refers to the `Hive` class. At that moment, the thread context classloader is not yet the isolated classloader, so it can access the hadoop-common jar on the classpath and initialize `VersionInfo` correctly.

In Spark 3.0.0, we use Hive 2.3.7. The static code of the `Hive` class no longer accesses `VersionInfo` because of the change in https://issues.apache.org/jira/browse/HIVE-11657. Instead, `VersionInfo` is accessed when creating a `Hive` object (see the above stack trace). This happens here: https://github.com/apache/spark/blob/v3.0.0/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala#L260. But we switch to the isolated classloader before calling `HiveClientImpl.client` (see https://github.com/apache/spark/blob/v3.0.0/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala#L283). This is exactly the situation described above: if Hadoop `VersionInfo` is not initialized before we switch to the isolated classloader, and we try to initialize it using the isolated classloader (the current thread context classloader), it will fail.
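An ordering sketch under stated assumptions (`isolatedLoader` is a stand-in for the real isolated classloader, not the actual Spark code): the eager initialization only helps if it runs before the thread context classloader is switched, which is what the fix guarantees.

```scala
import java.net.{URL, URLClassLoader}

// Placeholder for the real isolated classloader built from spark.sql.hive.metastore.jars.
val isolatedLoader: ClassLoader = new URLClassLoader(Array.empty[URL], null)

// Initialize while the application classloader can still see hadoop-common.
val hadoopVersion = org.apache.hadoop.util.VersionInfo.getVersion

val original = Thread.currentThread().getContextClassLoader
try {
  Thread.currentThread().setContextClassLoader(isolatedLoader)
  // ... create the Hive client here; ShimLoader now sees a real Hadoop version ...
} finally {
  Thread.currentThread().setContextClassLoader(original)
}
```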

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

The new regression test added in this PR.

Note that the new UT doesn't fail with the default profiles (-Phadoop-3.2) because the issue is already fixed in Hadoop 3.1. Please use the following to verify this:

```
build/sbt -Phadoop-2.7 -Phive "hive/testOnly *.HadoopVersionInfoSuite"
```

@zsxwing requested a review from cloud-fan July 9, 2020 16:25
```scala
  sharesHadoopClasses = true)
val jars = client.classLoader.getParent.asInstanceOf[URLClassLoader].getURLs
  .map(u => new File(u.toURI))
// Drop all Hadoop jars to use the existing Hadoop jars on the classpath
```
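A hypothetical helper in the spirit of the comment above (the actual filter used by the test may differ): drop any resolved jar whose name mentions Hadoop, so the isolated classloader has to rely on the Hadoop classes shared from the application classloader.

```scala
import java.io.File

def dropHadoopJars(jars: Seq[File]): Seq[File] =
  jars.filterNot(_.getName.toLowerCase.contains("hadoop"))
```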
Member

Thank you for making this test case, @zsxwing. So, does this artificial dropping mimic the situation?

Member Author

Yep

"org.apache.spark.ml.classification.LogisticRegressionSuite",
"org.apache.spark.ml.classification.LinearSVCSuite",
"org.apache.spark.sql.SQLQueryTestSuite",
"org.apache.spark.sql.hive.client.HadoopVersionInfoSuite",
Contributor

AFAIK this only works for SBT. Maybe we need to disable the test by default to avoid breaking the maven build.

cc @gengliangwang

Member Author

This is fine. The test doesn't fail in the Maven build; it will still pass there, it just won't be able to catch the issue.

Contributor

ah i see

@dongjoon-hyun changed the title [SPARK-32256][SQL]Force to initialize Hadoop VersionInfo in HiveExternalCatalog → [SPARK-32256][SQL][test-hadoop2.7] Force to initialize Hadoop VersionInfo in HiveExternalCatalog on Jul 9, 2020
@dongjoon-hyun (Member)

Retest this please.

@dongjoon-hyun (Member) left a comment

+1, LGTM. I also verified this patch with Hadoop 2.7.
Also, retriggered Jenkins with Hadoop 2.7.
Thanks, @zsxwing.

@SparkQA commented Jul 9, 2020

Test build #125510 has finished for PR 29059 at commit 074db15.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@dongjoon-hyun (Member)

Retest this please.

@viirya (Member) left a comment

Great analysis!

@SparkQA commented Jul 10, 2020

Test build #125518 has finished for PR 29059 at commit 074db15.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@gengliangwang (Member) left a comment

LGTM. The PR description is great and detailed!

```scala
sparkConf = new SparkConf(),
hadoopConf = hadoopConf,
config = HiveClientBuilder.buildConf(Map.empty),
ivyPath = HiveClientBuilder.ivyPath,
```
Member Author
Just realized this will use the same ivy path as HiveExternalCatalogVersionsSuite on Jenkins.

```
2020-07-09 17:21:08.423 - stdout> py4j.protocol.Py4JJavaError: An error occurred while calling o29.sql.
2020-07-09 17:21:08.423 - stdout> : java.lang.NullPointerException
2020-07-09 17:21:08.423 - stdout> 	at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1792)
2020-07-09 17:21:08.423 - stdout> 	at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1769)
2020-07-09 17:21:08.423 - stdout> 	at org.apache.commons.io.IOUtils.copy(IOUtils.java:1744)
2020-07-09 17:21:08.423 - stdout> 	at org.apache.commons.io.IOUtils.toByteArray(IOUtils.java:462)
2020-07-09 17:21:08.423 - stdout> 	at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:216)
2020-07-09 17:21:08.423 - stdout> 	at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
2020-07-09 17:21:08.423 - stdout> 	at java.lang.ClassLoader.loadClass(ClassLoader.java:411)
2020-07-09 17:21:08.423 - stdout> 	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
2020-07-09 17:21:08.423 - stdout> 	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:262)
2020-07-09 17:21:08.423 - stdout> 	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:385)
2020-07-09 17:21:08.423 - stdout> 	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:287)
2020-07-09 17:21:08.423 - stdout> 	at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
2020-07-09 17:21:08.423 - stdout> 	at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
2020-07-09 17:21:08.423 - stdout> 	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:195)
2020-07-09 17:21:08.423 - stdout> 	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anon$1.<init>(HiveSessionStateBuilder.scala:69)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.hive.HiveSessionStateBuilder.analyzer(HiveSessionStateBuilder.scala:69)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
2020-07-09 17:21:08.424 - stdout> 	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
2020-07-09 17:21:08.424 - stdout> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-07-09 17:21:08.424 - stdout> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-07-09 17:21:08.424 - stdout> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-07-09 17:21:08.424 - stdout> 	at java.lang.reflect.Method.invoke(Method.java:498)
2020-07-09 17:21:08.424 - stdout> 	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
2020-07-09 17:21:08.424 - stdout> 	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
2020-07-09 17:21:08.424 - stdout> 	at py4j.Gateway.invoke(Gateway.java:282)
2020-07-09 17:21:08.424 - stdout> 	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
2020-07-09 17:21:08.424 - stdout> 	at py4j.commands.CallCommand.execute(CallCommand.java:79)
2020-07-09 17:21:08.424 - stdout> 	at py4j.GatewayConnection.run(GatewayConnection.java:238)
2020-07-09 17:21:08.424 - stdout> 	at java.lang.Thread.run(Thread.java:748)
```

Let me use a temp directory instead and see how long it will take if we don't share the same ivy path.
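A hypothetical sketch of that alternative: resolve the metastore jars into a fresh temporary ivy directory so this suite no longer shares a cache with HiveExternalCatalogVersionsSuite.

```scala
import java.nio.file.Files

// Fresh per-run ivy directory for the isolated Hive client's downloaded jars.
val tempIvyDir = Files.createTempDirectory("hadoop-version-info-ivy").toFile
// e.g. pass ivyPath = Some(tempIvyDir.getAbsolutePath) instead of the shared
// HiveClientBuilder.ivyPath when building the isolated Hive client.
```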

@zsxwing (Member Author) commented Jul 10, 2020

Although the temp ivy directory adds 1 minute to the test on my laptop, I think it should be okay since SBT tests run in parallel. It would be pretty complicated to coordinate two tests in different JVMs, and that's not worth it for a one-line fix.

@SparkQA commented Jul 10, 2020

Test build #125558 has finished for PR 29059 at commit 8f94ed6.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

@HyukjinKwon (Member)

retest this please

@SparkQA commented Jul 10, 2020

Test build #125573 has finished for PR 29059 at commit 8f94ed6.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@HyukjinKwon (Member)

Merged to master and branch-3.0.

HyukjinKwon pushed a commit that referenced this pull request Jul 10, 2020
…Info in HiveExternalCatalog

Closes #29059 from zsxwing/SPARK-32256.

Authored-by: Shixiong Zhu <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
(cherry picked from commit c8779d9)
Signed-off-by: HyukjinKwon <[email protected]>
@zsxwing zsxwing deleted the SPARK-32256 branch July 10, 2020 17:35