24 changes: 0 additions & 24 deletions core/src/test/resources/core-site.xml

This file was deleted.

3 changes: 2 additions & 1 deletion core/src/test/scala/org/apache/spark/SparkContextSuite.scala
@@ -1155,11 +1155,12 @@ class SparkContextSuite extends SparkFunSuite with LocalSparkContext with Eventu
val testKey = "hadoop.tmp.dir"
val bufferKey = "io.file.buffer.size"
val hadoopConf0 = new Configuration()
hadoopConf0.set(testKey, "/tmp/hive_zero")
Member Author:
This is still a Hadoop conf, although it's an overlay property.


val hiveConfFile = Utils.getContextOrSparkClassLoader.getResource("hive-site.xml")
assert(hiveConfFile != null)
hadoopConf0.addResource(hiveConfFile)
-assert(hadoopConf0.get(testKey) === "/tmp/hive_one")
+assert(hadoopConf0.get(testKey) === "/tmp/hive_zero")
Member Author:
This is okay because the Apache Spark UT aims to test lines 1170–1172.
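The behavior the comments above rely on can be sketched in plain Scala. This is a simplified model of Hadoop Configuration's overlay semantics, not the real org.apache.hadoop.conf.Configuration: values set programmatically via set() go into an overlay that later addResource() calls cannot override, which is why hadoopConf0.get(testKey) stays "/tmp/hive_zero" even after hive-site.xml (which sets it to "/tmp/hive_one") is added. The class and names here are illustrative only.

```scala
import scala.collection.mutable

// Minimal stand-in for Hadoop's Configuration overlay behavior (hypothetical
// sketch, not the real implementation).
class MiniConf {
  private val overlay = mutable.Map.empty[String, String]   // set() writes here
  private val resources = mutable.Map.empty[String, String] // XML resources land here

  def set(key: String, value: String): Unit = overlay(key) = value

  // Stand-in for addResource(hive-site.xml): resource values only apply
  // where no programmatic overlay exists.
  def addResource(props: Map[String, String]): Unit = resources ++= props

  // Overlay always wins over loaded resources.
  def get(key: String): Option[String] =
    overlay.get(key).orElse(resources.get(key))
}

object OverlayDemo {
  def main(args: Array[String]): Unit = {
    val conf = new MiniConf
    conf.set("hadoop.tmp.dir", "/tmp/hive_zero")                // like hadoopConf0.set(...)
    conf.addResource(Map("hadoop.tmp.dir" -> "/tmp/hive_one"))  // like hive-site.xml
    assert(conf.get("hadoop.tmp.dir").contains("/tmp/hive_zero"))
  }
}
```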

    assert(sc.hadoopConfiguration.get(testKey) === "/tmp/hive_one",
      "hive configs have higher priority than hadoop ones ")
    assert(sc.hadoopConfiguration.get(bufferKey).toInt === 65536,
      "spark configs have higher priority than hive ones")
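The precedence these two asserts check can be sketched as a layered merge: spark.hadoop.* entries override hive-site.xml values, which in turn override plain Hadoop ones. This is a hypothetical sketch of that ordering, not Spark's actual merge code; the property values mirror the test above, but the function and object names are illustrative only.

```scala
object PrecedenceDemo {
  // Later maps win, so the argument order encodes the precedence:
  // hadoop < hive < spark.
  def merged(hadoop: Map[String, String],
             hive: Map[String, String],
             spark: Map[String, String]): Map[String, String] =
    hadoop ++ hive ++ spark

  def main(args: Array[String]): Unit = {
    val hadoop = Map("hadoop.tmp.dir" -> "/tmp/hadoop", "io.file.buffer.size" -> "4096")
    val hive   = Map("hadoop.tmp.dir" -> "/tmp/hive_one", "io.file.buffer.size" -> "201811")
    val spark  = Map("io.file.buffer.size" -> "65536")
    val conf = merged(hadoop, hive, spark)
    assert(conf("hadoop.tmp.dir") == "/tmp/hive_one")  // hive configs beat hadoop ones
    assert(conf("io.file.buffer.size") == "65536")     // spark configs beat hive ones
  }
}
```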

Member:
With this change, "/tmp/hive_one" in hive-site.xml overlays "/tmp/hive_{user}", not "/tmp/hive_zero"; that overlay happens in SparkHadoopUtil.appendS3AndSparkHadoopHiveConfigurations.

Since hadoopConf0 is not passed to SparkHadoopUtil while core-site.xml is, we can remove those asserts for hadoopConf0 to reduce test flakiness; they are only used to check that hive-site.xml overrides core-site.xml.

Member:
@yaooqinn, can you open a PR for that, or push some changes into this PR? I think the test is not flaky but broken. It doesn't look obvious because we don't always run YarnClusterSuite.

assert(hadoopConf0.get(bufferKey) === "201811")

val sparkConf = new SparkConf()