[SPARK-31935][SQL][TESTS][FOLLOWUP] Fix the test case for Hadoop2/3 #28796
Conversation
      spark.readStream.option("fs.defaultFS", defaultFs).text(path)
    }.getMessage
-   assert(message == expectMessage)
+   assert(message.filterNot(Set(':', '"').contains) == expectMessage)
We need to remove : at line 539, too.
      dataSource invokePrivate checkAndGlobPathIfNecessary(false, false)
    }.getMessage
-   assert(message.equals("No FileSystem for scheme: nonexistsFs"))
+   val expectMessage = "No FileSystem for scheme nonexistFS"
nonexistFS -> nonexistsFs?
Well, then I would prefer nonExistingFS. I was trying to keep the naming simple.
Let me change them all since you are asking.
I asked this because this test case still fails:
[info] - Data source options should be propagated in method checkAndGlobPathIfNecessary *** FAILED *** (599 milliseconds)
[info] "... for scheme nonexist[sFs]" did not equal "... for scheme nonexist[FS]" (DataSourceSuite.scala:146)
Actually, I don't care about the naming here if it passes with -Phadoop-3.2.
I see :)
dongjoon-hyun left a comment:
+1, LGTM. Thank you so much for recovering Hadoop 3.2, @gengliangwang .
I tested with the following commands:
build/sbt "sql/testOnly *.FileStreamSourceSuite -- -z SPARK-31935"
build/sbt "sql/testOnly *.FileStreamSourceSuite -- -z SPARK-31935" -Phadoop-3.2
build/sbt "sql/testOnly *.DataSourceSuite -- -z checkAndGlobPathIfNecessary"
build/sbt "sql/testOnly *.DataSourceSuite -- -z checkAndGlobPathIfNecessary" -Phadoop-3.2
Merged to master!
Test build #123814 has finished for PR 28796 at commit
^^^ the documentation generation error seems unrelated.
Oh..
Is it the same as the last commit, e3bb417?
It seems to be a different commit. I'll take a look~
The latest run is ongoing, and I don't think the error is related: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/123818/
Got it. Thanks, @gengliangwang.
This PR updates the test case to accept the Hadoop 2/3 error messages correctly. SPARK-31935 (apache#28760) broke the Hadoop 3.2 UT because Hadoop 2 and Hadoop 3 have different exception messages. In apache#28791, two test suites missed the fix. No user-facing change. Tested with unit tests.
Closes apache#28796 from gengliangwang/SPARK-31926-followup.
Authored-by: Gengliang Wang <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
Test build #123818 has finished for PR 28796 at commit
Test build #123813 has finished for PR 28796 at commit
What changes were proposed in this pull request?
This PR updates the test case to accept Hadoop 2/3 error message correctly.
Why are the changes needed?
SPARK-31935 (#28760) breaks the Hadoop 3.2 UT because Hadoop 2 and Hadoop 3 have different exception messages.
In #28791, two test suites missed the fix.
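The core of the fix is normalizing the exception message before comparing it, since Hadoop 2 reports `No FileSystem for scheme: x` while Hadoop 3 reports `No FileSystem for scheme "x"`. A minimal standalone sketch of that normalization (the object name `MessageNormalization` and the sample strings are illustrative, not from the patch):

```scala
// Sketch: strip the characters that differ between Hadoop 2 and Hadoop 3
// error messages (':' and '"') so one expected string matches both.
object MessageNormalization {
  def normalize(message: String): String =
    message.filterNot(Set(':', '"').contains)

  def main(args: Array[String]): Unit = {
    // Hypothetical messages in the Hadoop 2 and Hadoop 3 formats.
    val hadoop2Message = "No FileSystem for scheme: nonexistFs"
    val hadoop3Message = "No FileSystem for scheme \"nonexistFs\""
    val expectMessage  = "No FileSystem for scheme nonexistFs"

    // Both normalize to the same version-agnostic expected string.
    assert(normalize(hadoop2Message) == expectMessage)
    assert(normalize(hadoop3Message) == expectMessage)
    println("both Hadoop 2 and Hadoop 3 messages match")
  }
}
```

This keeps a single `expectMessage` in the test rather than branching on the Hadoop profile.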
Does this PR introduce any user-facing change?
No
How was this patch tested?
Unit test