[SPARK-38270][SQL] Spark SQL CLI's AM should keep same exit code with client side #35594
Conversation
cliet -> client
updated
gentle ping @cloud-fan @tgravescs @mridulm
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
...e-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala
LuciferYang left a comment
LGTM +1
gentle ping @cloud-fan @bogdanghit
@AngersZhuuuu does this address the CliSuite flakiness I reported a while back?
Can you provide the link? Offhand, it should be unrelated.
gentle ping @bogdanghit
We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
@AngersZhuuuu can you rebase? We should merge this PR.
+1, and sorry for not merging it after my approval |
Done
thanks, merging to master! |
…tElementNames in Mima for Scala 2.13

### What changes were proposed in this pull request?
This PR is a followup of #35594 that recovers the Mima compatibility test for Scala 2.13.

### Why are the changes needed?
To fix the broken Mima build (https://github.com/apache/spark/actions/runs/3380379538/jobs/5613108397):
```
[error] spark-core: Failed binary compatibility check against org.apache.spark:spark-core_2.13:3.3.0! Found 2 potential problems (filtered 945)
[error] * method productElementName(Int)java.lang.String in object org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages#Shutdown does not have a correspondent in current version
[error]   filter with: ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages#Shutdown.productElementName")
[error] * method productElementNames()scala.collection.Iterator in object org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages#Shutdown does not have a correspondent in current version
[error]   filter with: ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages#Shutdown.productElementNames")
```

### Does this PR introduce _any_ user-facing change?
No, dev-only.

### How was this patch tested?
CI in this PR should test it out. After that, scheduled jobs for Scala 2.13 will test this out.

Closes #38492 from HyukjinKwon/SPARK-38270-followup.

Authored-by: Hyukjin Kwon <[email protected]>
Signed-off-by: Hyukjin Kwon <[email protected]>
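The Mima error output above already suggests the fix: add the printed `ProblemFilters` exclusions to the project's Mima filter list (in Spark this is maintained in `project/MimaExcludes.scala`). A hedged sketch of what such entries look like; the exact surrounding list structure differs per branch:

```scala
// Sketch only: these are the two exclusions suggested by the error output above,
// as they would appear inside the Seq of filters in MimaExcludes.scala.
ProblemFilters.exclude[DirectMissingMethodProblem](
  "org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages#Shutdown.productElementName"),
ProblemFilters.exclude[DirectMissingMethodProblem](
  "org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages#Shutdown.productElementNames")
```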
… client side
### What changes were proposed in this pull request?
Currently, Spark SQL CLI always uses a shutdown hook to stop SparkSQLEnv:
```
// Clean up after we exit
ShutdownHookManager.addShutdownHook { () => SparkSQLEnv.stop() }
```
but uses the process return code `ret` to exit the client-side JVM:
```
while (line != null) {
if (!line.startsWith("--")) {
if (prefix.nonEmpty) {
prefix += '\n'
}
if (line.trim().endsWith(";") && !line.trim().endsWith("\\;")) {
line = prefix + line
ret = cli.processLine(line, true)
prefix = ""
currentPrompt = promptWithCurrentDB
} else {
prefix = prefix + line
currentPrompt = continuedPromptWithDBSpaces
}
}
line = reader.readLine(currentPrompt + "> ")
}
sessionState.close()
System.exit(ret)
}
```
```
if (sessionState.execString != null) {
exitCode = cli.processLine(sessionState.execString)
System.exit(exitCode)
}
try {
if (sessionState.fileName != null) {
exitCode = cli.processFile(sessionState.fileName)
System.exit(exitCode)
}
} catch {
case e: FileNotFoundException =>
logError(s"Could not open input file for reading. (${e.getMessage})")
exitCode = 3
System.exit(exitCode)
}
```
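The status passed to `System.exit` above is exactly what an external caller observing the driver process sees, while the shutdown hook that stops SparkSQLEnv never learns that status. A tiny demonstration of exit-code propagation between processes (a sketch, assuming a POSIX `sh` is available):

```scala
import scala.sys.process._

// Run a child process that exits with status 3, mirroring the CLI's
// `System.exit(exitCode)` path above (exitCode = 3 on FileNotFoundException).
val childExit = Process(Seq("sh", "-c", "exit 3")).!

// The parent observes the child's exit status directly.
println(s"child exited with $childExit")
```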
This leaves the client-side exit code inconsistent with the AM's.
In this PR, the exit code is passed through the `SparkContext.stop()` method so that a clear client-side status reaches the AM side in client mode.
To do this, a new `stop` method is added to `SchedulerBackend`:
```
def stop(exitCode: Int): Unit = stop()
```
Since the default implementation forwards to `stop()`, we don't need to implement it for every kind of scheduler backend.
This PR only handles the case we hit with `YarnClientSchedulerBackend`: implementing `stop(exitCode: Int)` for that class alone is enough to make yarn client mode work.
This approach can benefit many similar cases in the future.
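A minimal, hypothetical sketch of the overload pattern described above (names and bodies simplified; not the actual Spark source):

```scala
// Sketch of the SchedulerBackend change: a new stop(exitCode) overload with a
// default that forwards to stop(), so existing backends need no changes.
trait SchedulerBackend {
  def stop(): Unit
  def stop(exitCode: Int): Unit = stop()
}

// Hypothetical stand-in for YarnClientSchedulerBackend: the only backend that
// overrides the new overload, recording the code it would report to the AM.
class YarnClientBackendSketch extends SchedulerBackend {
  var reportedExitCode: Int = 0
  override def stop(): Unit = stop(0)
  override def stop(exitCode: Int): Unit = {
    // In the real patch this would propagate the exit code to the ApplicationMaster.
    reportedExitCode = exitCode
  }
}
```

With this shape, `SparkContext.stop()` can call `backend.stop(exitCode)` unconditionally: backends that don't care fall through to the old `stop()`, and the YARN client backend forwards the code.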
### Why are the changes needed?
Keep the client-side exit status consistent with the AM side.
### Does this PR introduce _any_ user-facing change?
With this PR, the client-side exit status will be the same as the AM side's.
### How was this patch tested?
MT
Closes apache#35594 from AngersZhuuuu/SPARK-38270.
Lead-authored-by: Angerszhuuuu <[email protected]>
Co-authored-by: AngersZhuuuu <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>