[SPARK-40834][SQL][FOLLOWUP] Take care of legacy query end events #38747
Conversation
var driverAccumUpdates = Seq[(Long, Long)]()

@volatile var metricsValues: Map[Long, String] = null
var errorMessage: Option[String] = None
nit: move errorMessage closer to completionTime, as they both indicate the end of the execution.
val metrics: Seq[SQLPlanMetric],
val submissionTime: Long,
val completionTime: Option[Date],
val errorMessage: Option[String],
ditto
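For concreteness, a minimal sketch of the ordering both nits ask for, using only the fields visible in the diff context above (the real classes have longer parameter lists, so treat this as an illustration rather than the merged code):

```scala
import java.util.Date

import org.apache.spark.sql.execution.ui.SQLPlanMetric

// Sketch only (abbreviated field list): errorMessage is kept next to
// completionTime because both describe how the execution ended.
class SQLExecutionUIData(
    val metrics: Seq[SQLPlanMetric],
    val submissionTime: Long,
    val completionTime: Option[Date],
    val errorMessage: Option[String])
```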
System.currentTimeMillis(),
// Use empty string to indicate no error, as None may mean events generated by old
// versions of Spark.
errorMessage.orElse(Some("")))
Better to add docs at `SparkListenerSQLExecutionEnd` noting that `Some("")` is not an error; otherwise developers may get confused.
good point
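Following up on that suggestion, a sketch of what such documentation could look like on the event class; the exact wording and the abbreviated field list are assumptions here, not the text that was merged:

```scala
import org.apache.spark.scheduler.SparkListenerEvent

/**
 * Posted when a query execution finishes.
 *
 * `errorMessage` distinguishes three cases:
 *  - None: the event comes from an old version of Spark that did not record the error,
 *    so success/failure has to be inferred from job statuses (the pre-existing logic).
 *  - Some(""): the execution completed successfully; the empty string is not an error.
 *  - Some(msg): the execution failed with `msg`.
 */
case class SparkListenerSQLExecutionEnd(
    executionId: Long,
    time: Long,
    errorMessage: Option[String] = None)
  extends SparkListenerEvent
```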
thanks for review, merging to master!
What changes were proposed in this pull request?
This is a followup of #38302. For events generated by old versions of Spark, which do not have the new `errorMessage` field, we should use the old way to detect query execution status (failed or not). This PR also adds a UI test for the expected behavior.
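Roughly, the fallback described above can be pictured as the following check (a sketch only; the method and parameter names are made up and do not match the listener code exactly):

```scala
import org.apache.spark.JobExecutionStatus

// Sketch of the legacy-aware failure check: new-style events carry an errorMessage
// (empty string meaning "no error"), while legacy events leave it as None and
// force us back to the old job-status based detection.
def executionFailed(
    errorMessage: Option[String],
    jobStatuses: Iterable[JobExecutionStatus]): Boolean = errorMessage match {
  // New-style event: a non-empty message means the execution failed.
  case Some(msg) => msg.nonEmpty
  // Legacy event without the field: fall back to checking whether any job failed.
  case None => jobStatuses.exists(_ == JobExecutionStatus.FAILED)
}
```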
Why are the changes needed?
backward compatibility
Does this PR introduce any user-facing change?
no
How was this patch tested?
new tests