[SPARK-38001][SQL] Replace the error classes related to unsupported features by UNSUPPORTED_FEATURE
#35302
Conversation
```diff
  case _ =>
-   throw QueryExecutionErrors.dataTypeUnsupportedError(dt)
+   throw QueryExecutionErrors.dataTypeUnsupportedForWriterFuncError(dt)
```
Seems we can never reach here unless we have a bug. Maybe we don't need an error class here and can throw IllegalStateException directly?
How about not handling the default case at all (I mean, removing it)? The compiler should output a warning if we forget to handle a type, and we will get a match error exception in that case too.
After offline discussion, I'm going to replace the exception with IllegalStateException.
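The pattern settled on above can be sketched in plain Scala. This is an illustrative example, not Spark's actual code, and the type names are made up: a match over a sealed hierarchy lets the compiler warn about missed cases, while an explicit default throwing `IllegalStateException` marks the branch as an internal bug rather than a user-facing error.

```scala
// Illustrative sketch (hypothetical types, not Spark's Catalyst types).
sealed trait SimpleType
case object IntType extends SimpleType
case object StringType extends SimpleType

// A writer-function generator over a sealed hierarchy. Without the default
// case, the compiler warns on a non-exhaustive match and a MatchError is
// thrown at runtime; with it, an unreachable branch throws
// IllegalStateException to signal "this is a bug", not a user error.
def writerFor(dt: SimpleType): Any => String = dt match {
  case IntType    => v => v.toString
  case StringType => v => "\"" + v.toString + "\""
  case other =>
    // Unreachable unless a new subtype is added without updating this match.
    throw new IllegalStateException(s"Unsupported type: $other")
}
```

The trade-off discussed in the thread: dropping the default keeps the compiler check, while keeping it with `IllegalStateException` gives a clearer message if the invariant is ever broken.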
```scala
test("UNSUPPORTED_FEATURE: unsupported types (map and struct) in Literal.apply") {
  def checkUnsupportedTypeInLiteral(v: Any): Unit = {
    val e = intercept[SparkRuntimeException] {
      Literal(v)
```
This is not a user-facing API. Can we use functions.lit?
```scala
    pivotVal.toString, pivotVal.dataType.simpleString, pivotCol.dataType.catalogString))
}
```

```scala
def unsupportedIfNotExistsError(tableName: String): Throwable = {
```
shall we add a QueryCompilationErrorsSuite and test this error?
InsertIntoSQLOnlyTests has a test for the error already. BTW, the test from InsertIntoSQLOnlyTests runs as part of:
- V1WriteFallbackSessionCatalogSuite
- DataSourceV2SQLSuite
- DataSourceV2SQLSessionCatalogSuite
- DataSourceV2DataFrameSuite
- DataSourceV2DataFrameSessionCatalogSuite
@cloud-fan How about creating a QueryCompilationErrorsSuiteBase trait/abstract class and including it in those test suites?
I added the base test suites QueryCompilationErrorsSuiteBase + SessionCatalogTestBase and new test suites for compilation errors:
- QueryCompilationErrorsDSv2Suite
- QueryCompilationErrorsDSv2SessionCatalogSuite
- QueryCompilationErrorsV1WriteFallbackSuite
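The layout described above can be sketched as a plain trait mix-in. The names and the message string below are illustrative, not the exact Spark classes: shared error checks live in a base trait, and each DSv2/session-catalog variant supplies its own environment.

```scala
// Hypothetical sketch of the shared-suite pattern (not the real Spark code):
// common error assertions go into a base trait, and each concrete suite
// mixes it in with its own catalog setup.
trait QueryCompilationErrorsSuiteLikeBase {
  // Supplied by each concrete suite (DSv2, session catalog, V1 fallback, ...).
  def catalogName: String

  // A shared check, parameterized by the suite's environment. The message
  // text here is only an approximation of the real error-class message.
  def ifNotExistsMessage(table: String): String =
    s"The feature is not supported: IF NOT EXISTS for the table '$table' by INSERT INTO."
}

// One concrete variant; real suites would also extend a shared test base.
object DSv2VariantSuite extends QueryCompilationErrorsSuiteLikeBase {
  val catalogName = "testcat"
}
```

This keeps the error assertions in one place while letting each suite run them against a different catalog implementation.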
```scala
    messageParameters = Array(nodeName))
new SparkUnsupportedOperationException(
  errorClass = "UNSUPPORTED_FEATURE",
  messageParameters = Array(s"$nodeName does not implement simpleStringWithNodeId"))
```
this does not sound like a user-facing error. Can we double-check it?
Seems it is not. At least, I haven't found a way to trigger it from the user level. I will replace the exception with IllegalStateException, as in #35302 (comment).
@cloud-fan Could you look at this one more time, please?
```scala
  .agg(sum($"sales.earnings"))
  .collect()
}
assert(e2.getMessage === "The feature is not supported: " +
```
TBH, this message is quite misleading, as the query does not create literals directly. We can improve it later.
I opened the ticket SPARK-38097 to improve the error.
thanks, merging to master!
What changes were proposed in this pull request?

In this PR, I propose to re-use one error class `UNSUPPORTED_FEATURE` in the following Spark exceptions:
- `QueryCompilationErrors.unsupportedIfNotExistsError` - when `IF NOT EXISTS` is not supported by `INSERT INTO`.
- `QueryExecutionErrors.aesModeUnsupportedError` - when a user specifies an unsupported AES mode and padding.
- `QueryExecutionErrors.literalTypeUnsupportedError` - when it is impossible to create a literal from the input value (some Java class, for instance).
- `QueryExecutionErrors.transactionUnsupportedByJdbcServerError` - when the target JDBC server does not support transactions.

And to replace the following exceptions with `IllegalStateException`, since they are internal and should not be visible to users:
- `QueryExecutionErrors.simpleStringWithNodeIdUnsupportedError` - a sub-class of `Expression` or `Block` doesn't implement the method `simpleStringWithNodeId()`.
- `QueryExecutionErrors.dataTypeUnsupportedForWriterFuncError` - generating a writer function for a struct field, array element, map key or map value doesn't support the given Catalyst type.

Also, added the new base test suite `QueryCompilationErrorsDSv2Suite` for testing DSv2-specific compilation error classes.

Why are the changes needed?

Reducing the number of error classes should keep `error-classes.json` from growing without bound. Also, using one error class for similar errors should improve the user experience with Spark SQL.

Does this PR introduce any user-facing change?

Yes.

How was this patch tested?

By running the affected test suites:
```
$ build/sbt "test:testOnly *SparkThrowableSuite"
$ build/sbt "test:testOnly *QueryExecutionErrorsSuite"
$ build/sbt "test:testOnly *DataSourceV2SQLSuite"
$ build/sbt "test:testOnly *DataSourceV2DataFrameSessionCatalogSuite"
$ build/sbt "test:testOnly *DataFramePivotSuite"
```
and the new test suite:
```
$ build/sbt "test:testOnly *QueryCompilationErrorsDSv2Suite"
```

Closes apache#35302 from MaxGekk/re-use-unsupported_feature-error-class.

Authored-by: Max Gekk <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
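For context, consolidating these errors means a single entry in `error-classes.json` parameterized by the feature description. A hedged sketch of what such an entry might look like (the exact field layout and SQLSTATE are assumptions inferred from the message text quoted above, not copied from the Spark source):

```json
{
  "UNSUPPORTED_FEATURE" : {
    "message" : [ "The feature is not supported: %s" ],
    "sqlState" : "0A000"
  }
}
```

Each call site then supplies only the feature-specific part of the message via `messageParameters`, which is what keeps the JSON file from accumulating one entry per call site.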