[SPARK-33364][SQL][FOLLOWUP] Refine the catalog v2 API to purge a table #30890
Conversation
* @return true if a table was deleted, false if no table exists for the identifier
* @throws UnsupportedOperationException If table purging is not supported
*
* @since 3.1.0
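For context, a minimal Scala sketch of how a caller could honor this contract; `purgeOrFail` and the error rewrapping are illustrative, not Spark's actual code:

```scala
import org.apache.spark.sql.connector.catalog.{Identifier, TableCatalog}

// Illustrative helper (not part of Spark): call purgeTable() and surface a
// clearer error when the catalog does not support purging.
def purgeOrFail(catalog: TableCatalog, ident: Identifier): Boolean = {
  try {
    // true if a table was deleted, false if no table exists for the identifier
    catalog.purgeTable(ident)
  } catch {
    case e: UnsupportedOperationException =>
      throw new UnsupportedOperationException(
        s"Catalog ${catalog.name()} cannot purge table $ident", e)
  }
}
```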
Are you going to backport this to branch-3.1?
Yeah, otherwise it's a breaking change and won't be accepted.
There are existing tests in spark/sql/catalyst/src/test/scala/org/apache/spark/sql/connector/catalog/TableCatalogSuite.scala (lines 620 to 633 in 4e5d2e0).
Should we add similar primitive tests for purgeTable() there?
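A rough sketch of what such a test could look like, assuming a catalog that supports purging. The helpers `newCatalog()`, `testIdent`, `schema`, and `emptyProps` are hypothetical names mirroring the existing dropTable tests, not necessarily the real ones in TableCatalogSuite:

```scala
import org.apache.spark.sql.connector.expressions.Transform

test("purgeTable: delete an existing table") {
  val catalog = newCatalog()

  catalog.createTable(testIdent, schema, Array.empty[Transform], emptyProps)
  assert(catalog.tableExists(testIdent))

  // Documented contract: returns true when a table was deleted.
  assert(catalog.purgeTable(testIdent))
  assert(!catalog.tableExists(testIdent))
}

test("purgeTable: return false for a missing table") {
  val catalog = newCatalog()

  assert(!catalog.tableExists(testIdent))
  // Documented contract: returns false when no table exists for the identifier.
  assert(!catalog.purgeTable(testIdent))
}
```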
MaxGekk left a comment:
LGTM
Kubernetes integration test starting
Kubernetes integration test starting
imback82 left a comment:
LGTM
Kubernetes integration test status success
Test build #133223 has finished for PR 30890 at commit
Test build #133228 has finished for PR 30890 at commit
HyukjinKwon left a comment:
LGTM
Merged to master and branch-3.1.
Closes #30890 from cloud-fan/purgeTable.
Authored-by: Wenchen Fan <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
(cherry picked from commit ec1560a)
Signed-off-by: HyukjinKwon <[email protected]>
What changes were proposed in this pull request?
This is a followup of #30267.
Inspired by #30886, it's better to have two methods, `def dropTable` and `def purgeTable`, than `def dropTable(ident)` and `def dropTable(ident, purge)`.

Why are the changes needed?
1. It makes the APIs orthogonal. Previously, `def dropTable(ident, purge)` calls `def dropTable(ident)` and is a superset.
2. It simplifies the catalog implementation a little bit. Now the `if (purge) ... else ...` check is done at the Spark side.

Does this PR introduce any user-facing change?
No.

How was this patch tested?
Existing tests.
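As a simplified sketch of the Spark-side dispatch this change enables (not the literal code in Spark's DROP TABLE execution path), the purge decision now lives on the Spark side while catalogs only implement the two orthogonal methods:

```scala
import org.apache.spark.sql.connector.catalog.{Identifier, TableCatalog}

// Illustrative dispatch: Spark decides which catalog method to call based on
// the PURGE flag, instead of every catalog handling the flag itself.
def dropTableWithOptionalPurge(
    catalog: TableCatalog,
    ident: Identifier,
    purge: Boolean): Boolean = {
  if (purge) {
    catalog.purgeTable(ident) // may throw UnsupportedOperationException
  } else {
    catalog.dropTable(ident)
  }
}
```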