[SPARK-14525][SQL] Make DataFrameWriter.save work for jdbc #12601
Conversation
@JustinPihony I think we haven't reached a conclusion yet and haven't got any feedback from anyone who knows this part well (or committers) on whether we should deprecate `jdbc`. Usually, we should discuss the proper way to fix things and the problems first in JIRA and then open a PR. As I said in the JIRA, if we go for deprecating `jdbc`, we should settle that there first.
```scala
dataSource.write(mode, df)
dataSource.providingClass.newInstance() match {
  case jdbc: execution.datasources.jdbc.DefaultSource =>
```
It looks like a new method is being introduced. I don't think we necessarily have to introduce this new function; we could use the existing interfaces instead, e.g. `CreatableRelationProvider` in interfaces.scala.
I agree, and admit I was being lazy in not trying to figure out how to make the current implementation return a `BaseRelation`. I'll take another look today at just turning the `DefaultSource` into a `CreatableRelationProvider`.
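For context, the direction being suggested here would look roughly like the sketch below, where the JDBC `DefaultSource` implements `CreatableRelationProvider` so the generic save path can hand back a `BaseRelation`. Names and the elided write logic are illustrative, not the final implementation:

```scala
import org.apache.spark.sql.{DataFrame, SQLContext, SaveMode}
import org.apache.spark.sql.sources.{BaseRelation, CreatableRelationProvider, RelationProvider}

class DefaultSource extends RelationProvider with CreatableRelationProvider {

  // Existing read path (elided here).
  override def createRelation(
      sqlContext: SQLContext,
      parameters: Map[String, String]): BaseRelation = ???

  // Write path: persist the DataFrame over JDBC according to `mode`,
  // then reuse the read path to describe the written table.
  override def createRelation(
      sqlContext: SQLContext,
      mode: SaveMode,
      parameters: Map[String, String],
      df: DataFrame): BaseRelation = {
    val url = parameters.getOrElse("url", sys.error("Option 'url' is required."))
    val table = parameters.getOrElse("dbtable", sys.error("Option 'dbtable' is required."))
    // ... create the table if needed and insert df's rows using url/table ...
    createRelation(sqlContext, parameters)
  }
}
```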
BTW, it looks fairly safe in general to assume that `Properties` holds only string keys and values. Firstly, I just checked the `java.sql.Driver` API (which the JDBC data source uses) and it describes the connection argument for `connect` as a list of arbitrary string tag/value pairs.

Secondly, apparently Spark only uses the methods below on `Properties`:

```java
public Set<String> stringPropertyNames()
public String getProperty(String key, String defaultValue)
public void store(OutputStream out, String comments) // This converts keys and values to String internally.
public synchronized Object setProperty(String key, String value)
```

It looks like they use `String` keys and values only. So, I think it might be OK to support `Properties` as a string-to-string map.
I think I can rework this based on your PR, since it is already open anyway. Excuse my ping, @rxin, @JoshRosen.
@HyukjinKwon You will notice that I opted not to deprecate `jdbc`, as I don't think that would be the correct path anyway (unless all format methods were to be deprecated). I'm not sure what gave the impression that I wanted to deprecate it, but I think multiple methods to accomplish the same goal are perfectly fine. This change merely alters the underlying implementation so that both methods work. As to the additional details, I was simply following the contributor directions; I only put it under additional details because that is what it was to me.
@JustinPihony A possible problem was noticed (are the keys and values in `Properties` guaranteed to be strings?). Also, a minor point, but part of the current approach still does not look sensible to me.
@HyukjinKwon I just posted the background on the JIRA.
Even if so, I think we need some kind of wrapper to safely convert the `Properties` into a string-to-string map.
I can finish this next Monday (fix the conflicts that now exist), and will actually do that given the above comments. I'd still like to get an opinion on whether I should change the code to be a `CreatableRelationProvider`.
I just updated the branch to have no conflicts. Again, either the code looks good to merge, or I can make JDBC a `CreatableRelationProvider`.
```scala
val resolvedSchema = JDBCRDD.resolveTable(url, table, properties)
providedSchemaOption match {
  case Some(providedSchema) =>
    if (providedSchema.sql.toLowerCase == resolvedSchema.sql.toLowerCase) resolvedSchema
```
This is the only area I'm unsure about. I'd like a second opinion on whether this seems ok, or if I need to build something more custom for schema comparison.
I guess it would make more sense not to try to apply the resolved schema, but to just use the specified one when a schema is explicitly set, like the other data sources do.
I can easily do a simpler `getOrElse`, as is done in spark-xml, which has the added benefit of being lazier. But if an error does occur due to a mismatch, the error then surfaces further from the original issue. I'm fine with either scenario, but at least wanted to give the other side of this one. Thoughts?
I think `JDBCRDD.resolveTable` needs another query execution. Although it would be less expensive than inferring schemas in CSV or JSON, it would still be a bit of overhead. I am not 100% sure about this either, so I think it might be better to be consistent with the others in this case.
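Concretely, that would reduce to something like the following (a sketch, using the names from the diff above), which also avoids the extra round trip when a schema is supplied:

```scala
// Trust the user-specified schema when present; only query the
// database to resolve the schema when nothing was provided.
val schema = providedSchemaOption.getOrElse {
  JDBCRDD.resolveTable(url, table, properties)
}
```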
Bump :) Anybody able to review this one for me, please?
```scala
val partitionColumn = parameters.getOrElse("partitionColumn", null)
val lowerBound = parameters.getOrElse("lowerBound", null)
val upperBound = parameters.getOrElse("upperBound", null)
val numPartitions = parameters.getOrElse("numPartitions", null)
```
There is a class for those options, `JDBCOptions`. It would be nicer if those options were managed in a single place.
@HyukjinKwon Thanks, I did not know about this. Before I push code, I was curious why `JDBCOptions` does not include the partitioning validation? That seems like a point of duplication too.
I think the validation can be done together in `JDBCOptions`.
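A rough sketch of that consolidation (field names follow the diff; the exact validation rule and message are illustrative):

```scala
// Hypothetical shape: option parsing and partitioning validation in one place.
class JDBCOptions(parameters: Map[String, String]) {
  val url: String = parameters.getOrElse("url", sys.error("Option 'url' is required."))
  val table: String = parameters.getOrElse("dbtable", sys.error("Option 'dbtable' is required."))
  val partitionColumn: Option[String] = parameters.get("partitionColumn")
  val lowerBound: Option[String] = parameters.get("lowerBound")
  val upperBound: Option[String] = parameters.get("upperBound")
  val numPartitions: Option[String] = parameters.get("numPartitions")

  // The partitioning options only make sense as a complete group.
  require(
    partitionColumn.isEmpty ||
      (lowerBound.isDefined && upperBound.isDefined && numPartitions.isDefined),
    "If 'partitionColumn' is specified then 'lowerBound', 'upperBound'" +
      " and 'numPartitions' are required.")
}
```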
Bump @HyukjinKwon I have some replies to your comments. Could you please review them so that I can push my changes?
@JustinPihony Sorry, I did not realize you had submitted a PR for the same issue. Could you please review my PR, #14077? I think my solution might be cleaner and simpler. Thanks!
@gatorsmile I did just review it and still prefer mine... a simpler PR does not necessarily mean it is more correct.
Bumping my JIRA comment here for @rxin to respond to, please: given the bug found in SPARK-16401, the `CreatableRelationProvider` is not strictly necessary. However, it might be nice to have now that I've already implemented it. I can reduce the code by removing the `CreatableRelationProvider` aspect, so I would love your feedback on this PR, even if it is just to say whether the code should be reduced or not.
@srowen I had to fix something on my local machine to get proper test results, but this should be good to go now.
@srowen Ping. I don't think there is anything left on my plate; this should be mergeable.
OK, this isn't really my area. It looks reasonable to me. @gatorsmile?
@HyukjinKwon @gatorsmile Could you please review the documentation that I added so that we can put this to bed? :)
Sure, I will build the document on my local computer and review it soon. Thanks!
@JustinPihony The document changes for Scala, Java, and Python look good to me, but could you please also add examples for both SQL and R?
Not sure if you already knew it; just want to share the commands for building the doc:

```
SKIP_API=1 jekyll build
SKIP_API=1 jekyll serve
```

After the second command, you can visit the generated document.
```java
// $example off:schema_merging$

import java.util.Properties;
```
I think we should put the `java.util.List` and `java.util.Properties` imports together without an additional newline. It seems you already know, but just in case: imports.
Should this really be added to the example, though?
No reason not to follow the guideline?
@gatorsmile I added the R and SQL documentation. I took the SQL portion from https://github.com/apache/spark/pull/6121/files
```java
import java.util.List;
import java.util.Properties;
// $example off:schema_merging$
```
Oh, maybe my previous comment was not clear. I meant

```java
import java.util.List;
// $example off:schema_merging$
import java.util.Properties;
```

I haven't tried to build the doc against the current state of this PR, but I guess we won't need this import for Parquet's schema merging example.
@HyukjinKwon Yes, that is what I was talking about...just fixed it back
Thanks for mentioning me. It looks good to me except for a few comments above, in my personal view.
docs/sql-programming-guide.md
```
{% highlight sql %}

CREATE TEMPORARY VIEW jdbcTable
CREATE TEMPORARY TABLE jdbcTable
```
Please change it back. `CREATE TEMPORARY TABLE` is deprecated; you will get a parser error:

```
CREATE TEMPORARY TABLE is not supported yet. Please use CREATE TEMPORARY VIEW as an alternative.(line 1, pos 0)
```
Done, thanks. I had been going off of the tests.
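For reference, the accepted syntax can be sanity-checked from Scala as well (a sketch; the connection details are placeholders):

```scala
spark.sql(
  """CREATE TEMPORARY VIEW jdbcTable
    |USING org.apache.spark.sql.jdbc
    |OPTIONS (
    |  url "jdbc:postgresql:dbserver",
    |  dbtable "schema.tablename"
    |)""".stripMargin)
```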
```scala
df.write.format("jdbc")
  .options(Map("url" -> url, "dbtable" -> "TEST.SAVETEST"))
  .save
```
Nit: save -> save()
Done
```scala
this.extraOptions = this.extraOptions ++ (connectionProperties.asScala)
// explicit url and dbtable should override all
this.extraOptions += ("url" -> url, "dbtable" -> table)
format("jdbc").save
```
The omission of parentheses on methods should only be used when the method has no side effects. Thus, please change it to `save()`.
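With that nit applied, the tail of the delegation in the diff above would read (a sketch of just these lines, not a standalone snippet):

```scala
this.extraOptions = this.extraOptions ++ connectionProperties.asScala
// explicit url and dbtable should override all
this.extraOptions += ("url" -> url, "dbtable" -> table)
format("jdbc").save()  // side-effecting call, so it keeps its parentheses
```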
Mostly LGTM, except three minor comments. Thank you for your hard work, @JustinPihony!
Test build #65858 has finished for PR 12601 at commit

Test build #65860 has finished for PR 12601 at commit
@srowen The doc changes have been reviewed, so this should be good to go.
Test build #65891 has finished for PR 12601 at commit
Merged to master.
```scala
}

/*
 * The following structure applies to this code:
```
What does this table mean? What are CreateTable, saveTable, and BaseRelation?
Now at least three reviewers are confused by this bit. Do you mind if I submit a PR to clean up this part?
If the table does not exist and the mode is `OVERWRITE`, we create the table, then insert rows into it, and finally return a `BaseRelation`.
I also took a look at @gatorsmile's approach; I think it's easier to understand. Why was it rejected? We can also get rid of the return:
```scala
if (tableExists) {
  mode match {
    case SaveMode.Ignore =>
      // the table already exists and the mode is Ignore: do nothing
    // ... handle the remaining SaveModes ...
  }
} else {
  // the table does not exist: create it and insert the rows
  // ...
}
```
…SparkR

## What changes were proposed in this pull request?

The `write.df`/`read.df` API requires a path, which is not actually always necessary in Spark. Currently, it only affects the datasources implementing `CreatableRelationProvider`. Spark currently does not have internal data sources implementing this, but it'd affect other external datasources. In addition, we'd be able to use this way in Spark's JDBC datasource after apache#12601 is merged.

**Before**

- `read.df`

```r
> read.df(source = "json")
Error in dispatchFunc("read.df(path = NULL, source = NULL, schema = NULL, ...)",  :
  argument "x" is missing with no default
```

```r
> read.df(path = c(1, 2))
Error in dispatchFunc("read.df(path = NULL, source = NULL, schema = NULL, ...)",  :
  argument "x" is missing with no default
```

```r
> read.df(c(1, 2))
Error in invokeJava(isStatic = TRUE, className, methodName, ...) :
  java.lang.ClassCastException: java.lang.Double cannot be cast to java.lang.String
  at org.apache.spark.sql.execution.datasources.DataSource.hasMetadata(DataSource.scala:300)
  at ...
In if (is.na(object)) { :
...
```

- `write.df`

```r
> write.df(df, source = "json")
Error in (function (classes, fdef, mtable)  :
  unable to find an inherited method for function ‘write.df’ for signature ‘"function", "missing"’
```

```r
> write.df(df, source = c(1, 2))
Error in (function (classes, fdef, mtable)  :
  unable to find an inherited method for function ‘write.df’ for signature ‘"SparkDataFrame", "missing"’
```

```r
> write.df(df, mode = TRUE)
Error in (function (classes, fdef, mtable)  :
  unable to find an inherited method for function ‘write.df’ for signature ‘"SparkDataFrame", "missing"’
```

**After**

- `read.df`

```r
> read.df(source = "json")
Error in loadDF : analysis error - Unable to infer schema for JSON at . It must be specified manually;
```

```r
> read.df(path = c(1, 2))
Error in f(x, ...) : path should be charactor, null or omitted.
```

```r
> read.df(c(1, 2))
Error in f(x, ...) : path should be charactor, null or omitted.
```

- `write.df`

```r
> write.df(df, source = "json")
Error in save : illegal argument - 'path' is not specified
```

```r
> write.df(df, source = c(1, 2))
Error in .local(df, path, ...) : source should be charactor, null or omitted. It is 'parquet' by default.
```

```r
> write.df(df, mode = TRUE)
Error in .local(df, path, ...) : mode should be charactor or omitted. It is 'error' by default.
```

## How was this patch tested?

Unit tests in `test_sparkSQL.R`

Author: hyukjinkwon <[email protected]>

Closes apache#15231 from HyukjinKwon/write-default-r.
What changes were proposed in this pull request?
This change modifies the implementation of `DataFrameWriter.save` so that it works with JDBC, and the call to `jdbc` now merely delegates to `save`.
How was this patch tested?
This was tested via unit tests in `JDBCWriteSuite`, to which I added one new test to cover this scenario.
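A sketch of what that added test could look like (fixture names such as `arr2x2`, `schema2`, and `url` are assumptions here, following the suite's existing style):

```scala
import java.util.Properties

test("save works for format(\"jdbc\") if url and dbtable are set") {
  val df = spark.createDataFrame(sparkContext.parallelize(arr2x2), schema2)

  // Write through the generic save() path instead of the jdbc() method.
  df.write.format("jdbc")
    .options(Map("url" -> url, "dbtable" -> "TEST.SAVETEST"))
    .save()

  // Read it back through the jdbc() path to confirm the round trip.
  assert(2 === spark.read.jdbc(url, "TEST.SAVETEST", new Properties()).count())
}
```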
Additional details
@rxin This seems to have been most recently touched by you and was also commented on in the JIRA.
This contribution is my original work and I license the work to the project under the project's open source license.