[SPARK-17955][SQL] Make DataFrameReader.jdbc call DataFrameReader.format("jdbc").load #15499
Conversation
Test build #67011 has finished for PR 15499 at commit

Merging in master. Thanks.
## What changes were proposed in this pull request?
This PR proposes to make `DataFrameReader.jdbc` call `DataFrameReader.format("jdbc").load` consistently with other APIs in `DataFrameReader`/`DataFrameWriter` and avoid calling `sparkSession.baseRelationToDataFrame(..)` here and there.
The changes were mostly copied from `DataFrameWriter.jdbc()` which was recently updated.
```diff
- val params = extraOptions.toMap ++ connectionProperties.asScala.toMap
- val options = new JDBCOptions(url, table, params)
- val relation = JDBCRelation(parts, options)(sparkSession)
- sparkSession.baseRelationToDataFrame(relation)
+ this.extraOptions = this.extraOptions ++ connectionProperties.asScala
+ // explicit url and dbtable should override all
+ this.extraOptions += ("url" -> url, "dbtable" -> table)
+ format("jdbc").load()
```
## How was this patch tested?
Existing tests should cover this.
Author: hyukjinkwon <[email protected]>
Closes apache#15499 from HyukjinKwon/SPARK-17955.
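The precedence among the option sources in the new code path can be illustrated with a small, language-neutral sketch. The helper name `merged_jdbc_options` is hypothetical (not a Spark API); it only models the merge order shown in the diff above: options already set on the reader come first, `connectionProperties` override them, and the explicit `url`/`dbtable` arguments override everything.

```python
def merged_jdbc_options(extra_options, connection_properties, url, table):
    # Hypothetical sketch of the option-precedence logic in the new
    # DataFrameReader.jdbc path, not Spark's actual implementation.
    options = dict(extra_options)           # options set via .option(...)
    options.update(connection_properties)   # connectionProperties win over those
    options.update({"url": url, "dbtable": table})  # explicit args win last
    return options
```

For example, a `url` previously set via `.option("url", ...)` is replaced by the `url` argument passed to `jdbc(...)`, matching the "explicit url and dbtable should override all" comment in the diff.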
```scala
// connectionProperties should override settings in extraOptions.
val params = extraOptions.toMap ++ connectionProperties.asScala.toMap
val options = new JDBCOptions(url, table, params)
val relation = JDBCRelation(parts, options)(sparkSession)
```
After this change, we lost the feature for parallel JDBC reading, right?
Let me revert this back.
I see. I thought this was all handled here (lines 29 to 46 in 0c0ad43):
```scala
override def createRelation(
    sqlContext: SQLContext,
    parameters: Map[String, String]): BaseRelation = {
  val jdbcOptions = new JDBCOptions(parameters)
  val partitionColumn = jdbcOptions.partitionColumn
  val lowerBound = jdbcOptions.lowerBound
  val upperBound = jdbcOptions.upperBound
  val numPartitions = jdbcOptions.numPartitions

  val partitionInfo = if (partitionColumn == null) {
    null
  } else {
    JDBCPartitioningInfo(
      partitionColumn, lowerBound.toLong, upperBound.toLong, numPartitions.toInt)
  }
  val parts = JDBCRelation.columnPartition(partitionInfo)
  JDBCRelation(parts, jdbcOptions)(sqlContext.sparkSession)
}
```
Would it be possible to add those into the options rather than reverting this?
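For context, the partitioning that those options drive splits the value range of `partitionColumn` into equal strides, one per partition. The sketch below is a hypothetical simplification of that range-splitting idea, not Spark's exact `JDBCRelation.columnPartition` code (which also emits open-ended WHERE clauses for the first and last partitions):

```python
def partition_ranges(lower, upper, num_partitions):
    # Hypothetical sketch: split [lower, upper) into num_partitions
    # contiguous half-open ranges, as driven by the partitionColumn /
    # lowerBound / upperBound / numPartitions JDBC options.
    stride = (upper - lower) // num_partitions
    bounds = [lower + i * stride for i in range(num_partitions)]
    return [
        # the last range absorbs any remainder up to upper
        (bounds[i], bounds[i + 1] if i + 1 < num_partitions else upper)
        for i in range(num_partitions)
    ]
```

Each resulting range corresponds to one JDBC query (and one Spark partition), which is the parallel-reading feature the review comment above is concerned about.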
We are using a different mechanism for table partitioning here. It is more flexible: users can do more advanced partitioning (e.g., using multiple columns).
I see. I am sorry; it seems to have been a careless mistake.
nvm