
Commit 7111f6e

[SPARK-39773][SQL][DOCS] Update document of JDBC options for pushDownOffset
### What changes were proposed in this pull request?
Because the DS v2 pushdown framework added a new JDBC option `pushDownOffset` for offset pushdown, we should update sql-data-sources-jdbc.md.

### Why are the changes needed?
Add documentation for `pushDownOffset`.

### Does this PR introduce _any_ user-facing change?
No. The documentation was updated for a new feature.

### How was this patch tested?
N/A

Closes #37186 from beliefer/SPARK-39773.

Authored-by: Jiaan Geng <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
1 parent: c53d31d

File tree: 2 files changed (+14, −5 lines)

docs/sql-data-sources-jdbc.md

Lines changed: 10 additions & 1 deletion
```diff
@@ -281,7 +281,16 @@ logging into the data sources.
   <td><code>pushDownLimit</code></td>
   <td><code>false</code></td>
   <td>
-    The option to enable or disable LIMIT push-down into V2 JDBC data source. The LIMIT push-down also includes LIMIT + SORT , a.k.a. the Top N operator. The default value is false, in which case Spark does not push down LIMIT or LIMIT with SORT to the JDBC data source. Otherwise, if sets to true, LIMIT or LIMIT with SORT is pushed down to the JDBC data source. If <code>numPartitions</code> is greater than 1, SPARK still applies LIMIT or LIMIT with SORT on the result from data source even if LIMIT or LIMIT with SORT is pushed down. Otherwise, if LIMIT or LIMIT with SORT is pushed down and <code>numPartitions</code> equals to 1, SPARK will not apply LIMIT or LIMIT with SORT on the result from data source.
+    The option to enable or disable LIMIT push-down into V2 JDBC data source. The LIMIT push-down also includes LIMIT + SORT , a.k.a. the Top N operator. The default value is false, in which case Spark does not push down LIMIT or LIMIT with SORT to the JDBC data source. Otherwise, if sets to true, LIMIT or LIMIT with SORT is pushed down to the JDBC data source. If <code>numPartitions</code> is greater than 1, Spark still applies LIMIT or LIMIT with SORT on the result from data source even if LIMIT or LIMIT with SORT is pushed down. Otherwise, if LIMIT or LIMIT with SORT is pushed down and <code>numPartitions</code> equals to 1, Spark will not apply LIMIT or LIMIT with SORT on the result from data source.
+  </td>
+  <td>read</td>
+</tr>
+
+<tr>
+  <td><code>pushDownOffset</code></td>
+  <td><code>false</code></td>
+  <td>
+    The option to enable or disable OFFSET push-down into V2 JDBC data source. The default value is false, in which case Spark will not push down OFFSET to the JDBC data source. Otherwise, if sets to true, Spark will try to push down OFFSET to the JDBC data source. If <code>pushDownOffset</code> is true and <code>numPartitions</code> is equal to 1, OFFSET will be pushed down to the JDBC data source. Otherwise, OFFSET will not be pushed down and Spark still applies OFFSET on the result from data source.
   </td>
   <td>read</td>
 </tr>
```
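
For context, and not part of the commit itself, here is a minimal sketch of how a reader of the updated page might enable both push-downs. The URL, table name, and the `spark` SparkSession are placeholder assumptions, and `Dataset.offset` assumes a Spark version that provides it alongside this feature:

```scala
// Minimal sketch, assuming a reachable PostgreSQL endpoint; the URL and
// table name are placeholders, not taken from this commit.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://localhost:5432/testdb")
  .option("dbtable", "employees")
  .option("pushDownLimit", "true")   // permit LIMIT / Top N push-down
  .option("pushDownOffset", "true")  // permit OFFSET push-down (documented by this commit)
  .option("numPartitions", "1")      // per the doc, OFFSET is pushed down only when this is 1
  .load()

// With numPartitions = 1, both operators below are candidates for evaluation
// inside the database rather than in Spark.
df.orderBy("salary").offset(10).limit(5).show()
```

Note that these options only permit push-down; whether it actually happens for a given query also depends on the query shape and on the V2 JDBC read path being used.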

sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala

Lines changed: 4 additions & 4 deletions
```diff
@@ -1114,10 +1114,10 @@ object JdbcUtils extends Logging with SQLConfHelper {
    */
   def processIndexProperties(
       properties: util.Map[String, String],
-      catalogName: String): (String, Array[String]) = {
+      dialectName: String): (String, Array[String]) = {
     var indexType = ""
     val indexPropertyList: ArrayBuffer[String] = ArrayBuffer[String]()
-    val supportedIndexTypeList = getSupportedIndexTypeList(catalogName)
+    val supportedIndexTypeList = getSupportedIndexTypeList(dialectName)
 
     if (!properties.isEmpty) {
       properties.asScala.foreach { case (k, v) =>
@@ -1147,8 +1147,8 @@ object JdbcUtils extends Logging with SQLConfHelper {
     false
   }
 
-  def getSupportedIndexTypeList(catalogName: String): Array[String] = {
-    catalogName match {
+  def getSupportedIndexTypeList(dialectName: String): Array[String] = {
+    dialectName match {
       case "mysql" => Array("BTREE", "HASH")
       case "postgresql" => Array("BTREE", "HASH", "BRIN")
       case _ => Array.empty
```
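
As a quick illustration of the renamed helper, a sketch derived only from the match arms above (`JdbcUtils` is a Spark-internal object, so calling it directly is for illustration rather than supported API):

```scala
import org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils

// The argument names a SQL dialect ("mysql", "postgresql", ...), not a
// catalog, which is the motivation for the catalogName -> dialectName rename.
assert(JdbcUtils.getSupportedIndexTypeList("mysql").sameElements(Array("BTREE", "HASH")))
assert(JdbcUtils.getSupportedIndexTypeList("postgresql").sameElements(Array("BTREE", "HASH", "BRIN")))
assert(JdbcUtils.getSupportedIndexTypeList("h2").isEmpty) // any other dialect falls through to Array.empty
```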
