@xi-db (Contributor) commented Nov 8, 2025

What changes were proposed in this pull request?

(This is a backport PR containing #52496 and the test fix #52941.)

In the previous PR #52271 of Spark Connect ArrowBatch Result Chunking, both the server-side and PySpark client changes were implemented.

In this PR, the corresponding Scala client changes are implemented, so large Arrow rows are now supported by the Scala client as well.

To reproduce the issue this PR solves, run this code with the Spark Connect Scala client:

```
val res = spark.sql("select repeat('a', 1024*1024*300)").collect()
println(res(0).getString(0).length)
```

It fails with a `RESOURCE_EXHAUSTED` error and the message `gRPC message exceeds maximum size 134217728: 314573320`, because the server tries to send a single ExecutePlanResponse of ~300 MB, which exceeds the 128 MiB (134217728-byte) gRPC message size limit.

With the improvement introduced by the PR, the above code runs successfully and prints the expected result.
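
At a high level, the fix chunks a large ArrowBatch result across multiple smaller ExecutePlanResponse messages, and the client reassembles the chunks before handing the bytes to the Arrow reader. The snippet below is only a minimal sketch of that reassembly idea; the `ArrowBatchChunk` case class and its fields (`startOffset`, `isLastChunkInBatch`) are hypothetical stand-ins for the actual protocol fields introduced in #52271/#52496.

```
import java.io.ByteArrayOutputStream

// Hypothetical, simplified view of one ArrowBatch chunk carried in an
// ExecutePlanResponse; the real field names are defined by the Spark Connect protocol.
case class ArrowBatchChunk(
    startOffset: Long,
    data: Array[Byte],
    isLastChunkInBatch: Boolean)

object ChunkReassembler {
  // Concatenates the chunks of one logical Arrow batch, in arrival order, into a
  // single byte array that the Arrow IPC reader can consume as before.
  def reassemble(chunks: Seq[ArrowBatchChunk]): Array[Byte] = {
    require(chunks.nonEmpty && chunks.last.isLastChunkInBatch, "incomplete Arrow batch")
    val out = new ByteArrayOutputStream()
    var expectedOffset = 0L
    chunks.foreach { chunk =>
      // Chunks of a batch must arrive contiguously and in order.
      require(chunk.startOffset == expectedOffset,
        s"unexpected chunk offset ${chunk.startOffset}, expected $expectedOffset")
      out.write(chunk.data, 0, chunk.data.length)
      expectedOffset += chunk.data.length
    }
    out.toByteArray
  }
}
```

Since each chunk stays below the gRPC message limit, a ~300 MB row can cross the wire as a sequence of messages and still be materialized as a single row on the client.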

Why are the changes needed?

It improves Spark Connect stability when returning large rows.

Does this PR introduce any user-facing change?

No.

How was this patch tested?

New tests.

Was this patch authored or co-authored using generative AI tooling?

No.

xi-db added 2 commits November 8, 2025 09:54
…king - Scala Client

### What changes were proposed in this pull request?

In the previous PR apache#52271 of Spark Connect ArrowBatch Result Chunking, both the server-side and PySpark client changes were implemented.

In this PR, the corresponding Scala client changes are implemented, so large Arrow rows are now supported on the Scala client as well.

To reproduce the issue this PR solves, run this code with the Spark Connect Scala client:

```
val res = spark.sql("select repeat('a', 1024*1024*300)").collect()
println(res(0).getString(0).length)
```

It fails with a `RESOURCE_EXHAUSTED` error and the message `gRPC message exceeds maximum size 134217728: 314573320`, because the server tries to send a single ExecutePlanResponse of ~300 MB to the client.

With the improvement introduced by the PR, the above code runs successfully and prints the expected result.

### Why are the changes needed?

It improves Spark Connect stability when returning large rows.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

New tests.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes apache#52496 from xi-db/arrow-batch-chuking-scala-client.

Authored-by: Xi Lyu <[email protected]>
Signed-off-by: Herman van Hovell <[email protected]>
(cherry picked from commit daa83fc)
…ct testing

### What changes were proposed in this pull request?

In the previous PR apache#52496, tests were implemented using `io.grpc.ClientInterceptor` to verify gRPC messages. However, they failed the Maven tests ([comment](apache#52496 (comment))) because the related gRPC classes are missing from the testing SparkConnectService in Maven builds.

In this PR, the gRPC classes needed for testing are added as test artifacts, like the existing classes from `scalatest` and `spark-catalyst`, so that `io.grpc` classes are also available in tests.
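
For the gRPC-level checks mentioned above, grpc-java's `io.grpc.ClientInterceptor` lets a test observe every response message the client receives. The sketch below only illustrates that technique with stock `io.grpc` APIs; the class name, structure, and assertions of the actual test helper in apache#52496 may differ.

```
import io.grpc.{CallOptions, Channel, ClientCall, ClientInterceptor, ForwardingClientCall, ForwardingClientCallListener, Metadata, MethodDescriptor}
import scala.collection.mutable

// Records the serialized size of every response message flowing back to the client,
// so a test can assert that no single gRPC message exceeds the configured limit.
// Test-only illustration: not thread-safe.
class ResponseSizeRecordingInterceptor extends ClientInterceptor {
  val observedSizes = mutable.ArrayBuffer.empty[Int]

  override def interceptCall[ReqT, RespT](
      method: MethodDescriptor[ReqT, RespT],
      callOptions: CallOptions,
      next: Channel): ClientCall[ReqT, RespT] = {
    new ForwardingClientCall.SimpleForwardingClientCall[ReqT, RespT](
        next.newCall(method, callOptions)) {
      override def start(listener: ClientCall.Listener[RespT], headers: Metadata): Unit = {
        val sizeRecordingListener =
          new ForwardingClientCallListener.SimpleForwardingClientCallListener[RespT](listener) {
            override def onMessage(message: RespT): Unit = {
              // Re-serialize the response via the method's marshaller to measure its size.
              observedSizes += method.streamResponse(message).readAllBytes().length
              super.onMessage(message)
            }
          }
        super.start(sizeRecordingListener, headers)
      }
    }
  }
}
```

Such an interceptor can be attached to the test channel (for example via `ManagedChannelBuilder.intercept(...)`), after which the test can assert that every recorded size stays below the configured maximum gRPC message size.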

### Why are the changes needed?

To fix the broken daily Maven tests ([comment](apache#52496 (comment))).

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Maven tests passed with the following commands:

```
$ build/mvn -Phive clean install -DskipTests
$ build/mvn -Phive -pl sql/connect/client/jvm test -Dtest=none -DwildcardSuites=org.apache.spark.sql.connect.ClientE2ETestSuite
```

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes apache#52941 from xi-db/arrow-batch-chunking-scala-client-fix-maven.

Authored-by: Xi Lyu <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
(cherry picked from commit 1f7bbeb)
@xi-db (Contributor, Author) commented Nov 8, 2025

Hi @dongjoon-hyun, here's the backport PR for branch-4.1 containing the reverted PR and the follow-up test fix. Thanks! (context: #52941 (comment))

@dongjoon-hyun (Member) left a comment

+1, LGTM. Thank you, @xi-db.

Merged to branch-4.1.

dongjoon-hyun pushed a commit that referenced this pull request Nov 8, 2025
… Chunking - Scala Client

### What changes were proposed in this pull request?

(This is a backport PR containing #52496 and the test fix #52941.)

In the previous PR #52271 of Spark Connect ArrowBatch Result Chunking, both the server-side and PySpark client changes were implemented.

In this PR, the corresponding Scala client changes are implemented, so large Arrow rows are now supported on the Scala client as well.

To reproduce the issue this PR solves, run this code with the Spark Connect Scala client:

```
val res = spark.sql("select repeat('a', 1024*1024*300)").collect()
println(res(0).getString(0).length)
```

It fails with a `RESOURCE_EXHAUSTED` error and the message `gRPC message exceeds maximum size 134217728: 314573320`, because the server tries to send a single ExecutePlanResponse of ~300 MB to the client.

With the improvement introduced by the PR, the above code runs successfully and prints the expected result.

### Why are the changes needed?

It improves Spark Connect stability when returning large rows.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

New tests.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #52953 from xi-db/[email protected].

Authored-by: Xi Lyu <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>