
Conversation

@sarutak sarutak commented Sep 23, 2025

What changes were proposed in this pull request?

This PR aims to fix one of the issues which block SPARK-48139.
In the problematic test `interrupt tag` in `SparkSessionE2ESuite`, four futures run on threads in a `ForkJoinPool` and try to interrupt through tags.
A thread in a `ForkJoinPool` can create a spare thread and make it available in the pool, so any two threads can end up in a parent-child relationship. This can happen when a thread performs a blocking operation; one example is `ArrayBlockingQueue.take`, which is called in a method provided by gRPC.

On the other hand, tags are implemented as an `InheritableThreadLocal`.
So, if the futures q1 and q4, or q2 and q3, run on parent and child threads, tags are inherited, which causes the flaky test failure.
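
To illustrate the mechanism, here is a minimal, self-contained Scala sketch (not Spark code; the names are hypothetical, and `tag` merely stands in for the client's tag storage) showing how an `InheritableThreadLocal` value set on one thread is copied into any thread constructed from it. A `ForkJoinPool` spare thread is constructed by the blocked worker itself, so it inherits that worker's values the same way.

```
// Minimal sketch: InheritableThreadLocal values are copied from the
// constructing (parent) thread into every newly constructed thread.
object TagInheritanceSketch {
  private val tag = new InheritableThreadLocal[String]

  def main(args: Array[String]): Unit = {
    tag.set("one")
    // The child thread never calls tag.set, yet it observes "one"
    // because the value was copied at construction time.
    val child = new Thread(() => println(s"child sees tag: ${tag.get}"))
    child.start()
    child.join() // prints: child sees tag: one
  }
}
```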

You can easily reproduce the issue by inserting a sleep into the problematic test as follows (don't forget to replace `ignore` with `test`).

```
   // TODO(SPARK-48139): Re-enable `SparkSessionE2ESuite.interrupt tag`
-  ignore("interrupt tag") {
+  test("interrupt tag") {
     val session = spark
     import session.implicits._

@@ -204,6 +204,7 @@ class SparkSessionE2ESuite extends ConnectFunSuite with RemoteSparkSession {
         spark.clearTags() // clear for the case of thread reuse by another Future
       }
     }(executionContext)
+    Thread.sleep(1000)
     val q4 = Future {
       assert(spark.getTags() == Set())
       spark.addTag("one")
```

And then, run the test.

```
$ build/sbt 'connect-client-jvm/testOnly org.apache.spark.sql.connect.SparkSessionE2ESuite -- -z "interrupt tag"'
```

Why are the changes needed?

For test stability.

Does this PR introduce any user-facing change?

No.

How was this patch tested?

Ran the problematic test with the sleep inserted as mentioned above, and it passed.

Was this patch authored or co-authored using generative AI tooling?

No.

sarutak commented Sep 23, 2025

Intentionally keeping the test as `ignore` because there is another cause of the flakiness.

@dongjoon-hyun dongjoon-hyun left a comment


It sounds reasonable to me as a partial improvement. Thank you, @sarutak .

@dongjoon-hyun

Feel free to merge since the CI result is irrelevant to this PR because the test case is still ignored.

@sarutak sarutak closed this in 0e42b95 Sep 23, 2025

sarutak commented Sep 23, 2025

Thank you @dongjoon-hyun @zhengruifeng @yaooqinn for the review!
Merged to master.

BTW, as far as I know, there is one last issue which blocks SPARK-48139, and a PR tries to resolve it.
@zhengruifeng I'd be happy if you are familiar with the area and could take a look at the PR.

dongjoon-hyun pushed a commit that referenced this pull request Sep 27, 2025
…SessionE2ESuite - interrupt tag` caused by the usage of `ForkJoinPool`

### What changes were proposed in this pull request?
This PR backports #52417 to `branch-4.0`.

This PR aims to fix one of the issues which block SPARK-48139.
In the problematic test `interrupt tag` in `SparkSessionE2ESuite`, four futures run on threads in `ForkJoinPool` and try to interrupt through tags.
A thread in a `ForkJoinPool` can create a spare thread and make it available in the pool, so any two threads can end up in a parent-child relationship. This can happen when a thread performs a blocking operation; one example is `ArrayBlockingQueue.take`, which is called in a method provided by [gRPC](https://github.com/grpc/grpc-java/blob/24085103b926559659ecd3941a3308479876f084/stub/src/main/java/io/grpc/stub/ClientCalls.java#L607).

On the other hand, tags are implemented as [InheritableThreadLocal](https://github.com/apache/spark/blob/13e70100426233e62fd9edf13e229f91f4941ff8/sql/connect/common/src/main/scala/org/apache/spark/sql/connect/client/SparkConnectClient.scala#L285).
So, if the futures q1 and q4, or q2 and q3, run on parent and child threads, tags are inherited, which causes the flaky test failure.

You can easily reproduce the issue by inserting a sleep into the problematic test as follows (don't forget to replace `ignore` with `test`).

```
   // TODO(SPARK-48139): Re-enable `SparkSessionE2ESuite.interrupt tag`
-  ignore("interrupt tag") {
+  test("interrupt tag") {
     val session = spark
     import session.implicits._

@@ -204,6 +204,7 @@ class SparkSessionE2ESuite extends ConnectFunSuite with RemoteSparkSession {
         spark.clearTags() // clear for the case of thread reuse by another Future
       }
     }(executionContext)
+    Thread.sleep(1000)
     val q4 = Future {
       assert(spark.getTags() == Set())
       spark.addTag("one")
```

And then, run the test.
```
$ build/sbt 'connect-client-jvm/testOnly org.apache.spark.sql.connect.SparkSessionE2ESuite -- -z "interrupt tag"'
```

### Why are the changes needed?
For test stability.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Ran the problematic test with the sleep inserted as mentioned above, and it passed.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #52476 from sarutak/fix-thread-pool-issue-4.0.

Authored-by: Kousuke Saruta <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
dongjoon-hyun pushed a commit that referenced this pull request Sep 27, 2025
…SessionE2ESuite - interrupt tag` caused by the usage of `ForkJoinPool`

### What changes were proposed in this pull request?
This PR backports #52417 to `branch-3.5`.

Unlike `master` and `branch-4.0`, SPARK-53673 doesn't seem to affect `branch-3.5` at this time because the implementation of `ClientCalls#waitForNext` in `gRPC 1.56.0`, which `branch-3.5` depends on, is different from the one in `gRPC 1.67.1`, which `master` and `branch-4.0` depend on.
More specifically, the test doesn't go through the path which calls `ArrayBlockingQueue#take` but goes through [this else block](https://github.com/grpc/grpc-java/blob/v1.56.0/stub/src/main/java/io/grpc/stub/ClientCalls.java#L634).
But I think it's better to backport #52417 to `branch-3.5` to prevent future changes from causing that issue.

### Why are the changes needed?
Just in case, to prevent future changes from causing that issue.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
1. Temporarily enable the test and insert a sleep into it as follows

```
   // TODO(SPARK-48139): Re-enable `SparkSessionE2ESuite.interrupt tag`
-  ignore("interrupt tag") {
+  test("interrupt tag") {
     val session = spark
     import session.implicits._

@@ -204,6 +204,7 @@ class SparkSessionE2ESuite extends ConnectFunSuite with RemoteSparkSession {
         spark.clearTags() // clear for the case of thread reuse by another Future
       }
     }(executionContext)
+    Thread.sleep(1000)
     val q4 = Future {
       assert(spark.getTags() == Set())
       spark.addTag("one")
```

2. Run the test and confirm it passes
```
$ build/sbt 'connect-client-jvm/testOnly org.apache.spark.sql.SparkSessionE2ESuite -- -z "interrupt tag"'
```

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #52477 from sarutak/fix-thread-pool-issue-3.5.

Authored-by: Kousuke Saruta <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
dongjoon-hyun pushed a commit that referenced this pull request Oct 23, 2025
…tionSuite.Cancellation APIs in SparkSession are isolated`

### What changes were proposed in this pull request?
This PR aims to re-enable `SparkSessionJobTaggingAndCancellationSuite.Cancellation APIs in SparkSession are isolated`.

#48736 disabled this test because it was flaky. In this test, futures ran on threads managed by a `ForkJoinPool`.
Each future invokes `SparkSession#addTag` and `SparkSession#getTag`, and tags are implemented using `InheritableThreadLocal`. So the root cause of this issue is the same as #52417.

But #48906 replaced the `ForkJoinPool` with `Executors.newFixedThreadPool(3)`, so I believe this issue no longer occurs.
In fact, the issue can be reproduced by replacing `Executors.newFixedThreadPool(3)` with `new ForkJoinPool(3)` and inserting a sleep as follows.

```
     // global ExecutionContext has only 2 threads in Apache Spark CI
     // create own thread pool for four Futures used in this test
-    val threadPool = Executors.newFixedThreadPool(3)
+    val threadPool = new ForkJoinPool(3)

...

+      Thread.sleep(1000)
       val jobB = Future {
         sessionB = globalSession.cloneSession()
         import globalSession.implicits._
```

Then, run the test as follows.
```
$ build/sbt 'sql/testOnly org.apache.spark.sql.SparkSessionJobTaggingAndCancellationSuite -- -z "Cancellation APIs in SparkSession are isolated"'
```

```
[info] - Cancellation APIs in SparkSession are isolated *** FAILED *** (2 seconds, 726 milliseconds)
[info]   ArrayBuffer({"spark.app.startTime"="1761192376305", "spark.rdd.scope"="{"id":"3","name":"Exchange"}", "spark.hadoop.fs.s3a.vectored.read.min.seek.size"="128K", "spark.hadoop.hadoop.caller.context.enabled"="true", "spark.memory.debugFill"="true", "spark.master.rest.enabled"="false", "spark.sql.warehouse.dir"="file:/Users/sarutak/oss/spark/sql/core/spark-warehouse", "spark.master"="local[2]", "spark.job.interruptOnCancel"="true", "spark.app.name"="test", "spark.driver.host"="192.168.1.109", "spark.app.id"="local-1761192376735", "spark.job.tags"="spark-session-e2dd839b-2170-43c9-a8c9-1c8a24fe583c,spark-session-8c09c25f-089c-41ee-add1-1de463658349-thread-6b832f9d-3a55-4d1f-b47d-418fc2ed05e4-one,spark-session-e2dd839b-2170-43c9-a8c9-1c8a24fe583c-execution-root-id-0,spark-session-e2dd839b-2170-43c9-a8c9-1c8a24fe583c-thread-a4b5b347-6e56-4416-b3a5-37a312bdfe34-one,spark-session-8c09c25f-089c-41ee-add1-1de463658349,spark-session-8c09c25f-089c-41ee-add1-1de463658349-thread-6b832f9d-3a55-4d1f-b47d-418fc2ed05e4-two", "spark.unsafe.exceptionOnMemoryLeak"="true", "spark.sql.execution.root.id"="0", "spark.ui.showConsoleProgress"="false", "spark.driver.extraJavaOptions"="-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-modules=jdk.incubator.vector --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false -Dio.netty.tryReflectionSetAccessible=true -Dio.netty.allocator.type=pooled -Dio.netty.handler.ssl.defaultEndpointVerificationAlgorithm=NONE --enable-native-access=ALL-UNNAMED", "spark.driver.port"="56972", "spark.testing"="true", "spark.hadoop.fs.s3a.vectored.read.max.merged.size"="2M", "spark.sql.execution.id"="1", "spark.rdd.scope.noOverride"="true", "spark.executor.id"="driver", "spark.port.maxRetries"="100", "spark.executor.extraJavaOptions"="-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-modules=jdk.incubator.vector --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false -Dio.netty.tryReflectionSetAccessible=true -Dio.netty.allocator.type=pooled 
-Dio.netty.handler.ssl.defaultEndpointVerificationAlgorithm=NONE --enable-native-access=ALL-UNNAMED", "spark.test.home"="/Users/sarutak/oss/spark", "spark.ui.enabled"="false"}, {"spark.app.startTime"="1761192376305", "spark.rdd.scope"="{"id":"5","name":"Exchange"}", "spark.hadoop.fs.s3a.vectored.read.min.seek.size"="128K", "spark.hadoop.hadoop.caller.context.enabled"="true", "spark.memory.debugFill"="true", "spark.master.rest.enabled"="false", "spark.sql.warehouse.dir"="file:/Users/sarutak/oss/spark/sql/core/spark-warehouse", "spark.master"="local[2]", "spark.job.interruptOnCancel"="true", "spark.app.name"="test", "spark.driver.host"="192.168.1.109", "spark.app.id"="local-1761192376735", "spark.job.tags"="spark-session-e2dd839b-2170-43c9-a8c9-1c8a24fe583c-execution-root-id-0,spark-session-e2dd839b-2170-43c9-a8c9-1c8a24fe583c-thread-a4b5b347-6e56-4416-b3a5-37a312bdfe34-one,spark-session-e2dd839b-2170-43c9-a8c9-1c8a24fe583c", "spark.unsafe.exceptionOnMemoryLeak"="true", "spark.sql.execution.root.id"="0", "spark.ui.showConsoleProgress"="false", "spark.driver.extraJavaOptions"="-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-modules=jdk.incubator.vector --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false -Dio.netty.tryReflectionSetAccessible=true -Dio.netty.allocator.type=pooled -Dio.netty.handler.ssl.defaultEndpointVerificationAlgorithm=NONE --enable-native-access=ALL-UNNAMED", "spark.driver.port"="56972", "spark.testing"="true", "spark.hadoop.fs.s3a.vectored.read.max.merged.size"="2M", "spark.sql.execution.id"="0", "spark.rdd.scope.noOverride"="true", "spark.executor.id"="driver", "spark.port.maxRetries"="100", "spark.executor.extraJavaOptions"="-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-modules=jdk.incubator.vector --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false -Dio.netty.tryReflectionSetAccessible=true -Dio.netty.allocator.type=pooled -Dio.netty.handler.ssl.defaultEndpointVerificationAlgorithm=NONE --enable-native-access=ALL-UNNAMED", "spark.test.home"="/Users/sarutak/oss/spark", 
"spark.ui.enabled"="false"}) had size 2 instead of expected size 1 (SparkSessionJobTaggingAndCancellationSuite.scala:229)
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:472)
[info]   at org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:471)
[info]   at org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1231)
[info]   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:1295)
[info]   at org.apache.spark.sql.SparkSessionJobTaggingAndCancellationSuite.$anonfun$new$13(SparkSessionJobTaggingAndCancellationSuite.scala:229)
[info]   at scala.collection.immutable.List.foreach(List.scala:323)
[info]   at org.apache.spark.sql.SparkSessionJobTaggingAndCancellationSuite.$anonfun$new$6(SparkSessionJobTaggingAndCancellationSuite.scala:226)
[info]   at org.scalatest.enablers.Timed$$anon$1.timeoutAfter(Timed.scala:127)
[info]   at org.scalatest.concurrent.TimeLimits$.failAfterImpl(TimeLimits.scala:282)
[info]   at org.scalatest.concurrent.TimeLimits.failAfter(TimeLimits.scala:231)
[info]   at org.scalatest.concurrent.TimeLimits.failAfter$(TimeLimits.scala:230)
[info]   at org.apache.spark.SparkFunSuite.failAfter(SparkFunSuite.scala:68)
[info]   at org.apache.spark.SparkFunSuite.$anonfun$test$2(SparkFunSuite.scala:154)
[info]   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
[info]   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
[info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:226)
[info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:226)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:224)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:236)
[info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:236)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:218)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:68)
[info]   at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
[info]   at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
[info]   at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:68)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:269)
[info]   at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
[info]   at scala.collection.immutable.List.foreach(List.scala:323)
[info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info]   at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
[info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:269)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:268)
[info]   at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1564)
[info]   at org.scalatest.Suite.run(Suite.scala:1114)
[info]   at org.scalatest.Suite.run$(Suite.scala:1096)
[info]   at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1564)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:273)
[info]   at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:273)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:272)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:68)
[info]   at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
[info]   at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
[info]   at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
[info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:68)
[info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:321)
[info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:517)
[info]   at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:414)
[info]   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
[info]   at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
[info]   at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
[info]   at java.base/java.lang.Thread.run(Thread.java:840)
```

On the other hand, if the sleep is inserted but `Executors.newFixedThreadPool(3)` is left as it is, this test always seems to pass.
So, we can now re-enable this test.
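
For reference, here is a minimal sketch (assumed names, not the suite's actual code) of the fixed-pool pattern the test now relies on. A fixed pool's worker threads are constructed by whichever thread submits the tasks, never spawned from another blocked worker, so one Future's thread does not become the parent of another's and the inheritable tags stay isolated.

```
import java.util.concurrent.Executors
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, ExecutionContext, Future}

object FixedPoolSketch {
  def main(args: Array[String]): Unit = {
    // Fixed pool: exactly 3 workers, no spare-thread compensation
    // like ForkJoinPool performs for blocked workers.
    val threadPool = Executors.newFixedThreadPool(3)
    implicit val ec: ExecutionContext = ExecutionContext.fromExecutor(threadPool)

    val jobs = (1 to 3).map { i =>
      Future(s"job $i ran on ${Thread.currentThread().getName}")
    }
    jobs.foreach(f => println(Await.result(f, Duration.Inf)))
    threadPool.shutdown()
  }
}
```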

### Why are the changes needed?
For better test coverage.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
The test always passes in my dev environment even with the sleep inserted as explained above.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #52704 from sarutak/SPARK-50205.

Authored-by: Kousuke Saruta <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
zifeif2 pushed a commit to zifeif2/spark that referenced this pull request Nov 14, 2025
…SessionE2ESuite - interrupt tag` caused by the usage of `ForkJoinPool`
huangxiaopingRD pushed a commit to huangxiaopingRD/spark that referenced this pull request Nov 25, 2025
…onE2ESuite - interrupt tag` caused by the usage of `ForkJoinPool`
huangxiaopingRD pushed a commit to huangxiaopingRD/spark that referenced this pull request Nov 25, 2025
…tionSuite.Cancellation APIs in SparkSession are isolated`