[SPARK-33277][PYSPARK][SQL] Use ContextAwareIterator to stop consuming after the task ends. #30242
@@ -18,6 +18,7 @@
 package org.apache.spark.sql.execution.python

 import java.io.File
+import java.util.concurrent.atomic.{AtomicBoolean, AtomicReference}

 import scala.collection.mutable.ArrayBuffer
@@ -89,6 +90,7 @@ trait EvalPythonExec extends UnaryExecNode {
     inputRDD.mapPartitions { iter =>
       val context = TaskContext.get()
+      val contextAwareIterator = new ContextAwareIterator(iter, context)

       // The queue used to buffer input rows so we can drain it to
       // combine input with output from Python.
@@ -120,7 +122,7 @@ trait EvalPythonExec extends UnaryExecNode {
       }.toSeq)

       // Add rows to queue to join later with the result.
-      val projectedRowIter = iter.map { inputRow =>
+      val projectedRowIter = contextAwareIterator.map { inputRow =>
         queue.add(inputRow.asInstanceOf[UnsafeRow])
         projection(inputRow)
       }
@@ -137,3 +139,53 @@ trait EvalPythonExec extends UnaryExecNode {
     }
   }
 }
+
+/**
+ * A TaskContext aware iterator.
+ *
+ * As the Python evaluation consumes the parent iterator in a separate thread,
+ * it could consume more data from the parent even after the task ends and the parent is closed.
+ * Thus, we should use ContextAwareIterator to stop consuming after the task ends.
+ */
+class ContextAwareIterator[IN](iter: Iterator[IN], context: TaskContext) extends Iterator[IN] {
+
+  private val thread = new AtomicReference[Thread]()
+
+  if (iter.hasNext) {
+    val failed = new AtomicBoolean(false)
+
+    context.addTaskFailureListener { (_, _) =>
+      failed.set(true)
+    }
+
+    context.addTaskCompletionListener[Unit] { _ =>
Member: This assumes the task completion listener to stop …

Member (author): The task completion listener will wait for the …
+      var thread = this.thread.get()
+
+      // Wait for a while since the writer thread might not reach to consuming the iterator yet.
+      while (thread == null && !failed.get()) {
+        // Use `context.wait()` instead of `Thread.sleep()` here since the task completion
+        // listener works under `synchronized(context)`. We might need to consider improving
+        // this in the future: it's a bad idea to hold an implicit lock when calling a user's
+        // listener because it's pretty easy to cause a surprising deadlock.
Contributor: This is a bit scary. Is there a better way?

Member: Maybe we can fix this first. Then this listener doesn't need to rely on an implicit lock.

Member (author): I see. Let me change the strategy here.
+        context.wait(10)
Member: Did you mean …?

Member (author): I do mean …

Member: I didn't realize it. It's better not to rely on this in a listener. This is something we should consider improving in the future. It's a bad idea to hold an implicit lock when calling a user's listener because it's pretty easy to cause a surprising deadlock.
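The point the reviewers are debating can be illustrated outside Spark. Below is a minimal Python sketch (the names `cond`, `try_lock`, and `acquired_during_wait` are illustrative, not from the patch): a condition-variable wait with a timeout releases the underlying lock while sleeping, which is why the listener calls `context.wait(10)` instead of `Thread.sleep(10)` while the completion machinery holds `synchronized(context)`.

```python
import threading

lock = threading.Lock()
cond = threading.Condition(lock)
acquired_during_wait = []

def try_lock():
    # Runs while the main thread is inside cond.wait(): the lock is free.
    ok = lock.acquire(timeout=5.0)
    acquired_during_wait.append(ok)
    if ok:
        lock.release()

with cond:  # acquires `lock`
    t = threading.Thread(target=try_lock)
    t.start()
    # Like `context.wait(10)`: releases `lock` while waiting, so the other
    # thread can grab it. A plain sleep here would keep the lock held.
    cond.wait(1.0)
t.join()
print(acquired_during_wait)  # [True]
```

Had the main thread used `time.sleep(1.0)` inside the `with cond:` block instead, `try_lock` would have blocked until the lock was released at the end of the block.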
+
+        thread = this.thread.get()
+      }
+
+      if (thread != null && thread != Thread.currentThread()) {
+        // Wait until the writer thread ends.
+        while (thread.isAlive) {
+          // Use `context.wait()` instead of `Thread.sleep()` for the same reason as above.
+          context.wait(10)
+        }
+      }
+    }
+  }
+
+  override def hasNext: Boolean = {
+    thread.set(Thread.currentThread())
+    !context.isCompleted() && !context.isInterrupted() && iter.hasNext
+  }
+
+  override def next(): IN = iter.next()
+}
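The core behavior of the class above can be sketched without Spark. This is a hedged Python analogue (the names `ContextAwareIter` and `completed` are illustrative; a `threading.Event` stands in for `TaskContext.isCompleted()`): once the "task" is marked complete, the wrapper stops pulling from the parent iterator, mirroring the guard in `hasNext`.

```python
import threading

class ContextAwareIter:
    """Stop pulling from the parent iterator once the 'task' has ended,
    mirroring the hasNext() guard in the Scala class above."""

    def __init__(self, it, completed: threading.Event):
        self._it = it
        self._completed = completed

    def __iter__(self):
        return self

    def __next__(self):
        # Like `!context.isCompleted() && iter.hasNext` in the Scala version.
        if self._completed.is_set():
            raise StopIteration
        return next(self._it)

completed = threading.Event()
wrapped = ContextAwareIter(iter(range(10)), completed)

consumed = [next(wrapped), next(wrapped)]  # pulls 0 and 1 from the parent
completed.set()                            # simulate the task ending
remaining = list(wrapped)                  # no further elements are pulled

print(consumed, remaining)  # [0, 1] []
```

In Spark the consumer is the Python-evaluation writer thread; here the wrapper simply refuses to touch the parent after completion, which is the failure mode the patch closes (a thread reading from an already-closed parent).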
Member: Will this change the thread that `iter.hasNext` is running on? We can add the listeners without checking it.

Member (author): Actually, this is to make sure the upstream iterator is initialized. The upstream iterator must be initialized earlier, as it might register another completion listener, and that listener should run later than this one.
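The author's point about forcing upstream initialization can be seen with a plain Python generator (the `listeners` list is a stand-in for the task's completion-listener registry, not a Spark API). A generator's body, like a lazily initialized upstream iterator, runs only when the first element is requested, so the constructor's `if (iter.hasNext)` probe is what guarantees the upstream's listener registers first; assuming listeners then run in reverse registration order, as the discussion implies, the upstream's listener runs after this one.

```python
listeners = []

def upstream():
    # Setup runs only when the first element is requested, like a lazily
    # initialized upstream iterator that registers its own listener.
    listeners.append("upstream")
    yield from range(3)

it = upstream()
assert listeners == []  # generator body has not started yet

# Probing the iterator (the `if (iter.hasNext)` in the constructor) forces
# the upstream setup, so its listener is registered before this one.
next(it)
listeners.append("context-aware")

print(listeners)  # ['upstream', 'context-aware']
```

Without the probe, the wrapper's listener could end up registered first and, under reverse-order execution, run after an upstream it was supposed to outlive.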