[SPARK-44433][3.5][PYTHON][CONNECT][SS][FOLLOWUP] Terminate listener process with removeListener and improvements #42340
Conversation
```scala
  pythonWorkerFactory = Some(workerFactory)
} finally {
  conf.set(PYTHON_USE_DAEMON, prevConf)
}
```
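The snippet above restores the previous `PYTHON_USE_DAEMON` value in a `finally` block so the temporary override cannot leak, even if worker creation fails. A minimal Python sketch of the same save/override/restore pattern (the `conf` dict and function name here are illustrative, not Spark's actual `SparkConf` API):

```python
# Hypothetical flat config map standing in for SparkConf.
conf = {"spark.python.use.daemon": True}

def with_daemon_disabled(conf, action):
    """Temporarily force the worker to run without the daemon,
    restoring the previous setting even if `action` raises."""
    prev = conf["spark.python.use.daemon"]
    conf["spark.python.use.daemon"] = False
    try:
        return action()
    finally:
        # Restore the caller's setting regardless of success or failure.
        conf["spark.python.use.daemon"] = prev

result = with_daemon_disabled(conf, lambda: "worker started")
```

The key point is that the restore lives in `finally`, mirroring the Scala code, so a raised exception inside the action cannot leave the daemon flag disabled.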
This and the `stop()` method differ from the master branch, since the `createPythonWorker` method didn't support custom modules at that time.
cc @ueshin to double check this
core/src/main/scala/org/apache/spark/api/python/StreamingPythonRunner.scala
ueshin left a comment:
Otherwise, LGTM, pending tests.
core/src/main/scala/org/apache/spark/api/python/StreamingPythonRunner.scala
Thanks! Merging to 3.5.
…process with removeListener and improvements

### Master Branch PR: #42283

### What changes were proposed in this pull request?

This is a followup to #42116. It addresses the following issues:

1. When `removeListener` is called on a listener, the associated Python process is now stopped as well; previously it was left running.
2. When `removeListener` is called multiple times on the same listener, subsequent calls are a no-op in non-Connect mode. Before this PR, Connect mode instead threw an error; this PR aligns the two behaviors.
3. The socket timeout is set to `None` (infinite) for `foreachBatch_worker` and `listener_worker`, because a long time can pass between microbatches. Without this, the socket times out and can no longer process new data:

```
scala> Streaming query listener worker is starting with url sc://localhost:15002/;user_id=wei.liu and sessionId 886191f0-2b64-4c44-b067-de511f04b42d.
Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/wei.liu/oss-spark/python/lib/pyspark.zip/pyspark/sql/connect/streaming/worker/listener_worker.py", line 95, in <module>
  File "/home/wei.liu/oss-spark/python/lib/pyspark.zip/pyspark/sql/connect/streaming/worker/listener_worker.py", line 82, in main
  File "/home/wei.liu/oss-spark/python/lib/pyspark.zip/pyspark/serializers.py", line 557, in loads
  File "/home/wei.liu/oss-spark/python/lib/pyspark.zip/pyspark/serializers.py", line 594, in read_int
  File "/usr/lib/python3.9/socket.py", line 704, in readinto
    return self._sock.recv_into(b)
socket.timeout: timed out
```

### Why are the changes needed?

Necessary improvements.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Manual test + unit test.

Closes #42340 from WweiL/SPARK-44433-listener-followup-3.5.

Authored-by: Wei Liu <[email protected]>
Signed-off-by: Takuya UESHIN <[email protected]>
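Point 2 above (repeated `removeListener` calls becoming a no-op) can be sketched as a registry that stops the worker on the first removal and silently ignores later calls. The class and method names below are illustrative only, not the actual Spark Connect server code:

```python
class ListenerRegistry:
    """Illustrative sketch: the first removal stops the worker
    process; removing the same listener again is a silent no-op."""

    def __init__(self):
        self._workers = {}  # listener id -> worker handle

    def add(self, listener_id, worker):
        self._workers[listener_id] = worker

    def remove(self, listener_id):
        # pop with a default never raises, which is what makes
        # repeated removals safe instead of an error.
        worker = self._workers.pop(listener_id, None)
        if worker is None:
            return False   # already removed: no-op
        worker.stop()      # terminate the listener process
        return True

class FakeWorker:
    """Stand-in for the Python worker process handle."""
    def __init__(self):
        self.stopped = False
    def stop(self):
        self.stopped = True
```

The design point is that removal is idempotent: callers in Connect mode get the same semantics as the existing non-Connect `removeListener`.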
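Point 3 in the description corresponds to Python's `socket.settimeout(None)`, which puts a socket in blocking mode with no deadline, so a read can wait arbitrarily long between microbatches without raising `socket.timeout`. The snippet below is a generic illustration using a local socket pair, not the worker's actual connection setup:

```python
import socket

# A connected pair stands in for the worker's control socket.
server, client = socket.socketpair()

# A finite timeout would raise socket.timeout if no microbatch
# arrived within the window; None means "block indefinitely".
client.settimeout(None)
assert client.gettimeout() is None

server.sendall(b"event")
data = client.recv(5)  # blocks until data arrives, never times out

server.close()
client.close()
```

This is why the fix matters for streaming workers: with any finite timeout, a quiet stream would eventually kill the worker's read loop with the `socket.timeout: timed out` traceback shown above.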