[SPARK-44433][PYTHON][CONNECT][SS][FOLLOWUP] Terminate listener process with removeListener and improvements
#42283
Conversation
@bogao007 @HyukjinKwon Can you guys take a look? Thanks! This needs to go to 3.5 also.
bogao007 left a comment
LGTM with a minor comment
```scala
    (dataOut, dataIn)
  }

  def stop(): Unit = {
```
Minor: please add documentation since this is a public function
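For illustration, a minimal sketch of what the requested doc comment could look like, assuming the method lives on the streaming runner class touched by this PR (the class name and wording are illustrative, and the body is elided):

```scala
class StreamingPythonRunner /* constructor params elided */ {
  /**
   * Stops the Python worker process started by this runner and closes the
   * sockets associated with it. (Illustrative wording only; adjust to the
   * method's actual semantics.)
   */
  def stop(): Unit = {
    // existing implementation unchanged
  }
}
```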
The test failure isn't related; I believe we can merge this. Before the doc change it was all green.
@ueshin since this changes the method to create the Python worker, I was wondering if you could also take a look? Thanks!
ueshin left a comment
Otherwise, LGTM.
```scala
  envVars.put("SPARK_AUTH_SOCKET_TIMEOUT", authSocketTimeout.toString)
  envVars.put("SPARK_BUFFER_SIZE", bufferSize.toString)
  conf.set(PYTHON_USE_DAEMON, false)
```
This is not updated in this PR, but should we set this back to the original value after creating the Python worker?
As the conf is visible from other parts of the Driver, it could affect the behavior.
It can be done in a separate PR.
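A hypothetical sketch of the save-and-restore pattern being suggested here; the helper object and method names are invented, this only compiles inside Spark's own source tree (the config entries are internal), and the actual fix landed later in #42341:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.internal.config.Python.PYTHON_USE_DAEMON

object StreamingWorkerConfHelper {
  // Illustrative only: remember the driver-wide setting, disable the daemon
  // while the streaming Python worker is created, then restore it so other
  // code reading the conf on the driver is unaffected.
  def withDaemonDisabled[T](conf: SparkConf)(createWorker: => T): T = {
    val original = conf.get(PYTHON_USE_DAEMON)
    conf.set(PYTHON_USE_DAEMON, false)
    try createWorker
    finally conf.set(PYTHON_USE_DAEMON, original)
  }
}
```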
Btw, do we need to change this at all? It might be simpler to keep this unchanged.
Merged to master.
@WweiL it has a conflict with branch-3.5. Mind resolving it and creating a PR, please?
…r creating streaming python processes

### What changes were proposed in this pull request?
Followup of this comment: #42283 (comment)
Change back the spark conf after creating the streaming python process.

### Why are the changes needed?
Bug fix

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Config-only change

Closes #42341 from WweiL/SPARK-44433-followup-USEDAEMON.

Authored-by: Wei Liu <[email protected]>
Signed-off-by: Takuya UESHIN <[email protected]>
…process with removeListener and improvements

### Master Branch PR: #42283

### What changes were proposed in this pull request?
This is a followup to #42116. It addresses the following issues:
1. When `removeListener` is called on a listener, the Python process, which was previously left running, now also gets stopped.
2. When `removeListener` is called multiple times on the same listener, subsequent calls are a no-op in non-Connect mode. But before this PR, Connect actually threw an error, which doesn't align with the existing behavior; this PR addresses that.
3. Set the socket timeout to None (infinity) for `foreachBatch_worker` and `listener_worker`, because there could be a long time between microbatches. Without this, the socket times out and won't be able to process new data:

```
scala> Streaming query listener worker is starting with url sc://localhost:15002/;user_id=wei.liu and sessionId 886191f0-2b64-4c44-b067-de511f04b42d.
Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/wei.liu/oss-spark/python/lib/pyspark.zip/pyspark/sql/connect/streaming/worker/listener_worker.py", line 95, in <module>
  File "/home/wei.liu/oss-spark/python/lib/pyspark.zip/pyspark/sql/connect/streaming/worker/listener_worker.py", line 82, in main
  File "/home/wei.liu/oss-spark/python/lib/pyspark.zip/pyspark/serializers.py", line 557, in loads
  File "/home/wei.liu/oss-spark/python/lib/pyspark.zip/pyspark/serializers.py", line 594, in read_int
  File "/usr/lib/python3.9/socket.py", line 704, in readinto
    return self._sock.recv_into(b)
socket.timeout: timed out
```

### Why are the changes needed?
Necessary improvements

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Manual test + unit test

Closes #42340 from WweiL/SPARK-44433-listener-followup-3.5.

Authored-by: Wei Liu <[email protected]>
Signed-off-by: Takuya UESHIN <[email protected]>
```python
self.spark.streams.removeListener(test_listener)

# Remove again to verify this won't throw any error
self.spark.streams.removeListener(test_listener)
```
How does this test ensure the listener worker is removed?
Another PR, #42385, broke the stop() method, but it didn't cause any failure.
(I added a comment about the breaking change: https://github.com/apache/spark/pull/42385/files#r1295429496)
That test is not to ensure the worker is removed; it is to ensure no error is thrown when removeListener is called twice on the same listener.
### What changes were proposed in this pull request?
This is a followup to #42116. It addresses the following issues:
1. When `removeListener` is called on a listener, the Python process, which was previously left running, now also gets stopped (see the sketch after this list).
2. When `removeListener` is called multiple times on the same listener, subsequent calls are a no-op in non-Connect mode. But before this PR, Connect actually threw an error, which doesn't align with the existing behavior; this PR addresses that.
3. Set the socket timeout to None (infinity) for `foreachBatch_worker` and `listener_worker`, because there could be a long time between microbatches. Without this, the socket times out and won't be able to process new data.
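The first two points can be pictured with a small sketch; `ListenerRegistry` and `WorkerHandle` are invented names for illustration, not Spark's actual internals:

```scala
import java.util.concurrent.ConcurrentHashMap

// Illustrative model of the fixed behavior: removing a listener stops its
// Python worker process, and removing it again is a silent no-op instead
// of an error.
object ListenerRegistry {
  trait WorkerHandle { def stop(): Unit }

  private val handles = new ConcurrentHashMap[String, WorkerHandle]()

  def add(id: String, handle: WorkerHandle): Unit = handles.put(id, handle)

  def remove(id: String): Unit =
    // ConcurrentHashMap.remove returns null when the id is absent, so a
    // second call finds nothing to stop and simply returns.
    Option(handles.remove(id)).foreach(_.stop())
}
```

The design point is that lookup and removal happen in one atomic `remove` call, so repeated or concurrent removals cannot stop the same worker twice.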
### Why are the changes needed?

Necessary improvements
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Manual test + unit test