
Conversation

@WweiL (Contributor) commented Aug 2, 2023

What changes were proposed in this pull request?

This is a followup to #42116. It addresses the following issues:

  1. When `removeListener` is called on a listener, the Python process used to be left running; now it is also stopped.
  2. When `removeListener` is called multiple times on the same listener, subsequent calls are a no-op in non-Connect mode. Before this PR, Spark Connect instead threw an error, which didn't match the existing behavior; this PR fixes that.
  3. Set the socket timeout to `None` (infinite) for `foreachBatch_worker` and `listener_worker`, because there can be a long gap between microbatches. Without this, the socket times out and the worker can't process new data, as in the traceback below:
```
scala> Streaming query listener worker is starting with url sc://localhost:15002/;user_id=wei.liu and sessionId 886191f0-2b64-4c44-b067-de511f04b42d.
Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/wei.liu/oss-spark/python/lib/pyspark.zip/pyspark/sql/connect/streaming/worker/listener_worker.py", line 95, in <module>
  File "/home/wei.liu/oss-spark/python/lib/pyspark.zip/pyspark/sql/connect/streaming/worker/listener_worker.py", line 82, in main
  File "/home/wei.liu/oss-spark/python/lib/pyspark.zip/pyspark/serializers.py", line 557, in loads
  File "/home/wei.liu/oss-spark/python/lib/pyspark.zip/pyspark/serializers.py", line 594, in read_int
  File "/usr/lib/python3.9/socket.py", line 704, in readinto
    return self._sock.recv_into(b)
socket.timeout: timed out
```
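The fix for item 3 can be illustrated with a small, self-contained sketch (not Spark's actual worker code): a Python socket with `settimeout(None)` blocks indefinitely on reads instead of raising `socket.timeout`, which is what a worker needs while waiting an arbitrarily long time for the next microbatch.

```python
import socket

# Minimal sketch, assuming a local socket pair standing in for the
# driver <-> worker connection. settimeout(None) means "block forever",
# so a long-idle read no longer raises socket.timeout.
a, b = socket.socketpair()
b.settimeout(None)              # what the PR does for the streaming workers
assert b.gettimeout() is None   # None == infinite timeout

# Simulate the driver sending a length-prefixed int, like read_int expects.
a.sendall(b"\x00\x00\x00\x2a")
value = int.from_bytes(b.recv(4), "big")  # 42, read completes without timing out
```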

Why are the changes needed?

Necessary improvements

Does this PR introduce any user-facing change?

No

How was this patch tested?

Manual test + unit test

@WweiL (Contributor, Author) commented Aug 2, 2023

@bogao007 @HyukjinKwon Can you take a look? Thanks! This needs to go to 3.5 as well.

@bogao007 (Contributor) left a comment

LGTM with a minor comment

```scala
(dataOut, dataIn)
}

def stop(): Unit = {
```

Minor: please add documentation since this is a public function

@WweiL (Contributor, Author) commented Aug 3, 2023

The test failure isn't related; I believe we can merge this. Before the doc change it was all green.

@WweiL (Contributor, Author) commented Aug 3, 2023

@ueshin since this changes the way the Python worker is created, I was wondering if you could also take a look? Thanks!

@ueshin (Member) left a comment

Otherwise, LGTM.


```scala
envVars.put("SPARK_AUTH_SOCKET_TIMEOUT", authSocketTimeout.toString)
envVars.put("SPARK_BUFFER_SIZE", bufferSize.toString)
conf.set(PYTHON_USE_DAEMON, false)
```
@ueshin (Member) commented Aug 3, 2023

This is not updated in this PR, but should we set this back to the original value after creating the Python worker?
Since the conf is visible from other parts of the driver, it could affect their behavior.

It can be done in a separate PR.
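The suggested save-and-restore pattern can be sketched as follows. This is a hypothetical Python illustration (the real code is Scala and uses the `PYTHON_USE_DAEMON` conf entry); the dict key and `spawn` callback are made-up names:

```python
# Hypothetical sketch: temporarily override a conf value while creating the
# worker, then restore the original value even if creation fails.
def create_worker(conf: dict, spawn):
    key = "python.use.daemon"          # hypothetical key name
    original = conf.get(key)           # remember the previous value (may be absent)
    conf[key] = False                  # override only for worker creation
    try:
        return spawn()
    finally:
        # Put the conf back exactly as it was, so other driver code is unaffected.
        if original is None:
            conf.pop(key, None)
        else:
            conf[key] = original
```

A `try`/`finally` (or Scala's equivalent) guarantees restoration even when `spawn()` throws, which is what makes the override invisible to the rest of the driver.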


Btw, do we need to change this at all? It might be simpler to keep it unchanged.

@HyukjinKwon (Member) commented Aug 4, 2023

Merged to master.

@HyukjinKwon (Member) commented
@WweiL it has a conflict with branch-3.5. Mind resolving it and creating a PR, please?

ueshin pushed a commit that referenced this pull request Aug 5, 2023
…r creating streaming python processes

### What changes were proposed in this pull request?

Followup of this comment: #42283 (comment)
Change back the spark conf after creating streaming python process.

### Why are the changes needed?

Bug fix

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Config only change

Closes #42341 from WweiL/SPARK-44433-followup-USEDAEMON.

Authored-by: Wei Liu <[email protected]>
Signed-off-by: Takuya UESHIN <[email protected]>
ueshin pushed a commit that referenced this pull request Aug 5, 2023
…process with removeListener and improvements

### Master Branch PR: #42283

Closes #42340 from WweiL/SPARK-44433-listener-followup-3.5.

Authored-by: Wei Liu <[email protected]>
Signed-off-by: Takuya UESHIN <[email protected]>
```python
self.spark.streams.removeListener(test_listener)

# Remove again to verify this won't throw any error
self.spark.streams.removeListener(test_listener)
```

How does this test ensure the listener worker is removed?
Another PR, #42385, broke the stop() method, but that didn't cause any test failure.
(I added a comment about the breaking change: https://github.com/apache/spark/pull/42385/files#r1295429496)

@WweiL (Contributor, Author) commented

That test is not meant to ensure the worker is removed; it ensures that no error is thrown when removeListener is called twice on the same listener.
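The no-op behavior being tested can be sketched with a toy listener bus (hypothetical, not Spark's actual implementation): removal checks membership first, so a second call on the same listener returns silently instead of raising.

```python
# Hypothetical sketch of idempotent listener removal, matching the
# non-Connect behavior this PR aligns Connect with.
class ListenerBus:
    def __init__(self):
        self._listeners = []

    def add(self, listener):
        self._listeners.append(listener)

    def remove(self, listener):
        if listener not in self._listeners:
            return  # already removed: silent no-op, don't raise
        self._listeners.remove(listener)
        # ...in the real fix, this is also where the associated
        # Python worker process would be stopped...
```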
