Implement and use TritonServerManager class #488
Conversation
Signed-off-by: Rishi Chandra <[email protected]>
    return None
return f"grpc://localhost:{self._ports[1]}"

def _find_ports(self, start_port: int = 7000) -> List[int]:
Thinking about this again, should port finding be integrated into the server-start Spark job, which would then return the ports and PIDs? Otherwise, it's likelier that the ports could be taken by another process in the meantime.
Done
examples/ML+DL-Examples/Spark-DL/dl_inference/pytriton_utils.py
👍
Consolidate the util functions into a server manager class to simplify usage. (Note that the notebooks were rerun, but only the Triton utility invocations changed.)
Also, on Dataproc the utils file needs to be copied to the driver's root directory instead of the same directory as the notebooks, since sc.addPyFile on Dataproc only accepts absolute paths from root.
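For illustration, a minimal sketch of what the consolidated manager's surface could look like. The internals here are assumptions inferred from the diff snippet earlier in the conversation (the `self._ports[1]` gRPC lookup), not the actual `pytriton_utils.py` implementation.

```python
from typing import Dict, List, Optional


class TritonServerManager:
    """Hypothetical sketch: one object that owns server lifecycle state
    (PIDs, ports) instead of scattering it across free-standing utils."""

    def __init__(self, model_name: str):
        self.model_name = model_name
        self._ports: Optional[List[int]] = None  # assumed [http, grpc, metrics]
        self._pids: Dict[str, int] = {}          # hostname -> server PID

    def grpc_url(self) -> Optional[str]:
        # Mirrors the snippet above: no URL until the servers are started.
        if self._ports is None:
            return None
        return f"grpc://localhost:{self._ports[1]}"
```

A caller would then construct one manager per model, start the servers through it, and query it for client URLs, rather than threading ports and PIDs through separate helper functions.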