Description
I have a program which launches python subprocesses via python's subprocess module using an invocation syntax like:
subprocess.Popen(["./path/to/script/with/pyshebangscript", ...])
where pyshebangscript is an auto-generated shim created by poetry from the [tool.poetry.scripts] section of pyproject.toml and looks like this:
#!/app/pypoetry/virtualenvs/appname--PefSXhl-py3.10/bin/python
import sys
from zephyrus.cli import app
if __name__ == '__main__':
    sys.exit(app())
AFAICT this mechanism for invoking the subprocess defeats the default subProcess: true tracking behavior in debugpy.
I found a workaround that appears to work, wherein I explicitly reconnect the subprocess back to the original debugging session (here I'm assuming that os.environ is inherited by the child process, so the env variables updated below propagate the connection information to the subprocess). I put this code in my project's root-most __init__.py file:
import os

DEBUGPY_ENDPOINT = os.environ.get("DEBUGPY_ENDPOINT", None)
USE_DEBUGPY = os.environ.get("USE_DEBUGPY", None)

if USE_DEBUGPY is not None and DEBUGPY_ENDPOINT is not None:
    import debugpy

    debugger_host, debugger_port = os.environ["DEBUGPY_ENDPOINT"].split(":")
    print(f"Connecting to debugpy host process on {debugger_host}:{debugger_port}")
    debugpy.connect(
        (debugger_host, int(debugger_port)), access_token=os.environ["DEBUGPY_TOKEN"]
    )
    debugpy.wait_for_client()
    print("Connected to debugger process")

if USE_DEBUGPY is not None and DEBUGPY_ENDPOINT is None:
    import debugpy

    def get_debugger_endpoint():
        import os
        import psutil

        parent_pid = os.getppid()  # Get parent process ID
        parent = psutil.Process(parent_pid)
        parent_cli = parent.cmdline()
        debugpy_endpoint = parent_cli[3]
        debugpy_token = parent_cli[7]
        return debugpy_endpoint, debugpy_token

    debugger_endpoint = get_debugger_endpoint()
    os.environ["DEBUGPY_ENDPOINT"], os.environ["DEBUGPY_TOKEN"] = debugger_endpoint
    print(f"Debugger endpoint found at {debugger_endpoint}")
The above snippet includes some hacky logic to detect the debugpy endpoint and access_token from the CLI arguments of the original process's parent process ...
So with that context, I suppose I have a two-part question here.
First -- is there a good way to change the subprocess.Popen invocation so that it doesn't defeat the built-in subprocess tracking logic in vscode/debugpy?
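One idea I haven't verified is invoking the shim through an explicit Python interpreter, so the spawned command line starts with a python executable rather than the script path -- I'm guessing that's what the subprocess tracking hooks key off of. The interpreter path below is just the one from the shim's shebang, used for illustration:

import subprocess

# Unverified sketch: launch the shim via an explicit interpreter so the child
# command line begins with a python executable instead of the script path.
subprocess.Popen([
    "/app/pypoetry/virtualenvs/appname--PefSXhl-py3.10/bin/python",
    "./path/to/script/with/pyshebangscript",
])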
Second -- if I do need to do something like the above to get this working properly, would it be possible to add something to the public API of debugpy to make it easy to directly observe the endpoint and access token associated with a connected process? I need to retrieve these values so they can be passed along from the originally debugged process to the subprocess(es) it launches, where those subprocesses will in turn debugpy.connect() back to the active debugging session.
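To make the ask concrete, I'm imagining something along these lines -- get_connection_info is a made-up name and does not exist in debugpy today; the parent would read the values and drop them into the environment so any subprocess it spawns inherits them and my __init__.py snippet above can connect back:

import os
import debugpy

# Hypothetical API -- this function does not exist in debugpy today.
host, port, token = debugpy.get_connection_info()

# Export the connection details so child processes inherit them and can
# reconnect to the same debugging session via debugpy.connect().
os.environ["DEBUGPY_ENDPOINT"] = f"{host}:{port}"
os.environ["DEBUGPY_TOKEN"] = token
os.environ["USE_DEBUGPY"] = "1"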
I think a mechanism to observe these values would be useful even if there's a better way to handle this exact problem, since I believe the logic above would also work for subprocesses launched on remote hosts. I do this on my system, as these subprocesses may also execute inside Cloud Run jobs ... To get the experience I want (pausing on breakpoints in code executing inside a Cloud Run job), I think all I need to figure out is how to get vscode's debugpy debugger type to accept connections from external networks (rather than binding the debug server port to the default 127.0.0.1 only).
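For the remote case, the script-side equivalent of what I'm after would be something like the sketch below (port 5678 is just the conventional default; I assume a VS Code-initiated session has a corresponding host setting in the launch configuration):

import debugpy

# Sketch: bind the debug server to all interfaces instead of 127.0.0.1 so a
# remote client (e.g. code running in a Cloud Run job) can reach it.
debugpy.listen(("0.0.0.0", 5678))
debugpy.wait_for_client()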