REPL-driven, multi-process-native debugger features #113
goodboy added the discussion, enhancement (New feature or request), help wanted (Extra attention is needed) and question (Further information is requested) labels on Feb 11, 2020
This was referenced Jul 22, 2020
Been doing lots of digging as part of #129 and dumping some links:

More to come..
goodboy added a commit that referenced this issue on Jul 30, 2020

This is the first step in addressing #113 and the initial support for #130. Basically this allows (sub)processes to engage the `pdbpp` debug machinery, which reads/writes the root actor's tty, but only in a FIFO-semaphored way such that no two processes use it simultaneously. That means you can have multiple actors enter a trace or crash and run the debugger in a sensible way without clobbering each other's access to stdio. It required adding some "teardown hooks" to a custom `pdbpp.Pdb` type such that we release a child's lock on the parent on debugger exit (in this case when either of the "continue" or "quit" commands is issued to the debugger console). There's some code left commented out in anticipation of full support for issue #130, where we'll need to actually capture and feed stdin to the target (remote) actor, which won't necessarily be running on the same host.
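As a rough illustration (not tractor's actual implementation), the mutual exclusion described above — many actors may crash concurrently, but only one may drive the shared tty at a time — boils down to a single shared lock around each debugger session. Here threads stand in for subprocess actors and `debug_session` stands in for a `pdbpp` session; both names are illustrative:

```python
# Illustrative sketch: serializing debugger access to one tty with a
# shared lock. Threads stand in for subprocess actors; the sleep stands
# in for a user poking around in the REPL.
import threading
import time

tty_lock = threading.Lock()  # stands in for the root actor's tty semaphore
log: list[str] = []

def debug_session(actor: str) -> None:
    # A crashing actor requests the tty; only one holder at a time, so
    # concurrent sessions never clobber each other's stdio.
    with tty_lock:
        log.append(f"{actor}:enter")
        time.sleep(0.01)  # pretend the user is inspecting state
        # "teardown hook": release on 'continue'/'quit', i.e. leaving
        # the `with` block here
        log.append(f"{actor}:exit")

threads = [
    threading.Thread(target=debug_session, args=(f"actor{i}",))
    for i in range(3)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every enter is immediately followed by the matching exit: no interleaving.
pairs = list(zip(log[0::2], log[1::2]))
assert all(a.split(":")[0] == b.split(":")[0] for a, b in pairs)
```

Note a plain `threading.Lock` does not guarantee FIFO wakeup order; the real mechanism additionally has to grant access in request order.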
goodboy added a commit that referenced this issue on Jul 30, 2020
goodboy added a commit that referenced this issue on Aug 4, 2020
goodboy added a commit that referenced this issue on Aug 4, 2020
goodboy added a commit that referenced this issue on Aug 9, 2020
goodboy added a commit that referenced this issue on Aug 13, 2020
goodboy added a commit that referenced this issue on Sep 24, 2020
goodboy changed the title from "Proper multiprocessing-native debugger wen?" to "Multi-process-native debugger features" on Feb 24, 2021
goodboy changed the title from "Multi-process-native debugger features" to "Even more multi-process-native debugger features" on Dec 21, 2021
goodboy changed the title from "Even more multi-process-native debugger features" to "REPL-drive, multi-process-native debugger features" on Dec 21, 2021
goodboy changed the title from "REPL-drive, multi-process-native debugger features" to "REPL-driven, multi-process-native debugger features" on Dec 21, 2021
In `tractor` every actor is both a (potential) client and server. Concocting a "native" feeling "remote debugger" shouldn't be that bad (they thought naively..), right?

I did a little digging and asked all the cool peeps what they use. Well here's a list of interesting stuff:

popular Python remote debuggers

- `pytest`
- `devspdb` - looks a little unmaintained tho

What I want:

- `breakpoint()` in actor code and `tractor` does the right thing and yields std stream control in FIFO order to whatever actor got their message to the parent first
- `pdbpp`, it's just too good not to use
- `gdb` does with threads

Profiling and other possible tools and approaches

A WIP list of tooling/instrumentation that might add to and/or inspire ideas for the future!

REPL driven, would love to have, features

- `ptk` has utils for not clobbering stdout which would be super handy for having multiple actors logging while you're in the middle of debugging a crash.
- `ipython`
- `rsyscall`'s `wish` API

As always, lurkers feel free to pipe in with ideas 🏄‍♂️
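The FIFO hand-off wished for above — the root yielding std stream control to whichever actor's request arrived first — can be sketched with a request queue in the root. This is a toy model, not tractor's API; the granter loop, request queue, and "done" events are all illustrative:

```python
# Hypothetical sketch of FIFO tty hand-off: the "root" grants stdio
# control strictly in the order child requests arrive.
import queue
import threading

requests: "queue.Queue[tuple[str, threading.Event]]" = queue.Queue()
served: list[str] = []

def root_tty_granter(n: int) -> None:
    # Serve exactly n debugger requests, strictly in arrival (FIFO) order.
    for _ in range(n):
        actor, done = requests.get()
        served.append(actor)   # this actor now "owns" stdio
        done.wait()            # until it issues 'continue' or 'quit'

def child(actor: str) -> None:
    done = threading.Event()
    requests.put((actor, done))  # "I hit breakpoint(), give me the tty"
    done.set()                   # immediately "quit" for the demo

granter = threading.Thread(target=root_tty_granter, args=(3,))
granter.start()
for name in ("a", "b", "c"):
    child(name)
granter.join()

print(served)  # → ['a', 'b', 'c'] — served in request order
```

Because `queue.Queue` is FIFO, actors are granted the tty in exactly the order their requests reach the root, which is the ordering property a plain lock alone doesn't give you.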