dap: rework auto-resume logic when connection is closed in multi-client mode #2958
Two things:
I have been wondering about that myself. If a DAP client reconnects, all breakpoints will be cleared and reset anyway, but an RPC client would inherit them, wouldn't it? And if the user has logpoints set, they would log to stderr with no client attached and just the server running. So there are some small reasons, but not strong ones. We could go either way and should pick whatever is easiest, I think. There is also a difference between treating them as regular breakpoints with no client - i.e. stopping auto-resume until a client reconnects - and actually clearing them and letting the process just run uninterrupted.
Shouldn't we clear all breakpoints when a client disconnects (at least when the user asks to resume the target before disconnecting)? I haven't yet found a convincing use case for leaving breakpoints (or logpoints) in place after the client disconnects. If one wants to leave an app running with breakpoints/logpoints overnight in the hope of catching a rare event, couldn't that person leave the debugging session active overnight too? Interestingly, in GDB remote, I think the following behavior effectively causes all breakpoints to be cleared before a user who connected with the terminal-based CLI detaches/disconnects from the remote gdb server.
Separately, I found [*]: In my case, my expectation on
In golang/vscode-go#2368 a user wants to clear all the breakpoints and resume when disconnecting. I agree with the user; this is a regression from the legacy adapter. Reading the original issue again, I am afraid the scope of this issue is broader than changing the disconnect behavior around breakpoint/logpoint handling. Do you want us to open a narrower issue (change the disconnect behavior to clear all breakpoints) and fix that?
Yes, the scope of this issue was intended to be broader - it was meant to sort out all the race conditions caused by the complexity of disconnecting and reconnecting while managing different types of breakpoints. If we simply clear all breakpoints, this becomes very simple. So the prerequisite question was: are the surviving breakpoints valuable enough to justify the complexity?

There are definitely users who don't want their server to get unexpectedly stuck on some breakpoint after they are done debugging. The UI has a toggle to deactivate/remove all breakpoints in one click, but a user would have to remember to use it. On the other hand, there could be users who lose their connection due to flakiness and might appreciate a seamless reconnect. Or maybe they are chasing a rare condition and want things to keep running until it is hit, then connect to investigate further.

We had a VC discussion about this a couple of months ago. From the notes:
And that's why this issue is still open :) In our attempt to maybe agree to simplify this, we ended up agreeing on more work, not less (both dealing with races while having breakpoints AND adding flags to clear them). Happy to revisit :) @briandealwis What would CloudCode users prefer? |
From CloudCode's use case here (#2772 (comment)), I think the good default for CloudCode is to clear all the breakpoints + continue. "Continue" is already the default, and "clearing all breakpoints" is the missing piece.
What I remember from the discussion is that nobody came up with convincing, strong reasons to keep the breakpoints (and I still see no answer to my question in #2958 (comment)). The decision was based on the assumption that this could be implemented without too much burden/time, or that the issue was under active work. Since that assumption no longer holds, why not rescope and focus on #3083? According to what @OrBin reported in golang/vscode-go#2368, GoLand clears breakpoints before tearing down the session. The GDB protocol clears breakpoints by design. Why not just follow suit?
I don't think this is worth all the complexities.
Users can keep the debug session running.
I hesitate to make a statement on behalf of all Cloud Code users, but like @hyangah, I can't think of a compelling reason to retain breakpoints on detach. I think it would be unlikely for a user to want to set breakpoints, detach, and then re-attach later in the hope of finding some session paused at a breakpoint, as that would lead to hangs in every process that hits the breakpoint.
@briandealwis Thanks for the input. We never came up with any particularly strong reasons for keeping the breakpoints. It's just historical behavior that got copied over when we ported the legacy adapter, which in turn was probably modeled after the CLI. Such historical behaviors tend to stick because there is a chance that somebody depends on them, so nobody revisits the status quo until it causes friction. Interestingly, I can't find any user complaints about this behavior in the old issues. But we have one now (golang/vscode-go#2368), and it simplifies this issue considerably. So the status quo has been challenged. All good reasons to do it the other way. If some user pops in and complains that we took away their favorite feature, we can ask them to use the CLI.
Hello. This bug report is more than two years old and has been stuck in discussion without progress since August 2022. What is stopping you from deciding how to implement this obvious feature and making remote debugging logically correct? I came to Go development from Java, and in Java, disconnecting the debugger from a remote process disables all breakpoints there and lets the remote process continue running.
If a client connection is closed while a running command is in progress, that command is not interrupted, so the auto-resume goroutine continues as if nothing has happened. The auto-resume code therefore needs to handle any stops it encounters after that correctly.
The auto-resumer might get interrupted by events triggered from another client (the original list of events was lost here). In particular, the halt flag is part of the `Session`, so when a halt is triggered from a new `Session`, the paused auto-resumer in the old `Session` has no way of detecting it; this state needs to be visible across `Session`s.
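A minimal sketch of why a per-`Session` halt flag breaks the auto-resumer, and how moving the flag to state shared by all sessions fixes it. The type and method names (`debugger`, `session`, `requestHalt`, `handleStop`) are hypothetical, not Delve's actual implementation:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// debugger holds state shared by all client sessions. If haltRequested
// lived on session instead, a halt from a newly connected session would
// be invisible to the auto-resume loop of an older session.
type debugger struct {
	haltRequested atomic.Bool
}

// session is one client connection; every session points at the one debugger.
type session struct {
	dbg *debugger
}

// requestHalt may be called from any session, e.g. a newly connected client.
func (s *session) requestHalt() { s.dbg.haltRequested.Store(true) }

// handleStop is called each time the target stops. It reports whether the
// auto-resumer should resume the target (true) or stay stopped (false).
func (s *session) handleStop(reason string) bool {
	if s.dbg.haltRequested.Load() {
		fmt.Println("halt requested; staying stopped at:", reason)
		return false
	}
	fmt.Println("auto-resuming past:", reason)
	return true
}

func main() {
	dbg := &debugger{}
	oldSess := &session{dbg: dbg} // session whose command started auto-resume
	newSess := &session{dbg: dbg} // freshly connected client

	oldSess.handleStop("breakpoint A") // no halt yet: auto-resume
	newSess.requestHalt()              // halt triggered from the new session
	oldSess.handleStop("breakpoint B") // shared flag: the old session sees it
}
```

The design choice illustrated is simply lifting the flag from per-session to shared debugger state; `atomic.Bool` keeps the cross-goroutine read/write race-free.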