[Request] Don't Block at End of Stream #572
Recently I've been testing the pagers ov and moar. When piping data to these pagers and then scrolling to the end of the stream, the program doesn't lock up and block: an indicator shows that the end of the stream has been reached, and the normal functionality of the program (scrolling, searching, changing the view, etc.) continues.

I'd like to make a feature request that less would also not freeze waiting for more data, waiting endlessly for data that may or may not ever come.

Comments
What about […]?
If I understand correctly, that flag has no effect on an unfinished/blocking stream. For example, run a pager like […].
At least there's no freeze on macOS.
You can use ESC-G to jump to the end of the buffered data and retain control of the UI. However, there are cases where, if you attempt to read past the end of the buffered data (on an open pipe), less will block waiting for data from the pipe. The pipe must remain open for this situation to arise; for example: […]

If you let the screen fill up and then page forward, eventually you will reach a point where less is waiting for data from the pipe. You can break out of the wait and regain control by entering ^X.
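For illustration, here is a minimal C sketch of the mechanism behind that wait, assuming POSIX semantics; this is not less's actual code. A read() on an empty pipe blocks for as long as any writer holds the write end open:

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    char buf[64];

    if (pipe(fds) != 0) {
        perror("pipe");
        return 1;
    }
    /* The write end (fds[1]) is still open in this process, so the
     * kernel cannot report EOF: it has no way to know whether more
     * data might arrive.  This read() therefore blocks forever,
     * which is the state less is in when it reads past the end of
     * buffered data on an open pipe. */
    ssize_t n = read(fds[0], buf, sizeof buf);
    printf("read returned %zd\n", n); /* never reached */
    return 0;
}
```

Once the last writer closes the pipe, read() returns 0 and the pager can mark end-of-file; until then, only something external (a signal, or ^X in less's case) gets control back.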
If I understand ^X correctly, it causes […]. On the other hand, if such a feature were to be implemented in less, […]
There were some changes in this area in the post659 branch. I think it behaves more like you want. If you are on a 40-line terminal and press […]. If you want to try this branch, let me know if you find any behavior that doesn't act as you expect.
That's great, so it seems most of the functionality needed to behave without freezing on streams is already there. Perhaps it could be added. If implemented, a […]
I think the changes mentioned here are in commit 9933cf1, which was the result of the discussion in issue #553.
For me this would be surprising and not coherent with the behavior described in points 1 and 2, which seem to suggest that no action should implicitly try to read more data.
I'm not sure most people would agree that by pressing f they requested 39 lines. I think the intention is to request at most the next 39 lines, as one (most of the time) does not know how much data is left.

I think the most natural behavior, and what most people want, is how it works in lnav, where new data is read continuously, without blocking the UI, and is ready to be accessed in whatever way: going to the end, searching, etc. In addition, the user can pause the reading of new data at any time by pressing […].

Generally, the fact that less's UI freezes when reading more data is the root cause of many complaints from users, who find (rightly so) such behavior unexpected and undesirable. The big problem seems to be that instead of fixing this root cause, less went the other way and started to introduce new commands (like […]).
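As a sketch of that continuous-reading idea, assuming POSIX (this shows the general approach, not lnav's actual code; `append_to_buffer` is a hypothetical helper): mark the input fd non-blocking and drain whatever is available on each pass of the UI loop, so reading never stalls the interface.

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical helper: appends newly arrived bytes to the pager's buffer. */
void append_to_buffer(const char *data, ssize_t len);

void drain_input(int fd)
{
    char buf[8192];
    ssize_t n;

    /* Switch the fd to non-blocking mode. */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);

    for (;;) {
        n = read(fd, buf, sizeof buf);
        if (n > 0)
            append_to_buffer(buf, n);       /* got data; keep draining */
        else if (n == 0)
            break;                          /* all writers closed: EOF */
        else if (errno == EAGAIN || errno == EWOULDBLOCK)
            break;                          /* nothing available right now */
        else
            break;                          /* real error; handle as needed */
    }
    /* Control returns to the UI loop immediately in every case. */
}
```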
Fair point. If it blocks on search, then that would mean the searched text was not found yet, but the search would continue as new input arrives. Though, […]
Yes
Unfortunately, I have to agree.
If I'm understanding this issue, it seems that one solution would be to allow any command (not just ctrl-X) to interrupt a blocked read. A blocking read could still occur internally, but since any command would interrupt it, the behavior from the user's point of view is that the read is not actually blocking. Does this make sense? This would be much simpler and less risky to implement than avoiding blocking reads internally, which probably wouldn't be possible on systems that don't support poll().
Commenting as a bystander, that sounds right to me... keyboard events are just another source of interrupts, along with more data available on the read fd. This is my understanding of how, for example, the text web browser […] works.
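A minimal sketch of that single-event-loop shape, assuming POSIX select(); this illustrates the architecture under discussion, not less's current structure:

```c
#include <sys/select.h>

/* Waits for the next event: returns 0 if the keyboard (tty) is
 * readable, 1 if the data fd is readable. */
int wait_for_event(int tty_fd, int data_fd)
{
    fd_set readfds;
    int maxfd = (tty_fd > data_fd ? tty_fd : data_fd) + 1;

    for (;;) {
        FD_ZERO(&readfds);
        FD_SET(tty_fd, &readfds);
        FD_SET(data_fd, &readfds);
        if (select(maxfd, &readfds, NULL, NULL, NULL) > 0) {
            if (FD_ISSET(tty_fd, &readfds))
                return 0;   /* a keypress: handle the command */
            if (FD_ISSET(data_fd, &readfds))
                return 1;   /* new data: read it without blocking */
        }
        /* Interrupted by a signal (EINTR) or similar: retry. */
    }
}
```

With this shape, a blocked read never happens from the user's point of view: the loop only calls read() on an fd that select() has reported ready.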
This is the kind of major rearchitecting that I'd like to avoid. The code isn't structured with a single event loop, and I think it would be a large effort to change it to such an architecture. (Also, it couldn't work if the system doesn't support poll().)
Sorry for what is now an off-topic question, but I'm curious about this limitation. My understanding is that select() can do everything poll() can; it's just slower due to the fixed FD_SET size. And of course everyone likes the OS-specific calls like epoll() these days, for even more scalability (and edge/level triggering options). In fact, Linux kernels before 2.1.23 implemented poll() in libc with sys_select() internally...
Sorry for the imprecise language. I was using "poll" to mean any API that waits for input on multiple files, including […].
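For comparison, the same wait written against poll(); a sketch assuming POSIX, illustrating that these are interchangeable "wait on multiple files" APIs:

```c
#include <poll.h>

int wait_for_event_poll(int tty_fd, int data_fd)
{
    struct pollfd fds[2] = {
        { .fd = tty_fd,  .events = POLLIN },    /* keyboard */
        { .fd = data_fd, .events = POLLIN },    /* pipe/fifo data */
    };

    for (;;) {
        if (poll(fds, 2, -1) > 0) {             /* -1: no timeout */
            if (fds[0].revents & POLLIN)
                return 0;   /* keypress */
            if (fds[1].revents & POLLIN)
                return 1;   /* data available */
        }
    }
}
```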
To be sure I have it right: […]. It seems to work, if that's the right test. Is F always required? What if more data arrives but we aren't in the "waiting" state; does the user have to know to keep trying to read more by pressing F to wait again? Actually, just scrolling down from the last line seems to work after adding more lines to the file, but not in the original version; that won't work without F or G, which are not needed in the new version, where down-line is enough. So I guess G or F were required before to get more lines, but not anymore. Does that sound right?
I see, so that's a regular file. When I use a fifo instead, F is still needed to trigger more reads. So the new change just means any key will come out of the "waiting" state, rather than it having to be ^X. The user still needs to keep pressing F to read more; G or scrolling will not work for this, only F. And a search still needs to be triggered again, etc.
Can you describe your test case that requires F with a fifo? My testing shows that plain forward movement is sufficient to read new data from a fifo. If I do […]
It doesn't work for me: […]

At this point, there is no change on terminal 1 with scroll down ([…]).
I see. The difference seems to be that your first cat command is opening and then closing the fifo, and then the tac command reopens it. In my test I used just one command to write into the fifo, so that it remained open during the test, like […]

I think it is more representative of real-world usage to have the writing program keep the fifo open, but I will investigate why closing and reopening the fifo causes this behavior.
yeah, agreed, that should be the common case by far
Using any key to interrupt seems about correct, but I think this does not have the desired effect. The goal is to keep […]
What user-visible behavior is not achieved by the latest version?
@smemsh, de3e84f allows scrolling forward to read new data from a fifo that has been closed and reopened. BTW, the issue of SIGINT being ineffective when opening a fifo is because […].
Confirmed it works in that scenario now. Between all the changes, it seems easier now to deal with growing files and/or more pipe data.

If caught, after the handler runs, doesn't open() just return prematurely with errno EINTR? I don't see the SA_RESTART flag set. I see there's already a longjmp to do its own restoration of the read?
No, as far as I can see, the open() call does not return EINTR after the signal handler returns. I guess this is because I am installing the handler with signal(), and the man page says […]
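That matches documented behavior: on Linux/glibc, signal() installs handlers with BSD semantics, i.e. as if SA_RESTART were set, so a slow call like open() on a fifo is restarted after the handler returns instead of failing with EINTR. A small sketch of the difference, using sigaction() directly ("testpipe" is a sample fifo assumed to exist, e.g. created with mkfifo):

```c
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

static void on_int(int sig) { (void)sig; }

int main(void)
{
    struct sigaction sa;

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_int;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;            /* no SA_RESTART: open() can fail with EINTR */
    /* sa.sa_flags = SA_RESTART;   with this, open() resumes blocking instead */
    sigaction(SIGINT, &sa, NULL);

    /* Opening a fifo with no writer blocks; press ^C to deliver SIGINT. */
    int fd = open("testpipe", O_RDONLY);
    if (fd < 0 && errno == EINTR)
        printf("open() was interrupted: EINTR\n");
    return 0;
}
```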
In any case, I have fixed this using setjmp in ca59eda.
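For reference, the general shape of that setjmp technique, as a sketch of the common pattern (not necessarily the exact code in ca59eda). The handler jumps back past the blocked call, so it works even when the kernel would otherwise restart open():

```c
#include <fcntl.h>
#include <setjmp.h>
#include <signal.h>

static sigjmp_buf open_jump;

static void on_int(int sig)
{
    (void)sig;
    siglongjmp(open_jump, 1);   /* abandon the blocked open() */
}

/* Returns the fd, or -1 if the user interrupted the open. */
int open_with_interrupt(const char *path)
{
    if (sigsetjmp(open_jump, 1))
        return -1;              /* SIGINT fired while open() was blocked */

    /* A real implementation would install the handler only around the
     * blocking call, so a stray SIGINT can't jump to a stale buffer. */
    signal(SIGINT, on_int);
    return open(path, O_RDONLY);   /* may block on a writer-less fifo */
}
```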
INT works now, but TSTP ends the program. It is not really an error though... just job control (it works if less is on an ordinary file): foreground it again, and the [blocked] read can resume. Maybe strerror does not need to be printed either. Also, exit might not be right even for INT because, for example, it doesn't exit less when viewing an ordinary file. But since in this case nothing has been read at all yet, it does seem right to just exit; it looks like it can only happen before the first read. For TSTP, though, maybe it shouldn't exit.
It's unclear to me how SIGINT could be handled any better. SIGINT on a regular file just goes back to reading the file, but a writer-less fifo is useless; it can't be read and can't even be opened. Less normally just blocks until the open completes, but if the user sends SIGINT, they are indicating that they don't want to wait any more, so I don't see any reasonable behavior other than abandoning that fifo. If more than one file is given on the command line, it does advance to the next file rather than exiting.

I noted in the commit message for ca59eda that SIGTSTP doesn't work as expected. I ran into a problem where sending an uncaught TSTP to the current process didn't stop the process (although the kill() returned zero). I worked on it for a bit but couldn't figure it out, so I have postponed debugging it further for now. I agree it's not the correct behavior.
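One possible angle on the kill() that returned zero without stopping anything: if SIGTSTP is caught (or blocked, as it is while its own handler runs), the default stop action never happens. The conventional pattern for a full-screen program, sketched here under that assumption, is for the handler to restore the default disposition, unblock the signal, and re-raise it:

```c
#include <signal.h>

static void on_tstp(int sig)
{
    sigset_t mask;

    /* ...restore the terminal to normal modes here... */

    signal(sig, SIG_DFL);                   /* default action: stop the process */
    sigemptyset(&mask);
    sigaddset(&mask, sig);
    sigprocmask(SIG_UNBLOCK, &mask, NULL);  /* TSTP is blocked inside its own handler */
    raise(sig);                             /* now the process actually stops */

    /* Execution resumes here after SIGCONT (e.g. `fg`). */
    signal(sig, on_tstp);                   /* reinstall the handler */
    /* ...put the terminal back into pager mode here... */
}
```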
I'm not a developer, but perhaps it's worth looking at the code of […].