
[Request] Don't Block at End of Stream #572

Open
Sepero opened this issue Sep 17, 2024 · 29 comments

Comments

@Sepero commented Sep 17, 2024

Recently I've been testing the pagers ov and moar.

When piping data to these pagers and then scrolling to the end of the stream, the program doesn't lock up or block. An indicator shows that the end of the stream has been reached, and the program's normal functionality (scrolling, searching, changing views, etc.) continues.

I'd like to request that less likewise not freeze waiting for more data, since it may otherwise wait endlessly for data that may never come.

@polluks (Contributor) commented Sep 18, 2024

What about --quit-at-eof?

@Sepero (Author) commented Sep 22, 2024

What about --quit-at-eof?

If I understand correctly, that flag has no effect on an unfinished/blocking stream.

For example, run dmesg -w | less -S, then press the END key. It won't go to the end of the output. Instead, the screen will simply freeze, waiting for an end of stream that may never come.

A pager like ov will jump to the latest available line from the stream, then allow user input again. It doesn't freeze or block user input.

@polluks (Contributor) commented Sep 22, 2024

At least there's no freeze in macOS.

@gwsw (Owner) commented Sep 22, 2024

You can use ESC-G to jump to the end of the buffered data and retain control of the UI. However, there are cases where, if you attempt to read past the end of buffered data (on an open pipe), less will block waiting for data from the pipe. The pipe must remain open for this situation to arise; for example:

 ( i=1; while :; do echo $i; sleep 1; let i=$i+1; done ) | less 

If you let the screen fill up and then page forward, eventually you will reach a point where less is waiting for data from the pipe. You can break out of the wait and regain control by entering ^X.

@Sepero (Author) commented Sep 23, 2024

If I understand ^X correctly, it causes less to unfreeze and allow input, but at the cost of stopping reading the stream.

On the other hand, ov never blocks or freezes, and ^X is not necessary. Pressing END will simply display the last of currently streamed content, and begin receiving user input again. As more input is piped to the pager, pressing END again will again display the last of currently streamed content.

If such a feature were to be implemented in less, perhaps a "no-blocking" argument could be added to provide this behavior.

@gwsw (Owner) commented Sep 23, 2024

There were some changes in this area in the post659 branch. I think it behaves more like you want.

If you are on a 40-line terminal and press f, less tries to read the next 39 lines, even if that requires blocking, since you requested 39 lines. ^X will abort that process and return control. If you then press f again, less tries to read the next 39 lines again. Any lines that have already arrived from the pipe will display immediately; if fewer than 39 lines are available, it will block waiting for data. ESC-G acts similarly to how you describe ov's END command; it displays the end of buffered content.

If you want to try this branch, let me know if you find any behavior that doesn't act like you expect.

@Sepero (Author) commented Sep 24, 2024

That's great, so it seems most of the functionality needed to behave without freezing on streams is already there. Perhaps it could be added.

If implemented, a --no-blocking option would ideally behave like this, using standard keys:

  1. Pressing END goes to the last streamed line (like ESC-G).
  2. Pressing Down would never freeze; it would stop at the last streamed line.
  3. Doing a /search beyond the end of currently streamed input may freeze less, and be unfrozen with ^X, but the stream never stops being read (no need to press f).

@piotr-dobrogost commented Sep 25, 2024

There were some changes in this area in the post659 branch. I think it behaves more like you want.

I think changes mentioned here are in 9933cf1 commit which was the result of discussion in issue #553.

3. Doing a /search beyond the end of currently streamed input may freeze less, and be unfrozen with ^X, but the stream never stops being read (no need to press f).

For me this would be surprising and not consistent with the behavior described in points 1 and 2, which seem to suggest that no action should implicitly try to read more data.

If you are on a 40 line terminal and press f, less tries to read the next 39 lines, even if that requires blocking, since you requested 39 lines.

I'm not sure most people would agree that by pressing f they requested 39 lines. I think the intention is to request at most the next 39 lines as one (most of the time) does not know how much data is left.

I think the most natural behavior, and what most people want, is how it works in lnav, where new data is read continuously, without blocking the UI, and is ready to be accessed in whatever way: going to the end, searching, etc. In addition, the user can pause reading new data at any time by pressing =. I encourage you to try ( i=1; while :; do echo $i; sleep 0.2; let i=$i+1; done ) | lnav and see.
Would it be possible to have such behavior in less?

Generally, the fact that less's UI freezes when reading more data is the root cause of many complaints from users, who find (rightly so) such behavior unexpected and undesirable. The big problem seems to be that instead of fixing this root cause, less went the other way and started to introduce new commands (like CTRL-X or ESC-G) which still do not solve the problem and which make using less more complicated than it should be.

@Sepero (Author) commented Sep 28, 2024

  3. Doing a /search beyond the end of currently streamed input may freeze less, and be unfrozen with ^X, but the stream never stops being read (no need to press f).

For me this would be surprising and not consistent with the behavior described in points 1 and 2, which seem to suggest that no action should implicitly try to read more data.

Fair point. If it blocks on search, then that would mean the searched text was not found YET, but it will continue to search as new input arrives. Though, ov doesn't behave this way. Regarding this behavior, I could go either way: search and stop, or search and block (with a message "^X to stop search"). I would prefer the second behavior, but both are acceptable to me.

If you are on a 40 line terminal and press f, less tries to read the next 39 lines, even if that requires blocking, since you requested 39 lines.

I'm not sure most people would agree that by pressing f they requested 39 lines. I think the intention is to request at most the next 39 lines as one (most of the time) does not know how much data is left.

Yes

Generally, the fact that less's UI freezes when reading more data is the root cause of many complaints from users, who find (rightly so) such behavior unexpected and undesirable. The big problem seems to be that instead of fixing this root cause, less went the other way and started to introduce new commands (like CTRL-X or ESC-G) which still do not solve the problem and which make using less more complicated than it should be.

Unfortunately, I have to agree.
The ideas for improving less's behavior were good, but the implementation perhaps could have been better.

@gwsw (Owner) commented Nov 4, 2024

If I'm understanding this issue, it seems that one solution would be to allow any command (not just ctrl-X) to interrupt a blocked read. A blocking read could still occur internally, but since any command would interrupt it, the behavior from the user's point of view is that the read is not actually blocking. Does this make sense? This would be much simpler and less risky to implement than avoiding blocking reads internally, which probably wouldn't be possible on systems that don't support poll().

@smemsh commented Nov 5, 2024

one solution would be to allow any command (not just ctrl-X) to interrupt a blocked read

Commenting as a bystander, that sounds right to me... keyboard events are just another source of interrupts, along with more data available on the read fd. This is my understanding of how, for example, the text web browser elinks does it, with a main select() loop that waits for any I/O, including keypresses, which will unblock the program and either cause work to be registered for later ("bottom halves") or do it right away if high-priority (i.e., for interactivity). One can also build elinks with libev or libevent (which is another possibility for less), but ordinary select() seems to work OK (although supporting all three makes quite an #ifdef mess!). The web page can continue coming in and rendering while you are pressing UI keys. This all works on a single thread.

gwsw added a commit that referenced this issue Nov 6, 2024
@gwsw (Owner) commented Nov 6, 2024

a main select() loop that waits for any I/O, including keypresses, which will unblock the program and either cause work to be registered for later ("bottom halves") or do it right away if high-priority (ie, for interactivity)

This is the kind of major rearchitecting that I'd like to avoid. The code isn't structured with a single event loop, and I think it would be a large effort to change it to such an architecture. (Also it couldn't work if the system doesn't support poll(), so that would complicate the implementation even further.) On the other hand, the two-line change in 23ff6a4 retains the current architecture but allows any command to interrupt a blocked read. I'd appreciate feedback on whether this achieves the desired effect.

@smemsh commented Nov 7, 2024

Also it couldn't work if the system doesn't support poll()

Sorry for what is now an off-topic question, but I'm curious about this limitation. My understanding is that select() can do everything poll() can; it's just slower due to the fixed FD_SET size, and of course everyone likes the OS-specific calls like epoll() these days, for even more scalability (and edge/level triggering options). In fact, Linux kernels before 2.1.23 implemented poll() in libc with sys_select() internally...

@gwsw (Owner) commented Nov 7, 2024

Sorry for the imprecise language. I was using "poll" to mean any API that waits for input on multiple files, including poll, select, epoll, etc. Currently less runs on such systems, though without ctrl-X support.

@smemsh commented Nov 7, 2024

To be sure I have it right: with less largefile, use the G key, then F; now both the old version and 23ff6a4 will wait for more input at the end ("Waiting for data..."), but with the latter we can type any character to interrupt that, whereas we need ^X with the former?

Seems to work, if that's the right test. Is F always required? What if more data arrives while we aren't in the "waiting" state? Does the user have to know to keep trying to read more by pressing F to wait again?

Actually, just scrolling down from the last line seems to work after adding more lines to the file, but not in the original version; that won't work without F or G, which are not needed in the new version, where just down-line is enough.

So I guess G or F were required before to get more lines, but not anymore. Does that sound right?

@gwsw (Owner) commented Nov 7, 2024

That's almost correct. The ability of any forward movement (rather than only F) to read new data appeared in 9933cf1 a couple of months ago, not in yesterday's 23ff6a4. Otherwise your description is correct.

@smemsh commented Nov 7, 2024

I see, so that was with a regular file. When I use a fifo instead, F is still needed to trigger more reads. So the new change just means any key will come out of the "waiting" state, rather than only ^X. The user still needs to keep pressing F to read more; G or scrolling will not work for this, only F. And a search still needs to be triggered again, etc.

@gwsw (Owner) commented Nov 7, 2024

Can you describe your test case that requires F with a fifo? My testing shows that plain forward movement is sufficient to read new data from a fifo. If I do mkfifo fifo; less <fifo and then append data to the fifo in another shell, I can just press j in less to read data as it appears at the end of the fifo.

@smemsh commented Nov 7, 2024

It doesn't work for me:

  1. term1: LESS= ./less -nf fifo (seems to block SIGINT and SIGTSTP btw?)
  2. term2: cat /etc/passwd > fifo
  3. term1: G
  4. term2: tac /etc/passwd > fifo

At this point there is no change on terminal 1 with scroll down (j, which I normally use), nor with G; it requires F to get the additional read. less was built like this:

 $ git log -1 --pretty=%h
23ff6a4

 $ (make distclean; \
    make -f Makefile.aut distfiles \
    && ./configure \
    && make -j4) \
   &>/dev/null && echo done || echo failed
done

 $ ./less --version | head -1
less 669x (POSIX regular expressions)

@gwsw (Owner) commented Nov 7, 2024

I see. The difference seems to be that your first cat command is opening and then closing the fifo, and then the tac command reopens it. In my test I used just one command to write into the fifo, so that it remained open during the test, like:

( cat /etc/passwd; read a; tac /etc/passwd ) > fifo

I think it is more representative of real world usage to have the writing program keep the fifo open, but I will investigate why closing and reopening the fifo causes this behavior.

@smemsh commented Nov 7, 2024

I think it is more representative of real world usage to have the writing program keep the fifo open

Yeah, agreed, that should be the common case by far.

@Sepero (Author) commented Nov 8, 2024

This is the kind of major rearchitecting that I'd like to avoid. The code isn't structured with a single event loop, and I think it would be a large effort to change it to such an architecture. (Also it couldn't work if the system doesn't support poll(), so that would complicate the implementation even further.) On the other hand, the two-line change in 23ff6a4 retains the current architecture but allows any command to interrupt a blocked read. I'd appreciate feedback on whether this achieves the desired effect.

Using any key to interrupt seems about right, but I think this does NOT have the desired effect. The goal is to keep less interactive, but also to never stop reading the input stream.

@gwsw (Owner) commented Nov 9, 2024

this does Not have the desired effect.

What user-visible behavior is not achieved by the latest version?

@gwsw (Owner) commented Nov 16, 2024

@smemsh, de3e84f allows scrolling forward to read new data from a fifo that has been closed and reopened.

BTW, the issue of SIGINT being ineffective when opening a fifo is because open() blocks when opening a fifo for read until another process opens it for write. Less catches SIGINT, so it doesn't interrupt the blocked open() call. To fix this, I guess I'd need to use setjmp/longjmp to interrupt the blocked open.

@smemsh commented Nov 18, 2024

de3e84f allows scrolling forward to read new data from a fifo that has been closed and reopened

Confirmed, it works in that scenario now. Between all the changes, it seems easier now to deal with growing files and/or more pipe data.

Less catches SIGINT, so it doesn't interrupt the blocked open() call

If caught, doesn't open() just return prematurely with errno EINTR after the handler runs? I don't see the SA_RESTART flag set. I see there's already a longjmp to do its own restoration of the read?

@gwsw (Owner) commented Nov 18, 2024

No, as far as I can see, the open() call does not return EINTR after the signal handler returns. I guess this is because I am installing the handler with signal(), and the man page says:

Any handler installed with signal(3) will have the SA_RESTART flag set, meaning that any restartable system call will not return on receipt of a signal.

In any case, I have fixed this using setjmp in ca59eda.

@smemsh commented Nov 18, 2024

INT works now, but TSTP ends the program. It's not really an error, though, just job control (it works if less is on an ordinary file). Foreground it again, and the blocked read can resume. Maybe strerror doesn't need to be printed either.

Also, exiting might not be right even for INT because, for example, it doesn't exit less when viewing an ordinary file. But since in this case nothing has been read yet at all, it does seem right to just exit. It looks like it can only happen before the first read. But for TSTP, maybe it shouldn't exit.

@gwsw (Owner) commented Nov 18, 2024

It's unclear to me how SIGINT could be handled any better. SIGINT on a regular file just goes back to reading the file, but a writer-less fifo is useless; it can't be read and can't even be opened. Less normally just blocks until the open completes, but if the user sends SIGINT, they are indicating that they don't want to wait any more, so I don't see any reasonable behavior other than to abandon that fifo. If more than one file is given on the command line, it does advance to the next file rather than exiting.

I noted in the commit message for ca59eda that SIGTSTP doesn't work as expected. I ran into a problem where sending an uncaught TSTP to the current process didn't stop the process (although the kill() returned zero). I worked on it for a bit but couldn't figure it out, so I postponed debugging it further for now. I agree it's not the correct behavior.

@Sepero (Author) commented Nov 19, 2024

I'm not a developer, but perhaps it may be worth looking at the code of ov or moar to see how they do it?
