When call(UART, read, 2000) times out, subsequent reads will always be {error, ealready} #1446

Open
robinchew opened this issue Jan 2, 2025 · 3 comments

Comments

@robinchew

After port:call(UART, read, 2000) finishes timing out, a subsequent valid write that expects a response will be followed by a read that always returns {error, ealready}.

Example:

Length = 21,

UART = open_port(
    {spawn, "uart"},
    [
        {peripheral, <<"UART2">>},
        {rx, 16},
        {tx, 17},
        %   {controlling_process, self()},
        %   {stop_bits, 2},
        %   {active, false},
        %   {flow_control, hardware},
        %   {timeout, 4000}
        {speed, 115200}
    ]),

Req3 = uart_sensor_request(3, 0, Length),
io:format("Req3 = ~p~n", [Req3]), 
uart:write(UART, Req3),                            % Writing a valid request
io:format("read3 ~p~n", [call(UART, read, 2000)]), % Expecting a response

Req4 = uart_sensor_request(4, 0, Length),
io:format("Req4 = ~p~n", [Req4]),
uart:write(UART, Req4),                            % Writing an INVALID request
io:format("read4 ~p~n", [call(UART, read, 2000)]), % Expecting NO response and expecting read to block until timeout

uart:write(UART, Req3),                                  % Writing a valid request again
io:format("read3 again ~p~n", [call(UART, read, 2000)]), % Expect a response again BUT errors

Output:

Req3 = [3,4,0,0,0,21,48,39]
read3 {ok,<<3,4,42,0,58,0,9,0,7,0,23,0,5,7,231,8,155,117,175,71,98,136,33,54,176,11,173,11,176,2,117,13,221,1,145,25,66,1,244,0,20,190,31,0,1,127,58>>}
Req4 = [4,4,0,0,0,21,49,144]
read4 {error,timeout}
read3 again {error,ealready} <- Not expecting this!
@UncleGrumpy
Collaborator

I found the root cause of this error. If no data is immediately available to return, a callback handler will send the data once it arrives to the pid that requested it with the last read command (stored as the current listener). That pid is removed from the listener slot after the callback sends the data. If another read command is sent before the callback gets a chance to dispatch received data, it gets the {error,ealready} reply, because there is already a listener pid that the callback will send the data to.
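
Schematically, the listener handling described above behaves roughly like the Erlang model below (the real driver is C; the names, message shapes, and map-based state here are illustrative assumptions, not the driver's actual code):

-module(uart_listener_model).
-export([handle_read/2, handle_data/2]).

%% One pending reader pid is stored; a second read while one is pending is
%% rejected with {error, ealready}; arriving data goes to the stored pid and
%% the listener slot is then cleared.
handle_read(From, #{listener := undefined} = State) ->
    {noreply, State#{listener := From}};
handle_read(From, #{listener := _Pending} = State) ->
    From ! {error, ealready},
    {noreply, State}.

handle_data(Data, #{listener := Pid} = State) when is_pid(Pid) ->
    Pid ! {ok, Data},
    {noreply, State#{listener := undefined}}.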

Your timeout is not being passed to the driver as a timer that invalidates the listener; that has not been implemented yet, so you are just aborting the receive. Any data that arrives afterwards is still sent by the callback to the pid that made the first read request, where it is effectively lost because that process has already given up waiting for it.

A temporary workaround for your application would be to drop into a receive without a timeout when you get an {error,timeout} reply, as the driver will eventually send the next data to that pid when it becomes available. This way you can serialize your uart:read/1 calls and not lose data or get {error,ealready} errors.
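
A minimal sketch of that workaround, assuming the late reply is delivered as a plain message to the calling process and has the same shape as a normal read result (both assumptions, since the timer handling may change):

%% If the timed call gives up, stay in a plain receive: the driver still
%% holds this pid as the listener and will eventually send the data here.
read_serialized(UART) ->
    case port:call(UART, read, 2000) of
        {error, timeout} ->
            receive
                LateReply -> LateReply
            end;
        Reply ->
            Reply
    end.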

@robinchew
Author

robinchew commented Jan 20, 2025

Thanks for looking at it @UncleGrumpy. I think your temporary workaround is hard to apply in my case, because I will be doing many writes with corresponding reads, and many of those writes could be invalid ones that nothing can be read from. That would leave many pids sticking around forever, which I imagine wastes memory, especially on an ESP32, and it would also be hard to know which read is meant for which write.

The workaround I've implemented is to close the UART port after a timeout. However, a close crashes the program. @pguyot on Discord has kindly made a fix at pguyot@af80836 (which I'd like to know whether it has been merged yet), with an associated image at https://github.com/pguyot/AtomVM/actions/runs/12587958508, which closes without a crash provided there is a wait before you open a new UART port again.
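
For reference, a rough sketch of that close-and-reopen workaround; uart:close/1 being safe to call (with the linked fix applied), the length of the wait, and reusing the original options are all assumptions based on the description above rather than a tested recipe:

%% After a timeout, tear the port down and reopen it with the same options.
reset_uart(UART, Opts) ->
    uart:close(UART),          % needs the linked fix to avoid a crash
    timer:sleep(500),          % give the driver time to release the peripheral
    open_port({spawn, "uart"}, Opts).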

@UncleGrumpy
Collaborator

UncleGrumpy commented Jan 20, 2025

That fix was merged into the release-0.6 branch, so it will be included in the next update. At some point soon (if not already) it will also be merged into main as part of the future 0.7 releases.
