
tui logs tab causing panic #3138

Closed
antiochp opened this issue Nov 26, 2019 · 3 comments · Fixed by #3139

@antiochp
Member

Not entirely sure what causes this to occur, but I'm able to reproduce it reliably locally in debug mode.

thread 'main' panicked at 'attempt to subtract with overflow': src/bin/tui/logs.rs:97

It's happening here -

p.print((0, p.size.y - 1 - i), span.content);

So it appears p.size.y is 0 and we try to subtract from it, going negative.

This is fixable with a saturating_sub(), but I'm not sure what actually causes it.
I'll put a PR up to fix the panic, but it might be worth investigating the underlying cause.
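A minimal sketch of the suggested fix, assuming p is the cursive Printer used in src/bin/tui/logs.rs and that p.size.y and i are usize: clamp the row index at zero with saturating_sub so a zero-height view cannot underflow.

fn main() {
    // Model the values from the panic: a zero-height view.
    let size_y: usize = 0; // stands in for p.size.y
    let i: usize = 0;

    // The original expression `size_y - 1 - i` panics in debug builds
    // with 'attempt to subtract with overflow' when size_y == 0.

    // saturating_sub clamps at zero instead:
    let row = size_y.saturating_sub(1 + i);
    assert_eq!(row, 0);

    // i.e. the print call would become something like:
    // p.print((0, p.size.y.saturating_sub(1 + i)), span.content);
}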

@antiochp
Member Author

antiochp commented Nov 26, 2019

There must be a race condition or some kind of timing issue during startup - I can't reproduce it now using either a debug or release build.

It's really weird.

Are we assuming that something (say the logging infrastructure) is initialized during tui startup when it isn't actually guaranteed to be?

Either way I think the saturating_sub() is worth adding, and presumably it does not make anything worse. 🤞
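For context on why this reproduced only in debug mode: Rust's debug profile enables overflow checks and panics on underflow, while the default release profile wraps silently, so the same race in a release build would just produce a huge row index rather than a panic. A standalone illustration (not grin code):

fn main() {
    let y: u16 = 0;

    // Debug profile (overflow-checks on): `y - 1` panics with
    // 'attempt to subtract with overflow'.
    // Release profile (overflow-checks off): `y - 1` wraps to u16::MAX,
    // hiding the bug instead of surfacing it.

    // Saturating and checked arithmetic behave the same in both profiles:
    assert_eq!(y.saturating_sub(1), 0); // clamps at zero
    assert_eq!(y.checked_sub(1), None); // makes the failure explicit
}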

@da-kami

da-kami commented Apr 14, 2020

I ran into a similar issue while syncing and stumbled over this ticket when searching for a solution. Since my problem seems related I'm posting here; please ping me if you'd rather have this in a new ticket.

I got this error repeatedly during step 2 of the initial fast-sync. Here are the logs from one run:

20200413 22:05:44.145 INFO grin_util::logger - log4rs is initialized, file level: Info, stdout level: Warn, min. level: Info
20200413 22:05:44.146 INFO grin - Using configuration file at [removed]
20200413 22:05:44.146 INFO grin - This is Grin version 3.1.1 (git v3.1.1), built for x86_64-apple-darwin by rustc 1.42.0 (b8cedc004 2020-03-09).
20200413 22:05:44.146 WARN grin::cmd::server - Starting GRIN in UI mode...
20200413 22:05:44.146 INFO grin_servers::grin::server - Starting server, genesis block: 40adad0aec27
20200413 22:05:44.227 INFO grin_servers::grin::server - Starting rest apis at: 127.0.0.1:3413
20200413 22:05:44.228 WARN grin_api::handlers - Starting HTTP Node APIs server at 127.0.0.1:3413.
20200413 22:05:44.228 WARN grin_api::handlers - HTTP Node listener started.
20200413 22:05:44.228 INFO grin_servers::grin::server - Starting dandelion monitor: 127.0.0.1:3413
20200413 22:05:44.228 WARN grin_servers::grin::server - Grin server started.
20200413 22:05:44.228 INFO grin_servers::common::types - DandelionEpoch: next_epoch: is_stem: true (90%), relay: None
20200413 22:05:56.585 INFO grin_servers::common::adapters - Received 1 block headers from 46.146.230.189:3414
20200413 22:11:45.364 INFO grin_servers::grin::server - connect_and_monitor thread stopped
20200413 22:11:52.935 INFO grin_servers::grin::server - sync thread stopped
20200413 22:11:52.935 INFO grin_servers::grin::server - dandelion_monitor thread stopped
20200413 22:11:53.389 ERROR grin_p2p::protocol - handle_payload: txhashset archive save to file fail. err=Connection(Custom { kind: UnexpectedEof, error: "failed to fill whole buffer" })
20200413 22:11:53.486 WARN grin_servers::grin::server - Shutdown complete
20200413 22:13:25.356 INFO grin_util::logger - log4rs is initialized, file level: Info, stdout level: Warn, min. level: Info
20200413 22:13:25.356 INFO grin - Using configuration file at [removed]
20200413 22:13:25.356 INFO grin - This is Grin version 3.1.1 (git v3.1.1), built for x86_64-apple-darwin by rustc 1.42.0 (b8cedc004 2020-03-09).
20200413 22:13:25.356 WARN grin::cmd::server - Starting GRIN in UI mode...
20200413 22:13:25.356 INFO grin_servers::grin::server - Starting server, genesis block: 40adad0aec27
20200413 22:13:25.430 INFO grin_servers::grin::server - Starting rest apis at: 127.0.0.1:3413
20200413 22:13:25.430 WARN grin_api::handlers - Starting HTTP Node APIs server at 127.0.0.1:3413.
20200413 22:13:25.430 WARN grin_api::handlers - HTTP Node listener started.
20200413 22:13:25.430 INFO grin_servers::grin::server - Starting dandelion monitor: 127.0.0.1:3413
20200413 22:13:25.430 WARN grin_servers::grin::server - Grin server started.
20200413 22:13:25.430 INFO grin_servers::common::types - DandelionEpoch: next_epoch: is_stem: true (90%), relay: None
20200413 22:13:27.022 INFO grin_store::lmdb - Resized database from 1048576 to 134217728
20200413 22:13:38.743 INFO grin_servers::common::adapters - Received 3 block headers from 159.69.37.136:3414
20200413 22:23:28.322 INFO grin_servers::common::types - DandelionEpoch: next_epoch: is_stem: true (90%), relay: Some(PeerAddr(V4(159.69.37.136:3414)))
20200413 22:23:55.608 ERROR grin_servers::grin::sync::state_sync - state_sync: TxHashsetDownload status timeout in 10 minutes!
20200413 22:23:55.623 ERROR grin_servers::grin::sync::state_sync - state_sync: error = Error { inner: Sync error }. restart fast sync
20200413 22:24:03.188 ERROR grin_util::logger - 
thread 'main' panicked at 'attempt to subtract with overflow': src/bin/tui/status.rs:72
stack backtrace:
   0: backtrace::backtrace::trace
   1: backtrace::capture::Backtrace::new
   2: grin_util::logger::send_panic_to_log::{{closure}}
   3: std::panicking::rust_panic_with_hook
   4: rust_begin_unwind
   5: core::panicking::panic_fmt
   6: core::panicking::panic
   7: grin::tui::ui::Controller::run
   8: grin_servers::grin::server::Server::start
   9: grin::cmd::server::start_server_tui
  10: grin::cmd::server::server_command
  11: grin::real_main
  12: grin::main
  13: std::rt::lang_start::{{closure}}
  14: std::panicking::try::do_call
  15: __rust_maybe_catch_panic
  16: std::rt::lang_start_internal
  17: main

It seems to be related to the timeout that kicks in after 10 minutes. Eventually I connected to peers with whom I had better bandwidth; once I got past syncing step 2 the problem was gone. Since then the client has been running steadily.

As visible from the log:
Grin version: 3.1.1 (proto 2)
on macOS Mojave 10.14.6

@jaspervdm
Contributor

jaspervdm commented Apr 14, 2020

@da-kami Thanks for reporting that panic; I've opened a PR that fixes it.
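For anyone hitting the status.rs panic: it is the same class of unchecked-subtraction underflow as the logs.rs one above, so a saturating_sub in the offending expression is the natural shape of the fix. The sketch below is illustrative only; the names total and completed are hypothetical, not taken from status.rs or the PR.

// Illustrative only; `total` and `completed` are hypothetical names for
// whatever unsigned counters can cross when the fast sync restarts.
fn remaining(total: u64, completed: u64) -> u64 {
    // An unchecked `total - completed` panics in debug builds when
    // completed > total; saturating_sub clamps at zero instead.
    total.saturating_sub(completed)
}

fn main() {
    assert_eq!(remaining(10, 4), 6);
    assert_eq!(remaining(4, 10), 0); // `4 - 10` would have panicked
}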
